Phase 1 of 6
Scoping & Modality / Latency Constraints
Define the modalities in scope, acquisition-speed SLA, clinical risk tier, PHI boundary, and deployment topology before any model or data decision is locked in.
Modality & Clinical Scope
Identify imaging modalities in scope
Why This Matters
Modalities differ by an order of magnitude in study size and per-case compute — a head CT may be 200-500 slices while a whole-slide pathology image can exceed 10 GB at native resolution, and each drives a different GPU memory and throughput profile. FDA-cleared algorithms are modality-specific, and a site running Aidoc on CT head plus Annalise.ai on chest X-ray plus a pathology foundation model is running three distinct inference pipelines, not one. Scoping modality up front determines whether you can share GPU infrastructure or need modality-dedicated nodes.
Note prompts
+ Which modalities generate the highest stat-read volume today and where is the scanner bottleneck most painful?
+ Have we inventoried every FDA-cleared algorithm our radiologists use today across modality, vendor, and deployment mode?
+ What is our 12-month roadmap for adding modalities or foundation-model-based multi-task readers?
Confirm which modalities your inference stack must serve at acquisition speed.
Select all that apply
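The study-size spread above can be made concrete with back-of-envelope arithmetic. A minimal sketch, using illustrative (assumed) matrix sizes and slice counts rather than vendor specifications:

```python
# Rough uncompressed pixel-data volume per study, showing why modality
# scoping drives GPU memory and throughput planning. All figures are
# ballpark assumptions for illustration only.

def study_bytes(rows, cols, bytes_per_px, frames):
    """Uncompressed pixel-data volume for one study, in bytes."""
    return rows * cols * bytes_per_px * frames

# 500-slice head CT: 512x512 matrix, 16-bit grayscale
ct = study_bytes(512, 512, 2, 500)        # ~262 MB

# Whole-slide pathology image: 80k x 80k RGB at native resolution
wsi = study_bytes(80_000, 80_000, 3, 1)   # ~19 GB

print(f"CT study:  {ct / 1e6:,.0f} MB")   # CT study:  262 MB
print(f"WSI study: {wsi / 1e9:,.1f} GB")  # WSI study: 19.2 GB
```

Roughly two orders of magnitude separate the two, which is why a shared GPU pool sized for CT triage cannot simply absorb pathology workloads.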
Define per-study inference latency SLA
Why This Matters
Viz.ai has publicly reported flagging critical findings in under 30 seconds, and that speed is the mechanism behind the 66-minute faster time-to-treatment for LVO stroke patients across its 1,600+ hospital deployments. Once inference moves to the scanner edge, the SLA is no longer "as fast as the cloud can return" — it becomes an engineering target the institution sets. Retrofitting that target onto infrastructure that has already been built is far harder than designing to it from the start.
Note prompts
+ What is our current AI turnaround time from acquisition to PACS-posted result, and who owns that metric?
+ For stroke and PE indications specifically, can we measure minutes-to-treatment attributable to AI flagging?
+ Do our radiologists trust AI results enough to act on them, and does latency degrade that trust when it slips?
Select the end-to-end latency budget from acquisition complete to AI result posted back to PACS.
Single choice
Trinidy — Cloud-routed inference on a 500-slice CT study typically adds 10-30 seconds of DICOM egress alone before a single slice is scored. Trinidy runs GPU inference co-located with PACS or at the scanner edge, so a full-study read completes before the technologist has finished transferring the next patient.
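One way to turn the SLA into an engineering target is to decompose it into per-stage budgets and check the sum against the institutional limit. A sketch with assumed stage names and illustrative timings — not measurements from Viz.ai, Trinidy, or any other vendor:

```python
# Decompose an end-to-end latency SLA ("acquisition complete ->
# result posted to PACS") into per-stage budgets. All numbers are
# illustrative assumptions for planning, not benchmarks.

SLA_SECONDS = 60  # example institutional target

budget = {
    "dicom_egress_to_inference": 15,  # study transfer off the scanner
    "preprocess_and_queue": 5,
    "gpu_inference": 10,
    "postprocess_and_sr_export": 5,
    "pacs_posting": 5,
}

total = sum(budget.values())
headroom = SLA_SECONDS - total
print(f"budgeted: {total}s of {SLA_SECONDS}s SLA, headroom: {headroom}s")
for stage, secs in budget.items():
    print(f"  {stage}: {secs}s ({100 * secs / total:.0f}% of budget)")
```

Note how a single cloud-egress stage can consume a quarter or more of the whole budget before any slice is scored, which is the argument the item above is making.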
Classify clinical risk tier per indication
Why This Matters
FDA has cleared more than 900 AI/ML-enabled devices for radiology by early 2026, and 76 percent of the full FDA AI/ML device inventory is in radiology. The vast majority are Class II CADt or CADe devices, but characterization and diagnostic algorithms carry a materially higher liability and monitoring burden. Under the EU AI Act (Regulation 2024/1689), diagnostic AI is classified as high-risk under Annex III and carries conformity-assessment and post-market-monitoring obligations that overlap with, but are not identical to, FDA requirements.
Note prompts
+ For each AI tool in production, do we know its FDA risk class and intended use statement verbatim?
+ If we operate in the EU, have we mapped our AI portfolio to EU AI Act Annex III high-risk categories?
+ How do we document that clinicians understand whether each tool is triage or diagnostic?
Map each AI use case to its FDA risk class and clinical consequence tier.
Select all that apply
Define PHI boundary and data residency
Why This Matters
HIPAA Privacy and Security Rules (45 CFR Parts 160 and 164) apply to every covered entity and business associate handling PHI, and OCR enforcement actions have materially increased over the last decade with settlements routinely into the seven and eight figures. Each DICOM transmission to a cloud inference endpoint extends the BAA perimeter and the breach-notification surface. Keeping inference on-premises is the most direct way to reduce both — and for EU GDPR, it is often the only defensible posture for Article 9 special-category health data.
Note prompts
+ Have we inventoried every AI vendor who touches imaging data and re-confirmed each BAA is current?
+ Where in our inference path does identified PHI cross a network boundary, and is each hop logged?
+ If we lost internet connectivity for six hours, would our AI triage continue operating?
Specify where DICOM pixel data, metadata, and inference outputs are permitted to flow.
Select all that apply
Trinidy — HIPAA Security Rule technical safeguards require access control, audit control, and transmission security for any ePHI movement. Trinidy keeps DICOM data, model weights, inference outputs, and audit logs inside the facility perimeter — eliminating BAA-covered cloud egress for every inference call.
Select deployment topology for the inference plane
Choose the physical and logical deployment target for the AI inference stack.
Single choice
Trinidy — For sub-2-second full-study reads on 200-500-slice CT and HIPAA-sovereign data residency, cloud inference is incompatible on both physical-latency and regulatory grounds. Trinidy is the on-premises GPU substrate — it supports H100, L40S, and newer accelerators with multi-model orchestration on the same fabric.
Define vendor-neutrality strategy
Why This Matters
OEM-embedded AI is convenient at purchase but fragments the compliance, audit-trail, and monitoring surface across scanner vendors — each with its own model-versioning cadence and PCCP documentation. A vendor-neutral inference layer lets a health system run Aidoc, Viz.ai, Annalise.ai, Siemens AI-Rad Companion, and internal models on the same monitoring and audit fabric. This matters most when the AI portfolio scales from 3-5 algorithms to the 10-30 concurrent algorithms typical at larger academic centers.
Note prompts
+ Count the AI algorithms running across all our scanner OEMs today — is audit and monitoring unified or fragmented?
+ If we added three new cleared algorithms next year, can each vendor deploy to our infrastructure or do we need three new integrations?
+ Who owns the cross-vendor view of AI performance across the radiology department?
Clarify how the AI infrastructure will avoid lock-in to a single scanner OEM or AI vendor.
Single choice
Trinidy — GE Healthcare, Siemens Healthineers, and Philips are all embedding inference natively on-device, which creates OEM lock-in for sites that standardize on one vendor. Trinidy is the vendor-neutral inference substrate — cleared algorithms from any vendor deploy to the same fabric.
Establish acquisition-speed throughput requirement
Quantify peak studies per hour across all modalities that the inference stack must not bottleneck.
Single choice
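Peak throughput can be sanity-checked with simple queueing arithmetic before any hardware is ordered. A sketch under assumed figures (peak arrival rate, per-study inference time, and GPU count are all illustrative):

```python
# Back-of-envelope sizing: can a given inference fleet keep up with
# peak study arrivals? All inputs are assumed example values.

peak_studies_per_hour = 120   # peak arrivals across all modalities
mean_inference_seconds = 20   # per study, per GPU worker
gpus = 2

capacity_per_hour = gpus * 3600 / mean_inference_seconds
utilization = peak_studies_per_hour / capacity_per_hour

print(f"capacity: {capacity_per_hour:.0f} studies/h, "
      f"peak load: {peak_studies_per_hour} studies/h, "
      f"utilization: {utilization:.0%}")

# Queueing rule of thumb: keep peak utilization well under 100%
# (e.g. below ~70%), or stat-read latency tails grow sharply under
# bursty arrival patterns.
```

The same arithmetic, run per modality, also answers the earlier scoping question of whether GPU infrastructure can be shared or must be modality-dedicated.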
Define availability and operational continuity SLA
Why This Matters
Once AI is embedded in the critical stat-read workflow for stroke or PE, the inference service effectively becomes a patient-safety system — an outage does not merely degrade a convenience; it removes a clinical safety net that clinicians have come to rely on. Cloud inference depends on the internet link; edge inference depends only on local power and hardware. Clinicians and safety officers should explicitly own the uptime SLA, because it becomes part of the medical-device risk file under IEC 62304.
Note prompts
+ Have we documented what clinicians do when AI is unavailable, and is that workflow rehearsed?
+ Is AI inference considered a safety system by our hospital risk committee, and if not, why not?
+ What is our measured AI service uptime over the last 12 months, and how did it compare to PACS uptime?
Specify uptime, failover, and disaster-recovery behavior required for AI inference.
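When negotiating the uptime number itself, it helps to translate SLA percentages into allowed downtime so they can be compared against PACS and reviewed by the risk committee. Pure arithmetic; the targets shown are examples, not recommendations:

```python
# Convert an uptime SLA percentage into allowed downtime per year,
# so availability targets can be discussed in concrete terms.

def downtime_per_year(uptime_pct, minutes_per_year=365 * 24 * 60):
    """Allowed downtime in minutes per year for a given uptime %."""
    return (1 - uptime_pct / 100) * minutes_per_year

for target in (99.0, 99.9, 99.99):
    mins = downtime_per_year(target)
    print(f"{target}% uptime -> {mins:,.0f} min/year ({mins / 60:.1f} h)")
```

The jump from 99% (about 88 hours a year) to 99.99% (under an hour) is what separates a best-effort service from one that can credibly back a stat-read safety net.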