Phase 1 of 6
Scoping & Modality / Latency Constraints
Define the modalities in scope, acquisition-speed SLA, clinical risk tier, PHI boundary, and deployment topology before any model or data decision is locked in.
Modality & Clinical Scope
Identify imaging modalities in scope
Why This Matters
Modalities differ by an order of magnitude in study size and per-case compute — a head CT may be 200-500 slices while a whole-slide pathology image can exceed 10 GB at native resolution, and each drives a different GPU memory and throughput profile. FDA-cleared algorithms are modality-specific, and a site running Aidoc on CT head plus Annalise.ai on chest X-ray plus a pathology foundation model is running three distinct inference pipelines, not one. Scoping modality up front determines whether you can share GPU infrastructure or need modality-dedicated nodes.
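To make the order-of-magnitude point concrete, here is a back-of-the-envelope sizing sketch; the slice counts, matrix dimensions, and bit depths are typical illustrative values, not measurements from any specific fleet or vendor:

```python
# Approximate uncompressed pixel-data footprint per study, by modality.
# All figures below are illustrative assumptions, not vendor specifications.
MODALITIES = {
    #  name                     (frames, rows,    cols,    bytes/pixel)
    "CT head (thin-slice)":     (500,    512,     512,     2),
    "Chest X-ray":              (1,      3000,    3000,    2),
    "Pathology WSI (40x, RGB)": (1,      100_000, 100_000, 3),
}

for name, (frames, rows, cols, bpp) in MODALITIES.items():
    gib = frames * rows * cols * bpp / 2**30
    print(f"{name:26s} ~{gib:6.2f} GiB uncompressed")
```

The CT and pathology rows differ by roughly two orders of magnitude in raw bytes, which is why a GPU pool sized for CT will not absorb whole-slide inference.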
Note prompts
+ Which modalities generate the highest stat-read volume today and where is the scanner bottleneck most painful?
+ Have we inventoried every FDA-cleared algorithm our radiologists use today across modality, vendor, and deployment mode?
+ What is our 12-month roadmap for adding modalities or foundation-model-based multi-task readers?
Required
Confirm which modalities your inference stack must serve at acquisition speed.
Select all that apply
CT (non-contrast and contrast — head, chest, abdomen)
MRI (brain, spine, cardiac, musculoskeletal)
X-ray / radiography (chest, musculoskeletal)
Mammography / tomosynthesis
Ultrasound (point-of-care and cart-based)
Digital pathology whole-slide images
PET / nuclear medicine
Fluoroscopy / interventional
Define per-study inference latency SLA
Why This Matters
Viz.ai has publicly reported flagging critical findings in under 30 seconds, and that speed is the mechanism behind the 66-minute faster time-to-treatment for LVO stroke patients across its 1,600+ hospital deployments. Once inference moves to the scanner edge, the SLA is no longer "as fast as the cloud can return" — it becomes an engineering target the institution sets. Retrofitting that target onto infrastructure built without one is far harder than designing to it from the start.
Note prompts
+ What is our current AI turnaround time from acquisition to PACS-posted result, and who owns that metric?
+ For stroke and PE indications specifically, can we measure minutes-to-treatment attributable to AI flagging?
+ Do our radiologists trust AI results enough to act on them, and does latency degrade that trust when it slips?
Required
Select the end-to-end latency budget from acquisition complete to AI result posted back to PACS.
Single choice
< 2 seconds per study (acquisition-speed — stat reads, stroke, PE)
2 - 10 seconds per study (near-real-time triage)
< 60 seconds per study (routine worklist prioritization)
< 5 minutes per study (batch overnight / retrospective)
Tiered by modality and clinical indication
Tags: required · edge · trinidy
Trinidy: Cloud-routed inference on a 500-slice CT study typically adds 10-30 seconds of DICOM egress alone before a single slice is scored. Trinidy runs GPU inference co-located with PACS or at the scanner edge, so a full-study read completes before the technologist has finished transferring the next patient.
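That egress figure is easy to sanity-check; the link speed and overhead factor below are assumptions for illustration, not measurements from any deployment:

```python
# Time to move one 500-slice CT study off-site before inference can begin.
# The effective throughput is an illustrative assumption (WAN uplink after
# TLS and DICOM protocol overhead), not a measured value.
study_bytes = 500 * 512 * 512 * 2              # ~262 MB uncompressed pixel data
effective_mbps = 200                           # assumed usable WAN throughput
egress_seconds = study_bytes * 8 / (effective_mbps * 1e6)
print(f"egress alone: ~{egress_seconds:.1f} s")  # ~10.5 s before any GPU time
```

At a 100 Mbps effective uplink the same study takes about 21 seconds, which brackets the 10-30 second range quoted above.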
Classify clinical risk tier per indication
Why This Matters
FDA had cleared more than 900 AI/ML-enabled devices for radiology by early 2026, and 76 percent of the full FDA AI/ML device inventory is in radiology. The vast majority are Class II CADt or CADe, but characterization and diagnostic algorithms carry materially higher liability and monitoring burden. Under the EU AI Act (Regulation 2024/1689), diagnostic AI is classified as high-risk under Annex III and carries conformity-assessment and post-market monitoring obligations that overlap with, but are not identical to, FDA requirements.
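One way to keep this mapping auditable is a machine-readable risk register; the entries below are hypothetical illustrations, and the class and intended-use fields should be copied verbatim from each algorithm's clearance letter:

```python
# Hypothetical risk register -- illustrative entries only. Populate each row
# from the algorithm's actual 510(k)/De Novo letter and intended-use statement.
RISK_REGISTER = [
    {"algorithm": "vendor-A ICH triage",      "fda_type": "CADt",
     "fda_class": "II", "eu_ai_act": "Annex III high-risk",
     "consequence": "notification only"},
    {"algorithm": "vendor-B lung-nodule CADx", "fda_type": "CADx",
     "fda_class": "II", "eu_ai_act": "Annex III high-risk",
     "consequence": "influences diagnosis"},
]

for row in RISK_REGISTER:
    print(f'{row["algorithm"]}: FDA {row["fda_type"]} Class {row["fda_class"]}, '
          f'{row["eu_ai_act"]} ({row["consequence"]})')
```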
Note prompts
+ For each AI tool in production, do we know its FDA risk class and intended use statement verbatim?
+ If we operate in the EU, have we mapped our AI portfolio to EU AI Act Annex III high-risk categories?
+ How do we document that clinicians understand each tool is triage vs. diagnostic?
Required
Map each AI use case to its FDA risk class and clinical consequence tier.
Select all that apply
CADt — triage / notification (Class II, most common for Aidoc / Viz.ai)
CADe — detection marker (Class II)
CADx — diagnostic / characterization (Class II or III depending on indication)
Quantification / measurement (Class II)
Workflow / non-diagnostic (often 510(k)-exempt)
High-risk diagnostic under EU AI Act Annex III
Not yet classified — regulatory review pending
Define PHI boundary and data residency
Why This Matters
HIPAA Privacy and Security Rules (45 CFR Parts 160 and 164) apply to every covered entity and business associate handling PHI, and OCR enforcement actions have materially increased over the last decade with settlements routinely into the seven and eight figures. Each DICOM transmission to a cloud inference endpoint extends the BAA perimeter and the breach-notification surface. Keeping inference on-premises is the most direct way to reduce both — and for EU GDPR, it is often the only defensible posture for Article 9 special-category health data.
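As a minimal sketch of enforcing that boundary in software, assuming a hypothetical pre-egress check in the routing layer (the subnet ranges are placeholders, not real network assignments):

```python
import ipaddress

# Placeholder ranges -- substitute the facility's actual on-prem subnets.
ON_PREM_NETWORKS = [ipaddress.ip_network("10.40.0.0/16")]

def egress_permitted(endpoint_ip: str) -> bool:
    """Allow DICOM transmission only to endpoints inside the facility perimeter."""
    ip = ipaddress.ip_address(endpoint_ip)
    return any(ip in net for net in ON_PREM_NETWORKS)

assert egress_permitted("10.40.12.7")        # on-prem GPU node: allowed
assert not egress_permitted("54.210.3.99")   # public cloud endpoint: blocked
```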
Note prompts
+ Have we inventoried every AI vendor who touches imaging data and re-confirmed each BAA is current?
+ Where in our inference path does identified PHI cross a network boundary, and is each hop logged?
+ If we lost internet connectivity for six hours, would our AI triage continue operating?
Required
Specify where DICOM pixel data, metadata, and inference outputs are permitted to flow.
Select all that apply
All PHI must remain on-premises within the facility
On-premises within the health system WAN (multi-site)
Private cloud under BAA with de-identified metadata only
Public cloud under BAA with full DICOM transmission
EU GDPR residency — data must remain in member state
UK GDPR residency required
Canada PIPEDA / provincial health privacy rules
Cross-border permitted under signed BAA and encryption in transit + at rest
Tags: required · trinidy
Trinidy: HIPAA Security Rule technical safeguards require access controls, audit controls, and transmission security for any ePHI movement. Trinidy keeps DICOM data, model weights, inference outputs, and audit logs inside the facility perimeter — eliminating BAA-covered cloud egress for every inference call.
Select deployment topology for the inference plane
Required
Choose the physical and logical deployment target for the AI inference stack.
Single choice
Scanner-edge GPU (co-located with modality — lowest latency)
Radiology department on-premises GPU cluster adjacent to PACS
Hospital / health system data center GPU cluster
OEM scanner-embedded inference (GE / Siemens / Philips native)
Private cloud / VPC in-region under BAA
Public cloud AI service under BAA
Hybrid: on-prem inference + cloud fine-tuning
Tags: required · edge · trinidy
Trinidy: For sub-2-second full-study reads on 200-500-slice CT under HIPAA-sovereign residency, cloud inference is ruled out on both latency physics and regulatory grounds. Trinidy is the on-premises GPU substrate — supports H100, L40S, and newer accelerators with multi-model orchestration on the same fabric.
Define vendor-neutrality strategy
Why This Matters
OEM-embedded AI is convenient at purchase but fragments the compliance, audit-trail, and monitoring surface across scanner vendors — each with its own model-versioning cadence and PCCP documentation. A vendor-neutral inference layer lets a health system run Aidoc, Viz.ai, Annalise.ai, Siemens AI-Rad Companion, and internal models on the same monitoring and audit fabric. This matters most when the AI portfolio scales from 3-5 algorithms to the 10-30 concurrent algorithms typical at larger academic centers.
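Mechanically, a vendor-neutral layer often reduces to a routing table from study attributes to the set of cleared algorithms that should score the study; a hypothetical sketch (the algorithm identifiers are made up for illustration):

```python
# Hypothetical routing table for a vendor-neutral inference layer.
# Keys are (modality, body part); values are the cleared algorithms to invoke.
ROUTES = {
    ("CT", "head"):  ["aidoc-ich-triage", "vizai-lvo"],
    ("CR", "chest"): ["annalise-cxr"],
    ("CT", "chest"): ["aidoc-pe-triage", "internal-nodule-v3"],
}

def algorithms_for(modality: str, body_part: str) -> list[str]:
    """Every study fans out to all matching algorithms on the shared fabric."""
    return ROUTES.get((modality, body_part), [])

assert algorithms_for("CT", "head") == ["aidoc-ich-triage", "vizai-lvo"]
```

The design point is that adding a new cleared algorithm is a table entry on shared infrastructure, not a new per-vendor integration.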
Note prompts
+ Count the AI algorithms running across all our scanner OEMs today — is audit and monitoring unified or fragmented?
+ If we added three new cleared algorithms next year, can each vendor deploy to our infrastructure or do we need three new integrations?
+ Who owns the cross-vendor view of AI performance across the radiology department?
Recommended
Clarify how the AI infrastructure will avoid lock-in to a single scanner OEM or AI vendor.
Single choice
Vendor-neutral inference platform — all cleared algorithms deploy to shared infrastructure
Primary OEM-embedded + vendor-neutral for non-OEM algorithms
OEM-embedded only (single-vendor fleet)
Marketplace / orchestration layer (Aidoc aiOS, Blackford, Nuance Precision Imaging)
Not yet decided
Tags: recommended · trinidy
Trinidy: GE Healthcare, Siemens Healthineers, and Philips are all embedding inference natively on-device, which creates OEM lock-in for sites that standardize on one vendor. Trinidy is the vendor-neutral inference substrate — cleared algorithms from any vendor deploy to the same fabric.
Establish acquisition-speed throughput requirement
Required
Quantify peak studies per hour across all modalities that the inference stack must not bottleneck (a first-order sizing sketch follows the options).
Single choice
< 20 studies / hour (small site, low acuity)
20 - 100 studies / hour (community hospital)
100 - 500 studies / hour (regional medical center)
500 - 2,000 studies / hour (large academic center)
> 2,000 studies / hour (multi-site health system)
Not currently measured at peak
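The sizing sketch, under stated assumptions: the per-study GPU seconds and headroom factor below are placeholders to replace with measured values from your own workload.

```python
import math

# Illustrative assumptions -- replace with measured per-study GPU time.
peak_studies_per_hour = 500
gpu_seconds_per_study = 4.0    # full-study inference across all triggered models
headroom = 2.0                 # burst + failover margin

busy_seconds_per_hour = peak_studies_per_hour * gpu_seconds_per_study
gpus_needed = math.ceil(busy_seconds_per_hour * headroom / 3600)
print(f"~{gpus_needed} GPUs at {peak_studies_per_hour} studies/hr peak")  # ~2
```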
Define availability and operational continuity SLA
Why This Matters
Once AI is embedded in the critical stat-read workflow for stroke or PE, the inference service effectively becomes a patient-safety system — outages do not just degrade a nice-to-have, they remove a clinical safety net that clinicians have come to rely on. Cloud inference depends on the internet link; edge inference depends only on local power and hardware. Clinicians and safety officers should explicitly own the uptime SLA because it feeds the medical-device risk documentation (risk management under ISO 14971, software lifecycle under IEC 62304).
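When negotiating the number, it helps to translate candidate uptime SLAs into allowed annual downtime; this is pure arithmetic with no assumptions beyond the SLA itself:

```python
# Annual downtime allowance implied by each candidate uptime SLA.
for sla in (0.99, 0.999, 0.9999):
    hours = (1 - sla) * 365 * 24
    print(f"{sla:.2%} uptime -> {hours:6.2f} h/yr allowed downtime")
```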
Note prompts
+ Have we documented what clinicians do when AI is unavailable, and is that workflow rehearsed?
+ Is AI inference considered a safety system by our hospital risk committee, and if not, why not?
+ What is our measured AI service uptime over the last 12 months, and how did it compare to PACS uptime?
Required
Specify uptime, failover, and disaster-recovery behavior required for AI inference.
Tags: required · edge