Phase 1 of 6
Scoping & Clinical Integration
Define the clinical surface, latency budget, alert economics, and the EHR integration pattern that every downstream architectural decision will inherit.
Clinical Surface & Use-Case Framing
Identify clinical deterioration targets in scope
Why This Matters
Deterioration conditions look superficially similar but have entirely different feature surfaces, label definitions, and escalation pathways. A single generalized "early warning" score underperforms condition-specific heads by a wide margin — Epic's own Sepsis Predictive Model (deployed at 450+ health systems) delivered AUROC of only 0.63 at Michigan Medicine on external validation (Wong et al., JAMA Internal Medicine 2021) because sepsis-specific physiology is dilute when mixed with general deterioration signals. The first scoping error is almost always "score everyone for everything."
Note prompts
+ Which conditions have enough labeled cases in our EHR history to train a site-specific head?
+ Have we separated the clinical question (is this patient deteriorating?) from the operational question (should we trigger RRT?)?
+ Who owns alert outcome attribution per condition so we can measure per-condition model ROI?
Confirm which deterioration conditions the CDS model must score continuously.
Select all that apply
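To make the condition-specific-head point above concrete, here is a minimal sketch of a shared-trunk model with one head per deterioration target, so each condition keeps its own threshold, calibration, and outcome attribution. This is an illustrative PyTorch sketch; the class name, layer sizes, and condition list are assumptions, not a prescribed architecture.

```python
# Illustrative sketch (not the program's actual model): a shared encoder with
# condition-specific heads, so each deterioration target gets its own logit,
# threshold, and calibration rather than one blended "early warning" score.
import torch
import torch.nn as nn

class MultiConditionDeteriorationModel(nn.Module):
    def __init__(self, n_features: int, conditions=("sepsis", "aki", "resp_failure")):
        super().__init__()
        # shared representation of the vitals/labs feature window
        self.trunk = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        # one small head per in-scope condition; each is thresholded and audited separately
        self.heads = nn.ModuleDict({c: nn.Linear(64, 1) for c in conditions})

    def forward(self, x: torch.Tensor) -> dict:
        z = self.trunk(x)
        return {c: torch.sigmoid(head(z)).squeeze(-1) for c, head in self.heads.items()}

# Usage: scores = MultiConditionDeteriorationModel(n_features=40)(torch.randn(8, 40))
```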
Define clinical care settings in scope
Why This Matters
Data density, label prevalence, and the physiological baseline all shift materially between settings — an ICU patient already has continuous waveform monitoring while a ward patient may only have vitals every 4 hours. Models trained on ICU populations generalize poorly to ward populations because the ward baseline and the meaning of an abnormal value are different. Scoping care settings up front forces the team to plan for data density variation rather than discover it at validation.
Note prompts
+ Do we have continuous waveform data in every setting we plan to score, or only in ICU?
+ Will we train a single cross-setting model or setting-specific heads, and how have we tested that decision?
+ What is our current rapid response activation rate per setting and what does AI need to add on top?
Select the care units where the CDS model will score continuously.
Select all that apply
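One way to force the data-density decision into the open is an explicit per-setting feature-window configuration. The sketch below is illustrative only; the setting names, sampling cadences, and staleness limits are assumptions to be replaced with values measured on each unit.

```python
# Illustrative sketch: per-setting feature-window configuration, making the
# data-density difference explicit instead of discovering it at validation.
import pandas as pd

SETTING_CONFIG = {
    "icu":  {"expected_vitals_interval_min": 1,   "feature_window_hr": 6,  "max_forward_fill_hr": 1},
    "ward": {"expected_vitals_interval_min": 240, "feature_window_hr": 24, "max_forward_fill_hr": 8},
    "ed":   {"expected_vitals_interval_min": 30,  "feature_window_hr": 4,  "max_forward_fill_hr": 2},
}

def resample_vitals(vitals: pd.DataFrame, setting: str) -> pd.DataFrame:
    """Resample a patient's vitals (DatetimeIndex) onto the setting's expected cadence,
    forward-filling only up to the setting-specific staleness limit."""
    cfg = SETTING_CONFIG[setting]
    freq = f"{cfg['expected_vitals_interval_min']}min"
    limit = int(cfg["max_forward_fill_hr"] * 60 / cfg["expected_vitals_interval_min"])
    return vitals.resample(freq).mean().ffill(limit=limit)
```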
Establish alert latency SLA (event-to-clinician)
Why This Matters
Each hour of antibiotic delay in sepsis is associated with measurably higher mortality (published estimates range from an odds ratio of roughly 1.04 per hour to survival decreases near 7-8% per hour), so latency is not a performance-tuning concern; it trades directly against patient outcome. Cloud-hosted CDS adds a latency variance layer that stays invisible until a shift change or a mass-casualty arrival exposes it, and the variance is the actual risk (the mean is usually fine). An SLA that specifies only an average and not a p99 is functionally no SLA at all in clinical deployment; a per-stage instrumentation sketch follows this item.
Note prompts
+ What is our current p99 latency from vital sign posted to alert displayed, measured end-to-end?
+ How does our latency profile behave at shift change, code events, and ED surges, the moments we most need it to work?
+ Have we instrumented latency separately for each stage (FHIR ingest, feature retrieval, inference, alert delivery)?
Select the end-to-end latency budget from vital sign update to clinician-visible alert.
Single choice
Trinidy — Cloud-routed CDS inference adds 200-800ms of unpredictable round-trip latency per scoring cycle, and under shift-change or census surges the tail degrades further. Trinidy runs the full ensemble on-premises against the live EHR stream — sub-second inference stays flat even at peak hospital load.
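A minimal sketch of the per-stage latency instrumentation referenced above, assuming each scoring cycle is timed as it moves through FHIR ingest, feature retrieval, inference, and alert delivery. The stage names and the helper calls in the usage comments are hypothetical; the point is that the SLA should be written against the end-to-end p99, with per-stage percentiles available for diagnosis.

```python
# Sketch: time each pipeline stage per scoring cycle and report p50/p99 per stage.
import time
from collections import defaultdict
from contextlib import contextmanager
from statistics import quantiles

stage_samples = defaultdict(list)  # stage name -> list of observed latencies in ms

@contextmanager
def timed(stage: str):
    start = time.perf_counter()
    try:
        yield
    finally:
        stage_samples[stage].append((time.perf_counter() - start) * 1000.0)

def report(stage: str) -> str:
    samples = sorted(stage_samples[stage])
    pcts = quantiles(samples, n=100)  # pcts[49] ~ p50, pcts[98] ~ p99
    return f"{stage}: p50={pcts[49]:.0f}ms p99={pcts[98]:.0f}ms n={len(samples)}"

# Usage inside one scoring cycle (helper names are hypothetical):
# with timed("fhir_ingest"):        bundle = fetch_vitals(patient_id)
# with timed("feature_retrieval"):  x = build_features(bundle)
# with timed("inference"):          score = model(x)
# with timed("alert_delivery"):     post_alert(score)
```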
Define acceptable alert rate and PPV floor
Why This Matters
The Epic Sepsis Predictive Model in external validation alerted on 18% of all hospitalizations while missing 67% of actual sepsis cases — a PPV of roughly 12% (Wong et al., JAMA Internal Medicine 2021). Alert fatigue is not a soft UX concern: clinicians silence, ignore, or bulk-dismiss low-PPV alerts within weeks, which destroys model value even when the AUROC is good. Setting a PPV floor and an alert-per-patient-day ceiling before training anchors the team to the only thing that actually determines whether the model gets used.
Note prompts
+ What is our current alert dismissal rate on existing CDS, and do we know which alerts get dismissed vs. acted on?
+ Who owns the clinical workflow decision to act on an alert, and do they agree on the PPV floor?
+ Have we piloted any alert calibration (threshold tuning, rate limiting) on the current Epic model?
Select the maximum alert volume per patient-day and minimum positive predictive value the program will accept.
Single choice
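The floor and ceiling can be turned into an explicit operating-point search over retrospective validation data. The sketch below is illustrative; the 0.20 PPV floor and 1.0 alerts-per-patient-day ceiling are example numbers, not recommendations, and the function names are assumptions.

```python
# Sketch: find the operating threshold that respects both the PPV floor and the
# alert-per-patient-day ceiling, keeping sensitivity as high as those limits allow.
import numpy as np

def choose_threshold(scores, labels, patient_days, ppv_floor=0.20, max_alerts_per_patient_day=1.0):
    """scores/labels: per-scoring-cycle arrays from a retrospective validation cohort;
    patient_days: total patient-days in that cohort. Returns (sensitivity, threshold, ppv, rate)
    for the best feasible operating point, or None if no threshold qualifies."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    feasible = []
    for t in np.unique(scores):
        alerts = scores >= t
        if alerts.sum() == 0:
            continue
        ppv = labels[alerts].mean()
        rate = alerts.sum() / patient_days
        if ppv >= ppv_floor and rate <= max_alerts_per_patient_day:
            sensitivity = labels[alerts].sum() / max(labels.sum(), 1)
            feasible.append((sensitivity, float(t), float(ppv), float(rate)))
    if not feasible:
        return None  # the model cannot meet the agreed floor/ceiling at any operating point
    return max(feasible)  # feasible operating point with the highest sensitivity
```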
Confirm PHI residency and HIPAA minimum necessary scope
Map the PHI flow for training, inference, and audit against the HIPAA Security Rule (45 CFR 164.312) and minimum-necessary standard.
Select all that apply
Trinidy — Hyperscaler clinical AI offerings require PHI egress to vendor cloud environments, creating persistent HIPAA BAA scope and HTI-2 provenance gaps. Trinidy keeps EHR ingestion, feature computation, inference, and audit logs entirely within the institution's perimeter — no PHI leaves the covered entity boundary for any scoring decision.
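One way to make the minimum-necessary scope reviewable is to maintain it as an explicit allow-list of FHIR resources and elements per workload. The sketch below is illustrative, not a compliance determination; the specific resources, elements, and exclusions are assumptions for a deterioration model and would need legal and privacy review.

```python
# Illustrative allow-list: the minimum-necessary principle applied as an explicit,
# reviewable inventory of what each workload may read from the EHR (FHIR R4 names).
MINIMUM_NECESSARY_SCOPE = {
    "inference": {
        "Observation": ["code", "value[x]", "effectiveDateTime", "subject"],        # vitals, labs
        "MedicationAdministration": ["medication[x]", "effective[x]", "subject"],
        "Encounter": ["class", "location", "period", "subject"],
        # explicitly excluded: free-text notes, psychotherapy notes, financial identifiers
    },
    "training": {
        # training additionally needs outcome labels; document that extra scope separately
        "Condition": ["code", "onsetDateTime", "subject"],
    },
    "audit": {
        # audit log retains the prediction, an input hash, and the model version, not raw PHI copies
    },
}
```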
Determine FDA SaMD classification applicability
Why This Matters
The Cures Act Non-Device CDS exemption requires that the clinician can independently review the basis for the recommendation — a model that outputs a score without reason codes almost certainly fails this test and falls under FDA SaMD oversight. Prenosis Sepsis ImmunoScore received FDA De Novo authorization in April 2024 (AUC 0.84 diagnosis, 0.76 for 30-day mortality) and is the public reference point for what validated, FDA-authorized sepsis AI looks like. Misclassifying the device status is an existential regulatory error — a Class II or III device deployed without clearance is an unapproved medical device.
Note prompts
+ Does our model output reason codes detailed enough that a clinician can independently review the basis for each recommendation?
+ Have we obtained regulatory counsel opinion on SaMD classification, and is it documented?
+ If Class II, have we mapped our Predetermined Change Control Plan (PCCP) per the FDA September 2025 finalized guidance?
Assess whether the CDS model meets the FDA SaMD framework and the Non-Device CDS criteria under 21st Century Cures Act section 3060.
Single choice
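Because the Non-Device CDS exemption turns on whether the clinician can independently review the basis for a recommendation, the prediction payload itself should carry that basis. A hedged sketch of such a payload follows; the field names and contribution values are illustrative, and whether any given output actually satisfies the Cures Act criteria is a question for regulatory counsel.

```python
# Sketch: a prediction payload that carries human-reviewable reason codes alongside the score.
from dataclasses import dataclass, field

@dataclass
class DeteriorationPrediction:
    patient_id: str
    condition: str                 # e.g. "sepsis"
    risk_score: float              # calibrated probability, 0-1
    model_version: str
    reason_codes: list = field(default_factory=list)  # the reviewable basis for the score

prediction = DeteriorationPrediction(
    patient_id="example-123",
    condition="sepsis",
    risk_score=0.41,
    model_version="sepsis-head-2.3.1",
    reason_codes=[
        {"feature": "lactate", "value": 3.1, "unit": "mmol/L", "direction": "rising", "contribution": 0.18},
        {"feature": "respiratory_rate", "value": 26, "unit": "/min", "direction": "rising", "contribution": 0.11},
        {"feature": "temperature", "value": 38.9, "unit": "degC", "direction": "elevated", "contribution": 0.07},
    ],
)
```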
Map ONC HTI-1 / HTI-2 Decision Support Intervention obligations
Why This Matters
ONC HTI-2 entered active enforcement in 2025-2026, and CMS has begun issuing Conditions of Participation citations against health systems that lack compliant algorithmic transparency, bias testing, and per-prediction audit trails for clinical decision support AI. The source attribute disclosure requirement means a certified EHR deploying predictive DSIs must surface the model's intended use, input features, training population, and performance across demographic groups to the user — which implies these artifacts must exist and be current. A CDS program that cannot answer HTI-2 inquiries on demand is operationally out of compliance today, not at some future date.
Note prompts
+ Do we have the HTI-2 source attribute package documented and queryable for every predictive DSI we use today, including Epic models?
+ Who owns HTI-2 artifact maintenance: is it assigned or a shared responsibility?
+ Have we performed the HTI-2 required bias / fairness analysis and logged per-prediction provenance?
Confirm obligations under ONC HTI-1 (2023) Decision Support Intervention criteria and HTI-2 (2024-2025) algorithmic transparency / source attribute disclosure.
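The disclosure and audit obligations above imply two concrete artifacts: a queryable source-attribute package per predictive DSI, and an append-only provenance entry per scoring decision. The sketch below is illustrative; the keys paraphrase the attributes named above rather than quoting the certification criterion, and all values are placeholders.

```python
# Sketch: a source-attribute package kept current per model, plus a per-prediction audit record.
import hashlib
import json
from datetime import datetime, timezone

SOURCE_ATTRIBUTES = {
    "model_id": "sepsis-head-2.3.1",
    "intended_use": "Continuous sepsis deterioration scoring for adult inpatients",
    "input_features": ["lactate", "heart_rate", "respiratory_rate", "temperature", "wbc"],
    "training_population": "Adult inpatient encounters, single health system, 2019-2023",
    "performance_by_group": {"overall": {}, "by_sex": {}, "by_race_ethnicity": {}},  # filled from validation
    "last_reviewed": "2025-01-01",
    "owner": "CDS governance committee",
}

def provenance_entry(patient_id: str, features: dict, score: float) -> dict:
    """One append-only audit record per scoring decision."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": SOURCE_ATTRIBUTES["model_id"],
        "patient_id": patient_id,
        "input_hash": hashlib.sha256(json.dumps(features, sort_keys=True).encode()).hexdigest(),
        "score": score,
    }
```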
Specify deployment topology for the inference plane
Select the physical/logical deployment target for the scoring ensemble.
Single choice
Trinidy — For HIPAA-sovereign CDS with sub-second scoring across every active bed, cloud inference is both latency-incompatible and creates HTI-2 provenance gaps. Trinidy is the on-premises inference substrate — GPU or CPU options on the same fabric, collocated with the EHR data plane.
Define clinical integration pattern into EHR workflow
Why This Matters
Alert delivery channel is the single largest driver of whether CDS changes clinical behavior. Epic BPA firing rates and Oracle Health advisor rates are already high from baseline rules — adding another BPA without retiring any existing alert generally reduces, not increases, response rate. CDS Hooks is the vendor-neutral HL7 standard and is the correct integration for portable, multi-EHR deployment; sidecar dashboards without EHR integration have historically had the lowest adoption.
Note prompts
+ What is the current BPA fire-to-action rate for our existing CDS, and where does a new alert fit in that distribution?
+ Have we committed to retiring or consolidating existing alerts as part of introducing new AI alerts?
+ Does our integration pattern preserve the EHR audit trail or create a parallel one?
Select how CDS alerts will surface in the clinician workflow.
Single choice
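For the CDS Hooks path named above, the integration surface is a card returned by the CDS service. The sketch below follows the HL7 CDS Hooks card schema (summary, indicator, detail, source, links); the summary text, source label, and URL are illustrative placeholders.

```python
# Sketch: a CDS Hooks service response containing one warning-level card for a deterioration alert.
cds_hooks_response = {
    "cards": [
        {
            "summary": "Elevated sepsis deterioration risk (0.41)",  # <= 140 chars per the spec
            "indicator": "warning",                                  # info | warning | critical
            "detail": "Rising lactate and respiratory rate over the last 6 hours. "
                      "Reason codes and trend view available via the link below.",
            "source": {"label": "Deterioration CDS service"},
            "links": [
                {"label": "Open trend view", "url": "https://cds.example.org/trend", "type": "absolute"}
            ],
        }
    ]
}
```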