Phase 1 of 6
Scoping & EHR Integration
Define the clinical scope, forecast horizon, integration surface, and PHI residency posture that will govern every downstream architectural decision for ED, OR, and inpatient capacity forecasting.
Clinical Scope & Forecast Horizon
Identify patient flow surfaces in scope
Why This Matters
ED, OR, and inpatient forecasting share patients but not signal distributions — an ED arrival model trained on chief-complaint text will not generalize to OR case-duration forecasting, which is dominated by CPT code, surgeon-specific historical duration, and case-mix. LeanTaaS iQueue maintains distinct models per surface for exactly this reason, and their published +6% case volume / $0.5M per-OR per-year figure (KLAS First Look 2023-2024) is a per-surface outcome, not a single-model outcome. Deciding scope upfront prevents the most common failure mode: stretching a one-size-fits-all inpatient forecasting model over OR scheduling and watching accuracy collapse.
Note prompts
+ Which surfaces have the clearest ROI linkage — ED boarding reduction at $9,693-$13,298 per hour, OR cancellation reduction at 15-25%, or avoided ambulance diversion at $50,000+?
+ Which surfaces share enough signal to justify shared feature pipelines vs. dedicated models per surface?
+ Who owns throughput KPIs for each surface today, and are those owners aligned on a single forecasting program?
Required
Confirm which capacity domains the forecasting system must support with near-real-time decisioning.
Select all that apply
Emergency Department (ED) arrivals & boarding
Inpatient med-surg bed demand
ICU / step-down bed demand
Operating Room (OR) case duration & schedule
Post-Anesthesia Care Unit (PACU) / recovery
Labor & Delivery capacity
Observation unit utilization
Discharge readiness / expected departure
Inter-facility transfer coordination
Ambulance diversion decisioning
Define forecast horizon and refresh cadence
Why This Matters
The 4-8 hour horizon is the industry consensus because it is the minimum lead time that makes a proactive response possible: earlier-discharge rounds, float-pool mobilization, OR schedule compression, or opening a flex unit. Sub-hour horizons are useful for tactical handoffs (OR turnover, discharge readiness) but too short to change staffing. Multi-day horizons are necessary for seasonal surge (flu, RSV) but should be a separate model with a different training window, not a longer-horizon output of the same model.
Note prompts
+ What is the shortest lead time our operations team can actually act on — staffing, transfers, ambulance diversion, elective case deferral?
+ Do we need tiered horizons (15 min for discharge, 4-8 hr for bed demand, 7 day for surge)?
+ What refresh cadence do we commit to — sub-15-minute, hourly, or per-shift?
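If tiered horizons are chosen, the tiers can be pinned down as explicit configuration early — a minimal sketch, with the surface names and cadence values as illustrative assumptions drawn from the horizon choices listed here:

```python
# Illustrative tiered-horizon configuration per patient-flow surface.
# Surface names, horizons, and refresh cadences are example values only.
HORIZON_CONFIG = {
    "discharge_readiness":   {"horizon_min": 15,          "refresh_min": 15},       # near-real-time
    "or_turnover":           {"horizon_min": 60,          "refresh_min": 15},
    "bed_demand":            {"horizon_min": 4 * 60,      "refresh_min": 60},       # 4-8 hr consensus band
    "transfer_coordination": {"horizon_min": 8 * 60,      "refresh_min": 60},
    "surge_planning":        {"horizon_min": 7 * 24 * 60, "refresh_min": 24 * 60},  # separate model
}

def actionable_surfaces(min_lead_minutes: int) -> list[str]:
    """Surfaces whose horizon gives operations at least this much lead time."""
    return [s for s, c in HORIZON_CONFIG.items() if c["horizon_min"] >= min_lead_minutes]
```

Writing the tiers down this way forces the "what can operations actually act on" question into review before any model is trained.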
Required
Select the primary forecast horizon — staffing and transfer decisions need lead time; discharge-readiness is near-real-time.
Single choice
15-60 minute horizon (discharge readiness, OR turnover)
1-4 hour horizon (shift staffing, bed assignment)
4-8 hour horizon (bed demand, transfer coordination) — industry standard
8-24 hour horizon (next-shift staffing, elective case triage)
Multi-day horizon (7-14 day surge planning)
Tiered horizons per surface
Define forecast error tolerance and consequence
Why This Matters
Under-forecasting and over-forecasting have wildly different dollar consequences — a missed ED surge forecast that triggers ambulance diversion costs $50,000+ per diversion (ACEP / Advisory Board 2023) and drives permanent patient leakage, while over-staffing a float pool for a surge that does not arrive costs nursing hours at a much smaller unit cost. Symmetric MAPE is rarely the right training objective — asymmetric pinball loss (quantile regression) targeted at the P80 or P90 bed-demand quantile is usually a better match to the operating cost function.
Note prompts
+ Have we quantified the dollar cost of a 1-bed under-forecast vs. a 1-bed over-forecast at each surface?
+ Should we train against asymmetric quantile loss rather than symmetric MAE?
+ Who owns the cost assumptions — finance, operations, or the throughput committee?
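The asymmetric pinball loss described above is a few lines of code — a sketch showing that at the P90 quantile a 10-bed under-forecast is penalized 9x more than a 10-bed over-forecast (NumPy assumed):

```python
import numpy as np

def pinball_loss(y_true, y_pred, q: float) -> float:
    """Asymmetric quantile (pinball) loss. At q=0.9, under-forecasting
    costs 9x more than over-forecasting of the same magnitude."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    diff = y_true - y_pred
    return float(np.mean(np.maximum(q * diff, (q - 1) * diff)))

# 10-bed miss in each direction at the P90 bed-demand quantile:
under = pinball_loss([50], [40], q=0.9)  # predicted too few beds -> 9.0
over  = pinball_loss([50], [60], q=0.9)  # predicted too many beds -> 1.0
```

The 9:1 ratio is the point: the training objective now mirrors an operating reality where a missed surge (diversion, boarding) dominates the cost of idle float-pool hours.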
Required
Specify the acceptable forecast error (MAPE / bed-count MAE) and the cost of over- vs. under-forecasting.
Single choice
Under-forecast penalty >> over-forecast (diversion / boarding cost dominates)
Over-forecast penalty >> under-forecast (staffing cost dominates)
Symmetric — MAPE / MAE-minimizing
Tiered — different asymmetry per surface (ED vs. OR vs. inpatient)
Not currently quantified
Quantify throughput revenue ceiling
Why This Matters
Academic Emergency Medicine / PMC studies place the optimal-capacity-strategy revenue envelope at $2.7M-$3.6M net revenue per hospital per year, and Wharton / AEM research shows each 1-hour reduction in ED boarding time drives $9,693-$13,298 in additional daily revenue. Without a dollar-denominated ceiling, capacity programs tend to get measured on operational KPIs (boarding time, LOS) that do not translate cleanly to the P&L, making it harder to defend investment in on-premises inference or facility-specific model training.
Note prompts
+ What was our measured ED boarding time last quarter, and what dollar uplift does a 1-hour reduction imply here?
+ How many ambulance diversions did we record last year, and what is the $50,000+ per diversion figure worth in aggregate?
+ Is throughput an executive-tracked KPI alongside boarding time and LWBS rate?
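The cited figures annualize into a defensible ceiling with simple arithmetic — a back-of-envelope sketch in which the sustained boarding reduction and the diversion count are hypothetical inputs, not measured values:

```python
# Back-of-envelope annualization of the figures cited above.
BOARDING_UPLIFT_PER_HR_DAILY = (9_693, 13_298)  # $ per 1-hr boarding reduction, per day (Wharton / AEM)
DIVERSION_COST = 50_000                          # $ per avoided ambulance diversion (ACEP / Advisory Board 2023)

hours_reduced = 1.0       # hypothetical: 1-hour boarding reduction sustained all year
diversions_avoided = 20   # hypothetical annual count

low  = BOARDING_UPLIFT_PER_HR_DAILY[0] * hours_reduced * 365 + diversions_avoided * DIVERSION_COST
high = BOARDING_UPLIFT_PER_HR_DAILY[1] * hours_reduced * 365 + diversions_avoided * DIVERSION_COST
print(f"${low:,.0f} - ${high:,.0f} annual throughput uplift")  # ~$4.5M - $5.9M under these assumptions
```

Swapping in the hospital's own boarding and diversion history turns this into the dollar-denominated ceiling the item asks for.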
Recommended
Dollarize the upper bound of improvement the program can credibly capture annually.
Single choice
< $1M annual throughput uplift expected
$1M - $5M annual throughput uplift
$5M - $20M annual throughput uplift
> $20M (multi-hospital system)
Not yet modeled
EHR, Bed-Mgmt & OR Integration Surface
Identify source EHR and bed management systems
Why This Matters
Epic now ships native capacity AI (the Epic Capacity Management module has been generally available since 2023 with documented 15-25% OR cancellation reduction), Oracle Health has its own capacity forecasting stack, and LeanTaaS iQueue is deployed at 57+ health systems per KLAS. Which EHR and which bed-management system are in scope is the single most load-bearing architectural decision, because it determines whether you are writing a tenant that consumes EHR data via FHIR R4, a model that calls out of the EHR via extensions, or a parallel system that the EHR integrates against. The integration path for Epic (FHIR R4 + Bridges) is fundamentally different from Oracle Health (HL7 v2 + native APIs).
Note prompts
+ Are we building alongside Epic Capacity Management / Oracle Health native AI, or replacing it?
+ Do we have FHIR R4 endpoints exposed today, and are they read-only or write-capable?
+ Who owns the HL7 v2 interface engine (Rhapsody / Mirth / Corepoint) and are we authorized to add subscriptions?
Required
Confirm the systems of record the forecasting model must integrate with.
Select all that apply
Epic (Grand Central / Capacity Management / Hyperspace)
Oracle Health / Cerner Millennium
MEDITECH Expanse
Allscripts / Altera Sunrise
TeleTracking (bed management)
Central Logic (transfer center)
Epic OpTime / Cerner SurgiNet (OR schedule)
LeanTaaS iQueue (existing deployment)
In-house bed board
Trinidy: Trinidy integrates directly with Epic, Oracle Health (Cerner), and TeleTracking on-premises via FHIR R4 and HL7 v2 — PHI never leaves the institution's perimeter, and the inference node sits on the same network as the EHR.
Specify FHIR R4 resources in scope
Why This Matters
HL7 FHIR R4 is the current US interoperability baseline, and the US Core Encounter, Location, and Appointment profiles are the load-bearing resources for capacity forecasting. The FHIR Subscription resource (R4 with R5 backport patterns) enables event-driven push rather than polling, which is how you reach sub-15-minute refresh without hammering the EHR. HL7 v2 ADT feeds remain essential for real-time census because they predate and co-exist with FHIR — most health systems run both in parallel and will for years.
Note prompts
+ Does our FHIR server support Subscription push, or are we polling today?
+ Are we conformant to US Core 6.1+ Encounter / Location / Appointment profiles?
+ What is our fallback when FHIR is down — does HL7 v2 ADT keep census current?
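An R4 Subscription with a rest-hook channel is how the event-driven push described above is requested — a sketch of the resource as a Python dict, with the endpoint URL and authorization header as placeholders:

```python
# Illustrative FHIR R4 Subscription resource requesting push delivery of
# in-progress Encounter events. Endpoint and header values are placeholders.
encounter_subscription = {
    "resourceType": "Subscription",
    "status": "requested",
    "reason": "Near-real-time census updates for capacity forecasting",
    "criteria": "Encounter?status=in-progress",  # R4 search-style criteria
    "channel": {
        "type": "rest-hook",                     # server POSTs to our listener
        "endpoint": "https://forecast.example.internal/fhir-events",      # placeholder
        "payload": "application/fhir+json",
        "header": ["Authorization: Bearer <token>"],                      # placeholder credential
    },
}
```

If the server rejects or silently drops Subscriptions, polling the same `Encounter?status=in-progress` search on a timer is the fallback — at the cost of refresh latency and EHR load.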
Required
Select the FHIR R4 resources the forecasting model will consume.
Select all that apply
Encounter (US Core — inpatient / ED / observation)
Location (bed / unit / facility)
Appointment (OR schedule)
Patient (demographics, MRN)
Condition (diagnosis / problem list)
Procedure (completed / scheduled)
Observation (vitals, labs, acuity scores)
ServiceRequest (orders)
MedicationAdministration
DiagnosticReport
Subscription (event-driven FHIR push)
Specify HL7 v2 message types consumed
Required
Select the HL7 v2 messages the forecasting model subscribes to.
Select all that apply
ADT A01 (admit)
ADT A02 (transfer)
ADT A03 (discharge)
ADT A04 (register outpatient / ED)
ADT A06 / A07 (observation status change)
ADT A08 (update patient)
ADT A11 / A13 (cancel admit / discharge)
SIU (OR schedule messages)
ORU (results — acuity, labs)
MDM (discharge summary)
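Routing these messages by trigger event is mechanical — a minimal sketch that reads MSH-9 with plain string splitting to keep a live census counter current; a real deployment would sit behind the interface engine and use a proper HL7 library rather than this:

```python
# Minimal sketch: route raw HL7 v2 messages by trigger event to census deltas.
# Admits/registrations increment census; discharges and cancelled admits decrement.
CENSUS_DELTA = {"A01": +1, "A04": +1, "A03": -1, "A11": -1}

def trigger_event(raw_msg: str) -> str:
    """Extract the trigger event (e.g. 'A01') from MSH-9 of a v2 message."""
    msh_fields = raw_msg.split("\r")[0].split("|")  # segments are CR-delimited
    msg_type = msh_fields[8]                         # MSH-9, e.g. 'ADT^A01'
    return msg_type.split("^")[1]

sample = "MSH|^~\\&|EPIC|HOSP|FORECAST|NODE|202406011200||ADT^A01|MSG0001|P|2.5.1\rPID|1||12345"
census = 0
census += CENSUS_DELTA.get(trigger_event(sample), 0)  # A01 admit -> census is now 1
```

Note the asymmetry in the cancel pair: A11 (cancel admit) reverses an admit, while A13 (cancel discharge) would put the patient back in the census.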
Confirm PHI residency and HIPAA posture
Why This Matters
Patient flow data — census, bed assignments, discharge timing, acuity — is PHI under 45 CFR 160/164 and therefore subject to the full HIPAA Privacy and Security Rule. Cloud deployment is permitted under a Business Associate Agreement, but the BAA does not remove the institution's accountability for breach, access logging, or minimum-necessary. Leading health systems are increasingly deploying capacity AI on-premises specifically because cloud-dependent models degrade during peak load — exactly the moment when forecasting accuracy is most valuable — and because surge events often coincide with regional network degradation.
Note prompts
+ Does our cloud vendor BAA cover every data flow the forecasting model uses, including training, inference, and audit?
+ Have we stress-tested our cloud dependency — what happens to forecasting if the internet link is degraded during a surge event?
+ Can we keep inference on-premises while using cloud for non-PHI training or research workloads?
Required
Map PHI handling to HIPAA and state residency constraints before architecture is finalized.
Single choice
On-premises only — no PHI to cloud (fully HIPAA-sovereign)
Private cloud / VPC with executed BAA
Public cloud with BAA and de-identified inference
Hybrid — training in cloud, inference on-premises
Not yet decided
Trinidy: Patient movement data is PHI under HIPAA (45 CFR 160/164). Trinidy keeps census, acuity, and discharge-readiness inference on-premises — no PHI crosses the institution's network boundary for any forecasting decision, and the audit trail lives on the same node.
Specify deployment topology
Required
Select the physical/logical deployment target for the forecasting inference plane.
Single choice
On-premises in existing hospital data center
Co-located edge (facility-local inference node)
Private cloud / VPC in-region with BAA
Public cloud with BAA (AWS HealthLake / Azure Health Data / GCP Healthcare API)
Hybrid: on-prem inference + cloud training
Embedded inside EHR vendor (Epic / Oracle Health native)
Trinidy: For HIPAA-sovereign deployment with surge resilience, on-premises inference is the only architecture that removes cloud dependency from the critical path. Trinidy runs the full forecasting ensemble on-node inside the existing data center — same fabric as the EHR.
Stakeholders & Workflow Integration
Identify clinical consumers of forecast output
Required
Select the roles whose decisions will be driven by forecast output.
Select all that apply
Charge nurse / shift supervisor
Bed coordinator / transfer center
ED medical director
OR manager / surgical scheduler
House supervisor / nursing operations
Hospitalist / attending physician
Case management / discharge planner
Chief Nursing Officer / throughput committee
Define alert delivery surface
Why This Matters
Charge nurses and bed coordinators do not switch systems — if the forecast is not in the workflow they already use, it is not used. Epic Hyperspace and Oracle Health PowerChart support embedded apps via SMART-on-FHIR and CDS Hooks, which is the dominant pattern for non-Epic-native AI (LeanTaaS, Qventus, Care.ai all embed this way). Alert fatigue is the dominant failure mode — a forecast that triggers too often becomes invisible within two weeks.
Note prompts
+ Do we embed via SMART-on-FHIR / CDS Hooks, or is a separate dashboard acceptable?
+ What is our threshold for escalation alerts, and have we measured alert rate in shadow mode before go-live?
+ Who has the authority to retune alert thresholds when clinicians report fatigue?
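A CDS Hooks response that stays silent below an escalation threshold is one way to contain alert fatigue — a sketch in which the service name, threshold, and card wording are illustrative assumptions, not a shipped design:

```python
# Illustrative CDS Hooks response: return an advisory card only when forecast
# occupancy crosses the escalation threshold, so routine forecasts stay silent.
def capacity_card(predicted_beds: int, staffed_beds: int, threshold: float = 0.9) -> dict:
    """Build a CDS Hooks card payload for a forecast occupancy escalation."""
    occupancy = predicted_beds / staffed_beds
    if occupancy < threshold:
        return {"cards": []}  # no card: below the escalation threshold
    return {
        "cards": [{
            "summary": f"Forecast occupancy {occupancy:.0%} of staffed beds in 4-8 hr window",
            "indicator": "warning" if occupancy < 1.0 else "critical",
            "source": {"label": "Capacity forecasting service"},  # hypothetical service name
        }]
    }
```

Measuring how often this fires in shadow mode, then retuning `threshold` before go-live, is the concrete version of the alert-rate prompt above.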
Required
Specify where forecast output and alerts appear in clinical workflow.
Select all that apply
Embedded in Epic Hyperspace / Oracle Health PowerChart
Nurse station dashboards
Mobile devices (Rover / Vocera / MDM-managed)
Bed-board wall displays (TeleTracking / native)
Email / paging for escalation thresholds
SMS / secure messaging
Dedicated throughput command center display
Define human-in-the-loop authority boundary
Why This Matters
Patient flow forecasting is not diagnostic AI — it does not fall under FDA SaMD in most deployments — but it does drive decisions that materially affect care (transfer, diversion, discharge prioritization). The human-in-the-loop boundary is a governance decision, not an engineering decision, and it must be explicit before deployment. Epic and Oracle Health default to advisory output for exactly this reason, and the ONC HTI-2 rule (published 2024, phased compliance through 2026) adds transparency obligations for embedded vendor AI that make the boundary question part of the procurement conversation.
Note prompts — click to add
+ Have clinical leadership, risk, and legal signed off on the authority boundary?+ Is there a documented override path and is it audited?+ Does our deployment fall under ONC HTI-2 embedded-vendor AI transparency obligations?
Required
Specify whether forecast output is advisory, advisory-with-default, or autonomous.
Single choice
Advisory only — clinician must confirm every action
Advisory with default — default action unless overridden
Autonomous for tactical decisions (bed assignment, OR turnover)
Autonomous with human override for operational decisions
Tiered by decision type