Phase 1 of 6
Scoping & PHI Residency
Define the clinical surface, encounter latency window, and PHI residency perimeter that govern every subsequent architectural decision for ambient scribing.
Clinical Surface & Encounter Scope
Identify clinical settings in scope for ambient documentation
Why This Matters
Documentation conventions, note templates, and acceptance thresholds vary sharply by setting. Published outcomes from DAX Copilot and Abridge (e.g. Providence's 51.7% documentation-time reduction, Northwestern's 24% note-time reduction plus 11.3% more patients per day) are concentrated in ambulatory primary and specialty care — ED and inpatient rounding have materially different workflow constraints and have not yet demonstrated the same acceptance rates. A single generic model across all settings underperforms on every one of them.
Note prompts
+ Which setting has the highest documentation burden today and the clearest acceptance criteria for a pilot?
+ Do we have specialty-specific note templates per setting, or a generic SOAP template we expect the model to conform to?
+ Have we validated that our chosen settings match the settings where DAX / Abridge / Suki published outcomes?
Required
Confirm which care settings the ambient scribe will cover.
Select all that apply
Primary care / internal medicine ambulatory
Specialty ambulatory (cardiology, ortho, derm, etc.)
Behavioral health / psychiatry
Emergency department
Inpatient rounding / progress notes
Telehealth encounters (video + audio)
Procedural / surgical notes
Urgent care / retail clinic
Define end-to-end encounter latency SLA
Why This Matters
Clinician acceptance drops sharply when the draft note is not available before the next patient arrives — the documentation debt simply moves to end-of-day charting instead of being eliminated. Nuance DAX Copilot clinical studies show the 76% after-hours charting reduction depends on draft notes being ready during room turnover, not hours later. A 60-second post-encounter target is the industry-validated standard and should be treated as a hard SLA, not an aspiration.
Note prompts
+ What is our measured end-to-end latency today from stop-recording to draft-ready, and where are the hot spots?
+ Does our clinician workflow assume draft availability before the next patient, or is end-of-day acceptable?
+ Have we stress-tested concurrent encounter load at department peak, not just average?
Required
Select the latency budget from end-of-encounter to draft note available for clinician review.
Single choice
Streaming — draft visible during encounter (<2s rolling)
< 30s post-encounter (fast turnaround)
< 60s post-encounter (standard ambient scribe target)
< 5 min post-encounter (acceptable batch)
Asynchronous (clinician reviews later in session)
Trinidy: Cloud-routed ASR and LLM inference adds 200-800ms per round-trip on top of model compute. Trinidy runs Whisper-class ASR and the note-generation LLM on the same on-prem GPU fabric — draft note ready inside the encounter window with no cross-facility egress.
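A quick way to sanity-check the latency SLA before vendor selection is to sum the pipeline stages and the network round-trips between them. The sketch below is illustrative only; the stage timings, round-trip count, and the 500 ms midpoint are assumptions, not measurements, and should be replaced with telemetry from the actual pipeline.

```python
# Illustrative latency-budget arithmetic for the post-encounter draft SLA.
# All stage timings below are assumptions for the sketch, not vendor benchmarks.

ROUND_TRIP_S = 0.5  # assumed 200-800 ms per cloud round-trip; midpoint used here

def draft_latency_s(asr_s: float, llm_s: float, ehr_deposit_s: float,
                    cloud_round_trips: int = 0) -> float:
    """Sum pipeline stages plus network round-trips to estimate end-to-end latency."""
    return asr_s + llm_s + ehr_deposit_s + cloud_round_trips * ROUND_TRIP_S

# On-prem: ASR and LLM on the same fabric, no cross-facility hops.
on_prem = draft_latency_s(asr_s=8.0, llm_s=20.0, ehr_deposit_s=2.0, cloud_round_trips=0)

# Cloud-routed: same compute assumptions plus several request/response hops.
cloud = draft_latency_s(asr_s=8.0, llm_s=20.0, ehr_deposit_s=2.0, cloud_round_trips=6)

for label, total in [("on-prem", on_prem), ("cloud-routed", cloud)]:
    print(f"{label}: {total:.1f}s against a 60s SLA -> {'OK' if total < 60 else 'MISS'}")
```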
Define PHI residency perimeter
Why This Matters
HHS/OCR guidance and the Security Rule modernization targeted for May 2026 reclassify AI systems ingesting PHI audio as high-priority audit targets, and every cloud vendor in the pipeline becomes a Business Associate requiring a BAA. Institutions that explicitly prohibit cloud transcription are not outliers — many academic medical centers and federally funded systems have policy language ruling it out. Getting residency wrong at scoping forces a full re-architecture after the first OCR inquiry.
Note prompts
+ Have we written down the exact residency boundary for audio, transcript, and draft note before selecting vendors?
+ Which of our existing cloud vendors have BAAs that explicitly cover PHI audio ingestion, not just text?
+ If OCR audited our ambient documentation pipeline tomorrow, could we produce the BAA chain for every hop?
Required
Specify where encounter audio, transcripts, and generated notes may physically reside.
Select all that apply
Audio must never leave facility network
Transcripts must remain on-premises
Generated notes must remain on-premises until committed to EHR
PHI may traverse BAA-covered vendor cloud
EU GDPR — EEA residency required
State-specific residency (e.g. New York, California, Texas)
Cross-border clinician review permitted under controls
Trinidy: Under HIPAA §164.308 and the HHS/OCR Security Rule modernization effort, encounter audio is the most sensitive PHI category. Trinidy keeps audio capture, ASR, LLM inference, and audit logging inside the facility perimeter — no cross-boundary data flow, no vendor BAA chain to manage.
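One way to make the residency perimeter reviewable before any vendor conversation is to write it down as data: each artifact class mapped to the zones it may occupy. The sketch below is a hypothetical illustration; the zone names and artifact classes are assumptions, not a standard vocabulary.

```python
# Hypothetical residency perimeter expressed as data so it can be versioned,
# reviewed by compliance, and checked against every proposed data flow.
# Zone names and artifact classes are illustrative assumptions.

RESIDENCY_PERIMETER = {
    "encounter_audio": {"facility_network"},
    "transcript":      {"facility_network", "hospital_private_dc"},
    "draft_note":      {"facility_network", "hospital_private_dc"},
    "committed_note":  {"facility_network", "hospital_private_dc", "ehr_vendor_baa"},
}

def flow_allowed(artifact: str, destination_zone: str) -> bool:
    """Return True only if the artifact class may reside in the destination zone."""
    return destination_zone in RESIDENCY_PERIMETER.get(artifact, set())

# Example: a proposed hop of raw audio to a BAA-covered vendor cloud is rejected.
assert not flow_allowed("encounter_audio", "baa_vendor_cloud")
assert flow_allowed("draft_note", "hospital_private_dc")
```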
Quantify documentation burden and target reduction
Why This Matters
Published enterprise outcomes are tightly clustered: Northwestern Medicine 24% note-time reduction, Intermountain 27% time-in-notes reduction, Providence Health 51.7% documentation time reduction and 30.3% burnout decrease. The 50-76% upper bound comes from Nuance DAX Copilot 2023 data and is reproducible under enterprise deployment with specialty-specific templates. Teams that target below 25% typically lose executive sponsorship before acceptance takes hold; teams that assume 76% without specialty fine-tuning miss by half.
Note prompts
+ Have we instrumented EHR telemetry to measure time-in-notes before pilot start, not after?
+ Who owns the P&L line for physician retention driven by documentation burden reduction ($500K-$1M per physician replacement cost)?
+ Are we measuring after-hours charting ("pajama time") separately, since it is the strongest burnout correlate?
Required
Baseline current documentation time and define the measurable reduction target.
Single choice
< 25% reduction (conservative early pilot)
25% - 50% (matches Northwestern / Intermountain enterprise results)
50% - 76% (matches DAX Copilot / Providence published outcomes)
Not yet baselined
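Because the target bands above are percentage reductions against a measured baseline, it is worth fixing the arithmetic before the pilot starts. The helper below is a minimal sketch, assuming EHR telemetry can report per-clinician time-in-notes and after-hours charting minutes per week; the figures shown are hypothetical, not published outcomes.

```python
# Minimal sketch of the reduction metric, assuming EHR telemetry exposes
# per-clinician time-in-notes and after-hours charting minutes per week.

def pct_reduction(baseline_min: float, pilot_min: float) -> float:
    """Percentage reduction relative to the pre-pilot baseline."""
    return 100.0 * (baseline_min - pilot_min) / baseline_min

# Hypothetical weekly figures for one clinician (illustrative only).
time_in_notes = pct_reduction(baseline_min=540, pilot_min=380)  # ~29.6%
pajama_time   = pct_reduction(baseline_min=180, pilot_min=70)   # ~61.1%

print(f"time-in-notes reduction: {time_in_notes:.1f}%")
print(f"after-hours charting reduction: {pajama_time:.1f}%")
```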
Map CMS AI-generated documentation accountability
Why This Matters
CMS finalized rules in 2025 hold providers accountable for the accuracy of AI-generated notes — the AI vendor is not the signatory, the clinician is. ONC HTI-1 and HTI-2 Decision Support Intervention oversight (active enforcement 2024-2025) extends this to AI-generated patient communications and requires source, logic, and intended-use transparency for qualifying DSIs. A workflow that commits AI-drafted notes without clinician review is both a regulatory and a malpractice exposure.
Note prompts
+ Does our workflow prevent any AI-drafted note from reaching the legal record without clinician review?
+ Is the AI-drafted label visible to every downstream reader of the note (billing, quality, legal)?
+ Have we briefed the medical staff committee on the HTI-1 / HTI-2 transparency obligations?
Required
Confirm the clinician accountability model and sign-off workflow for AI-drafted notes.
Select all that apply
Clinician must review and sign every AI-drafted note before EHR commit
Clinician attestation that content reflects encounter
Provenance metadata on every note (model version, draft timestamp)
Explicit labeling of AI-drafted content in EHR
Patient disclosure that ambient AI was used during encounter
Specialty board / medical staff committee sign-off on model use
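The accountability model above reduces to a gate: no AI-drafted note reaches the legal record without clinician sign-off, attestation, labeling, and provenance. The sketch below illustrates such a gate with hypothetical field names; it is not any EHR's API, only the shape of the checks a commit path should enforce.

```python
# Illustrative commit gate for AI-drafted notes. Field names are hypothetical;
# the point is that the checks run before anything reaches the legal record.

from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftNote:
    text: str
    model_version: str                 # provenance: which model produced the draft
    draft_timestamp: str               # provenance: when the draft was generated
    ai_drafted_label: bool             # explicit labeling visible downstream
    clinician_signature: Optional[str] = None
    attestation: bool = False          # clinician attests content reflects the encounter

def may_commit_to_ehr(note: DraftNote) -> bool:
    """Block EHR commit unless sign-off, attestation, label, and provenance are present."""
    return (
        note.clinician_signature is not None
        and note.attestation
        and note.ai_drafted_label
        and bool(note.model_version)
        and bool(note.draft_timestamp)
    )
```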
Confirm deployment topology for the inference plane
Required
Select the physical/logical deployment target for ASR and note generation.
Single choice
On-premises facility GPU (per-site or regional)
Hospital-system private data center
Private cloud / VPC in-region under BAA
Public cloud with BAA-covered PHI endpoint
Hybrid: on-prem ASR + BAA cloud LLM
Hybrid: on-prem LLM + BAA cloud ASR
Trinidy: For PHI-sovereign ambient scribing with sub-60s turnaround at department concurrency, cloud inference is physically and regulatorily fragile. Trinidy is the on-premises inference substrate — GPU on the same fabric as EHR integration, audit log, and specialty fine-tuning.
Define concurrent-encounter capacity target
Required
Specify peak concurrent encounter count the inference fabric must sustain.
Single choice
< 25 concurrent encounters (single clinic / pilot)
25 - 100 (department scale)
100 - 500 (hospital / facility scale)
500 - 2,500 (multi-facility health system — Intermountain scale)
> 2,500 (enterprise — DAX Copilot 600K+ clinician scale)
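The concurrency target translates into GPU sizing: peak concurrent encounters times per-encounter GPU-seconds, spread over the encounter turnover interval, plus headroom for bursts. The numbers in the sketch below are placeholders to show the arithmetic, not benchmarks for any particular model or hardware.

```python
# Back-of-envelope capacity sizing. Every constant here is an assumption;
# replace with measured per-encounter ASR/LLM GPU-seconds for your models.
import math

def gpus_needed(peak_concurrent: int, gpu_s_per_encounter: float,
                encounter_interval_s: float, headroom: float = 1.3) -> int:
    """GPUs required to keep up with peak concurrency, with headroom for bursts."""
    demand = peak_concurrent * gpu_s_per_encounter / encounter_interval_s
    return math.ceil(demand * headroom)

# Hypothetical department-scale example: 100 concurrent encounters, ~30 GPU-seconds
# of ASR plus note generation per encounter, encounters turning over every 15 minutes.
print(gpus_needed(peak_concurrent=100, gpu_s_per_encounter=30, encounter_interval_s=900))
```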
Identify EHR and integration surface
Why This Matters
Abridge is now natively embedded in Epic and DAX Copilot integrates deeply with the Epic EHR as of early 2026. For any ambient scribe to actually reduce documentation burden, the draft must land inside the encounter in the EHR — copy-paste workflows destroy the time savings. HL7 FHIR R4 with US Core DocumentReference and Composition resources is the current integration contract; HL7 v2 alone is insufficient for structured note deposit with provenance.
Note prompts
+ Is our target EHR on a FHIR R4 release that supports DocumentReference with provenance extensions?
+ Have we confirmed with the EHR vendor whether SMART on FHIR embedded apps can deposit notes into the encounter without copy-paste?
+ If we cannot integrate natively, what is our realistic ceiling on clinician acceptance?
Required
Confirm the EHR system and integration pattern for note deposit.
Select all that apply
Epic — SMART on FHIR / embedded app
Oracle Health (Cerner) — SMART on FHIR
Other FHIR R4-capable EHR
HL7 v2 interface only
Native EHR partnership (Abridge-in-Epic pattern)
Custom EHR / no FHIR endpoint
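On the FHIR R4 path, the draft note typically lands as a DocumentReference bound to the encounter, with provenance carried alongside. The sketch below builds a minimal resource and posts it; the base URL, token, and provenance extension URL are hypothetical placeholders, and a production integration should follow the target EHR's US Core profile rather than this illustration.

```python
# Minimal sketch of depositing a draft note as a FHIR R4 DocumentReference.
# The base URL, bearer token, and extension URL are hypothetical placeholders.
import base64
import requests

FHIR_BASE = "https://ehr.example.org/fhir/r4"   # placeholder endpoint
TOKEN = "REPLACE_WITH_SMART_ON_FHIR_TOKEN"       # placeholder credential

def deposit_draft_note(note_text: str, patient_id: str, encounter_id: str,
                       model_version: str) -> requests.Response:
    """POST a draft note as a DocumentReference bound to the encounter."""
    resource = {
        "resourceType": "DocumentReference",
        "status": "current",
        "docStatus": "preliminary",  # draft, pending clinician sign-off
        "type": {"coding": [{"system": "http://loinc.org", "code": "11506-3",
                             "display": "Progress note"}]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "context": {"encounter": [{"reference": f"Encounter/{encounter_id}"}]},
        "content": [{"attachment": {
            "contentType": "text/plain",
            "data": base64.b64encode(note_text.encode()).decode(),
        }}],
        # Hypothetical extension carrying draft provenance (model version).
        "extension": [{"url": "https://example.org/fhir/ext/ai-draft-model",
                       "valueString": model_version}],
    }
    return requests.post(f"{FHIR_BASE}/DocumentReference", json=resource,
                         headers={"Authorization": f"Bearer {TOKEN}"})
```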