Phase 1 of 6
Scoping & PHI Residency
Define the clinical surface, encounter latency window, and PHI residency perimeter that govern every subsequent architectural decision for ambient scribing.
Clinical Surface & Encounter Scope
Identify clinical settings in scope for ambient documentation
Why This Matters
Documentation conventions, note templates, and acceptance thresholds vary sharply by setting. The published DAX Copilot and Abridge outcomes (e.g. Providence's 51.7% documentation-time reduction; Northwestern's 24% note-time reduction and 11.3% more patients per day) are concentrated in ambulatory primary and specialty care; ED and inpatient rounding have materially different workflow constraints and have not yet demonstrated the same acceptance rates. A single generic model across all settings underperforms on every one of them.
Note prompts
+ Which setting has the highest documentation burden today and the clearest acceptance criteria for a pilot?
+ Do we have specialty-specific note templates per setting, or a generic SOAP template we expect the model to conform to?
+ Have we validated that our chosen settings match the settings where DAX / Abridge / Suki published outcomes?
Confirm which care settings the ambient scribe will cover.
Select all that apply
Define end-to-end encounter latency SLA
Why This Matters
Clinician acceptance drops sharply when the draft note is not available before the next patient arrives — the documentation debt simply moves to end-of-day charting instead of being eliminated. Nuance DAX Copilot clinical studies show the 76% after-hours charting reduction depends on draft notes being ready during room turnover, not hours later. A 60-second post-encounter target is the industry-validated standard and should be treated as a hard SLA, not an aspiration.
Note prompts
+ What is our measured end-to-end latency today from stop-recording to draft-ready, and where are the hot spots?
+ Does our clinician workflow assume draft availability before the next patient, or is end-of-day acceptable?
+ Have we stress-tested concurrent encounter load at department peak, not just average?
Select the latency budget from end-of-encounter to draft note available for clinician review.
Single choice
Trinidy — Cloud-routed ASR and LLM inference adds 200-800ms per round-trip on top of model compute. Trinidy runs Whisper-class ASR and the note-generation LLM on the same on-prem GPU fabric — draft note ready inside the encounter window with no cross-facility egress.
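The 60-second SLA above only holds if the stage-by-stage budget is written down and checked against measured numbers. A minimal sketch of that check; the stage names and millisecond figures are illustrative assumptions, not vendor benchmarks:

```python
# Sketch: checking a measured pipeline against a 60-second draft-note SLA.
# Stage names and millisecond figures below are illustrative placeholders,
# to be replaced with telemetry from the real pipeline.

SLA_MS = 60_000  # hard target: draft ready within 60 s of end-of-encounter

stages_ms = {
    "audio_upload_or_flush": 1_500,
    "asr_final_pass": 18_000,
    "note_generation_llm": 30_000,
    "ehr_deposit": 2_000,
    "network_round_trips": 800,  # per-hop cost that cloud routing adds
}

def latency_report(stages: dict[str, int], sla_ms: int) -> tuple[int, int]:
    """Return (total_ms, headroom_ms); negative headroom means SLA breach."""
    total = sum(stages.values())
    return total, sla_ms - total

total, headroom = latency_report(stages_ms, SLA_MS)
print(f"end-to-end: {total} ms, headroom: {headroom} ms")
```

The useful output is the headroom figure: stress tests at department-peak concurrency should show it staying positive, not just the single-encounter average.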
Define PHI residency perimeter
Why This Matters
HHS/OCR guidance and the Security Rule modernization targeted for May 2026 reclassify AI systems ingesting PHI audio as high-priority audit targets, and every cloud vendor in the pipeline becomes a Business Associate requiring a BAA. Institutions that explicitly prohibit cloud transcription are not outliers — many academic medical centers and federally-funded systems have policy language ruling it out. Getting residency wrong at scoping forces a full re-architecture after the first OCR inquiry.
Note prompts
+ Have we written down the exact residency boundary for audio, transcript, and draft note before selecting vendors?
+ Which of our existing cloud vendors have BAAs that explicitly cover PHI audio ingestion, not just text?
+ If OCR audited our ambient documentation pipeline tomorrow, could we produce the BAA chain for every hop?
Specify where encounter audio, transcripts, and generated notes may physically reside.
Select all that apply
Trinidy — Under HIPAA §164.308 and the HHS/OCR Security Rule modernization effort, encounter audio is the most sensitive PHI category. Trinidy keeps audio capture, ASR, LLM inference, and audit logging inside the facility perimeter — no cross-boundary data flow, no vendor BAA chain to manage.
Quantify documentation burden and target reduction
Why This Matters
Published enterprise outcomes are tightly clustered: Northwestern Medicine 24% note-time reduction, Intermountain 27% time-in-notes reduction, Providence Health 51.7% documentation time reduction and 30.3% burnout decrease. The 50-76% upper bound comes from Nuance DAX Copilot 2023 data and is reproducible under enterprise deployment with specialty-specific templates. Teams that target below 25% typically lose executive sponsorship before acceptance takes hold; teams that assume 76% without specialty fine-tuning miss by half.
Note prompts
+ Have we instrumented EHR telemetry to measure time-in-notes before pilot start, not after?
+ Who owns the P&L line for physician retention driven by documentation burden reduction ($500K-$1M per physician replacement cost)?
+ Are we measuring after-hours charting ("pajama time") separately, since it is the strongest burnout correlate?
Baseline current documentation time and define the measurable reduction target.
Single choice
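The published reduction figures above translate into a per-clinician target with simple arithmetic. A sketch, assuming a hypothetical 120-minute daily baseline (the real baseline should come from EHR telemetry captured before pilot start):

```python
# Sketch: turning a published reduction percentage into a measurable
# per-clinician time-in-notes target. The 120-minute baseline is an
# illustrative assumption, not a measured figure.

def target_time_in_notes(baseline_min_per_day: float, reduction_pct: float) -> float:
    """Projected daily time-in-notes after the stated reduction."""
    return baseline_min_per_day * (1 - reduction_pct / 100)

baseline = 120.0  # minutes/day, measured via EHR telemetry before pilot
for pct in (24, 27, 51.7):  # Northwestern, Intermountain, Providence figures
    print(f"{pct}% reduction -> {target_time_in_notes(baseline, pct):.1f} min/day")
```

Running the same projection against after-hours charting separately keeps the strongest burnout correlate visible rather than buried in the daily total.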
Map CMS AI-generated documentation accountability
Why This Matters
CMS finalized rules in 2025 hold providers accountable for the accuracy of AI-generated notes — the AI vendor is not the signatory, the clinician is. ONC HTI-1 and HTI-2 Decision Support Intervention oversight (active enforcement 2024-2025) extend this to AI-generated patient communications and require source, logic, and intended use transparency on qualifying DSIs. A workflow that commits AI-drafted notes without clinician review is both a regulatory and malpractice exposure.
Note prompts
+ Does our workflow prevent any AI-drafted note from reaching the legal record without clinician review?
+ Is the AI-drafted label visible to every downstream reader of the note (billing, quality, legal)?
+ Have we briefed the medical staff committee on the HTI-1 / HTI-2 transparency obligations?
Confirm the clinician accountability model and sign-off workflow for AI-drafted notes.
Select all that apply
Confirm deployment topology for the inference plane
Select the physical/logical deployment target for ASR and note generation.
Single choice
Trinidy — For PHI-sovereign ambient scribing with sub-60s turnaround at department concurrency, cloud inference is physically and regulatorily fragile. Trinidy is the on-premises inference substrate — GPU on the same fabric as EHR integration, audit log, and specialty fine-tuning.
Define concurrent-encounter capacity target
Specify peak concurrent encounter count the inference fabric must sustain.
Single choice
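A peak-concurrency target only becomes actionable once translated into inference capacity. A minimal sizing sketch; the 40-encounter peak and the six-streams-per-GPU throughput are placeholder assumptions to be replaced with figures measured on the actual ASR and LLM stack:

```python
# Sketch: sizing GPU inference capacity for peak concurrent encounters.
# Throughput per GPU is a placeholder assumption; measure it on the real
# ASR + note-generation stack before committing to a fabric size.

import math

def gpus_needed(peak_concurrent_encounters: int,
                encounters_per_gpu: int) -> int:
    """Minimum GPUs to sustain peak load, rounded up."""
    return math.ceil(peak_concurrent_encounters / encounters_per_gpu)

# e.g. 40 simultaneous encounters at department peak, with a measured
# capacity of 6 concurrent ASR+LLM streams per GPU:
print(gpus_needed(40, 6))  # -> 7
```

Sizing from the department peak rather than the daily average is what keeps the latency SLA intact when clinic schedules bunch up.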
Identify EHR and integration surface
Why This Matters
Abridge is now natively embedded in Epic and DAX Copilot integrates deeply with the Epic EHR as of early 2026. For any ambient scribe to actually reduce documentation burden, the draft must land inside the encounter in the EHR — copy-paste workflows destroy the time savings. HL7 FHIR R4 with US Core DocumentReference and Composition resources is the current integration contract; HL7 v2 alone is insufficient for structured note deposit with provenance.
Note prompts
+ Is our target EHR on a FHIR R4 release that supports DocumentReference with provenance extensions?
+ Have we confirmed with the EHR vendor whether SMART on FHIR embedded apps can deposit notes into the encounter without copy-paste?
+ If we cannot integrate natively, what is our realistic ceiling on clinician acceptance?
Confirm the EHR system and integration pattern for note deposit.
Select all that apply
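A structured note deposit under the FHIR R4 contract described above centers on a DocumentReference resource. A minimal sketch of such a payload; the resource IDs are hypothetical, `docStatus: "preliminary"` marks the draft as awaiting clinician sign-off, and the actual deposit endpoint and provenance extensions vary by EHR vendor:

```python
# Sketch: a minimal FHIR R4 DocumentReference payload for depositing an
# AI-drafted note into a specific encounter. Patient/encounter IDs and the
# note text are placeholder values for illustration.

import base64
import json

def draft_note_document_reference(patient_id: str, encounter_id: str,
                                  note_text: str) -> dict:
    return {
        "resourceType": "DocumentReference",
        "status": "current",
        "docStatus": "preliminary",  # AI draft: not yet clinician-signed
        "type": {"coding": [{"system": "http://loinc.org",
                             "code": "11506-3",  # LOINC: Progress note
                             "display": "Progress note"}]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "context": {"encounter": [{"reference": f"Encounter/{encounter_id}"}]},
        "content": [{"attachment": {
            "contentType": "text/plain",
            "data": base64.b64encode(note_text.encode()).decode(),
        }}],
    }

payload = draft_note_document_reference("pat-123", "enc-456", "Draft SOAP note")
print(json.dumps(payload, indent=2)[:120])
```

The `docStatus` field is what lets every downstream reader distinguish an unsigned AI draft from a clinician-finalized note, which is the accountability hinge discussed in the CMS section above.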