Phase 1 of 6
Scoping & OR Integration
Define the surgical modalities in scope, the end-to-end latency budget from camera to overlay, and the OR integration constraints that govern every downstream architectural decision.
Surgical Modality & Clinical Scope
Identify surgical modalities in scope for intraoperative AI
Why This Matters
Different surgical modalities have fundamentally different latency budgets, anatomy class distributions, and regulatory pathways — a model tuned for the lap chole critical view of safety will not transfer to robotic microsurgery without retraining. Lap chole alone represents ~750,000 US cases per year with a 0.3–0.7% bile duct injury incidence, so scoping to this procedure first maximizes near-term clinical impact. Robotic and microsurgical modalities demand sub-50ms frame latency, a constraint that reshapes the entire inference architecture decision.
Note prompts
+ Which procedure type gives us the largest measurable clinical impact in the first 12 months?
+ Do robotic and laparoscopic cases share enough visual feature overlap to justify a common backbone?
+ Have we scoped which modalities we intend to support in the initial 510(k) submission versus later supplements?
Confirm which procedure types your decision support model will run against in the OR.
Select all that apply
Define end-to-end frame latency budget (camera to overlay)
Why This Matters
25 fps laparoscopic video leaves under 40ms per frame to avoid visible overlay lag, and published benchmarks show current on-device surgical AI achieving only 11–27 fps — meaning most fielded systems drop frames or display at rates below the video source. Any cloud round-trip adds 50–500ms, longer than the instrument-motion window between useful decision points. A latency budget set correctly at the start has roughly 10x the architectural leverage of latency optimized after deployment.
Note prompts
+ What is our measured p99 frame latency end-to-end including frame grabber, preprocessing, inference, and overlay render?
+ At what frame rate does the overlay visibly lag behind the surgeon's instrument tip, and have we measured it with surgeons in the loop?
+ Have we stress-tested under multi-camera / multi-stream load at 2x expected concurrent OR count?
Select the P99 latency target your inference pipeline must hold frame-to-frame.
Single choice
Trinidy — NEXUS OS deploys inference GPU directly in the OR stack — latency is physical proximity, not cloud SLA. Sub-40ms per-frame inference at 25 fps is achievable on an OR-local H100 or L40S with TensorRT, while cloud round-trip alone adds 50–500ms and is architecturally incompatible with surgical AI.
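The p99 figure asked about above has to be measured on the shipping pipeline, not estimated. A minimal measurement harness sketch, assuming the pipeline stages (frame grab, preprocessing, inference, overlay render) are exposed as callables — the stage names and interfaces here are illustrative, not a real API:

```python
import time

import numpy as np

def measure_p99_latency(pipeline_stages, frame_source, n_frames=1000):
    """Time each pipeline stage per frame and report p50/p99 in milliseconds.

    pipeline_stages: ordered {stage_name: callable(frame) -> frame}, e.g.
    preprocessing, inference, overlay render (hypothetical placeholders).
    frame_source: iterable yielding raw frames from the grabber.
    """
    totals = []
    per_stage = {name: [] for name in pipeline_stages}
    for i, frame in enumerate(frame_source):
        if i >= n_frames:
            break
        t_start = time.perf_counter()
        for name, stage in pipeline_stages.items():
            t0 = time.perf_counter()
            frame = stage(frame)  # output of one stage feeds the next
            per_stage[name].append((time.perf_counter() - t0) * 1000.0)
        totals.append((time.perf_counter() - t_start) * 1000.0)
    report = {f"{name}_p99_ms": float(np.percentile(v, 99))
              for name, v in per_stage.items()}
    report["end_to_end_p50_ms"] = float(np.percentile(totals, 50))
    report["end_to_end_p99_ms"] = float(np.percentile(totals, 99))
    return report
```

Reporting p99 rather than the mean matters here: a pipeline averaging 30ms but spiking to 90ms on every tenth frame still produces visible overlay lag.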
Select OR video integration path
Why This Matters
Direct HDMI or SDI taps off the laparoscopic tower give sub-5ms frame capture with no middleware jitter, while routing through an OR integration stack typically adds 20–80ms depending on vendor and codec. Robotic platforms like da Vinci expose video via TilePro™ or Firefly, and each has documented integration SLAs that must be part of the latency budget. The capture path is a one-way architectural decision — changing it post-510(k) typically requires a supplemental submission.
Note prompts
+ Have we profiled end-to-end latency from camera sensor to inference input on the exact OR hardware we will ship on?
+ Do we have written integration confirmation from Medtronic, Olympus, Stryker, or Intuitive for our target video path?
+ What is our plan for ORs that use different video stacks than our primary reference site?
Confirm how the inference node taps the surgical video bus.
Single choice
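Whichever capture path is chosen, its measured latency has to fit inside the per-frame budget alongside everything downstream. A sketch of that budget arithmetic, using illustrative stage latencies drawn from the figures above (sub-5ms direct tap versus a 20–80ms integration stack):

```python
def check_latency_budget(stage_latencies_ms, fps=25.0):
    """Check whether summed per-stage latencies fit the per-frame budget (1000/fps ms)."""
    budget_ms = 1000.0 / fps  # 40 ms at 25 fps
    total_ms = sum(stage_latencies_ms.values())
    return {
        "budget_ms": budget_ms,
        "total_ms": total_ms,
        "headroom_ms": budget_ms - total_ms,
        "within_budget": total_ms <= budget_ms,
    }

# Illustrative numbers only: a direct SDI tap (~5 ms capture) leaves headroom
# for inference, while an integration-stack path at the high end of 20-80 ms
# consumes the 40 ms frame budget before inference even starts.
direct_tap = check_latency_budget(
    {"capture": 5.0, "preprocess": 4.0, "inference": 22.0, "overlay": 5.0})
integration_stack = check_latency_budget(
    {"capture": 50.0, "preprocess": 4.0, "inference": 22.0, "overlay": 5.0})
```

This framing makes the trade-off concrete: the capture path is not just an integration convenience, it is a fixed withdrawal from a 40ms account.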
Define clinical decision support use cases for scoring
Why This Matters
The GoNoGoNet prospective study (Surgical Endoscopy 2023) showed AI decision support changed anatomical annotations in 27% of cases, and 70% of those changes represented safer dissection decisions — this is the dominant evidence base for clinical benefit in laparoscopic general surgery. Critical view of safety AI has shown bile duct injury reduction of up to 85% in published work, with BDI annual US cost exceeding $1B and mean plaintiff award $508,341. Scoping by decision type rather than by model architecture keeps the 510(k) indications clean.
Note prompts
+ Which decision type has the strongest published evidence for clinical benefit in our chosen procedures?
+ Are bleeding / thermal detection models in scope for initial 510(k) or reserved for a later supplement?
+ How will we separate advisory overlays from safety-critical alerts in the UI and in the regulatory filing?
Select the specific intraoperative decisions the model will support.
Select all that apply
Establish temporal consistency / frame-to-frame stability requirement
Why This Matters
Flickering or jumping overlays between frames are actively distracting in surgery and erode surgeon trust faster than occasional false negatives. A per-frame detector without a temporal consistency head produces unstable segmentation masks even at high per-frame accuracy, so the ensemble must include temporal modeling (sequence transformer, temporal CNN, or Kalman-filtered tracker). This is the single most common reason early-stage surgical AI fails pilot adoption.
Note prompts
+ How have we measured overlay stability (IoU between consecutive frames, flicker count per minute) in our pilot?
+ Does our model architecture include a temporal head, or are we relying on per-frame detection only?
+ What threshold of frame-to-frame flicker do our surgeon advisors consider unacceptable?
Define the tolerance for overlay flicker between frames.
Single choice
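The stability prompts above name two concrete metrics: IoU between consecutive overlay masks and flicker count per minute. A minimal sketch of both, assuming binary NumPy segmentation masks and an illustrative IoU cutoff (the 0.85 threshold is a placeholder to be set with surgeon advisors, not a published value):

```python
import numpy as np

def temporal_stability(masks, fps=25.0, iou_threshold=0.85):
    """Mean consecutive-frame IoU and flicker events/min for a binary mask sequence.

    masks: list of boolean numpy arrays, one segmentation mask per frame.
    iou_threshold: illustrative cutoff below which a frame pair counts as flicker.
    """
    ious = []
    flicker_events = 0
    for prev, curr in zip(masks, masks[1:]):
        inter = np.logical_and(prev, curr).sum()
        union = np.logical_or(prev, curr).sum()
        iou = inter / union if union > 0 else 1.0  # two empty masks agree
        ious.append(iou)
        if iou < iou_threshold:
            flicker_events += 1
    duration_min = len(masks) / fps / 60.0
    return {
        "mean_consecutive_iou": float(np.mean(ious)),
        "flicker_per_min": flicker_events / duration_min,
    }
```

Tracking these two numbers across pilot recordings turns "the overlay feels jumpy" into a regression metric that a temporal head or tracker can be evaluated against.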
Define operating environment constraints
Confirm the environmental, sterility, and infrastructure constraints of the target OR.
Select all that apply
Trinidy — NEXUS OS OR-local deployment tolerates network-isolated operation — inference continues without any network connectivity, satisfying ORs that run in air-gapped mode during procedures. No cloud dependency means no degradation when the hospital network flaps.
Specify deployment topology for the inference plane
Select the physical deployment target for surgical AI inference.
Single choice
Trinidy — For sub-40ms per-frame inference and HIPAA-sovereign surgical video, cloud inference is physically and regulatorily incompatible. NEXUS OS runs the full pipeline on an OR-local GPU — the surgical video never leaves the facility perimeter.