Phase 1 of 6
Scoping & OR Integration
Define the surgical modalities in scope, the end-to-end latency budget from camera to overlay, and the OR integration constraints that govern every downstream architectural decision.
Surgical Modality & Clinical Scope
Identify surgical modalities in scope for intraoperative AI
Why This Matters
Different surgical modalities have fundamentally different latency budgets, anatomy class distributions, and regulatory pathways; a model tuned for the critical view of safety in laparoscopic cholecystectomy (lap chole) will not transfer to robotic microsurgery without retraining. Lap chole alone accounts for roughly 750,000 US cases per year with a 0.3–0.7% bile duct injury incidence, so scoping to this procedure first maximizes near-term clinical impact. Robotic and microsurgical modalities demand sub-50ms frame latency, which changes the entire inference architecture decision.
Note prompts:
+ Which procedure type gives us the largest measurable clinical impact in the first 12 months?
+ Do robotic and laparoscopic cases share enough visual feature overlap to justify a common backbone?
+ Have we scoped which modalities we intend to support in the initial 510(k) submission versus later supplements?
Required
Confirm which procedure types your decision support model will run against in the OR.
Select all that apply
Laparoscopic cholecystectomy (critical view of safety)
Laparoscopic colorectal / general surgery
Robotic surgery (Intuitive da Vinci integration)
Gynecologic laparoscopy
Bariatric surgery
Thoracic / minimally invasive
Open surgery with video capture
Microsurgery / neurosurgery (sub-50ms required)
Define end-to-end frame latency budget (camera to overlay)
Why This Matters
25 fps laparoscopic video allows under 40ms end-to-end per frame before the overlay visibly lags, and published benchmarks show current on-device surgical AI achieving only 11–27 fps, meaning most fielded systems drop frames or display at rates below the video source. Any cloud round-trip adds 50–500ms, longer than the instrument-motion window between useful decision points. A latency budget set correctly at the start has ten times the architectural leverage of latency optimized after deployment.
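A minimal sketch of a p99 measurement harness for that budget is shown below. This is a Python test rig, not the shipping pipeline; grab_frame, preprocess, infer, and render_overlay are hypothetical stubs standing in for the real frame grabber, preprocessing step, model forward pass, and overlay compositor.

```python
import time

import numpy as np

# Hypothetical stage stubs -- replace each with the real frame grabber,
# preprocessing step, model forward pass, and overlay compositor.
def grab_frame():
    return np.zeros((1080, 1920, 3), dtype=np.uint8)   # simulated 1080p frame

def preprocess(frame):
    return frame[::2, ::2].astype(np.float32) / 255.0  # simulated resize + normalize

def infer(tensor):
    return tensor.mean()                               # stand-in for the model

def render_overlay(frame, result):
    pass                                               # stand-in for the compositor

BUDGET_MS = 40.0   # 25 fps source: 1000 ms / 25 frames = 40 ms per frame
N_FRAMES = 1000    # enough samples for a rough p99 estimate

latencies_ms = []
for _ in range(N_FRAMES):
    t0 = time.perf_counter()
    frame = grab_frame()
    result = infer(preprocess(frame))
    render_overlay(frame, result)
    latencies_ms.append((time.perf_counter() - t0) * 1000.0)

p99 = float(np.percentile(latencies_ms, 99))
print(f"p99 end-to-end latency: {p99:.2f} ms (budget {BUDGET_MS:.0f} ms)")
```

Run on the exact OR hardware with the real stages substituted in, this is what turns the 40ms figure from an aspiration into a verified budget; a mean latency alone hides the tail stalls that surgeons actually see.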
Note prompts:
+ What is our measured p99 frame latency end-to-end, including frame grabber, preprocessing, inference, and overlay render?
+ At what frame rate does the overlay visibly lag behind the surgeon's instrument tip, and have we measured it with surgeons in the loop?
+ Have we stress-tested under multi-camera / multi-stream load at 2x the expected concurrent OR count?
Required
Select the P99 latency target your inference pipeline must hold frame-to-frame.
Single choice
< 40ms per frame (25 fps laparoscopic baseline)
< 33ms per frame (30 fps standard surgical display)
< 17ms per frame (60 fps robotic / high-FOV)
< 20ms per frame (microsurgery / neurosurgery)
< 100ms acceptable (advisory overlay, not safety-critical)
Trinidy: NEXUS OS deploys the inference GPU directly in the OR stack; latency is a matter of physical proximity, not a cloud SLA. Sub-40ms per-frame inference at 25 fps is achievable on an OR-local H100 or L40S with TensorRT, while a cloud round-trip alone adds 50–500ms and is architecturally incompatible with surgical AI.
Select OR video integration path
Why This Matters
Direct HDMI or SDI taps off the laparoscopic tower give sub-5ms frame capture with no middleware jitter, while routing through an OR integration stack typically adds 20–80ms depending on vendor and codec. Robotic platforms such as the da Vinci expose video via TilePro™ or Firefly, and each path has documented integration SLAs that must be counted in the latency budget. The capture path is a one-way architectural decision: changing it post-510(k) typically requires a supplemental submission.
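As a rough smoke test of the capture leg in isolation, a sketch along these lines can expose grabber-delivery jitter, assuming the HDMI/SDI tap enumerates as an ordinary OpenCV capture device (DEVICE_INDEX is a placeholder). Note that this measures frame-delivery intervals only; true sensor-to-memory latency needs hardware timestamps or an LED/photodiode test.

```python
import time

import cv2  # assumes the HDMI/SDI grabber enumerates as a standard capture device

DEVICE_INDEX = 0   # hypothetical index for the frame grabber
N_FRAMES = 500

cap = cv2.VideoCapture(DEVICE_INDEX)
if not cap.isOpened():
    raise RuntimeError(f"no capture device at index {DEVICE_INDEX}")

intervals_ms = []
prev = time.perf_counter()
for _ in range(N_FRAMES):
    ok, frame = cap.read()          # blocks until the grabber delivers a frame
    now = time.perf_counter()
    if ok:
        intervals_ms.append((now - prev) * 1000.0)
    prev = now
cap.release()

intervals_ms.sort()
print(f"median inter-frame interval: {intervals_ms[len(intervals_ms) // 2]:.1f} ms")
print(f"p99 inter-frame interval:    {intervals_ms[int(len(intervals_ms) * 0.99)]:.1f} ms")
```

A 25 fps source should show a median near 40ms; intervals well above that indicate dropped or coalesced frames somewhere in the capture chain.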
Note prompts:
+ Have we profiled end-to-end latency from camera sensor to inference input on the exact OR hardware we will ship on?
+ Do we have written integration confirmation from Medtronic, Olympus, Stryker, or Intuitive for our target video path?
+ What is our plan for ORs that use different video stacks than our primary reference site?
Required
Confirm how the inference node taps the surgical video bus.
Single choice
Direct HDMI / SDI tap off laparoscopic tower (lowest latency)
Pass-through frame grabber in the video chain
OR integration stack (Caresyntax, Olympus Easy Suite, Stryker SDC)
da Vinci TilePro™ / Firefly video input
PACS / video archive replay (post-op only, not intraoperative)
DICOM video over network (higher latency)
Define clinical decision support use cases for scoring
Why This Matters
The GoNoGoNet prospective study (Surgical Endoscopy, 2023) showed AI decision support changed anatomical annotations in 27% of cases, and 70% of those changes represented safer dissection decisions; this is the dominant evidence base for clinical benefit in laparoscopic general surgery. Critical view of safety AI has shown bile duct injury (BDI) reduction of up to 85% in published work, with annual US BDI costs exceeding $1B and a mean plaintiff award of $508,341. Scoping by decision type rather than by model architecture keeps the 510(k) indications clean.
Note prompts:
+ Which decision type has the strongest published evidence for clinical benefit in our chosen procedures?
+ Are bleeding / thermal detection models in scope for the initial 510(k) or reserved for a later supplement?
+ How will we separate advisory overlays from safety-critical alerts in the UI and in the regulatory filing?
Required
Select the specific intraoperative decisions the model will support.
Select all that apply
Critical view of safety (CVS) verification for cholecystectomy
Anatomical structure segmentation (bile duct, arteries, nerves)
Go / no-go dissection zone classification
Instrument detection and tracking
Bleeding / thermal spread detection
Surgical phase recognition
Adverse event / near-miss flagging
Post-op video indexing and debrief
Establish temporal consistency / frame-to-frame stability requirement
Why This Matters
Flickering or jumping overlays between frames are actively distracting in surgery and erode surgeon trust faster than occasional false negatives do. A per-frame detector without a temporal consistency head produces unstable segmentation masks even at high per-frame accuracy, so the model ensemble must include temporal modeling (a sequence transformer, temporal CNN, or Kalman-filtered tracker). This is the single most common reason early-stage surgical AI fails pilot adoption.
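A minimal sketch of how frame-to-frame stability could be quantified is below, assuming boolean per-frame masks from the segmentation head. The metrics mirror the note prompts that follow (consecutive-frame IoU and flicker count per minute); the 0.8 flicker threshold is an illustrative placeholder, not a validated value, and should come from surgeon review.

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """IoU between two boolean segmentation masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / float(union) if union else 1.0

def stability_report(masks, fps=25, flicker_iou=0.8):
    """Consecutive-frame IoU and flicker rate for a mask sequence.

    A 'flicker' is counted whenever two consecutive masks overlap by
    less than `flicker_iou` (placeholder threshold).
    """
    ious = [mask_iou(m0, m1) for m0, m1 in zip(masks, masks[1:])]
    flickers = sum(iou < flicker_iou for iou in ious)
    minutes = len(masks) / fps / 60.0
    return {
        "mean_consecutive_iou": float(np.mean(ious)),
        "flickers_per_minute": flickers / minutes,
    }

# Usage with synthetic masks standing in for per-frame segmentation output.
rng = np.random.default_rng(0)
masks = [rng.random((64, 64)) > 0.5 for _ in range(250)]  # 10 s at 25 fps
print(stability_report(masks))
```

Running this over pilot recordings gives a concrete number to put in front of surgeon advisors when setting the acceptable flicker threshold.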
Note prompts:
+ How have we measured overlay stability (IoU between consecutive frames, flicker count per minute) in our pilot?
+ Does our model architecture include a temporal head, or are we relying on per-frame detection only?
+ What threshold of frame-to-frame flicker do our surgeon advisors consider unacceptable?
Required
Define the tolerance for overlay flicker between frames.
Single choice
Overlay stable within one frame of anatomical motion
Smoothed across 3–5 frame window
Frame-by-frame without temporal constraint
Not yet specified
Define operating environment constraints
Required
Confirm the environmental, sterility, and infrastructure constraints of the target OR.
Select all that apply
OR-local GPU chassis on mobile cart
Rack-mounted GPU in OR equipment room
Must tolerate network-isolated operation during procedure
Must meet IEC 60601-1 medical electrical safety
Must operate within OR HVAC / acoustic constraints
Redundant inference node required (no single point of failure)
Interfaces with hospital EMR for patient context
Trinidy: The Trinidy OR-local deployment tolerates network-isolated operation; inference continues without any network connectivity, satisfying ORs that run air-gapped during procedures. No cloud dependency means no degradation when the hospital network flaps.
Specify deployment topology for the inference plane
Required
Select the physical deployment target for surgical AI inference.
Single choice
OR-local GPU (NVIDIA H100 / L40S / RTX 6000 Ada)
Hospital data center GPU (shared across ORs over dedicated fiber)
Edge node co-located with OR integration stack
Private cloud / VPC in-region (higher latency)
Hybrid: on-prem inference + cloud training
Trinidy: For sub-40ms per-frame inference and HIPAA-sovereign surgical video, cloud inference is physically and regulatorily incompatible. NEXUS OS runs the full pipeline on an OR-local GPU; the surgical video never leaves the facility perimeter.