Phase 1 of 6
Scoping & Surveillance Surface
Define the surveillance domains, detection SLA, alert volume expectations, analyst tiering, and the regulatory regimes that will govern every downstream detection, explanation, and escalation decision.
Surveillance Domains & Mandate
Identify surveillance domains in scope
Why This Matters
Each surveillance domain carries a different data topology, latency envelope, regulator, and enforcement exposure — market-abuse detection is millisecond-sensitive on order-book events, AML monitoring is second-scale on settlement flow, and UEBA is minute-scale on access logs. The common mistake is shoehorning all three into a single "anomaly detection" product and silently under-serving the latency-sensitive domains. SEC FY2024 enforcement hit $8.2B and the $600M+ off-channel-comms wave specifically penalized firms whose surveillance never saw WhatsApp traffic at all — scope decisions made here become enforcement exposure downstream.
Note prompts
+ Which domains currently have zero ML coverage and are operating purely on rules that were authored before 2022?
+ Do we have a single pane of glass across market / AML / cyber, or three disconnected programs?
+ Is off-channel comms surveillance in scope given the SEC's 2024 enforcement pattern against RIAs and broker-dealers?

Select every surveillance domain the detection platform must cover.
Select all that apply
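The domain-to-latency mismatch described above can be captured as a scoping table before any architecture is chosen. A minimal Python sketch; the domain names, event sources, and envelope values are illustrative, drawn from the prose rather than any real inventory:

```python
# Illustrative scoping table: each surveillance domain carries its own
# event source and latency envelope, per the prose above. Values are
# examples, not a real inventory.
DOMAIN_SCOPE = {
    "market_abuse": {"event_source": "order_book",      "latency_envelope_s": 0.001},
    "aml":          {"event_source": "settlement_flow", "latency_envelope_s": 1.0},
    "ueba":         {"event_source": "access_logs",     "latency_envelope_s": 60.0},
}

def single_sla_underserves(domains, shared_sla_s):
    """Return domains whose envelope is tighter than a shared platform SLA."""
    return [name for name, spec in domains.items()
            if spec["latency_envelope_s"] < shared_sla_s]

# A one-size-fits-all "seconds-scale" platform silently under-serves market abuse.
print(single_sla_underserves(DOMAIN_SCOPE, shared_sla_s=1.0))  # ['market_abuse']
```

Writing the envelopes down this way makes the "single anomaly-detection product" mistake visible before it is built: any shared SLA slower than the tightest envelope names exactly which domains it quietly drops.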
Define detection-to-alert latency SLA per domain
Why This Matters
FINRA's 2024 report explicitly called out that AI models detecting order-withdrawal patterns <1ms after placement are now the surveillance standard for spoofing — and that inadequate surveillance technology is itself a Rule 3110 compliance violation, not just a detection failure. A detection latency that looks reasonable for AML (seconds) will miss 99% of spoofing because the manipulative orders have already been cancelled. Set per-domain SLAs explicitly; a single shared target will either under-serve the latency-sensitive domains or over-build the tolerant ones.
Note prompts
+ What is our current p99 detection latency on order-book events vs. the 1ms cancel window spoofers actually use?
+ Do we sample our order flow for surveillance, or see every event — and is that consistent with FINRA 3110 reasonableness?
+ Have we benchmarked our AML alert latency against the 30-day SAR filing clock, and is detection-to-SAR really on a 30-day glide path?

Select the P99 detection latency target for each domain class — spoofing is sub-second, AML can tolerate seconds to minutes.
Single choice
Trinidy — Spoofing detection requires sub-millisecond order-book pattern matching — order placement and cancellation can be <1ms apart. Trinidy collocates detection inference on the exchange / order-management fabric so the surveillance system sees the same event stream at the same speed as the matching engine, not a delayed tap.
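One way to verify a per-domain SLA is a nearest-rank P99 over measured detection-to-event latencies. A minimal sketch, with hypothetical sample values; the helper names are illustrative, not part of any product API:

```python
import math

def p99_ms(latencies_ms):
    """Nearest-rank P99 of a list of latencies in milliseconds."""
    ordered = sorted(latencies_ms)
    idx = math.ceil(0.99 * len(ordered)) - 1
    return ordered[idx]

def meets_sla(latencies_ms, sla_ms):
    """True when the measured P99 sits inside the domain's SLA target."""
    return p99_ms(latencies_ms) <= sla_ms

# Hypothetical order-book detection latencies: a fast bulk plus a slow tail.
samples = [0.4] * 98 + [0.9, 5.0]
print(p99_ms(samples))          # 0.9
print(meets_sla(samples, 1.0))  # True: the 5.0 ms outlier sits beyond P99
```

The same check applied per domain, rather than once for the platform, is what keeps a seconds-scale AML target from masking a millisecond-scale spoofing miss.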
Set expected alert volume and analyst capacity
Why This Matters
Rule-based AML systems generate 90–95% false positives — meaning up to 95 of every 100 analyst-reviewed alerts turn out to be false alarms. NICE Actimize benchmarks show ML can cut that to 10–30% with 4× true-positive detection, but only if the alert budget is set deliberately. Without a capacity-bound alert budget, teams tune detection upward, bury analysts, and miss real cases that arrive mixed in with the noise. The right number is not the lowest false-positive rate — it is the highest true-positive rate that fits analyst capacity.
Note prompts
+ What is our current alert-per-analyst-per-day load, and what fraction results in a filed SAR or escalation?
+ Have we measured the cost of an unworked alert vs. a late-filed SAR in FinCEN penalty exposure?
+ Does our surveillance team have a documented tolerance for missed detections, or is it implicit?

Quantify expected alerts per day and the analyst headcount available to work them — this ratio determines acceptable false-positive rate.
Single choice
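The capacity-bound alert budget reduces to simple arithmetic: analyst headcount times alerts worked per analyst per day sets the budget, and the expected true positives set the precision floor. A minimal sketch with hypothetical numbers:

```python
def alert_budget(analysts, alerts_per_analyst_day, expected_true_positives_day):
    """Capacity-bound daily alert budget and the precision floor it implies."""
    budget = analysts * alerts_per_analyst_day
    max_false_positives = budget - expected_true_positives_day
    precision_floor = expected_true_positives_day / budget
    return budget, max_false_positives, precision_floor

# Hypothetical team: 10 analysts each working 40 alerts/day, 40 real cases expected.
budget, max_fp, floor = alert_budget(10, 40, 40)
print(budget, max_fp, floor)  # 400 360 0.1
```

At these illustrative numbers, a 90% false-positive rate (the rule-based baseline cited above) exactly exhausts the 400-alert budget; every point of precision an ML model recovers is analyst capacity returned to real cases.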
Define analyst tiering and escalation ladder
Why This Matters
The tiering ladder is load-bearing in every FINRA 3110 examination: examiners ask who dispositioned the alert, who approved the escalation, and what authority they held. A program without documented tiers inevitably concentrates both investigation and filing authority in too few people, which becomes a person-dependency risk as well as a supervisory failure. Clear tiering also directly enables ML uplift — Nasdaq's GenAI surveillance POC showed a 33% reduction in investigation time, but only because tier-1 disposition was a defined workflow that could be accelerated.
Note prompts
+ Is the disposition-to-escalation decision documented as a workflow, or does it vary by analyst?
+ Who holds SAR-filing authority, and is that role resourced for the 30-day clock?
+ Can we measure mean-time-to-disposition per tier, and is it trending with alert volume?

Map alerts to analyst tiers with clear disposition authority and escalation paths.
Select all that apply
Map applicable regulatory regimes
Why This Matters
The regulatory stack for surveillance expanded materially in 2023–2025: SEC Item 1.05 of Form 8-K requires cyber incident disclosure within four business days of materiality determination (effective Dec 18 2023), EU DORA became enforceable January 17 2025 with prescriptive ICT incident-reporting windows, and the EU AI Act classifies most surveillance systems as high-risk with transparency and human-oversight obligations. MiFID II penalties rose 143% year-over-year in 2024 to EUR 44.5M on surveillance and reporting failures alone. Treating this as a single checkbox is how firms become the enforcement headline.
Note prompts
+ Have we re-mapped our surveillance program against the post-2023 regime stack, or are we still running a pre-DORA control inventory?
+ Who owns 8-K Item 1.05 materiality determination, and does that process have a documented 4-business-day path from detection?
+ For DORA incident reporting, can our anomaly detection system generate the audit-ready incident record within the prescribed window?

Select every regime whose surveillance, reporting, or disclosure obligations apply to the deployed model.
Select all that apply
Confirm data residency and cross-border surveillance constraints
Map trade, customer, and surveillance data to jurisdictional residency constraints before architecting a global platform.
Select all that apply
Trinidy — Global surveillance often means conflicting residency: GDPR restricts EU trade data egress, China PIPL restricts mainland data egress, and India RBI requires domestic localization. Trinidy's per-jurisdiction inference nodes score locally and only forward the surveillance finding — not the underlying data — so the global analyst view is possible without a cross-border data transfer violation.
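The score-locally, forward-the-finding pattern in the note above can be sketched as a routing rule. Everything here is an illustrative assumption — the jurisdiction flags, the `RESIDENCY` map, and the `route` helper are examples keyed to the regimes named in the note, not legal determinations or a product API:

```python
# Illustrative residency map: whether raw trade/customer data may leave the
# jurisdiction. Flags are examples, not legal determinations.
RESIDENCY = {
    "EU": {"raw_data_egress": False},  # GDPR
    "CN": {"raw_data_egress": False},  # PIPL
    "IN": {"raw_data_egress": False},  # RBI localization
    "US": {"raw_data_egress": True},
}

def route(jurisdiction, payload):
    """Score locally; forward only the finding when raw egress is barred."""
    if RESIDENCY[jurisdiction]["raw_data_egress"]:
        return {"jurisdiction": jurisdiction, "payload": payload}
    # The local inference node keeps the raw record; the global analyst
    # view receives the surveillance finding only.
    return {"jurisdiction": jurisdiction, "finding_only": True}

print(route("EU", {"order_id": "X1"}))  # {'jurisdiction': 'EU', 'finding_only': True}
```

The design choice this encodes: residency is resolved per record at inference time, so a global analyst console never becomes an implicit cross-border transfer path.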
Define deployment topology for the surveillance inference plane
Select the physical / logical deployment target for streaming inference and GNN workloads.
Single choice
Trinidy — Surveillance of exchange or broker-dealer order flow is a low-latency problem when spoofing is in scope and a heavy-compute problem when GNN mule-detection is in scope. Trinidy supports both on a single on-premises fabric — FPGA / GPU mix, co-located with the venue's event stream, fully within the institution's regulatory perimeter.
Quantify enforcement-exposure budget
Why This Matters
SEC FY2024 enforcement totaled $8.2B, the off-channel-comms wave alone crossed $600M in 2024 and $3B cumulative 2021–2024, and FINRA ran 552 cases totaling $59M in 2024 (a 22% case-volume increase year-over-year). IBM's 2024 Cost of a Data Breach report puts the financial-sector average at $6.08M per incident. Naming the downside figure in one place, signed off by the Chief Compliance Officer, changes how the program is scoped and funded — without it, surveillance budget is the first line cut in every down quarter.
Note prompts
+ What is our documented worst-case enforcement exposure for a missed surveillance case, and who signed off on that number?
+ Are our largest three recent peer-firm enforcement actions mapped to gaps in our own surveillance coverage?
+ Is surveillance budget sized against expected enforcement exposure, or against a fixed percentage of ops spend?

Translate the surveillance mandate into a dollar-denominated downside number so model investment can be sized against it.
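Sizing the downside as a probability-weighted sum over named scenarios is one way to produce the single dollar figure this item asks for. A minimal sketch; the scenario probabilities and penalty values are hypothetical placeholders — only the $6.08M sector-average breach cost comes from the prose above:

```python
def expected_exposure(scenarios):
    """Probability-weighted enforcement exposure across named downside scenarios."""
    return sum(p * loss for p, loss in scenarios)

# Hypothetical scenario book: (annual probability, penalty in USD). Only the
# $6.08M sector-average breach cost comes from the prose; the rest are placeholders.
scenarios = [
    (0.02, 50_000_000),  # missed market-abuse case leading to an enforcement action
    (0.05, 10_000_000),  # late-SAR pattern drawing a FinCEN penalty
    (0.10, 6_080_000),   # data breach at the IBM 2024 financial-sector average
]
print(f"${expected_exposure(scenarios):,.0f}")  # $2,108,000
```

The point is not the precision of any one probability; it is that the scenario list, and the figure it sums to, has a named CCO sign-off that the surveillance budget can be argued against.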