Phase 1 of 6
Scoping & Mission Constraints
Define the enclave boundary, detection latency envelope, classification scope, and mission tolerance that will govern every architectural decision downstream.
Enclave, Classification & Mission Scope
Identify network enclaves and classification boundaries in scope
Why This Matters
Enclave classification drives almost every downstream architectural decision — IL5 and IL6 cloud availability, CNSSI 1253 control baselines, and the fact that SIPRNet and JWICS are physically air-gapped from commercial cloud. CNSSI 1253 categorizes national security systems using NIST SP 800-53 Rev 5 controls, and the overlays differ materially by enclave. A detection stack that works on NIPRNet is not portable to JWICS without re-architecting the data and model supply chain.
Note prompts
+ Which enclaves require on-prem inference because no authorized cloud region exists at their classification?
+ Have we mapped the CNSSI 1253 overlay per enclave before selecting a SIEM or EDR vendor?
+ Are there coalition networks with release caveats that prevent common model artifacts across enclaves?
Confirm which networks your detection stack will cover — each has distinct egress rules and tooling constraints.
Select all that apply
Define detection-to-containment latency SLA
Why This Matters
Mandiant M-Trends 2025 reports the global median APT dwell time is 11 days, down from 197 days a decade ago, with the reduction driven primarily by AI-assisted detection — but the window between initial compromise and lateral movement is often minutes, not days. Sub-second scoring is the difference between isolating a compromised endpoint and watching the adversary pivot. The SLA must be set before infrastructure is chosen: retrofitting a tighter latency target onto infrastructure sized for a looser one costs an order of magnitude more than designing for it up front.
Note prompts
+ What is our measured mean-time-to-detect today, and where in the pipeline is the dominant latency?
+ Do we have the authority and playbook coverage to execute automated containment, or is the SLA constrained by human-in-the-loop policy?
+ Have we stress-tested detection latency under a simulated noisy-neighbor or beacon burst?
Select the wall-clock target from anomaly observation to automated response action.
Single choice
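One way to make the SLA concrete before procurement is to decompose it into per-stage latency budgets so the dominant stage is explicit. The sketch below uses illustrative stage names and numbers, not measurements from any real deployment:

```python
# Hypothetical sketch: decompose a detection-to-containment SLA into
# per-stage latency budgets. All figures are illustrative assumptions.

SLA_MS = 1_000  # example wall-clock target: sub-second, event to containment

stage_budget_ms = {
    "telemetry_capture": 50,     # sensor event -> local pipeline
    "feature_extraction": 100,
    "model_inference": 150,      # anomaly scoring
    "triage_decision": 200,      # classification + playbook selection
    "containment_action": 300,   # e.g. endpoint isolation call
}

total = sum(stage_budget_ms.values())
headroom = SLA_MS - total
assert headroom >= 0, "budget exceeds SLA; re-architect before buying infra"

# The stage with the largest budget is where infrastructure choice matters most.
dominant = max(stage_budget_ms, key=stage_budget_ms.get)
print(f"total={total}ms headroom={headroom}ms dominant={dominant}")
```

If the budgeted total exceeds the SLA on paper, no amount of post-deployment tuning will close the gap, which is the point of doing this exercise in Phase 1.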
Trinidy — Routing classified telemetry to a cloud scoring service is not an option on SIPRNet or JWICS, and is architecturally risky even on NIPRNet for nation-state detection. Trinidy runs the full detection, classification, and response decision loop on-node inside the enclave — sub-second from event to containment playbook with no egress.
Establish acceptable false-positive rate by asset class
Why This Matters
SIEM alert fatigue is the documented root cause of missed intrusions in the majority of public post-incident reports — analysts close tickets in bulk when the signal-to-noise ratio collapses. The CISA Continuous Diagnostics and Mitigation (CDM) program explicitly prioritizes tuning over coverage for this reason. A uniform FP target mis-prices noise tolerance: a false positive on a nuclear command node is not the same cost as one on a general office endpoint.
Note prompts
+ What is our current alerts-per-analyst-per-shift and our measured close-without-investigation rate?
+ Have we defined an asset criticality tier so the SOC can allocate cognitive load differently by tier?
+ Is our FP rate degrading over time as new telemetry sources onboard without threshold retuning?
Alert fatigue is the dominant SOC failure mode — define tolerable FP rates per asset tier.
Single choice
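The tiered-FP argument comes down to simple arithmetic: the quantity that drives fatigue is alerts per analyst per shift, not the FP rate in isolation. A minimal worked example, with all event rates and tolerances assumed for illustration:

```python
# Illustrative arithmetic (assumed numbers): translate a per-event
# false-positive rate into false alerts per shift by asset tier.

tiers = {
    # tier: (events_per_hour, false_positive_rate)
    "mission_critical": (5_000, 0.0001),   # tight FP tolerance
    "general_it":       (50_000, 0.001),   # looser tolerance acceptable
}

analysts_on_shift = 4
shift_hours = 8

results = {}
for tier, (events_hr, fp_rate) in tiers.items():
    fp_per_shift = events_hr * shift_hours * fp_rate
    results[tier] = fp_per_shift
    print(f"{tier}: {fp_per_shift:.0f} false alerts/shift, "
          f"{fp_per_shift / analysts_on_shift:.1f} per analyst")
```

Under these assumed numbers the general-IT tier generates two orders of magnitude more false alerts than the mission tier, which is why a single uniform FP target mis-prices noise tolerance.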
Define Zero Trust maturity target (DoD ZT Ref Arch v2.0)
Why This Matters
The DoD Zero Trust Reference Architecture v2.0 (July 2022) and the DoD Zero Trust Strategy (November 2022) set 91 target-level activities by FY2027 and 152 advanced activities by FY2032 across seven pillars (User, Device, Network/Environment, Application & Workload, Data, Automation & Orchestration, Visibility & Analytics). AI-driven continuous behavioral validation is explicitly called out in the Visibility & Analytics and Automation & Orchestration pillars — the detection model is not adjacent to ZT, it is a load-bearing component.
Note prompts
+ Which ZT pillars does this model serve — User behavior, Device posture, Data exfil detection, or Network analytics?
+ Who owns the ZT Strategy artifact mapping and are model outputs listed as named evidence sources?
+ Does our PDP/PEP consume model confidence scores as continuous signal, or only binary alerts?
Select the target Zero Trust posture — the model must produce signals ZT infrastructure consumes.
Single choice
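The PDP/PEP prompt above is the design fork that matters: a continuous confidence score enables graduated enforcement, while a binary alert collapses everything to allow/deny. A sketch of the continuous case, with thresholds, tier names, and action names as assumptions rather than DoD-prescribed values:

```python
# Sketch (assumed thresholds and action names): a Zero Trust policy
# decision point consuming a continuous model confidence score in [0, 1]
# rather than a binary alert, so enforcement can be graduated.

def zt_policy_decision(confidence: float, asset_tier: str) -> str:
    """Map a model confidence score to a ZT enforcement action."""
    # Mission systems escalate to restrictive actions earlier.
    escalate_at = 0.5 if asset_tier == "mission" else 0.7
    if confidence >= 0.9:
        return "isolate"            # automated containment
    if confidence >= escalate_at:
        return "step_up_auth"       # re-authenticate / restrict session
    if confidence >= 0.3:
        return "increase_logging"   # enrich telemetry, no user impact
    return "allow"

print(zt_policy_decision(0.95, "general"))  # isolate
print(zt_policy_decision(0.60, "mission"))  # step_up_auth
```

Note that the same score (0.60) produces different actions per tier — that tier-sensitivity is the signal shape the ZT infrastructure has to be able to consume.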
Confirm data residency and egress constraints
Classified telemetry cannot leave the enclave — map residency before architecture is finalized.
Select all that apply
Trinidy — DFARS 252.204-7012 and the DoD Cloud Computing SRG (IL5/IL6) create hard residency boundaries. Trinidy keeps model training, inference, and audit telemetry entirely inside the enclave — no cross-domain flow for any detection decision, and no cloud vendor ingress for model updates.
Map assessment framework obligations
Why This Matters
The CMMC 2.0 DFARS final rule took effect 10 November 2025, with Level 2 third-party certification required for contract awards by November 2026 — affecting the 300,000+ contractors in the Defense Industrial Base. NIST SP 800-171 Rev 3 is the control catalog underneath CMMC Level 2. CISA BOD 22-01 requires federal civilian agencies to remediate Known Exploited Vulnerabilities on published timelines, and DoD components treat the KEV catalog as a minimum floor. AI-powered continuous monitoring has become the practical mechanism for meeting these at scale; making the control-mapping explicit up front prevents an expensive retrofit later.
Note prompts
+ Have we mapped each model output and audit record to a specific NIST 800-53 Rev 5 control family?
+ Does our CMMC assessment plan list the detection model as an in-scope asset or as supporting tooling?
+ Are we ingesting the CISA KEV catalog as a feature / rule input, or only as an ops ticket source?
Which cyber assessment frameworks must the detection stack demonstrably support?
Select all that apply
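The KEV-as-feature prompt can be made concrete: instead of routing KEV entries to tickets, index them so the detection layer can score traffic touching a known-exploited CVE. The sample below mimics the shape of the KEV JSON feed (field names per the published catalog schema — treat them as an assumption to verify); the CVE entry itself is made up:

```python
import json
from datetime import date

# Sketch: treat the CISA KEV catalog as a detection feature source.
# The feed sample mirrors the KEV JSON shape; the entry is fabricated.

sample_feed = json.loads("""
{
  "vulnerabilities": [
    {"cveID": "CVE-0000-0000", "dueDate": "2025-01-01",
     "knownRansomwareCampaignUse": "Known"}
  ]
}
""")

def kev_risk_features(feed: dict, today: date) -> dict:
    """Index KEV entries by CVE ID for use as scoring features."""
    features = {}
    for v in feed["vulnerabilities"]:
        due = date.fromisoformat(v["dueDate"])
        features[v["cveID"]] = {
            "past_due": today > due,   # remediation deadline already missed
            "ransomware": v.get("knownRansomwareCampaignUse") == "Known",
        }
    return features

feats = kev_risk_features(sample_feed, today=date(2025, 6, 1))
```

In an air-gapped enclave the feed would arrive via a one-way transfer rather than a live pull, which is itself a residency decision to capture in the previous item.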
Define automated response authority boundary
Why This Matters
Automated containment is the speed advantage against nation-state actors — but an over-eager SOAR playbook can take down a mission system in the middle of an operation. The CDAO and JAIC have published guidance emphasizing meaningful human control for consequential cyber actions on weapons and command systems. The correct answer is almost always tiered: aggressive autonomy on general IT, human-in-the-loop on mission and OT.
Note prompts
+ Have we documented the response authority matrix per asset tier and briefed the approving authority?
+ Does the SOAR playbook include a safe-abort that a watchfloor supervisor can trigger under 10 seconds?
+ Is there a legal review on record for autonomous action against attacker infrastructure (active defense)?
Specify what actions the model is authorized to execute autonomously vs. what requires human approval.
Select all that apply
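The tiered-autonomy recommendation reduces to a small authority matrix that a SOAR playbook can consult before acting. Tier names and action names below are illustrative assumptions, not a sanctioned taxonomy:

```python
# Hypothetical response-authority matrix: which containment actions the
# playbook may execute autonomously per asset tier. All names assumed.

AUTONOMOUS = {
    "general_it": {"isolate_host", "kill_process", "revoke_token"},
    "mission":    {"increase_logging"},   # human-in-the-loop beyond this
    "ot":         set(),                  # no autonomous action on OT
}

def authorize(action: str, asset_tier: str) -> str:
    """Return 'auto' if the playbook may act without human approval."""
    allowed = AUTONOMOUS.get(asset_tier, set())
    return "auto" if action in allowed else "human_approval"

print(authorize("isolate_host", "general_it"))  # auto
print(authorize("isolate_host", "mission"))     # human_approval
```

Defaulting unknown tiers to human approval (the `.get(..., set())` fallback) is the conservative failure mode for exactly the over-eager-playbook risk described above.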
Specify deployment topology for the detection plane
Select the physical/logical deployment target for the detection and response stack.
Single choice
Trinidy — For SIPRNet, JWICS, and air-gapped classified enclaves, cloud-hosted SIEM and ML inference are physically incompatible. Trinidy is the on-premises substrate for classified threat-hunting — the same model artifacts and playbooks deploy to IL5, IL6, and air-gapped sites with no egress for training, inference, or telemetry.