Phase 1 of 6
Scoping & Priority-Preemption Constraints
Define the mission-critical service surface, Band 14 priority posture, air-gap envelope, and CJIS/FedRAMP scope that constrain every downstream model, data, and infrastructure decision.
Mission Surface & Agency Scope
Identify public safety agencies and mission profiles in scope
Why This Matters
The mix of agency types determines which compliance regime governs the deployment — CJIS Security Policy for law enforcement, HIPAA-adjacent handling for EMS patient data, and NENA i3 for next-generation 911 integration. A multi-agency EOC deployment inherits the strictest regime across every connected agency, which is almost always CJIS. Scoping this incorrectly early forces expensive re-architecture once the assessor arrives.
Note prompts:
+ Which agencies will actually touch the inference runtime, and which merely consume downstream outputs?
+ Is CJIS applicability determined by data type (CHRI) or by agency, and have we documented the boundary?
+ Do any participating agencies operate under state-specific extensions to CJIS (e.g., California, Texas, New York)?

Confirm the agency types and mission profiles the AI system must serve — this gates CJIS applicability and procurement path.
Select all that apply
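The "strictest regime wins" rule above can be made mechanical. A minimal sketch, assuming an illustrative three-level ordering (the regime names and ranks are assumptions for illustration, not a legal determination):

```python
# Hypothetical sketch: a multi-agency deployment inherits the strictest
# compliance regime across every connected agency. The severity ordering
# below is an illustrative assumption, not legal guidance.
STRICTNESS = {"NENA_I3": 1, "HIPAA_ADJACENT": 2, "CJIS": 3}

def governing_regime(agency_regimes):
    """Return the strictest regime across all connected agencies."""
    if not agency_regimes:
        raise ValueError("at least one agency must be in scope")
    return max(agency_regimes, key=STRICTNESS.__getitem__)

# A combined EOC with a 911 PSAP, EMS, and law enforcement lands on CJIS.
print(governing_regime(["NENA_I3", "HIPAA_ADJACENT", "CJIS"]))  # -> CJIS
```

Encoding the rule this way makes the scoping decision auditable: adding an agency to the list can only hold or raise the governing regime, never lower it.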
Define target AI workloads and their latency envelope
Why This Matters
Public safety AI is not a single workload — it is a portfolio with latency budgets ranging from frame-rate video analytics (~33 ms per frame) to CAD triage (seconds) to situational awareness (tens of seconds). Bundling them onto a single model or a single inference queue means the highest-priority, lowest-latency workload gets blocked behind slow fusion jobs the moment an incident ramps up. Workload-level SLA isolation is the single most impactful design decision in this phase.
Note prompts:
+ Which workloads are on the critical response path and which are post-event / analytical?
+ Do we have per-workload latency budgets or a single end-to-end SLA?
+ What happens to non-critical workloads when an incident spikes critical-path demand?

Select the mission workloads the Trinidy node must support — each has a different latency budget and failure cost.
Select all that apply
Trinidy — Video analytics on 4K body-cam and CCTV feeds cannot round-trip to cloud over a congested RAN and meet an incident-command decision window. Trinidy co-locates the inference runtime on the tower or in the agency EOC so the 1–10 second requirement holds even when backhaul is severed.
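One way to picture workload-level SLA isolation is a dispatcher that always drains the tightest-budget class first, so a frame-rate job is never queued behind a slow fusion batch. A minimal sketch using the latency budgets named above (the class names and dispatcher design are illustrative assumptions):

```python
import heapq

# Illustrative per-workload latency budgets from the text: frame-rate video
# (~33 ms/frame), CAD triage (seconds), situational awareness (tens of seconds).
LATENCY_BUDGET_MS = {
    "video_analytics": 33,
    "cad_triage": 3_000,
    "situational_awareness": 30_000,
}

class IsolatedDispatcher:
    """Always serve the workload class with the tightest latency budget first."""
    def __init__(self):
        self._heap = []   # (budget_ms, submission_order, job)
        self._seq = 0     # tie-breaker preserving FIFO within a class

    def submit(self, workload, job):
        heapq.heappush(self._heap, (LATENCY_BUDGET_MS[workload], self._seq, job))
        self._seq += 1

    def next_job(self):
        return heapq.heappop(self._heap)[2]

d = IsolatedDispatcher()
d.submit("situational_awareness", "fusion-batch-1")
d.submit("video_analytics", "bodycam-frame-991")
print(d.next_job())  # -> bodycam-frame-991: the frame-rate job is never blocked
```

A production design would go further (separate queues, per-class capacity reservations, load shedding for non-critical classes during incidents), but the ordering invariant is the core of the isolation argument.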
Confirm FirstNet Band 14 priority and preemption posture
Why This Matters
Priority and Preemption on FirstNet Band 14 is the mechanism that keeps public safety data flowing when commercial congestion would otherwise drop packets, and it is the fundamental reason a public safety AI deployment on FirstNet outperforms one on any commercial carrier. An AI architecture that assumes best-effort transport wastes the PP guarantee; one that explicitly marks inference traffic as mission-critical inherits the SLA. Verify PP eligibility with the FirstNet Authority before designing for it.
Note prompts:
+ Are our AI data paths tagged with the QCI values that trigger PP on Band 14?
+ Have we reviewed PP eligibility with FirstNet Authority for each connected endpoint?
+ What is our fallback behavior for endpoints that lose PP status mid-incident?

Band 14 carries Priority and Preemption (PP) for Primary Users — the model's data path must honor and exploit this.
Single choice
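Note that QCI is assigned by the LTE bearer, not by the application: what an application can do is mark its packets with a DSCP value and rely on the operator's DSCP-to-QCI mapping. A hedged sketch of marking inference traffic at the socket layer, assuming DSCP EF (46) is the agreed marking (confirm the actual policy with the FirstNet Authority before relying on it):

```python
import socket

# DSCP Expedited Forwarding (46) as an assumed mission-critical marking.
# The IP_TOS byte carries the DSCP value in its upper six bits.
DSCP_EF = 46

def mission_critical_socket():
    """UDP socket whose packets carry an assumed mission-critical DSCP mark."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
    return sock

s = mission_critical_socket()
print(s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # DSCP_EF << 2 on Linux
s.close()
```

The fallback question in the prompts above matters here too: if an endpoint loses PP status mid-incident, the marking is still emitted but the network stops honoring it, so the application needs its own congestion fallback.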
Select Mission Critical Services (MCX) scope
Why This Matters
3GPP MCX is the standardized carrier of mission-critical voice, data, and video on FirstNet and equivalent public-safety LTE/5G deployments, and AI models that ingest MCPTT transcripts or MCVideo frames must honor the MCX signaling model or they will break under load. Vendors including Motorola Solutions (APX), L3Harris, Sonim, and ESChat each implement MCX with documented conformance to the relevant 3GPP technical specifications — picking a model integration point that does not match your radio ecosystem forces brittle protocol translation.
Note prompts:
+ Which MCX services are actually deployed on our agency radios today vs. on the roadmap?
+ Does our AI consume MCX streams directly or does it sit behind a media gateway?
+ Are we integrated with an MCX-conformant application server (Motorola, L3Harris, ESChat) or rolling our own?

Map required 3GPP Mission Critical Services — MCPTT, MCData, MCVideo — each has distinct 3GPP specifications and integration points.
Select all that apply
Define air-gap and disconnected operation requirements
Why This Matters
The 96-hour autonomous requirement in AT&T's FirstNet Authority-approved architecture is not aspirational — it is the observed backhaul restoration window across major incidents including hurricane response, wildfire, and grid-outage events. A deployment that assumes cloud reachability will silently fail at the moment it is most needed, and recovery from a cloud-dependent architecture under incident conditions is effectively impossible. Design the air-gap envelope first, then layer cloud-augmented features on top where safe.
Note prompts:
+ What is our agency's documented longest backhaul outage in the last five years?
+ Which model features require cloud reachability, and can they gracefully degrade?
+ Do we exercise air-gap mode on a routine cadence, or only discover failures during real incidents?

Specify how long the AI must operate with backhaul severed and what capabilities degrade vs. remain fully functional.
Single choice
Trinidy — FirstNet Authority-approved architecture requires a 96-hour minimum autonomous operation window for edge AI nodes. Trinidy runs the full inference stack locally with pre-cached models — no cloud-hosted control plane dependency, so a severed backhaul does not degrade inference.
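The "degrade vs. remain fully functional" question above can be answered with an explicit degradation table rather than discovered during an incident. A minimal sketch, where the capability names and fallbacks are illustrative assumptions, not the Trinidy product specification:

```python
# Hypothetical degradation table: each capability declares whether it needs
# cloud reachability and, if so, what it falls back to when backhaul is cut.
CAPABILITIES = {
    "video_analytics":   {"needs_cloud": False, "fallback": None},
    "cad_triage":        {"needs_cloud": False, "fallback": None},
    "model_updates":     {"needs_cloud": True,  "fallback": "pre-cached weights"},
    "cross_agency_sync": {"needs_cloud": True,  "fallback": "store-and-forward"},
}

def airgap_posture(backhaul_up: bool):
    """Return (fully functional, degraded-with-fallback) capability lists."""
    if backhaul_up:
        return sorted(CAPABILITIES), []
    full = sorted(c for c, v in CAPABILITIES.items() if not v["needs_cloud"])
    degraded = sorted((c, v["fallback"]) for c, v in CAPABILITIES.items()
                      if v["needs_cloud"])
    return full, degraded

full, degraded = airgap_posture(backhaul_up=False)
print(full)      # critical-path inference remains fully functional
print(degraded)  # cloud-dependent features fall back gracefully
```

A table like this also makes the routine air-gap exercise checkable: the drill passes only if every capability behaves exactly as its declared row says it should.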
Map FedRAMP and authorization boundary
Why This Matters
FedRAMP High is the baseline for any cloud-hosted service touching federal law enforcement data, and it aligns to the NIST SP 800-53 High control baseline. Public-sector procurement rarely tolerates a "we'll inherit the cloud vendor's FedRAMP" story when the actual inference runtime, model weights, and audit log are in your product. Clarifying the authorization boundary early decides whether cloud-hosted components can exist at all.
Note prompts:
+ Is the inference runtime inside the FedRAMP boundary or explicitly excluded?
+ Which NIST SP 800-53 High controls are inherited vs. customer-implemented?
+ What is our authorization package status and next continuous-monitoring milestone?

Confirm the FedRAMP or equivalent authorization posture required by participating federal, state, and local agencies.
Single choice
Confirm CJIS Security Policy applicability and scope
Why This Matters
The current CJIS Security Policy (v5.9 series, with 2024 updates toward a 6.x track) treats AI systems that process CJI as in-scope for all 13 policy areas — including Advanced Authentication, physical protection, and media protection. Inference runtimes that log full input payloads can bring previously out-of-scope infrastructure into CJIS scope by accident, and remediation under a state CJIS audit is measured in months.
Note prompts:
+ Have we inventoried every location model inputs, outputs, and logs may land?
+ Is our inference runtime subject to CJIS Advanced Authentication for all operator access?
+ Who is our CJIS Systems Officer (CSO) of record and have they reviewed the AI deployment?

Define whether the AI touches Criminal Justice Information (CJI) and which CJIS controls apply.
Select all that apply
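The scope-creep failure mode above (inference logs pulling previously out-of-scope infrastructure into CJIS scope) is usually addressed by redacting CJI-bearing fields before any record leaves the runtime. A minimal sketch, where the field names and the SSN pattern are assumptions for illustration, not a complete CJI taxonomy:

```python
import re

# Assumed CJI-bearing field names and a sample identifier pattern; a real
# deployment needs a reviewed inventory, not this illustrative list.
CJI_FIELDS = {"subject_name", "chri_record", "plate_number"}
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_for_logging(record: dict) -> dict:
    """Strip CJI before the record reaches any out-of-scope log store."""
    safe = {}
    for key, value in record.items():
        if key in CJI_FIELDS:
            safe[key] = "[REDACTED-CJI]"
        elif isinstance(value, str):
            safe[key] = SSN_RE.sub("[REDACTED-SSN]", value)
        else:
            safe[key] = value
    return safe

print(redact_for_logging({
    "workload": "cad_triage",
    "subject_name": "J. Doe",
    "notes": "ref 123-45-6789",
}))
```

Redaction at the runtime boundary keeps the log pipeline out of CJIS scope only if it is applied to every egress path, which is why the first note prompt asks for an inventory of every location logs may land.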
Specify FIPS 140-2/140-3 cryptographic module requirement
Why This Matters
CJIS and NIST SP 800-53 High both require FIPS 140-validated cryptography for controlled information, and NIST is actively migrating new validations from 140-2 to 140-3. Deploying a model runtime that relies on non-validated cryptographic libraries (e.g., a default OpenSSL build without a FIPS provider) can fail the audit even if every other control is in place. Module validation is also not transitive — a FIPS-validated container runtime does not automatically validate the model-serving library inside it.
Note prompts:
+ Are the cryptographic libraries in our inference stack on the NIST CMVP validated list?
+ Do we have a plan to migrate from 140-2 to 140-3 as 140-2 validations sunset?
+ Who owns FIPS module lifecycle across the model, runtime, and key-management layers?

Identify where FIPS-validated cryptography is required — at rest, in transit, and for key custody.
Select all that apply
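Because validation is not transitive, a practical mitigation is a build-time gate that checks every cryptographic module in the inference stack against a team-maintained inventory of CMVP validations. A minimal sketch; the module names are placeholders and the certificate numbers are deliberately left blank, since the authoritative source is the NIST CMVP validated-modules list:

```python
# Placeholder inventory: (certificate number, standard) per validated module.
# Certificate numbers must come from the NIST CMVP list, not be invented here.
CMVP_INVENTORY = {
    "openssl-fips-provider": ("<cert-no>", "FIPS 140-3"),
    "kernel-crypto-api":     ("<cert-no>", "FIPS 140-2"),
}

def audit_stack(modules_in_use):
    """Return modules in the stack that lack a recorded FIPS validation."""
    return sorted(m for m in modules_in_use if m not in CMVP_INVENTORY)

# A default OpenSSL build without the FIPS provider fails the gate.
print(audit_stack(["openssl-default", "openssl-fips-provider"]))
# -> ['openssl-default']
```

Tracking the standard per module also makes the 140-2 to 140-3 migration question answerable with a query instead of a spreadsheet hunt.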
Define deployment topology for inference capacity
Why This Matters
Inference topology is the single decision that determines whether the system is available during the events it was built for. Tower-hosted inference survives backhaul cuts and commercial-network congestion because it shares a failure domain with the RAN it serves. Cloud-hosted inference, even in FedRAMP-authorized government cloud, is on the far side of a link that is the first thing to degrade under disaster conditions — which is why DCOLT deployable units and tower edge keep appearing in incident after-action reports as the components that stayed up.
Note prompts:
+ Is our inference runtime inside the tower failure domain or behind backhaul?
+ For multi-agency deployments, where does the inference physically live relative to each agency?
+ Have we validated topology by cutting backhaul in a drill and measuring what the AI can still do?

Select the physical and logical placement of the inference runtime relative to the tower and the agency.
Single choice
Trinidy — Tower-hosted inference with Trinidy placed inside the FirstNet-eligible cell site gives the RAN and the AI model the same failure domain — when the tower stays up, the model stays up. Commercial cloud placed on the far side of backhaul is not substitutable.
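The backhaul-cut drill from the note prompts can be reduced to two independent reachability probes and a verdict on which failure domain the inference actually lives in. A hedged sketch; the hostnames in the usage comment are placeholders:

```python
import socket

def reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def drill_report(local_ok: bool, cloud_ok: bool) -> str:
    """Classify the topology based on probes taken while backhaul is severed."""
    if local_ok and not cloud_ok:
        return "PASS: inference survives backhaul cut (tower failure domain)"
    if not local_ok:
        return "FAIL: inference is behind backhaul"
    return "INCONCLUSIVE: backhaul was not actually severed during the drill"

# During the drill, sever backhaul, then probe both endpoints, e.g.:
# drill_report(reachable("inference.tower.local", 8443),
#              reachable("control.cloud.example", 443))
print(drill_report(local_ok=True, cloud_ok=False))
```

The INCONCLUSIVE branch is deliberate: a drill in which the cloud control plane is still reachable has not tested the topology claim at all.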
Confirm hardware ruggedization and mission-assurance requirements
Specify the environmental and availability requirements for the inference hardware platform.
Select all that apply