Phase 1 of 6
Scoping & Priority-Preemption Constraints
Define the mission-critical service surface, Band 14 priority posture, air-gap envelope, and CJIS/FedRAMP scope that constrain every downstream model, data, and infrastructure decision.
Mission Surface & Agency Scope
Identify public safety agencies and mission profiles in scope
Why This Matters
The mix of agency types determines which compliance regime governs the deployment — CJIS Security Policy for law enforcement, HIPAA-adjacent handling for EMS patient data, and NENA i3 for next-generation 911 integration. A multi-agency EOC deployment inherits the strictest regime across every connected agency, which is almost always CJIS. Getting this scope wrong early forces expensive re-architecture once the assessor arrives.
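The inheritance rule above can be sketched as a small lookup. The regime names, the strictness ordering, and the agency-to-regime mapping below are illustrative assumptions, not an official crosswalk:

```python
# Least-strict to most-strict ordering — an assumption for illustration.
STRICTNESS = ["NONE", "NENA_I3", "HIPAA_ADJACENT", "CJIS", "FEDRAMP_HIGH"]

# Hypothetical agency-type -> governing-regime mapping.
AGENCY_REGIMES = {
    "fire_ems": "HIPAA_ADJACENT",
    "psap": "NENA_I3",
    "municipal_pd": "CJIS",
    "federal_le": "FEDRAMP_HIGH",
}

def governing_regime(agencies):
    """A connected multi-agency deployment inherits the strictest
    regime present across every participating agency."""
    regimes = [AGENCY_REGIMES.get(a, "NONE") for a in agencies]
    return max(regimes, key=STRICTNESS.index)
```

For example, `governing_regime(["fire_ems", "municipal_pd"])` yields `"CJIS"` — connecting fire/EMS to a law-enforcement agency pulls the whole deployment into the stricter regime.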
Note prompts
+ Which agencies will actually touch the inference runtime, and which merely consume downstream outputs?
+ Is CJIS applicability determined by data type (CHRI) or by agency, and have we documented the boundary?
+ Do any participating agencies operate under state-specific extensions to CJIS (e.g., California, Texas, New York)?
Required
Confirm the agency types and mission profiles the AI system must serve — this gates CJIS applicability and procurement path.
Select all that apply
Municipal / county law enforcement (CJIS in scope)
State police / highway patrol (CJIS in scope)
Fire / EMS / rescue (CJIS typically out of scope)
Public Safety Answering Points (PSAPs) — NENA i3
Emergency Operations Centers (EOCs) — multi-agency
Federal law enforcement (FBI CJIS + FedRAMP High)
DoD / National Guard coordination (separate accreditation)
Critical infrastructure operators (utility / transit police)
Define target AI workloads and their latency envelope
Why This Matters
Public safety AI is not a single workload — it is a portfolio with latency budgets ranging from frame-rate video analytics (~33 ms per frame at 30 fps) to CAD triage (seconds) to situational awareness (tens of seconds). Bundling them onto a single model or a single inference queue means the highest-priority, lowest-latency workload gets blocked behind slow fusion jobs the moment an incident ramps up. Workload-level SLA isolation is the single most impactful design decision.
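The isolation argument above can be sketched as a dispatch queue ordered by per-workload latency budget, so a tight-budget job is always served ahead of a slow one. The budget figures echo the ranges above and the workload names are placeholders:

```python
import heapq
import itertools

# Illustrative per-workload latency budgets in milliseconds — taken from
# the ranges described above, not from any vendor specification.
BUDGET_MS = {"video_frame": 33, "cad_triage": 2000, "sensor_fusion": 30000}

class WorkloadQueue:
    """Dispatch ordered by latency budget: a slow fusion job can never
    sit ahead of a frame-rate video job in the queue."""

    def __init__(self):
        self._q = []
        self._seq = itertools.count()  # FIFO tiebreak within a budget class

    def submit(self, workload, job):
        heapq.heappush(self._q, (BUDGET_MS[workload], next(self._seq), job))

    def next_job(self):
        _, _, job = heapq.heappop(self._q)
        return job
```

A fuller design would give each SLA class its own queue and worker pool so fusion jobs cannot even consume the GPU slots reserved for the critical path; the sketch shows only the ordering property.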
Note prompts
+ Which workloads are on the critical response path and which are post-event / analytical?
+ Do we have per-workload latency budgets or a single end-to-end SLA?
+ What happens to non-critical workloads when an incident spikes critical-path demand?
Required
Select the mission workloads the Trinidy node must support — each has a different latency budget and failure cost.
Select all that apply
CAD dispatch triage / priority scoring (< 2s decision)
Body-cam / CCTV video analytics (real-time frame rate)
Radio transcription + keyword spotting (MCPTT streams)
Next-Gen 911 (NENA i3) call triage and transcription
Situational awareness / multi-source sensor fusion
License plate recognition (LPR) at incident perimeter
Crowd-density / gunshot-detection analytics
Drone / aerial platform CV inference
required, edge, trinidy
Trinidy: Video analytics on 4K body-cam and CCTV feeds cannot round-trip to cloud over a congested RAN and meet an incident-command decision window. Trinidy co-locates the inference runtime on the tower or in the agency EOC so the 1–10 second requirement holds even when backhaul is severed.
Confirm FirstNet Band 14 priority and preemption posture
Why This Matters
Priority and Preemption on FirstNet Band 14 is the mechanism that keeps public safety data flowing when commercial congestion would otherwise drop packets, and it is the fundamental reason a public safety AI deployment on FirstNet outperforms one on any commercial carrier. An AI architecture that assumes best-effort transport wastes the PP guarantee; one that explicitly marks inference traffic as mission-critical inherits the SLA. Verify PP eligibility with the FirstNet Authority before designing for it.
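On the application side, one hedged sketch of "explicitly marking inference traffic" is DSCP marking on the socket — QCI itself is assigned by the LTE core, not by the application, and whether your FirstNet traffic profile maps a given code point onto the priority bearer must be verified with the FirstNet Authority. The EF code point below is an illustrative assumption:

```python
import socket

# DSCP Expedited Forwarding — an assumed marking for mission-critical
# inference traffic; confirm the actual mapping in your traffic profile.
DSCP_EF = 46

def mission_critical_socket():
    """Open a UDP socket with its IP TOS byte carrying the EF DSCP.
    The DSCP occupies the upper six bits of the TOS byte, hence << 2."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
    return sock
```

Marking at the socket is necessary but not sufficient: intermediate routers and the EPC must be configured to trust and act on the marking, which is a network-engineering conversation, not an application change.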
Note prompts
+ Are our AI data paths tagged with the QCI values that trigger PP on Band 14?
+ Have we reviewed PP eligibility with FirstNet Authority for each connected endpoint?
+ What is our fallback behavior for endpoints that lose PP status mid-incident?
Required
Band 14 carries Priority and Preemption (PP) for Primary Users — the model's data path must honor and exploit this.
Single choice
Primary User with Priority + Preemption (P&P) on Band 14
Extended Primary User (verified public safety agency)
Secondary / First Priority (state-authorized non-PU)
Commercial AT&T with FirstNet MegaRange only (no PP)
Mixed — some endpoints PP, others commercial
required, edge
Select Mission Critical Services (MCX) scope
Why This Matters
3GPP MCX is the standardized carrier of mission-critical voice, data, and video on FirstNet and equivalent public-safety LTE/5G deployments, and AI models that ingest MCPTT transcripts or MCVideo frames must honor the MCX signaling model or they will break under load. Vendors including Motorola Solutions (APX), L3Harris, Sonim, and ESChat each implement MCX with documented conformance to the relevant 3GPP technical specifications — picking a model integration point that does not match your radio ecosystem forces brittle protocol translation.
Note prompts — click to add
+ Which MCX services are actually deployed on our agency radios today vs. on the roadmap?+ Does our AI consume MCX streams directly or does it sit behind a media gateway?+ Are we integrated with an MCX-conformant application server (Motorola, L3Harris, ESChat) or rolling our own?
Required
Map required 3GPP Mission Critical Services — MCPTT, MCData, MCVideo — each has distinct 3GPP TS and integration points.
Select all that apply
MCPTT — Mission Critical Push-to-Talk (3GPP TS 23.379)
MCData — Mission Critical Data (3GPP TS 23.282)
MCVideo — Mission Critical Video (3GPP TS 23.281)
ProSe / Direct Mode (off-network device-to-device)
Group communications (temporary / dynamic groups)
Location services (LCS) for first-responder tracking
None — AI overlay only, no MCX integration
Define air-gap and disconnected operation requirements
Why This Matters
The 96-hour autonomous requirement in AT&T's FirstNet Authority-approved architecture is not aspirational — it is the observed backhaul restoration window across major incidents including hurricane response, wildfire, and grid-outage events. A deployment that assumes cloud reachability will silently fail at the moment it is most needed, and recovery from a cloud-dependent architecture under incident conditions is effectively impossible. Design the air-gap envelope first, then layer cloud-augmented features on top where safe.
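The degrade-don't-fail posture can be sketched as a cheap backhaul probe that gates cloud-augmented features while local inference stays unconditional. The probe host, port, and feature names below are placeholders, not product configuration:

```python
import socket

def backhaul_reachable(host="probe.example.invalid", port=443, timeout=1.0):
    """Cheap reachability probe: attempt a TCP connect to a known
    cloud-side endpoint. Any OS-level failure (DNS, route, timeout)
    counts as backhaul-severed."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def active_features():
    """Local inference never depends on the probe; cloud-only features
    are layered on top only while backhaul is up."""
    features = {"local_inference", "local_transcription"}  # always on
    if backhaul_reachable():
        features |= {"model_refresh", "fleet_telemetry"}   # cloud-only
    return features
```

The note prompt about routine air-gap exercises applies directly here: this gate is only trustworthy if the severed-backhaul path is drilled, since an untested fallback is where cloud-dependent assumptions hide.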
Note prompts
+ What is our agency's documented longest backhaul outage in the last five years?
+ Which model features require cloud reachability, and can they gracefully degrade?
+ Do we exercise air-gap mode on a routine cadence, or only discover failures during real incidents?
Required
Specify how long the AI must operate with backhaul severed and what capabilities degrade vs. remain fully functional.
Single choice
< 4 hours autonomous (backhaul outage tolerant)
24 hours autonomous (standard incident window)
72 hours autonomous (multi-day disaster)
96+ hours autonomous (FirstNet-compliant spec)
Indefinite — fully air-gapped permanent deployment
required, edge, trinidy
Trinidy: FirstNet Authority-approved architecture requires a 96-hour minimum autonomous operation window for edge AI nodes. Trinidy runs the full inference stack locally with pre-cached models — no cloud-hosted control plane dependency, so a severed backhaul does not degrade inference.
Map FedRAMP and authorization boundary
Why This Matters
FedRAMP High is the baseline for any cloud-hosted service touching federal law enforcement data, and it aligns to the NIST SP 800-53 High control baseline. Public-sector procurement rarely tolerates a "we'll inherit the cloud vendor's FedRAMP" story when the actual inference runtime, model weights, and audit log are in your product. Clarifying the authorization boundary early decides whether cloud-hosted components can exist at all.
Note prompts
+ Is the inference runtime inside the FedRAMP boundary or explicitly excluded?
+ Which NIST SP 800-53 High controls are inherited vs. customer-implemented?
+ What is our authorization package status and next continuous-monitoring milestone?
Required
Confirm the FedRAMP or equivalent authorization posture required by participating federal, state, and local agencies.
Single choice
FedRAMP High (federal law enforcement / DoD adjacent)
FedRAMP Moderate (most federal civilian agencies)
StateRAMP (state-level agencies, varies)
CJIS Security Policy only (no FedRAMP)
Agency-specific ATO (self-issued Authority to Operate)
No federal authorization required (municipal only)
Confirm CJIS Security Policy applicability and scope
Why This Matters
The current CJIS Security Policy (v5.9 series, with 2024 updates toward a 6.x track) treats AI systems that process CJI as in-scope for all 13 policy areas — including Advanced Authentication, physical protection, and media protection. Inference runtimes that log full input payloads can bring previously out-of-scope infrastructure into CJIS scope by accident, and remediation under a state CJIS audit is measured in months.
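One hedged mitigation for the logs-pull-you-into-scope risk is a redaction filter on the inference runtime's logger. The regexes below (SSN-shaped and plate-shaped strings) are illustrative only — real CJI scoping requires a data-element inventory, not pattern matching:

```python
import logging
import re

# Illustrative patterns for data shapes that commonly indicate CJI
# leaking into telemetry. Not an exhaustive or authoritative list.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN-REDACTED]"),
    (re.compile(r"\b[A-Z]{2,3}[- ]?\d{3,4}\b"), "[PLATE-REDACTED]"),
]

class CJIRedactionFilter(logging.Filter):
    """Rewrite each record's rendered message before it reaches any
    handler, so raw payloads never land in log storage."""

    def filter(self, record):
        msg = record.getMessage()
        for pattern, replacement in PATTERNS:
            msg = pattern.sub(replacement, msg)
        record.msg, record.args = msg, None
        return True
```

Attach it with `logger.addFilter(CJIRedactionFilter())` on every logger that can see model inputs or outputs. This reduces incidental exposure; it does not move a system that intentionally processes CJI out of CJIS scope.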
Note prompts
+ Have we inventoried every location where model inputs, outputs, and logs may land?
+ Is our inference runtime subject to CJIS Advanced Authentication for all operator access?
+ Who is our CJIS Systems Officer (CSO) of record, and have they reviewed the AI deployment?
Required
Define whether the AI touches Criminal Justice Information (CJI) and which CJIS controls apply.
Select all that apply
Model trains on CJI (Criminal Justice Information)
Model inference touches CJI in real time
Model outputs are used to make CJI-derived decisions
Logs / telemetry may incidentally contain CJI
CJI fully segregated from AI — no CJIS scope
Advanced Authentication (AA) required for model operators
Fingerprint-based CHRI handling in scope
Specify FIPS 140-2/140-3 cryptographic module requirement
Why This Matters
CJIS and NIST SP 800-53 High both require FIPS 140-validated cryptography for controlled information, and NIST is actively migrating new validations from 140-2 to 140-3. Deploying a model runtime that relies on non-validated cryptographic libraries (e.g., a default OpenSSL build without a FIPS provider) can fail the audit even if every other control is in place. Module validation is also not transitive — a FIPS-validated container runtime does not automatically validate the model-serving library inside it.
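A minimal runtime probe for the "non-validated OpenSSL build" failure mode: on a FIPS-enforcing build, constructing a non-approved digest such as MD5 raises an error. This is a heuristic smoke test for deployment checks, not a substitute for confirming your modules against the NIST CMVP validated list:

```python
import hashlib

def fips_mode_likely() -> bool:
    """Heuristic probe: FIPS-enforcing OpenSSL builds refuse to
    construct non-approved digests, so an MD5 attempt raising
    ValueError suggests FIPS mode is active. Exact behavior varies
    by OpenSSL build and Python linkage — treat as a smoke test."""
    try:
        hashlib.md5(b"probe")
        return False
    except ValueError:
        return True
```

A deployment gate might refuse to start the inference runtime when `fips_mode_likely()` is `False` on a node whose profile requires FIPS-validated cryptography, then surface the result into the continuous-monitoring evidence trail.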
Note prompts
+ Are the cryptographic libraries in our inference stack on the NIST CMVP validated list?
+ Do we have a plan to migrate from 140-2 to 140-3 as 140-2 validations sunset?
+ Who owns FIPS module lifecycle across the model, runtime, and key-management layers?
Required
Identify where FIPS-validated cryptography is required — at rest, in transit, and for key custody.
Select all that apply
FIPS 140-2 validated modules for data at rest
FIPS 140-3 validated modules (preferred for new deployments)
FIPS 140-2/3 for TLS and VPN in transit
Hardware Security Module (HSM) for key custody
Self-encrypting drives (SED) with FIPS-validated firmware
Not currently required — commercial-grade acceptable
Define deployment topology for inference capacity
Why This Matters
Inference topology is the single decision that determines whether the system is available during the events it was built for. Tower-hosted inference survives backhaul cuts and commercial-network congestion because it shares a failure domain with the RAN it serves. Cloud-hosted inference, even in FedRAMP-authorized government cloud, is on the far side of a link that is the first thing to degrade under disaster conditions — which is why DCOLT deployable units and tower edge keep appearing in incident after-action reports as the components that stayed up.
Note prompts
+ Is our inference runtime inside the tower failure domain or behind backhaul?
+ For multi-agency deployments, where does the inference physically live relative to each agency?
+ Have we validated topology by cutting backhaul in a drill and measuring what the AI can still do?
Required
Select the physical and logical placement of the inference runtime relative to the tower and the agency.
Single choice
Tower-hosted edge (co-located with FirstNet RAN)
Agency EOC / dispatch center on-premises
Vehicle / deployable (DCOLT-class mobile cell site)
Regional data center (state fusion center)
FedRAMP-authorized government cloud (AWS GovCloud / Azure Gov)
Hybrid: tower edge + regional aggregation
required, edge, trinidy
Trinidy: Tower-hosted inference with Trinidy placed inside the FirstNet-eligible cell site gives the RAN and the AI model the same failure domain — when the tower stays up, the model stays up. Commercial cloud placed on the far side of backhaul is not substitutable.
Confirm hardware ruggedization and mission-assurance requirements
Required
Specify the environmental and availability requirements for the inference hardware platform.
Select all that apply
Extended temperature range (-40°C to +70°C)
MIL-STD-810 shock, vibration, humidity
NEBS Level 3 (telecom central office)
Redundant DC power (-48V) with battery backup
Dual AC + generator with automatic transfer
IP65+ enclosure for outdoor cabinet deployment
Physical tamper detection and response
Seismic Zone 4 anchoring
required, edge