Phase 1 of 6
Scoping & Biometric Mission
Define the biometric mission, classification posture, privacy envelope, and coalition sharing boundaries that will govern every subsequent architectural decision.
Mission Use Cases & Operational Surface
Identify biometric mission use cases in scope
Why This Matters
Use cases differ by an order of magnitude in latency envelope, acceptable false-match rate, and legal authority, and cannot share a single matching pipeline without compromise. Checkpoint screening and forensic latent matching both query the same enrolled database but have entirely different operator-in-the-loop expectations — one has two seconds before queue congestion becomes a vulnerability, the other has hours. The most common architectural mistake is forcing a one-size-fits-all checkpoint flow onto forensic exploitation and inheriting the wrong thresholds.
Note prompts
+ Which mission use cases share enough feature overlap to justify a shared pipeline vs. dedicated flows?
+ Have we inventoried every mission we support today plus what the combatant command is adding in the next 12 months?
+ Who owns the per-mission identification accuracy attribution so we can measure matcher ROI separately by flow?
Required
Confirm which operational mission sets the matching stack must serve.
Select all that apply
Tactical checkpoint / entry control point (ECP) identity resolution
Base access control (installation / facility entry)
Detainee enrollment and re-identification
HVI (high-value individual) targeting confirmation
Forensic exploitation (latent print / DOCEX from captured material)
Persistent entity tracking across ISR feeds
Partner-nation screening (vetting local forces / refugees)
Counter-intelligence / insider-threat identification
Coalition interoperability queries (Five Eyes / NATO)
required
Define end-to-end identification latency SLA
Why This Matters
The sub-second checkpoint window is not a soft target — it is enforced by physical queue dynamics, and a 5-second matcher creates the queue that becomes the security vulnerability. Every millisecond spent on uplink to an ABIS query is a millisecond unavailable for template extraction, fusion, and operator confirmation. Decisions made after the SLA is locked in have far less leverage than setting the SLA correctly the first time.
Note prompts
+ What is our current p99 end-to-end identification latency and where are the hot spots — extraction, 1:N search, or uplink?
+ What is our timeout fallback when the back-end ABIS is unreachable, and does it effectively turn matching off?
+ Have we stress-tested at 5× peak checkpoint volume to locate the latency cliff before an operation finds it?
Required
Select the P99 latency budget the matching stack must hold under tactical load.
Single choice
< 500ms (handheld ECP — aggressive)
< 1s (standard checkpoint / vehicle queue)
< 3s (base access control with operator confirmation)
< 30s (forensic / analyst-in-the-loop)
Tiered by mission (mixed SLA)
required edge trinidy
Trinidy: Cloud-routed matching alone consumes 30–150ms of network round-trip before a 1:N search begins — often the entire checkpoint window, and impossible in DDIL (denied, degraded, intermittent, limited) tactical bandwidth. Trinidy runs 1:N matching on-node with sub-second response against millions of enrolled identities, keeping p99 predictable even at a vehicle queue.
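The budget arithmetic above can be sketched directly: a minimal Python check of whether a stage-by-stage p99 decomposition fits inside the checkpoint window. The stage names and all figures besides the uplink round-trip range cited above are illustrative assumptions, not measured numbers.

```python
# Hypothetical sketch: decompose an end-to-end p99 SLA into stage budgets
# and flag when cloud round-trip alone pushes the pipeline past the window.
# Stage estimates below are illustrative, not measured figures.

SLA_MS = 1000  # "< 1s" standard checkpoint tier

def remaining_budget(sla_ms, stage_p99_ms):
    """SLA minus the sum of per-stage p99 estimates; negative means the SLA is busted."""
    return sla_ms - sum(stage_p99_ms.values())

cloud_routed = {"uplink_rtt": 150, "extraction": 250, "search_1n": 500, "fusion_confirm": 200}
on_node = {"uplink_rtt": 0, "extraction": 250, "search_1n": 500, "fusion_confirm": 200}

assert remaining_budget(SLA_MS, cloud_routed) < 0   # cloud RTT busts the 1 s window
assert remaining_budget(SLA_MS, on_node) >= 0       # on-node matching leaves headroom
```

The same arithmetic is how the "Tiered by mission" option gets validated: each tier gets its own SLA constant and its own stage table.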
Establish acceptable false-match rate by mission tier
Why This Matters
NIST FRVT 2023 consistently reports top-performer accuracy rates exceeding 99.7% on mugshot-quality imagery at FMR 1e-6, but operational imagery degrades this substantially — pose, lighting, partial occlusion, and standoff distance each reduce measured accuracy. A single uniform threshold almost always misprices the cost curve in at least one mission, and the cost of a false match at HVI-targeting tier is categorically different from the cost at base-access tier.
Note prompts
+ What is our measured FMR on operational imagery vs. NIST FRVT published figures?
+ Have we quantified the operational cost of a false match per mission tier, including detention and legal review?
+ Are we tracking FMR separately for each mission or rolling it up into a single dashboard number?
Required
Define FMR tolerance across mission tiers — a checkpoint false match has a very different cost than a forensic false match.
Single choice
FMR 1e-6 (NIST FRVT mugshot-grade — HVI / targeting)
FMR 1e-5 (standard watchlist matching)
FMR 1e-4 (access control / screening)
FMR 1e-3 (lead generation — analyst confirms)
Not yet measured at the mission-tier level
required
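The tier options above imply a distinct operating threshold per mission. A rough sketch of how a per-tier threshold could be derived from an impostor-score sample follows; the function name, tier map, and sample data are illustrative, and note that a 1e-6 tier cannot even be resolved without on the order of a million impostor comparisons.

```python
# Illustrative sketch, not a calibrated system: pick a match-score threshold
# per mission tier so the measured FMR on an impostor sample stays at or
# below that tier's target. All names and data here are hypothetical.

def threshold_for_fmr(impostor_scores, target_fmr):
    """Lowest threshold whose measured FMR on this sample stays <= target_fmr."""
    scores = sorted(impostor_scores, reverse=True)
    k = int(target_fmr * len(scores))  # impostor acceptances the target tolerates
    if k >= len(scores):
        return min(scores)  # target is looser than the sample can resolve
    return scores[k] + 1e-9  # just above the (k+1)-th highest impostor score

TIER_FMR = {
    "hvi_targeting": 1e-6,     # needs ~1e6+ impostor pairs to measure at all
    "watchlist": 1e-5,
    "access_control": 1e-4,
    "lead_generation": 1e-3,
}

impostors = [i / 1000.0 for i in range(1000)]  # synthetic, uniformly spread scores
t = threshold_for_fmr(impostors, TIER_FMR["lead_generation"])
assert sum(s >= t for s in impostors) == 1  # exactly 1-in-1000 false matches pass
```

The sample-size comment is the practical point behind the "Not yet measured at the mission-tier level" option: tight tiers demand impostor sets large enough to observe the target rate.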
Define false non-match (FNMR) tolerance by mission
Required
Missed-match tolerance — a missed HVI match has vastly different consequences than a missed routine enrollment hit.
Single choice
< 1% FNMR at the operating FMR (high-recall targeting)
1% – 5% FNMR (standard checkpoint)
5% – 10% FNMR (precision-prioritized)
> 10% (conservative — analyst triage downstream)
Not currently measured
required
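FNMR at the operating point is measured directly from genuine (mated) comparison scores: the fraction rejected at the threshold chosen for the tier's FMR target. A minimal sketch, with illustrative scores:

```python
# Minimal sketch: FNMR at a given threshold is the fraction of genuine
# (mated) comparison scores falling below it. The sample scores below are
# illustrative; a real evaluation needs mission-representative imagery.

def fnmr_at_threshold(genuine_scores, threshold):
    """Fraction of genuine comparisons rejected at this operating point."""
    misses = sum(1 for s in genuine_scores if s < threshold)
    return misses / len(genuine_scores)

genuine = [0.91, 0.88, 0.95, 0.60, 0.97, 0.93, 0.89, 0.94, 0.90, 0.96]
assert fnmr_at_threshold(genuine, 0.85) == 0.1  # one miss in ten at this threshold
```

Raising the threshold to hit a tighter FMR tier pushes this number up, which is exactly the FMR/FNMR trade the two items above ask to be priced per mission.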
Map classification posture for the matching enclave
Why This Matters
E.O. 13526 governs classification of national security information and ICD 710 governs classification marking — both are binding on any biometric database that draws from intelligence sources. Routing classified enrolled templates through a commercial cloud inference endpoint is a classification spill at worst and a policy violation at best. Classification boundary is a first-order architectural decision, not an afterthought, and cross-domain solutions (CDS) must be scoped before any data plane design.
Note prompts
+ Have we formally classified the enrolled template database and every feature vector derived from it?
+ Where in the matching path does classified data exist vs. unclassified telemetry, and can we push the boundary earlier?
+ Is our matching runtime inside the enclave, and if so does it have a current ATO at the required classification?
Required
Confirm which classification levels the matching stack must operate at.
Select all that apply
UNCLASSIFIED (base access / administrative)
CUI / FOUO (controlled unclassified)
SECRET (standard DoD mission)
TOP SECRET (sensitive targeting)
TS / SCI (compartmented intelligence)
SAP (special access program)
Multi-level — matching must span classifications
required trinidy
Trinidy: Enrolled biometric databases are often classified up to TOP SECRET with SAP/SCI compartments. Trinidy deploys inside the existing classified enclave — no data crosses the cross-domain boundary to a commercial cloud for matching or audit.
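One way to make the classification boundary a data-plane invariant rather than a policy memo is a routing guard that refuses to send data to any endpoint accredited below the data's level. A simplified sketch; the linear level ordering is a deliberate abbreviation (real compartment and SAP handling is not a single ladder) and all names are assumptions:

```python
# Hedged sketch of a data-plane classification guard: never route a template
# or query to an endpoint accredited below the data's classification.
# The linear ranking below is illustrative; compartments (SCI) and SAPs
# require need-to-know checks beyond a simple level comparison.

LEVELS = ["UNCLASSIFIED", "CUI", "SECRET", "TOP SECRET", "TS/SCI"]
RANK = {lvl: i for i, lvl in enumerate(LEVELS)}

def can_route(data_level, endpoint_level):
    """True only when the endpoint is accredited at or above the data's level."""
    return RANK[endpoint_level] >= RANK[data_level]

assert can_route("SECRET", "TS/SCI")              # routing up is permitted
assert not can_route("TOP SECRET", "SECRET")      # routing down would be a spill
```

A commercial cloud inference endpoint would rank at most CUI in this scheme, which is why routing classified templates through it fails the guard by construction.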
Confirm Privacy Act / SORN applicability
Why This Matters
The Privacy Act of 1974 restricts federal agency collection, retention, and sharing of records on US persons, and DoD 5400.11 operationalizes this inside the Department. A biometric system that may incidentally encounter US persons — refugee processing, domestic installation access, joint operations with US law-enforcement — must have a published SORN or a documented exclusion, and the architecture must enforce the filter in the data plane rather than relying on operator discipline. Privacy Act violations carry criminal penalties and are a reliable source of Inspector General findings.
Note prompts
+ Have we published a SORN covering every population this system enrolls or queries?
+ Does our architecture enforce the US-person filter technically, or only through operator procedure?
+ Who owns Privacy Act compliance review for this system and when was the last re-attestation?
Required
Map biometric collection to Privacy Act and System of Records Notice constraints when US persons may be encountered.
Select all that apply
Privacy Act of 1974 applies (US person collection possible)
Published SORN required for this system
DoD 5400.11 DoD Privacy Program controls in place
US persons out of scope — foreign-only mission
Mixed population — operational filter required
Incidental collection policy documented
required
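Enforcing the US-person filter technically rather than procedurally might look like the following in the enrollment path: flagged records never reach the matchable gallery and are instead quarantined for review. A hedged sketch; the `us_person_indicator` field and the quarantine flow are hypothetical.

```python
# Illustrative data-plane filter, not a compliance implementation: records
# flagged as potential US persons are quarantined before they reach the
# matching gallery, so the Privacy Act boundary does not depend on operator
# discipline. The record schema here is hypothetical.

def partition_enrollment(records):
    """Split incoming records into matchable vs. quarantined-for-review."""
    matchable, quarantined = [], []
    for rec in records:
        (quarantined if rec.get("us_person_indicator") else matchable).append(rec)
    return matchable, quarantined

batch = [
    {"id": "A1", "us_person_indicator": False},
    {"id": "A2", "us_person_indicator": True},   # incidental US-person encounter
]
ok, held = partition_enrollment(batch)
assert [r["id"] for r in ok] == ["A1"]
assert [r["id"] for r in held] == ["A2"]
```

The design point is that the "Mixed population — operational filter required" option above becomes auditable: the quarantine queue is the evidence the filter ran.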
Confirm partner-nation sharing authorities
Why This Matters
CJCSI 5221.01D governs the delegation of authority to commanders to disclose classified military information to foreign governments and international organizations, and it is the authority document most often cited in coalition biometric sharing. Bilateral biometric sharing agreements — including Five Eyes — specify exactly which modalities, which metadata, and which retention periods apply, and a matching pipeline that aggregates across partner sources must enforce these distinctions at the data plane. Getting this wrong creates diplomatic exposure alongside operator risk.
Note prompts
+ Have we inventoried every partner-nation feed and the authority under which each record may be queried and shared?
+ Does our matching pipeline tag records with originating-partner handling caveats at enrollment time?
+ Who in the J2/J3 chain signs off on new partner data flows before they reach production?
Required
Map which enrolled and matched data may be shared with which coalition partners under which authorities.
Select all that apply
Five Eyes (AUS / CAN / NZL / GBR) bilateral biometric sharing
NATO partner sharing under existing MOUs
Host-nation sharing under bilateral SOFA
CJCSI 5221.01D disclosure authority delegated to commander
No partner-nation sharing permitted
Shared up to REL TO FVEY classification marking
Mixed — per-partner data handling rules
required
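Tagging records with originating-partner handling caveats at enrollment, then filtering hits at query time, can be sketched minimally. The caveat semantics below are a simplified illustration, not actual REL TO marking logic, and the record schema is hypothetical.

```python
# Sketch under stated assumptions: each enrolled record carries the
# originating partner's handling caveats, and the query path returns a hit
# only when the requester's markings satisfy every caveat on the record.
# Caveat strings echo the options above; the gallery schema is hypothetical.

def releasable(record_caveats, requester_markings):
    """A record is returned only if all of its caveats are satisfied by the requester."""
    return record_caveats.issubset(requester_markings)

GALLERY = [
    {"id": "P1", "caveats": {"REL FVEY"}},
    {"id": "P2", "caveats": {"NOFORN"}},
]

def query_hits(requester_markings):
    return [r["id"] for r in GALLERY if releasable(r["caveats"], requester_markings)]

assert query_hits({"REL FVEY"}) == ["P1"]             # FVEY partner: FVEY-releasable only
assert query_hits({"REL FVEY", "NOFORN"}) == ["P1", "P2"]
```

Enforcing this at the data plane is what the "Mixed — per-partner data handling rules" option implies: the filter runs on every query, not in a sharing-agreement binder.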
Specify deployment topology for the matching plane
Required
Select the physical / logical deployment target for the matching stack.
Single choice
Tactical edge (handheld / forward operating base)
Theater-level sovereign data center
CONUS classified enclave (SIPR / JWICS)
DoD IL5 / IL6 cloud (e.g., C2S, JWCC)
Hybrid: tactical-edge matching + CONUS reference database
required edge trinidy
Confirm EU AI Act exposure for coalition deployments
Why This Matters
EU AI Act Annex III classifies remote biometric identification as high-risk, with specific obligations on fundamental-rights impact assessment, logging, and human oversight. The Act does not apply to military activities per se, but joint-use infrastructure, NATO interoperability scenarios, and non-combat coalition operations (humanitarian assistance, peacekeeping) can all draw some or all of a biometric stack into scope. The legal gray area is best resolved with coalition legal counsel before deployment, not after.
Note prompts
+ Have we mapped every EU-partner deployment and asked whether Annex III obligations attach?
+ Does our logging and human-oversight architecture meet the Act's requirements in case we need to demonstrate compliance?
+ Who in coalition legal owns the AI Act applicability determination for new partner deployments?
Recommended
Remote biometric identification is Annex III high-risk under the EU AI Act when deployed with or inside EU partner infrastructure.
recommended
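If Annex III obligations do attach, the logging and human-oversight prompt above implies a per-match audit record capturing the operator's decision. An illustrative schema, not a legal template; the field names and decision vocabulary are assumptions.

```python
# Hypothetical sketch of a human-oversight audit record: every remote
# identification match is logged with the operator decision so compliance
# can be demonstrated after the fact. Schema is illustrative only.
import datetime
import json

def audit_record(query_id, match_score, operator_id, decision):
    """Serialize one match event with its human-in-the-loop outcome."""
    return json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "query_id": query_id,
        "match_score": match_score,
        "operator": operator_id,   # identity of the confirming human
        "decision": decision,      # e.g. "confirm" / "reject" / "escalate"
    })

entry = json.loads(audit_record("q-42", 0.97, "op-7", "confirm"))
assert entry["decision"] == "confirm" and entry["operator"] == "op-7"
```

Append-only storage and retention aligned to the governing agreement would sit on top of this; the sketch only shows what each record needs to carry.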