Phase 1 of 6
Scoping & Latency Constraints
Define the payment rails, latency budget, decline tolerance, and PCI scope that will govern every subsequent architectural decision.
Payment Rails & Authorization Surface
Identify payment rails in scope for authorization scoring
Why This Matters
Rails differ by an order of magnitude in latency envelope and fraud profile, and cannot share a single scoring model without compromise. Card-not-present and FedNow both carry irrevocable loss risk but have entirely different feature surfaces, while ACH has a multi-day claw-back window that shifts the entire cost function. The most common architectural mistake is stretching a CNP fraud model to cover FedNow after launch; the training data simply does not transfer.
Note prompts
+ Which rails share enough feature overlap to justify a shared model vs. dedicated sub-models?
+ Have we inventoried every rail we authorize on today plus what product is adding in the next 12 months?
+ Who owns the rail-by-rail fraud loss attribution so we can measure per-rail model ROI?
Required
Confirm which rails your scoring model must decision in real time.
Select all that apply
Card-present (chip / contactless / magstripe)
Card-not-present / e-commerce (PAN + EMV 3DS2)
Network tokenized CNP (Visa / Mastercard tokens)
FedNow instant payments (US — 20 second SLA)
RTP (The Clearing House — sub-second)
SEPA Instant Credit Transfer (10 second SLA)
UK Faster Payments (FPS)
ACH (same-day and standard)
Wire (FedWire / CHIPS — ISO 20022)
Pay-by-bank / open banking (PSD2)
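The rail-by-rail differences above can be captured in a simple routing sketch. Everything below is an illustrative assumption, not a vendor API: the registry names, the latency figures (taken from the SLAs listed in the options), and the routing heuristic.

```python
# Illustrative rail registry: latency envelopes span orders of magnitude,
# and irrevocable instant rails should not share CNP training data.
# All names and numbers are assumptions for illustration.
RAIL_PROFILES = {
    "card_present":     {"latency_budget_ms": 100,    "irrevocable": False},
    "card_not_present": {"latency_budget_ms": 100,    "irrevocable": False},
    "fednow":           {"latency_budget_ms": 20_000, "irrevocable": True},
    "rtp":              {"latency_budget_ms": 900,    "irrevocable": True},
}

def route_model(rail: str) -> str:
    """Card rails share one ensemble; every other rail gets a dedicated
    sub-model, since CNP training data does not transfer to instant rails."""
    if rail.startswith("card"):
        return "card_ensemble"
    return f"{rail}_model"
```

The point of the sketch is the shape of the decision, not the thresholds: which rails share a model is a question the inventory prompts above should answer.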
Define end-to-end authorization latency SLA
Why This Matters
The 100ms card authorization window is not a soft target — it is enforced by issuer and scheme timeouts, and breaches cascade to rules-based fallback decisioning with no AI signal at all. Every millisecond spent on network egress is a millisecond unavailable for model inference, feature retrieval, and ensemble arbitration. Infrastructure decisions made after the SLA is set have 10× less leverage than decisions that set the SLA correctly the first time.
Note prompts
+ What is our current p99 authorization latency and where are the hot spots — feature retrieval, model inference, or network?
+ What is our timeout fallback behavior, and does it effectively turn our ensemble off?
+ Have we stress-tested at 5× peak volume to locate the latency cliff before a real event finds it?
Required
Select the P99 latency budget your scoring ensemble must hold under peak load.
Single choice
< 50ms (issuer-side authorization — aggressive)
< 100ms (standard card authorization window)
< 500ms (instant payments — FedNow / RTP / SEPA Inst)
< 2s (e-commerce checkout with 3DS challenge)
Tiered by rail (mixed SLA)
Trinidy: Cloud-routed inference alone consumes 30–150ms of network round-trip before a score is computed — often the entire authorization window. Trinidy runs the ensemble on-node with sub-millisecond inference, keeping p99 predictable even under burst conditions.
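A latency budget only holds if the components sum inside the window, so it helps to write the decomposition down. The component costs below are placeholder assumptions; only the 100ms window comes from the section above. Measure your own p99 per component.

```python
# Hypothetical p99 budget for the 100ms card authorization window.
# Every millisecond of network egress is unavailable for inference,
# feature retrieval, and ensemble arbitration. Figures are illustrative.
BUDGET_MS = 100
components = {
    "network_ingress_egress": 15,
    "feature_retrieval":      30,
    "model_inference":        25,
    "ensemble_arbitration":   10,
    "decision_logging":        5,
}
spent = sum(components.values())
headroom = BUDGET_MS - spent   # burst headroom left in this sketch
assert headroom >= 0, "budget blown: timeout falls back to rules-only decisioning"
```

Zero or negative headroom means the fallback path — rules with no AI signal — becomes the effective model under load, which is exactly the failure mode the stress-test prompt above is probing for.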
Establish acceptable false decline rate by segment
Why This Matters
Industry research consistently finds that false declines cost merchants and issuers roughly 13× more than fraud itself once lifetime value, customer acquisition cost, and cross-sell impact are properly accounted for. High-income customers are 2× more likely to be false-declined because their patterns (travel, high-ticket, new merchant, new device) mimic fraud signals — creating outsized churn risk in the most valuable segment. A single uniform threshold almost always misprices the decline curve in at least one segment.
Note prompts
+ What is our measured approval rate delta between returning customers and first-time buyers on a new device?
+ Have we quantified the LTV of false-declined customers who never return (typically 33–40% do not)?
+ Are we tracking false declines differently for our top-decile customers vs. the general book?
Required
Define your false decline tolerance across customer segments and product lines.
Single choice
< 0.5% false decline rate (premium / low-risk segments)
0.5% – 2% (standard retail baseline)
2% – 5% (current industry median)
> 5% (aggressive fraud posture, high churn risk)
Not yet measured at the segment level
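The asymmetry behind the 13× figure above is easiest to see as an expected-cost comparison for a single borderline transaction. All inputs below are invented for illustration; the shape of the comparison is the point.

```python
# Illustrative expected-cost comparison: declining a legitimate customer
# can cost far more than the fraud it prevents once lifetime value and
# churn are priced in. All figures are assumptions, not measured data.
def expected_cost(p_fraud: float, txn_value: float, ltv_at_risk: float) -> dict:
    """Expected dollar cost of each action on one transaction."""
    return {
        "approve": p_fraud * txn_value,          # fraud loss if it was fraud
        "decline": (1 - p_fraud) * ltv_at_risk,  # churned LTV if it was legit
    }

# A high-LTV traveler whose pattern mildly mimics fraud signals:
costs = expected_cost(p_fraud=0.02, txn_value=500, ltv_at_risk=2_000)
```

With these numbers, approving carries ~$10 of expected fraud loss while declining carries ~$1,960 of expected churned value, which is why a single uniform threshold misprices the decline curve for the high-LTV segment.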
Define decline budget / revenue-at-risk ceiling
Why This Matters
Authorization rate improvement is the highest-ROI metric in payment infrastructure — every 1% lift at $10B in volume recovers $100M in previously lost transactions, and at Visa scale a 1% uplift is worth $150B industry-wide. Framing the scoring program as a revenue protection function with a measurable budget changes how product, risk, and engineering prioritize latency and accuracy tradeoffs. Without a dollar-denominated ceiling, model teams tend to optimize recall in ways the business cannot afford.
Note prompts
+ What was our total declined authorization volume last year and what percentage do we believe were false?
+ Who owns the P&L line for false-decline-driven churn and lost interchange?
+ Is authorization rate an executive-tracked KPI alongside fraud loss rate?
Required
Quantify the dollar volume of declines the model is permitted to contribute annually.
Single choice
< $10M annual decline volume tolerated
$10M – $100M
$100M – $1B
> $1B (large issuer / acquirer scale)
Not currently budgeted at the model level
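The revenue arithmetic above, made explicit. The 1% and $10B figures come from the section text; the budget-check helper and its inputs are assumptions for illustration.

```python
# Each 1% of authorization-rate lift at $10B annual volume recovers $100M
# in previously lost transactions (figures from the section text).
def recovered_volume(annual_volume_usd: float, auth_rate_lift: float) -> float:
    return annual_volume_usd * auth_rate_lift

# Hypothetical ceiling check: is the model's estimated false-decline
# contribution under the dollar-denominated budget selected above?
def within_decline_budget(declined_usd: float,
                          est_false_rate: float,
                          ceiling_usd: float) -> bool:
    return declined_usd * est_false_rate <= ceiling_usd
```

Framing the check in dollars rather than recall is what lets product, risk, and engineering argue about the same number.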
Map the PCI DSS v4.0 scope boundary
Why This Matters
PCI DSS v4.0 became mandatory on 31 March 2024, with 51 new and evolved controls that take full effect in March 2025 — including explicit requirements around cryptographic agility, targeted risk analysis, and stronger authentication for all non-console access to the CDE. Routing cardholder data through a public cloud inference endpoint places that endpoint and its operator in PCI scope, and is the fastest way to expand your audit surface from a handful of servers to an entire SaaS estate. Scope reduction is a first-order architectural decision, not an afterthought.
Note prompts
+ Have we formally re-attested our PCI scope against the v4.0 control set rather than assuming v3.2.1 mapping still holds?
+ Where in the authorization path does raw PAN exist vs. network token, and can we push the boundary earlier?
+ Is our ML inference runtime in the CDE, and if so do we have the v4.0 controls for non-console access in place?
Required
Confirm which components of the authorization path handle cardholder data under PCI scope.
Select all that apply
Model inference runtime handles PAN (in scope)
Feature store stores tokenized PAN only (reduced scope)
Feature store stores raw PAN (full scope)
Device fingerprint service touches PAN (in scope)
Training pipeline uses hashed / tokenized PAN (reduced scope)
Logs or telemetry may contain PAN fragments (in scope)
Cardholder data environment fully air-gapped from training
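One concrete way to test the "logs or telemetry may contain PAN fragments" box before attesting scope is to scan log samples for PAN-like digit runs with a Luhn check. This is a minimal sketch, not a compliant CDE scan: a real scan also needs BIN-range matching, separator handling (spaces, dashes), and non-ASCII encodings.

```python
# Hedged sketch: flag PAN-like sequences in telemetry using the standard
# Luhn checksum. A hit means a log line needs review, not proof of a PAN.
import re

def luhn_ok(digits: str) -> bool:
    """Standard Luhn checksum: double every second digit from the right."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def pan_candidates(log_line: str) -> list[str]:
    """Return 13-19 digit runs in a log line that pass the Luhn check."""
    return [m for m in re.findall(r"\b\d{13,19}\b", log_line) if luhn_ok(m)]
```

Running a scan like this over a day of telemetry is a cheap way to test whether the "in scope" box for logs should be checked before the QSA does it for you.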
Confirm data residency and cross-border constraints
Required
Map cardholder and transaction data to jurisdictional constraints before architecture is finalized.
Select all that apply
PCI DSS CDE must remain on-premises
EU GDPR — data must remain in EU
UK GDPR — UK residency required
India RBI data localization
Brazil LGPD / Central Bank rules
China PIPL / CBIRC
Regional issuer rail residency (domestic-only processors)
Cross-border permitted under SCCs / approved vendors
Trinidy: PCI DSS v4.0, EU GDPR, and UK GDPR all create tension with cloud-hosted inference. Trinidy keeps model scoring, feature computation, and audit logging entirely within the institution's own perimeter — no cross-border data flow for any authorization decision.
Define chargeback and liability-shift exposure
Why This Matters
EMV 3DS 2.2+ (mandatory since September 2024 on Visa and Mastercard) shifts chargeback liability to the issuer when the transaction is authenticated — but at the cost of a ~30% abandonment rate on challenged flows, which CMSPI estimates costs European merchants €86B per year. The authorization model's output is implicitly also a 3DS2 challenge decision, and optimizing for authorization rate without accounting for challenge-driven abandonment mis-prices the tradeoff. Chargeback outcomes also arrive with a 30–120 day lag, which must feed back into training.
Note prompts
+ Does our scoring model's output directly gate 3DS2 challenge, or is that a separate decision?
+ What is our current frictionless-to-challenge ratio and our measured abandonment rate on challenges?
+ How long does chargeback outcome data take to reach our training pipeline as a label?
Recommended
Specify the model's accountability for chargeback outcomes and 3DS2 liability shift logic.
Single choice
Issuer bears chargeback — approve aggressively with EMV 3DS2 liability shift
Acquirer / merchant bears — precision-focused authorization
Shared exposure — model outputs also drive 3DS2 challenge decision
Not yet modeled — chargeback outcomes not fed back to scoring
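The challenge-vs-frictionless tradeoff above can be framed as an expected-value comparison from the merchant's side. The ~30% abandonment rate comes from the section text; everything else below (inputs, margin figure) is an illustrative assumption.

```python
# Illustrative expected-value check for the 3DS2 challenge decision:
# a challenge shifts fraud liability to the issuer but loses roughly 30%
# of challenged customers to abandonment. All inputs are assumptions.
ABANDONMENT_RATE = 0.30

def challenge_ev(p_fraud: float, txn_value: float, margin: float) -> dict:
    """Expected merchant value of challenging vs. going frictionless."""
    # Frictionless: merchant keeps the sale but eats the fraud loss.
    frictionless = (1 - p_fraud) * margin - p_fraud * txn_value
    # Challenged: fraud liability shifts, but abandonment forfeits margin.
    challenged = (1 - ABANDONMENT_RATE) * (1 - p_fraud) * margin
    return {"frictionless": frictionless, "challenge": challenged}

ev = challenge_ev(p_fraud=0.01, txn_value=100, margin=10)
```

At low fraud probability the abandonment cost dominates the liability shift, which is why optimizing authorization rate without modeling challenge-driven abandonment misprices the tradeoff.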
Specify deployment topology for the inference plane
Required
Select the physical/logical deployment target for the scoring ensemble.
Single choice
On-premises issuer data center (FPGA / GPU)
Co-located edge (acquirer gateway)
Private cloud / VPC in-region
Public cloud with PCI-attested inference endpoint
Hybrid: on-prem inference + cloud training
Trinidy: For issuer-side sub-100ms authorization and PCI-scope residency, cloud inference is physically and regulatorily incompatible. Trinidy is the on-premises inference substrate — FPGA and GPU options on the same deployment fabric.