Phase 1 of 6
Scoping & Risk Boundaries
Define products in scope, decision latency, acceptable adverse-selection rate, and the regulatory perimeter that will govern every pricing and underwriting decision.
Products & Decision Surface
Identify products and rails in scope for dynamic pricing
Why This Matters
Each product sits on a different regulatory footing — a decision flow that conflates them will either over-comply (slow, expensive) or under-comply (Reg Z / TILA exposure on BNPL after the CFPB May 2024 interpretive rule, ECOA adverse-action on every credit denial, HMDA reporting on mortgage). The Upstart benchmark — 43% more approvals at equivalent loss rates, 91% fully automated — was achieved on a narrowly-scoped personal loan product, not a mixed book. Scoping precisely is the difference between a production-quality model and a committee-scoped one.
Note prompts
+ Which products share enough feature overlap to justify a shared model vs. dedicated heads? (See the sketch at the end of this card.)
+ Has legal confirmed Reg Z / TILA applicability for every product on the list, particularly BNPL post-May 2024?
+ Do any products (e.g. small business) fall outside Reg B's individual-consumer scope and need a separate governance track?
Required
Confirm which credit / insurance / deposit products will route through the real-time pricing or underwriting model.
Select all that apply
Auto loans (prime / near-prime / subprime)
Auto refinance
Mortgage (purchase / refi / HELOC)
Unsecured personal loans
Credit cards (originations + line management)
Buy Now Pay Later (BNPL)
Small business lending
Student / education lending
P&C insurance underwriting (auto / home)
Deposit pricing (promotional CD / savings)
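To make the shared-model-vs-dedicated-heads question empirical rather than rhetorical, here is a minimal sketch of the comparison, assuming scikit-learn-style gradient boosting and a pandas frame with hypothetical `product` and `default_12m` columns plus a shared feature list. It illustrates the test, not a recommended modeling setup.

```python
# Sketch: one pooled model (product identity as a feature) vs. dedicated
# per-product models. Column names are hypothetical assumptions.
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def pooled_vs_dedicated(df: pd.DataFrame, features: list[str], target: str = "default_12m"):
    train, test = train_test_split(df, test_size=0.3, stratify=df["product"], random_state=0)

    # Pooled: a single model with the product one-hot encoded into the features.
    X_tr = pd.get_dummies(train[features + ["product"]], columns=["product"])
    X_te = (pd.get_dummies(test[features + ["product"]], columns=["product"])
              .reindex(columns=X_tr.columns, fill_value=0))
    pooled = HistGradientBoostingClassifier().fit(X_tr, train[target])
    print(f"pooled AUC: {roc_auc_score(test[target], pooled.predict_proba(X_te)[:, 1]):.3f}")

    # Dedicated: one model per product, trained only on that product's rows.
    # (AUC per product is undefined if a held-out slice has a single class.)
    for prod, grp in train.groupby("product"):
        model = HistGradientBoostingClassifier().fit(grp[features], grp[target])
        held = test[test["product"] == prod]
        auc = roc_auc_score(held[target], model.predict_proba(held[features])[:, 1])
        print(f"dedicated AUC [{prod}]: {auc:.3f}")
```

If the pooled model matches dedicated models per product, a shared model with product features may suffice; if thin-volume products drag, that argues for dedicated heads and separate governance.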
Define end-to-end decision latency SLA
Why This Matters
BNPL specifically must return a decision inside the merchant checkout page-load window (~2 seconds end-to-end), which leaves under 500ms for the scoring + explanation round trip after network and rendering are accounted for. Every millisecond spent on a cross-region feature store or cloud inference hop is a millisecond that cannot be spent on SHAP computation (30–50% compute overhead with KernelExplainer) or GenAI explanation generation. SLA decisions made at scoping time have 10× the leverage of optimization work done after architecture is frozen.
Note prompts
+ What is our current p95 decision latency and where is the time being spent: bureau pull, feature retrieval, scoring, or explanation?
+ What is our fallback behavior when the latency SLA is breached: decline, approve, or degrade to a rules engine?
+ Have we measured latency at 5× peak load (end-of-month refinance surge, holiday BNPL)?
Required
Select the P95 / P99 latency budget that the scoring + explanation pipeline must hold under peak load.
Single choice
< 500ms (digital lending / BNPL checkout within page-load window)
< 1 second (insurance quote flows / refinance pre-qual)
< 3 seconds (mortgage prequalification)
< 30 seconds (traditional digital application)
Tiered by product (mixed SLA)
Tags: required · edge · trinidy
Trinidy: The fast-path scoring model can clear in <10ms, but the ECOA-compliant GenAI explanation layer adds 200–800ms, and cloud round-trips on either hop burn the end-to-end budget. Trinidy co-locates both stages on-node so the full scored + explained decision fits under 500ms with deterministic p99.
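A minimal sketch of the stage-level latency accounting the prompts above ask for, assuming a Python decision service. The stage names, placeholder bodies, and 500ms budget (taken from the BNPL figure on this card) are illustrative, not a measured profile.

```python
# Sketch: decompose a decision into stages and report p95/p99 per stage
# against an assumed 500ms end-to-end budget. Stage bodies are placeholders;
# wire in the real bureau/feature/scoring/explanation calls to measure.
import time
from collections import defaultdict
from contextlib import contextmanager
import numpy as np

BUDGET_MS = 500.0
timings: dict[str, list[float]] = defaultdict(list)

@contextmanager
def staged(name: str):
    t0 = time.perf_counter()
    try:
        yield
    finally:
        timings[name].append((time.perf_counter() - t0) * 1000.0)

def decide(applicant):
    with staged("bureau_pull"):
        ...  # bureau / open-banking retrieval
    with staged("features"):
        ...  # feature store lookup
    with staged("score"):
        ...  # gradient-boosted fast path (<10ms target)
    with staged("explain"):
        ...  # SHAP + GenAI explanation (the 200-800ms range cited above)

def report():
    total_p95 = 0.0
    for name, xs in timings.items():
        p95, p99 = np.percentile(xs, [95, 99])
        total_p95 += p95
        print(f"{name:12s} p95={p95:7.1f}ms  p99={p99:7.1f}ms")
    print(f"sum of stage p95s: {total_p95:.1f}ms vs budget {BUDGET_MS}ms")
```

Running this at 5× peak load, per the prompt above, is what turns the SLA choice into a verified budget rather than an aspiration.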
Establish acceptable adverse-selection / default-rate bounds
Why This Matters
This is the single most consequential business decision in the scoping phase because it determines the model's reward function and every calibration choice downstream. Upstart's publicly disclosed benchmark is 43% more approvals at equivalent loss rates or 53% lower defaults at constant approvals — you cannot get both at once, and a model that is not told which axis to optimize will split the difference in a way no stakeholder signed off on. Framing the program in portfolio-loss terms also makes the fair-lending conversation tractable (see phase 4).
Note prompts
+ Who owns the P&L line that absorbs loan losses, and have they signed the loss-rate envelope in writing?
+ Do we have a measured lift expectation against the current FICO-only or traditional-scorecard baseline?
+ What is our plan for the 26M credit-invisible + 19M unscorable Americans the CFPB has identified: in or out of scope?
Required
Define the loss rate envelope the pricing model is authorized to operate within across the approval band.
Single choice
Hold current loss rate while increasing approvals (Upstart pattern — 43% more approvals at same loss)
Hold current approval rate while cutting loss (53% default reduction pattern)
Blended — measured lift on both dimensions
Expand into credit-invisible population at elevated near-term loss tolerance
Not yet modeled at the portfolio level
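A toy calculation of why the envelope must name one axis. The 1.43 and 0.47 multipliers echo the percentages cited on this card; the portfolio numbers are invented for illustration.

```python
# Toy worked example: the two axes produce very different books, and a model
# told to "do both" lands at an operating point nobody signed off on.
baseline_apps     = 100_000   # applications per quarter (illustrative)
baseline_approval = 0.60      # current approval rate (illustrative)
baseline_loss     = 0.045     # current charge-off rate (illustrative)

# Axis 1: hold the loss *rate*, expand approvals (+43% approvals pattern).
approvals_1 = baseline_apps * baseline_approval * 1.43
chargeoffs_1 = approvals_1 * baseline_loss   # rate held, dollar losses grow with volume

# Axis 2: hold approvals, cut defaults (-53% defaults pattern).
approvals_2 = baseline_apps * baseline_approval
chargeoffs_2 = approvals_2 * baseline_loss * 0.47

print(f"axis 1: {approvals_1:,.0f} approvals, {chargeoffs_1:,.0f} expected charge-offs")
print(f"axis 2: {approvals_2:,.0f} approvals, {chargeoffs_2:,.0f} expected charge-offs")
```

Writing the chosen axis into the model's objective and calibration targets is what makes the envelope enforceable downstream.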
Define acceptable false-decline / adverse-action rate by segment
Why This Matters
Every false decline generates an ECOA adverse-action notice, and every notice is evidence the CFPB can cite if the reason codes do not meet the "specific principal reasons" standard clarified in 2023. A uniform threshold is also the fastest way to fail the four-fifths rule on protected classes; Zest AI's documented pattern is adversarial debiasing that lifts protected-class approvals 30% while holding loss rates flat. Scoping this at the segment level now is cheaper than debugging a disparate-impact finding in year two.
Note prompts
+ Have we measured our current false-decline rate by segment, or only in aggregate?
+ Do our adverse-action notice templates pass CFPB specificity review, or are they generic "credit score too low" boilerplate?
+ Who in compliance owns the segment-level threshold ceiling?
Required
Quantify the tolerable false-decline rate across credit bands, geographies, and protected-class segments.
Select all that apply
Prime returning customers (< 0.5%)
Near-prime first-time (2–5%)
Thin-file / credit-invisible (measured separately)
Protected-class four-fifths floor (ECOA / Fair Housing Act)
Geographic / state-specific thresholds
Uniform threshold (not segment-differentiated)
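A minimal sketch of a four-fifths screening check over decision logs, assuming a pandas frame with hypothetical `segment` and `approved` columns. The legal test also involves statistical significance and the proper choice of reference group, so treat this as a monitoring heuristic, not a compliance determination.

```python
# Sketch: four-fifths (80%) rule screen on approval rates by segment.
import pandas as pd

def four_fifths_check(decisions: pd.DataFrame,
                      segment_col: str = "segment",
                      approved_col: str = "approved") -> pd.DataFrame:
    rates = decisions.groupby(segment_col)[approved_col].mean()
    reference = rates.max()            # most-favored segment as the reference rate
    ratio = rates / reference          # adverse impact ratio per segment
    return (pd.DataFrame({"approval_rate": rates,
                          "impact_ratio": ratio,
                          "passes_4_5ths": ratio >= 0.80})
              .sort_values("impact_ratio"))
```

Running this per credit band and geography, not just in aggregate, is what the first prompt above is asking for: aggregate pass rates can mask segment-level failures.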
Map the regulatory perimeter for this deployment
Why This Matters
A U.S. lender operating in the EU must satisfy both EU AI Act and U.S. ECOA/Reg B simultaneously, and the two regimes do not fully overlap — EU Act requires human oversight mechanisms that go beyond ECOA's adverse-action notice. SR 11-7 is necessary but not sufficient for EU AI Act conformity. The 23-state NAIC adoption and Colorado AI Act create a third enforcement front specifically on insurance pricing. Scoping the regulatory perimeter wrong at the start means rebuilding governance in year two under an enforcement deadline.
Note prompts
+ Has legal mapped every product-geography combination against this list, or are we relying on a single federal view?
+ When does our next EU AI Act Annex III conformity milestone fall (Aug 2026 transparency obligations)?
+ Are we tracking state-level AI acts (CO, CA, NY) as they evolve, or reacting after the fact?
Required
Confirm every regulation that will apply to the model across product, geography, and channel.
Select all that apply
ECOA / Reg B (all consumer credit)
Reg Z / TILA (closed-end + BNPL post-May 2024)
FCRA (bureau-driven decisioning + adverse action)
HMDA Regulation C (mortgage reporting)
CFPB 1033 open banking (consumer financial data)
SR 11-7 (federally supervised bank model risk)
NAIC AI Model Bulletin (23 US states + DC)
EU AI Act Annex III (high-risk credit scoring)
Colorado AI Act (SB205, effective Feb 2026)
NYC Local Law 144 (automated employment — analogous precedent)
State insurance fair-pricing statutes
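One way to make the perimeter reviewable is an explicit product-by-geography map that fails closed on unmapped combinations. A sketch with example entries drawn from this card; the mapping itself must come from legal, not engineering.

```python
# Sketch: product x geography -> applicable regulations, as reviewable data
# rather than tribal knowledge. Entries are examples from this card, not a
# complete legal mapping.
PERIMETER: dict[tuple[str, str], set[str]] = {
    ("bnpl", "US"):              {"ECOA/Reg B", "Reg Z/TILA (post-May 2024)", "FCRA"},
    ("mortgage", "US"):          {"ECOA/Reg B", "Reg Z/TILA", "FCRA", "HMDA Reg C", "SR 11-7"},
    ("personal_loan", "EU"):     {"EU AI Act Annex III", "GDPR"},
    ("auto_insurance", "US-CO"): {"NAIC AI Model Bulletin", "Colorado AI Act (SB205)"},
}

def applicable_regs(product: str, geography: str) -> set[str]:
    try:
        return PERIMETER[(product, geography)]
    except KeyError:
        # Fail closed: an unmapped combination blocks deployment instead of
        # silently defaulting to the single federal view.
        raise LookupError(f"no regulatory mapping for {(product, geography)}; route to legal")
```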
Confirm data residency and cross-border constraints
Required
Map applicant, bureau, and open-banking data to jurisdictional constraints before architecture is finalized.
Select all that apply
US-only deployment
EU GDPR — data must remain in EU
UK GDPR — UK residency required
Canada PIPEDA / OSFI
Australia Privacy Act / APRA
Cross-border permitted under SCCs / approved vendors
Residency requirements unmapped today
Tags: required · trinidy
Trinidy: CFPB 1033 open banking data, EU GDPR, and UK GDPR all restrict where consumer financial data can be stored and processed. Trinidy keeps scoring, feature computation, SHAP attribution, and the GenAI explanation layer inside the institution's own perimeter, with no cross-border inference hop for any decision.
Define the hybrid inference topology (fast path + explanation path)
Why This Matters
The two-stage inference pattern — fast gradient-boosted score in <10ms plus GenAI explanation in 200–800ms — is functionally required for any AI lending program post-2023 CFPB guidance. If the two stages live on different infrastructure with different audit logs, the ECOA adverse-action record has a seam in it, and that seam is exactly where examiners probe. The topology choice made at scoping time determines whether audit is native or a year-two retrofit.
Note prompts
+ Is our adverse-action audit trail a single record spanning score + explanation, or stitched from two systems?
+ What is our p99 explanation-layer latency today, and does it meet our end-to-end SLA?
+ Do both stages share the same feature snapshot, or can SHAP attribute to features the GenAI explanation did not see?
Required
Decide how the millisecond scoring stage and the sub-second explanation stage are physically and logically separated.
Single choice
On-premises hybrid (scoring + explanation both local)
Co-located edge (scoring) + on-prem explanation layer
On-prem scoring + cloud GenAI explanation
Fully cloud-hosted (both stages)
Not yet architected
Tags: required · edge · trinidy
Trinidy: NEXUS OS runs the fast-path scoring model and the GenAI explanation layer on the same inference fabric with a single audit record: one entry covering score, reason codes, and GenAI explanation, satisfying ECOA adverse-action requirements by design rather than stitched together after the fact.
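A sketch of what a seamless audit record could look like: one entry spanning score, SHAP reason codes, and the GenAI explanation, keyed to a single feature snapshot so attribution and explanation cannot diverge. Field names are illustrative assumptions, not a mandated ECOA schema.

```python
# Sketch: one audit record spanning both inference stages, so the ECOA
# adverse-action trail has no seam between score and explanation.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass(frozen=True)
class DecisionAuditRecord:
    decision_id: str
    feature_snapshot_id: str    # the SAME snapshot must feed score and explanation
    model_version: str
    score: float
    decision: str               # "approve" / "decline" / "refer"
    shap_top_reasons: list[tuple[str, float]]   # reason codes with attributions
    genai_explanation: str      # consumer-facing adverse-action text
    explanation_model_version: str
    scored_at: datetime
    explained_at: datetime
    latency_ms: dict = field(default_factory=dict)   # per-stage timings

    def adverse_action_ready(self) -> bool:
        # Both stages populated on the shared snapshot before the notice ships.
        return bool(self.shap_top_reasons) and bool(self.genai_explanation)
```

The design point is the single `feature_snapshot_id`: it is what lets an examiner verify that the explanation describes the same inputs the score actually used.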
Specify human-in-the-loop override policy
Why This Matters
EU AI Act Article 14 mandates effective human oversight for high-risk AI, and Annex III explicitly names creditworthiness evaluation. "Rubber-stamp" human review where the underwriter can only affirm the model's output does not satisfy the Article 14 standard — the human must have the authority and information to disagree. Upstart runs at 91% automation, which means 9% is genuine human review, not theater.
Note prompts
+ Can our underwriters meaningfully overturn a model decision, or are they reviewing SHAP output they cannot act on?
+ What is our documented escalation path for a model-vs-human disagreement?
+ How is the human reviewer's decision fed back into training data?
Recommended
Define when a model decision must be reviewable by a human underwriter under EU AI Act Article 14 and OCC/CFPB expectations.
Select all that apply
Every adverse action reviewable on consumer request
Auto-approve above a confidence threshold, human review below
Mandatory human review for protected-class edge cases
Mandatory human review for high-dollar (mortgage, jumbo) decisions
Full human review — AI advisory only
No formal override policy today
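A minimal sketch of confidence-threshold routing with the mandatory-review carve-outs listed above. The thresholds and flag names are policy placeholders, not recommendations, and the human path only satisfies Article 14 if the reviewer can actually overturn the decision.

```python
# Sketch: route each scored decision to auto-execution or human review.
# Threshold values and flags are placeholders to be set by policy.
AUTO_APPROVE_CONF = 0.90      # confidence floor for the auto path (placeholder)
HIGH_DOLLAR_LIMIT = 500_000   # e.g. jumbo mortgage threshold (placeholder)

def route(decision: str, confidence: float, amount: float,
          protected_class_edge_case: bool) -> str:
    if protected_class_edge_case:
        return "human_review"    # mandatory carve-out
    if amount >= HIGH_DOLLAR_LIMIT:
        return "human_review"    # high-dollar carve-out
    if decision == "decline":
        return "human_review"    # adverse actions reviewable (Article 14 / ECOA)
    if confidence >= AUTO_APPROVE_CONF:
        return "auto"            # above-threshold approvals auto-execute
    return "human_review"
    # Per the prompts above, each human override outcome should be logged and
    # fed back into training data, not discarded.
```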