Phase 1 of 6
Scoping & Dispute Profile
Profile your dispute portfolio — volume, reason-code mix, friendly-fraud share, and issuer vs. acquirer side — before modeling decisions are made.
Dispute Volume & Economic Exposure
Quantify annual dispute volume in scope
Why This Matters
US chargeback volume reached ~105M disputes representing ~$11B in 2024 — up from $7.2B in 2019 — and global volume is projected to grow from 238M to 337M transactions by 2026 (+41%). The all-in cost per dispute is $128 for merchants and the cost multiplier has climbed to $3.75–$4.61 per $1 of chargeback loss. A dispute automation program sized to last year's volume will be underbuilt within 18 months at current growth rates.
Note prompts:
+ What is our year-over-year dispute volume growth, and does our automation plan assume it continues?
+ Have we estimated the all-in cost per dispute (labor, chargeback fee, loss, reputational) at our institution?
+ Who owns the P&L line for dispute operating cost vs. dispute loss, and are those numbers reconciled?

Establish the dispute count the automation program must handle per year.
Single choice
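The sizing arithmetic above can be sketched directly. The $128 all-in cost per dispute is the figure cited in the paragraph; the volume, growth rate, and horizon below are illustrative assumptions, not institution data.

```python
ALL_IN_COST_PER_DISPUTE = 128.0   # merchant all-in cost per dispute (cited above)

def annual_exposure(disputes_per_year: int,
                    yoy_growth: float,
                    horizon_years: int) -> list[float]:
    """Project all-in dispute operating cost over a planning horizon,
    compounding volume at a constant year-over-year growth rate."""
    costs = []
    volume = disputes_per_year
    for _ in range(horizon_years):
        costs.append(volume * ALL_IN_COST_PER_DISPUTE)
        volume = round(volume * (1 + yoy_growth))   # whole disputes
    return costs

# Illustrative inputs: 50,000 disputes/year growing 15% YoY, 3-year horizon.
projection = annual_exposure(50_000, 0.15, 3)
```

Even at a modest 15% growth assumption, year-3 cost is ~32% above year 1, which is the "underbuilt within 18 months" dynamic the paragraph describes.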
Profile dispute reason-code distribution
Why This Matters
Reason-code distribution is the single most important input to model design because the decision logic, evidence requirements, and representment economics differ wildly across codes. Visa 10.4 disputes are eligible for Compelling Evidence 3.0 (CE3.0) defense using prior non-disputed transaction history — an AI-tractable retrieval problem. Visa 13.1 "not received" disputes turn on delivery confirmation and tracking artifacts. Treating all reason codes with a single policy means under-investing in the codes that dominate your portfolio and over-automating long-tail codes that do not justify the engineering.
Note prompts:
+ What are our top 5 reason codes by volume and by dollar loss, and are they the same?
+ Have we mapped reason-code-specific win rates to confirm where automation will have the most leverage?
+ Which codes in our mix are CE3.0-eligible, and what share of our fraud disputes could that defense recover?

Identify the dominant Visa and Mastercard reason codes driving your dispute mix.
Select all that apply
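The "top codes by volume vs. by dollar loss" check above is a simple double ranking over the dispute log. The reason codes and amounts below are illustrative placeholders, not real portfolio data.

```python
from collections import defaultdict

# Illustrative dispute log: (reason_code, disputed_amount_usd).
disputes = [
    ("10.4", 45.0), ("10.4", 60.0), ("10.4", 30.0),
    ("13.1", 900.0), ("13.1", 1100.0),
    ("12.6", 75.0),
]

def rank_reason_codes(rows):
    """Return reason codes ranked by dispute count and by dollar loss."""
    counts = defaultdict(int)
    dollars = defaultdict(float)
    for code, amount in rows:
        counts[code] += 1
        dollars[code] += amount
    by_count = sorted(counts, key=counts.get, reverse=True)
    by_dollars = sorted(dollars, key=dollars.get, reverse=True)
    return by_count, by_dollars

by_count, by_dollars = rank_reason_codes(disputes)
# In this sample the top code by count ("10.4") differs from the top
# code by dollars ("13.1") -- exactly the divergence the prompt probes.
```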
Estimate friendly fraud (first-party misuse) share
Why This Matters
The MRC 2024 Chargeback Field Report puts friendly fraud at 75–79% of total chargebacks, and 72% of merchants reported a year-over-year increase in 2024. Buyer's remorse alone drives 65.3% of friendly-fraud cases (Chargeflow 2024). Friendly fraud and true third-party fraud require opposite responses: third-party fraud should be conceded quickly with process improvement upstream, while friendly fraud is the exact scenario Visa CE3.0 and Mastercard First-Party Trust were designed to let merchants contest. Misdiagnosing the ratio means the automation either concedes winnable disputes or wastes evidence packaging on unwinnable ones.
Note prompts:
+ How do we currently classify a dispute as friendly vs. third-party, and is that label fed back into training?
+ Does our win rate on represented disputes distinguish friendly-fraud wins from fulfillment-dispute wins?
+ Have we piloted the Mastercard First-Party Trust program or the Visa CE3.0 defense on our friendly-fraud inventory?

Separate true third-party fraud from friendly/first-party fraud — they have fundamentally different optimal responses.
Single choice
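A rough sizing of the stakes in getting the friendly-fraud ratio right: of all disputes, only the friendly-fraud share that is eligible for a CE3.0 / First-Party Trust style challenge is recoverable, discounted by an expected win rate. Every input below except the 75% friendly-fraud share (cited above) is an illustrative assumption.

```python
def recoverable_estimate(total_disputes: int,
                         avg_dispute_value: float,
                         friendly_share: float,
                         contest_eligible_share: float,
                         expected_win_rate: float) -> float:
    """Rough recoverable-dollar estimate: friendly-fraud disputes that
    are eligible for contest, discounted by an expected win rate."""
    contested = total_disputes * friendly_share * contest_eligible_share
    return contested * avg_dispute_value * expected_win_rate

# Illustrative: 50k disputes at $90 avg, 75% friendly (MRC figure),
# 40% contest-eligible, 30% expected win rate -- all but the 75% assumed.
estimate = recoverable_estimate(50_000, 90.0, 0.75, 0.40, 0.30)
```

Misclassifying even a tenth of the portfolio shifts this estimate by tens of thousands of dollars, which is why the prompt asks whether the friendly/third-party label feeds back into training.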
Specify issuer-side vs. acquirer-side perspective
Why This Matters
Issuer-side automation is bounded by Regulation E's 10-business-day provisional credit and 45/90-day final resolution clocks, and by Reg Z's 60-day cardholder claim window — so the model must prioritize fast, compliant first-response over recovery optimization. Acquirer-side automation operates inside Visa VCR and Mastercard MCOM representment windows and is measured primarily on merchant win rate and net recovery. A single automation stack cannot optimize both without explicit role segmentation.
Note prompts:
+ Which side are we building for, and does our team have the domain expertise for the other side if we expand later?
+ Have we mapped our regulatory clocks (Reg E, Reg Z) against our representment windows (VCR, MCOM)?
+ Do we have clean system boundaries between claim intake and representment, or is this one blended queue today?

Disputes look very different from the issuer side (customer claim intake) vs. the acquirer/merchant side (representment defense).
Single choice
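The issuer-side clocks cited above can be sketched as a deadline calculator. This is a simplified sketch: it counts Monday–Friday as business days and ignores federal holidays, and the 90-day extension applies only to certain claim types (e.g. new accounts or foreign transactions).

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Advance by N business days (Mon-Fri; holidays ignored here)."""
    d = start
    while days > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:        # 0-4 = Monday-Friday
            days -= 1
    return d

def reg_e_clocks(claim_date: date) -> dict:
    """Deadlines per the Reg E timing described above: 10 business days
    to provisional credit, 45 calendar days to final resolution, 90 days
    for the extended cases."""
    return {
        "provisional_credit_due": add_business_days(claim_date, 10),
        "final_resolution_due": claim_date + timedelta(days=45),
        "extended_resolution_due": claim_date + timedelta(days=90),
    }

clocks = reg_e_clocks(date(2025, 3, 3))   # a Monday
```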
Benchmark chargeback-to-transaction ratio
Why This Matters
The Visa Acquirer Monitoring Program (VAMP) ratio effective June 2025 uses total disputes (TC40 fraud alerts + TC15 chargebacks) divided by total sales, with a 2.2% threshold that drops to 1.5% for North America, EU, and APAC from April 1, 2026. Merchants and acquirers above threshold face escalating fees and potential network termination. A rising chargeback ratio is both a regulatory risk and a signal that the automation program should prioritize intake triage and upstream fraud prevention, not just representment.
Note prompts:
+ What is our 12-month trailing chargeback ratio and how close are we to the April 2026 1.5% threshold?
+ Do we track TC40 fraud alerts separately, or only TC15 chargebacks, when computing internal ratios?
+ Have we segmented our ratio by MCC, acquirer BIN, and geography to identify where the pressure is concentrated?

Your chargeback ratio relative to network thresholds determines how much risk you carry from monitoring programs.
Single choice
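The VAMP formula described above is simple enough to compute directly. The thresholds are the ones cited in the paragraph; the transaction counts in the example are illustrative.

```python
def vamp_ratio(tc40_fraud_alerts: int, tc15_chargebacks: int,
               total_sales_count: int) -> float:
    """VAMP-style ratio as described above: (TC40 + TC15) / total sales."""
    return (tc40_fraud_alerts + tc15_chargebacks) / total_sales_count

def vamp_status(ratio: float, threshold: float = 0.015) -> str:
    """Compare against a threshold: 0.015 is the 1.5% level cited for
    North America / EU / APAC from April 1, 2026; 0.022 applies before."""
    return "above threshold" if ratio >= threshold else "below threshold"

# Illustrative month: 1,200 TC40s + 800 TC15s over 180,000 sales (~1.11%).
ratio = vamp_ratio(1_200, 800, 180_000)
status = vamp_status(ratio)
```

Note that omitting TC40 fraud alerts (the second note prompt) would halve this example ratio, which is exactly the kind of internal-vs-network measurement gap that produces threshold surprises.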
Define target dispute-resolution cycle time
Why This Matters
Adyen's July 2025 representment timeline tightening reduced the US/Canada window to 9 days and other regions to 18 days, with a new five-tier fee structure that penalizes slow responses. Manual teams typically need 3–7 days just to assemble evidence packages — which means any manual process now misses windows on a meaningful share of representable disputes. Same-day AI evidence packaging is no longer optional; it is a prerequisite for executing Visa CE3.0 defenses and Mastercard First-Party Trust challenges at portfolio scale.
Note prompts:
+ What is our current p50 and p95 dispute turnaround time, and how many disputes miss network windows today?
+ Have we modeled the revenue impact of moving from 5-day to same-day evidence packaging across our win-rate curve?
+ Is our current dispute management platform architected for same-day AI packaging, or was it designed for a manual era?

Set the end-to-end SLA the automation must hit from intake to resolution.
Single choice
Trinidy — Evidence assembly, rule interpretation, and representment packaging can run entirely on-node with Trinidy — compressing a typical 3–7 day manual evidence cycle to same-day without routing dispute artifacts through third-party cloud APIs.
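The p50/p95-vs-window question above reduces to a percentile and a miss-rate over turnaround data. The sample below is illustrative, and the percentile function is a simple nearest-rank variant, not a specific platform's method.

```python
def window_miss_rate(turnaround_days: list[int], window_days: int) -> float:
    """Share of disputes whose evidence turnaround exceeds the
    representment window."""
    missed = sum(1 for d in turnaround_days if d > window_days)
    return missed / len(turnaround_days)

def percentile(values: list[int], p: float) -> float:
    """Nearest-rank percentile (p in [0, 100])."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return float(ordered[k])

# Illustrative manual-era turnaround sample (days from intake to filing).
samples = [2, 3, 3, 4, 5, 5, 6, 7, 8, 12]
p50 = percentile(samples, 50)
p95 = percentile(samples, 95)
miss_9_day = window_miss_rate(samples, 9)   # vs. the 9-day US/Canada window
```

Even this sample, with a median of 5 days, misses a 9-day window on part of its tail; tightening windows converts the p95, not the p50, into the binding constraint.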
Map dispute-data residency and cross-border constraints
Dispute evidence often contains PII, card data, and regulated communications — residency rules constrain where it can be processed.
Select all that apply
Trinidy — Document intelligence on customer communications, delivery artifacts, and transaction receipts can include PII and PCI-scope data. Trinidy keeps the full evidence-processing pipeline inside the institution's perimeter — no dispute artifact ever transits a third-party cloud.
Define deployment topology for the dispute pipeline
Why This Matters
A dispute pipeline spans three inference types — document intelligence, LLM rule interpretation, and scoring — and each has different latency, residency, and cost profiles. Splitting them across vendor APIs typically means every dispute artifact transits multiple cloud perimeters, which is both a PCI scope expansion and an audit-trail fragmentation problem. A single deployment fabric for all three inference types is the simplest architecture and usually the cheapest at steady state.
Note prompts:
+ Have we mapped which inference components can physically run in-perimeter vs. which require external APIs?
+ What is the PCI scope impact of each topology option we are considering?
+ Who owns the audit trail when evidence transits three vendor APIs between intake and decision?

Select where document intelligence, rule interpretation, and decision logic will run.
Single choice