Phase 1 of 6
Scoping & Forecast Horizon Constraints
Define the capex decision, forecast horizons, site hierarchy, and accuracy tolerances that will govern every subsequent modeling choice. Capacity planning forecasts drive multi-year capital commitments, so framing the problem precisely is the highest-leverage step.
Capex Decision & Forecast Surface
Identify capex decisions the forecast will gate
Why This Matters
A forecast that informs tower lease commitments has a different accuracy and horizon profile than one that informs a spectrum refarm or a small cell densification program. Lumping every capex decision into one forecast produces a model that under-serves all of them. GSMA Intelligence puts global telecom capex at around $300B per year, and the operator-level allocation between macro, densification, and spectrum is the single highest-stakes planning decision made annually.
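One way to make this concrete is to encode a forecast profile per capex class rather than one profile for everything. A minimal sketch, where the class names, granularity labels, and tolerance values are all illustrative placeholders to be replaced with the operator's own:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ForecastProfile:
    """Illustrative mapping from one capex decision class to the
    horizon, granularity, and accuracy tolerance that gate it."""
    horizon_months: int   # how far ahead the decision commits capital
    granularity: str      # spatial level the decision is made at
    wape_target: float    # max acceptable WAPE for capex-grade output

# Hypothetical profiles -- each capex class gets its own tolerance,
# rather than one forecast trying to serve all of them.
PROFILES = {
    "macro_new_build":    ForecastProfile(36, "site",   0.20),
    "small_cell_densify": ForecastProfile(12, "sector", 0.12),
    "spectrum_refarm":    ForecastProfile(36, "market", 0.15),
    "backhaul_capacity":  ForecastProfile(6,  "site",   0.10),
}
```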
Note prompts
+ Which capex line items are currently sized by spreadsheet extrapolation rather than ML inference?
+ Do we have separate forecast models per capex class, or is one model used across all?
+ What is the blast radius of a 10% demand forecast error on each capex line — macro vs. small cell vs. spectrum?
Required
Confirm which capital decisions the model must inform with quantified demand.
Select all that apply
Macro tower new-build / co-location
Small cell densification (urban / suburban)
RAN upgrade (4G to 5G NR, massive MIMO)
Spectrum refarm (low-band / mid-band / C-band / mmWave)
Backhaul / fronthaul capacity (fiber / microwave)
Core network capacity (5G SA / packet core)
Edge compute / MEC site selection
FWA (fixed wireless access) coverage expansion
Decommissioning / site consolidation
Define forecast horizons and granularity
Why This Matters
Forecast error compounds with horizon — ML models trained on operator data typically land within ±8–12% at 12 months versus ±30% for spreadsheet extrapolation, and the spread at 36 months is much wider. Multi-horizon models that share a feature base but optimize per-horizon losses consistently outperform a single long-horizon model. Granularity matters equally: a national forecast that is accurate in aggregate can still misplace every individual upgrade if site-level variance is ignored.
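A minimal sketch of the shared-feature, per-horizon pattern, assuming a pandas frame with one row per site-month, a `site_id` key, and a `traffic_gb` target column (all column names are illustrative):

```python
import lightgbm as lgb
import pandas as pd

def fit_per_horizon(df: pd.DataFrame, feature_cols: list,
                    horizons=(1, 3, 6, 12)) -> dict:
    """Train one model per horizon on a shared feature base.
    Assumes df holds one row per (site, month); hyperparameters
    here are placeholders, not tuned values."""
    models = {}
    for h in horizons:
        # Target is traffic h months ahead of the feature snapshot.
        target = df.groupby("site_id")["traffic_gb"].shift(-h)
        mask = target.notna()
        model = lgb.LGBMRegressor(n_estimators=500, learning_rate=0.05)
        model.fit(df.loc[mask, feature_cols], target[mask])
        models[h] = model
    return models
```

The per-horizon loss means the 12-month model is never penalized for 30-day noise it cannot see, while the feature pipeline is built and validated once.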
Note prompts
+ What is our current forecast accuracy at 12 months, measured retrospectively against actuals?
+ Do we produce site-level forecasts, or only market-level with top-down allocation?
+ How do we reconcile 30-day tactical forecasts with 12-month capex planning forecasts — are they the same model or independent?
Required
Select the forecast horizons the model must produce, plus the spatial and temporal granularity.
Select all that apply
30-day rolling (operations / congestion hot-spots)
90-day tactical (near-term capacity relief)
6-month (equipment procurement cycle)
12-month (annual capex planning)
3-year (long-range spectrum / site strategy)
Per-site / per-sector granularity
Per-cluster / market granularity
Per-MSA / national aggregation
Establish target forecast accuracy by horizon
Why This Matters
Industry analysts estimate 15–25% of annual RAN capex is misallocated due to forecasting errors — on a Tier 1 operator this is $500M+ annually. Setting an explicit accuracy target per horizon is the only way to link model quality back to capex efficiency in defensible P&L terms. An accuracy target also forces a conversation about which features are available per horizon — 30-day forecasts can use live PM counters, 3-year forecasts cannot.
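Both metrics are a few lines of numpy, so the retrospective per-site evaluation the prompts below ask about is cheap to run. A minimal sketch:

```python
import numpy as np

def wape(actual: np.ndarray, forecast: np.ndarray) -> float:
    """Weighted absolute percentage error: sum(|error|) / sum(actual).
    Robust to near-zero sites, unlike per-site MAPE."""
    return np.abs(actual - forecast).sum() / np.abs(actual).sum()

def pinball_loss(actual: np.ndarray, forecast_q: np.ndarray,
                 q: float) -> float:
    """Quantile (pinball) loss for a forecast issued at quantile q;
    asymmetric penalty for over- vs. under-prediction."""
    diff = actual - forecast_q
    return float(np.mean(np.maximum(q * diff, (q - 1) * diff)))
```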
Note prompts
+ Have we computed WAPE and quantile loss on last year's forecast vs. measured traffic, per site?
+ What accuracy threshold would materially change our capex approval process?
+ Do we evaluate forecast accuracy at the site level or only aggregate, and does that match how we spend capex?
Required
Define MAPE / WAPE / quantile-loss targets per horizon that the model must hit to be considered capex-grade.
Single choice
< 8% WAPE at 12 months (best-in-class operator ML)
8–12% WAPE at 12 months (typical ML target)
12–20% WAPE at 12 months (improvement over spreadsheet baseline)
> 20% WAPE at 12 months (exploratory / research)
Not yet measured with a holdout
Quantify capex-at-risk budget the forecast protects
Why This Matters
Framing the forecast as a revenue-protection and capex-efficiency function with a dollar-denominated scope changes how network planning, finance, and engineering prioritize features and horizons. Without an explicit capex-at-risk envelope, model teams optimize accuracy in ways the business cannot monetize — and a 2% WAPE improvement looks identical to a 20% WAPE improvement on a dashboard. The cost of inference is a rounding error against a single misplaced macro upgrade cycle.
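As a back-of-envelope illustration only: one could assume stranded capex scales roughly linearly with WAPE across the gated envelope. That linearity is a loud assumption that would need calibration against historical build outcomes, but it shows how a WAPE delta becomes a dollar figure a capex committee can weigh:

```python
def capex_at_risk(gated_capex_usd: float, wape: float,
                  misallocation_factor: float = 1.0) -> float:
    """ASSUMPTION: stranded capex scales ~linearly with WAPE over the
    gated envelope; misallocation_factor must be calibrated against
    historical build outcomes before this is used for anything real."""
    return gated_capex_usd * wape * misallocation_factor

# e.g. on a $1B gated envelope, a 12% -> 10% WAPE improvement
# frees roughly $20M under the linear assumption above.
saved = capex_at_risk(1e9, 0.12) - capex_at_risk(1e9, 0.10)
```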
Note prompts
+ What was last year's capex that was explicitly gated or reprioritized by a forecast output?
+ Who owns the P&L line for stranded capex from forecast error — finance, network planning, or deployment?
+ Is forecast accuracy a board-reported KPI alongside capex-to-revenue ratio?
Required
Tie the forecast's dollar impact to an annual capex envelope it is permitted to move.
Single choice
< $50M annual capex gated by forecast
$50M – $250M
$250M – $1B
$1B – $5B (Tier 1 national operator)
> $5B (global / multi-market)
Not currently budgeted at the forecast level
Define confidence interval and quantile policy
Why This Matters
A point forecast gives planners no basis to distinguish a site where demand is tightly bounded from one with wide uncertainty, which is precisely the information capex committees need. Quantile regression models (LightGBM quantile, DeepAR, TFT) output explicit P10/P50/P90 at training cost comparable to a point forecast. The conversation shifts from "is the forecast right?" to "how much headroom do we build for the P90 case?" — which is the right question for capital commitment.
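A minimal sketch of the LightGBM quantile approach the paragraph mentions: one model per quantile, each trained with the quantile objective at its own alpha. Hyperparameters are illustrative, not tuned:

```python
import lightgbm as lgb

def fit_quantile_models(X_train, y_train, quantiles=(0.1, 0.5, 0.9)):
    """One LightGBM model per quantile; objective='quantile' with
    alpha=q optimizes pinball loss for that quantile."""
    models = {}
    for q in quantiles:
        m = lgb.LGBMRegressor(objective="quantile", alpha=q,
                              n_estimators=500, learning_rate=0.05)
        m.fit(X_train, y_train)
        models[q] = m
    return models

# Planners size headroom off the P90 band, not the point estimate:
# p10, p50, p90 = (models[q].predict(X_site) for q in (0.1, 0.5, 0.9))
```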
Note prompts
+ Do our current forecasts ship as point estimates or as quantile bands?
+ Are planners trained to consume P90 upside for capacity sizing and P10 downside for decommissioning?
+ What is our policy for sites where the P10-to-P90 spread is wider than the planning action threshold?
Required
Planners should consume uncertainty ranges, not false-precision point estimates. Specify the quantiles the model must output.
Single choice
Point forecast only (discouraged)
P50 + P90 (median + upside)
P10 / P50 / P90 (downside / median / upside)
P5 / P25 / P50 / P75 / P95 (full quantile spread)
Parametric full distribution (mean + variance per site)
Map site hierarchy and reconciliation constraints
Why This Matters
Unreconciled hierarchical forecasts produce the frustrating outcome that the sum of site forecasts does not equal the market forecast, and finance cannot tell which number is correct. Reconciliation methods (MinT and related Hyndman-style approaches) make all levels coherent while preserving accuracy at each level. This is a first-order architectural choice, not a post-processing step.
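The core of trace-minimization reconciliation fits in a few lines of numpy. A sketch, where S is the summing matrix mapping bottom-level series to every level of the hierarchy and W is the forecast error covariance (identity gives OLS reconciliation; an estimated covariance gives MinT proper):

```python
import numpy as np

def reconcile(y_hat: np.ndarray, S: np.ndarray, W=None) -> np.ndarray:
    """Coherent forecasts via y_tilde = S (S'W^-1 S)^-1 S'W^-1 y_hat
    (Wickramasuriya et al. trace minimization)."""
    if W is None:
        W = np.eye(S.shape[0])
    Winv = np.linalg.inv(W)
    P = np.linalg.solve(S.T @ Winv @ S, S.T @ Winv)
    return S @ (P @ y_hat)

# Toy hierarchy: market = site_a + site_b
S = np.array([[1, 1],    # market row sums both sites
              [1, 0],    # site_a
              [0, 1]])   # site_b
y_hat = np.array([105.0, 40.0, 70.0])  # incoherent: 40 + 70 != 105
y_tilde = reconcile(y_hat, S)          # coherent at every level
```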
Note prompts
+ Do our current forecasts reconcile across sector / site / market, or does each team produce its own inconsistent number?
+ Who arbitrates when bottom-up and top-down forecasts disagree today?
+ What is the hierarchical level at which capex decisions are actually made, and does our forecast match that level?
Recommended
Confirm the sector → site → cluster → market hierarchy the forecast must respect and reconcile.
Select all that apply
Sector level (α/β/γ per site)
Site / tower level
Cluster / grid-square level
Market / MSA level
National aggregate
Hierarchical reconciliation required (top-down + bottom-up)
No reconciliation — independent forecasts per level
Confirm data residency and competitive data constraints
Required
Traffic topology, competitive signals, and subscriber behavior are commercially sensitive — specify where inference and training may run.
Select all that apply
On-premises / private cloud only (data sovereignty)
Hybrid — cloud training, on-prem inference
Cloud training and inference with contractual data isolation
EU GDPR residency required
National / sovereign data residency (government or critical infrastructure)
Competitive intelligence data cannot leave network operations center
No residency constraint
Trinidy: Network topology, PM counter history, and competitive intelligence data are among an operator's most proprietary assets. Trinidy keeps training and inference entirely within the operator's perimeter — no traffic data, competitive signals, or site economics leave the infrastructure boundary.
Define capex approval integration point
Why This Matters
A forecast that requires manual re-entry into a capex approval tool will be selectively used, selectively overridden, and impossible to audit retrospectively. Integration via standard APIs (TMF Open APIs for network planning, vendor-specific REST for Ericsson Intelligent Automation Platform, Nokia MantaRay, Cisco Crosswork) preserves the full decision chain from model output to approved capital commitment — which is also what SR 11-7-style model governance requires.
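A minimal sketch of what the API hand-off could look like. The endpoint URL and payload schema here are hypothetical, since the real shape depends on the target planning tool's API; the point is that model version travels with every forecast so the approval chain stays auditable:

```python
import requests

# Hypothetical endpoint -- substitute the planning tool's real API.
PLANNING_API = "https://planning.example-operator.net/api/v1/forecasts"

def publish_forecast(site_id: str, horizon_months: int,
                     p10: float, p50: float, p90: float,
                     model_version: str) -> None:
    """Push one site-level quantile forecast into the planning tool.
    Payload field names are illustrative, not a real tool's schema."""
    payload = {
        "siteId": site_id,
        "horizonMonths": horizon_months,
        "demandGb": {"p10": p10, "p50": p50, "p90": p90},
        "modelVersion": model_version,  # ties capex item to forecast run
    }
    resp = requests.post(PLANNING_API, json=payload, timeout=30)
    resp.raise_for_status()
```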
Note prompts
+ Where in the capex approval workflow does the forecast currently enter, and is that entry auditable?
+ If a planner overrides the forecast, do we capture the override and reason for model feedback?
+ Have we inventoried which planning tools (NetAct, MantaRay, IAP, Crosswork, Netcracker) we need to feed?
Recommended
Specify how forecast output lands in the capital approval workflow.
Single choice
REST API feed into planning tool (Ericsson IAP / Nokia MantaRay / Cisco Crosswork)
Batch export into Amdocs / Netcracker OSS planning module
Direct feed into financial planning system (capex workbench)
Dashboard-only — planners consume visually
Spreadsheet hand-off (not recommended)