Phase 1 of 6
Scoping & Spectrum Constraints
Define the bands, licenses, inference cadence, and regulatory envelope that will govern every optimization decision the model is allowed to recommend.
Bands, Licenses & Regulatory Surface
Inventory licensed bands in scope for optimization
Why This Matters
Each band sits under a different FCC rule part and carries different incumbent-protection, power, and emission obligations — a single optimization model that ignores band-specific rules will eventually recommend an illegal configuration. C-band licenses won in Auction 107 carry ongoing coordination obligations with satellite incumbents above 3.98 GHz, and CBRS spectrum under Part 96 is not a license you "own" at all but a dynamic assignment from a Spectrum Access System. Treating all licensed bands as equivalent is the fastest way to generate a regulatory incident.
Note prompts
+ Which of our bands sit under Part 24, Part 27, Part 96, and Part 101 — and is that mapping current in our model configuration?
+ Do we have documented incumbent-protection obligations on C-band and 2.5 GHz that the optimizer must respect?
+ Is the SAS-managed CBRS spectrum treated as a first-class, dynamically changing input rather than a static license?
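As a concrete starting point, the band-to-rule-part mapping can live in the optimizer's configuration as a lookup table the model must consult before touching a band. A minimal sketch, assuming hypothetical band names and constraint flags (this is illustrative data, not a complete rule set):

```python
# Hypothetical band-to-FCC-part mapping the optimizer validates against.
# Band keys and constraint flags are illustrative, not a complete rule set.
BAND_RULES = {
    "600MHz":    {"fcc_part": "27", "incumbent_protection": False, "dynamic_grant": False},
    "PCS_1900":  {"fcc_part": "24", "incumbent_protection": False, "dynamic_grant": False},
    "C_BAND":    {"fcc_part": "27", "incumbent_protection": True,  "dynamic_grant": False},
    "CBRS":      {"fcc_part": "96", "incumbent_protection": True,  "dynamic_grant": True},
    "MMWAVE_28": {"fcc_part": "30", "incumbent_protection": False, "dynamic_grant": False},
}

def validate_recommendation(band: str) -> None:
    """Refuse to optimize any band the rule table does not cover."""
    if band not in BAND_RULES:
        raise ValueError(f"No regulatory mapping for band {band!r} - cannot optimize")
    if BAND_RULES[band]["dynamic_grant"]:
        # e.g. CBRS: the grant is a live SAS assignment, not a static license
        print(f"{band}: requires live grant check before every change")

validate_recommendation("CBRS")
```

The point of the hard `ValueError` is that an unmapped band is a configuration gap, never a band the model silently treats as unconstrained.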
Required
Confirm which spectrum holdings the model is allowed to recommend allocation changes on.
Select all that apply
600 MHz / 700 MHz low-band (FCC Part 27)
850 MHz cellular / 1.9 GHz PCS (FCC Part 24)
AWS-1 / AWS-3 (FCC Part 27)
C-band 3.7–3.98 GHz (FCC Part 27, post-auction 107)
CBRS 3.55–3.7 GHz — GAA / PAL (FCC Part 96)
2.5 GHz BRS / EBS (FCC Part 27)
mmWave — 24/28/37/39 GHz (FCC Part 30)
Microwave backhaul (FCC Part 101)
Unlicensed 5/6 GHz (FCC Part 15)
Define inference cadence and oscillation guardrails
Why This Matters
Industry deployments (Ericsson, Nokia ReefShark, Huawei MetaAAU, Viavi) converge on a 1–5 second cadence because faster loops oscillate — the model reacts to its own previous action before the RAN has stabilized — and slower loops miss dynamic traffic shifts. GSMA's 2023 spectrum-management AI survey found 61% of operators cited the 1–5 second window as their operational target. Your cadence must be declared explicitly and guarded by damping logic before the first production deployment.
Note prompts
+ What is our target inference cadence per parameter class, and have we validated that it does not oscillate?
+ Do we have hysteresis or minimum-dwell guardrails on each recommendation to prevent flapping?
+ How do we detect oscillation in production — counter-metric drift, or waiting for an ops ticket?
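The hysteresis and minimum-dwell guardrails asked about above can be sketched as a small damping gate in front of the recommendation pipeline. Everything here (class name, thresholds, parameter keys) is illustrative, not a production design:

```python
class DwellGuard:
    """Hypothetical damping guardrail: suppress a new recommendation for a
    parameter until (a) a minimum dwell time has elapsed since the last
    applied change and (b) the proposed delta exceeds a hysteresis band."""

    def __init__(self, min_dwell_s: float = 5.0, hysteresis: float = 0.5):
        self.min_dwell_s = min_dwell_s
        self.hysteresis = hysteresis
        self._last_applied = {}   # param -> (timestamp, value)

    def allow(self, param: str, proposed: float, now: float) -> bool:
        last = self._last_applied.get(param)
        if last is not None:
            ts, value = last
            if now - ts < self.min_dwell_s:
                return False                      # still inside dwell window
            if abs(proposed - value) < self.hysteresis:
                return False                      # change too small to act on
        self._last_applied[param] = (now, proposed)
        return True

guard = DwellGuard(min_dwell_s=5.0, hysteresis=0.5)
assert guard.allow("sector12/tx_power_dbm", 43.0, now=0.0)      # first change applies
assert not guard.allow("sector12/tx_power_dbm", 44.0, now=2.0)  # inside dwell window
assert not guard.allow("sector12/tx_power_dbm", 43.2, now=10.0) # inside hysteresis band
```

In practice the dwell and hysteresis values would differ per parameter class (beamforming weights vs. tilt vs. power), matching the "tiered by parameter class" cadence option.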
Required
Select the optimization loop interval that balances responsiveness against parameter-change oscillation.
Single choice
Sub-second (scheduler-level DSS — 3GPP TS 38.300)
1 second (per-sector beamforming / power)
1–5 seconds (standard AI optimization loop)
5–30 seconds (slow parameter tuning)
Minutes (SON-style batch reallocation)
Tiered by parameter class
Trinidy: Cloud-routed RAN optimization introduces 50–200 ms of uncontrolled jitter per inference cycle — enough to destabilize a 1-second loop. Trinidy runs inference on-node at the cell site or edge cluster so the 1–5 second cadence stays deterministic under all network conditions.
Specify closed-loop vs. recommendation-only operating mode
Why This Matters
O-RAN Alliance guidance and most operator policies treat closed-loop RAN automation as a spectrum of auto-apply scopes, not a binary — power and tilt inside a pre-validated envelope behave very differently from a reallocation that changes a licensed-band boundary. Declaring the mode up front forces a governance conversation that otherwise surfaces only after the first production incident. A model that can only recommend is also a model whose audit trail is dramatically simpler.
Note prompts
+ Which parameter classes are we comfortable auto-applying, and who signs off on that envelope?
+ Is there a kill-switch that reverts the entire optimizer to recommend-only in under 30 seconds?
+ Do we log every auto-applied change with full before/after state for post-incident review?
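One way to make the tiered operating mode explicit is a per-parameter-class policy table with a global kill-switch and a mandatory audit record for every decision. A minimal sketch with hypothetical class names:

```python
from enum import Enum

class Mode(Enum):
    AUTO_APPLY = "auto"        # inside a pre-approved envelope
    RECOMMEND_ONLY = "review"  # a human operator must approve

# Hypothetical per-parameter-class policy; class names are illustrative.
POLICY = {
    "tx_power": Mode.AUTO_APPLY,
    "tilt": Mode.AUTO_APPLY,
    "beam_weights": Mode.AUTO_APPLY,
    "band_boundary": Mode.RECOMMEND_ONLY,   # licensed-band changes always reviewed
}

audit_log = []        # every decision keeps full before/after state
kill_switch = False   # flipping this reverts everything to recommend-only

def dispatch(param_class: str, before, after) -> str:
    mode = POLICY.get(param_class, Mode.RECOMMEND_ONLY)  # unknown class: review
    if kill_switch:
        mode = Mode.RECOMMEND_ONLY
    audit_log.append({"class": param_class, "before": before,
                      "after": after, "mode": mode.value})
    return "applied" if mode is Mode.AUTO_APPLY else "queued_for_review"

assert dispatch("tilt", 2.0, 4.0) == "applied"
assert dispatch("band_boundary", "3.70-3.80", "3.70-3.90") == "queued_for_review"
```

The default-to-review fallback matters: a parameter class the policy table has never seen is never auto-applied.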
Required
Define which classes of recommendations may auto-apply vs. require human approval.
Select all that apply
Full closed-loop — auto-apply all recommendations
Auto-apply power / tilt within pre-approved envelope
Auto-apply beamforming weights only
Auto-apply carrier aggregation / DSS ratio
Recommend-only — human operator approves every change
Tiered by parameter risk class
Confirm FCC RF exposure compliance envelope
Why This Matters
FCC 47 CFR 1.1307 and 2.1093 set the general-population and occupational RF exposure limits that every licensed transmitter must respect, and the 2019 FCC reassessment (FCC 19-126) reaffirmed these limits through the 100 GHz band. An AI optimizer that recommends a power increase or a beamforming tilt change is adjusting parameters the compliance calculation depends on — which means the exposure envelope must be a hard constraint in the model, not a post-hoc check. A violation is an operational incident and an FCC enforcement matter at the same time.
Note prompts
+ Is the RF exposure envelope encoded as a hard constraint on every power and tilt recommendation?
+ Have we validated that the worst-case beam direction still sits inside the 1.1307 limit?
+ Who owns the re-certification when the optimizer's envelope changes, and how fast can we do it?
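To illustrate treating the exposure envelope as a hard constraint rather than a post-hoc check, the sketch below rejects any power recommendation whose worst-case far-field power density at the closest publicly accessible point exceeds the general-population MPE limit. The isotropic far-field model and the distances are deliberate simplifications; real compliance evaluation uses the antenna pattern and the methods in FCC OET Bulletin 65:

```python
import math

GENERAL_POP_LIMIT_MW_CM2 = 1.0   # general-population MPE, 1.5 GHz to 100 GHz

def power_density_mw_cm2(eirp_watts: float, distance_m: float) -> float:
    """Worst-case far-field power density S = EIRP / (4*pi*d^2),
    converted to mW/cm^2 (1 W/m^2 == 0.1 mW/cm^2)."""
    s_w_m2 = eirp_watts / (4 * math.pi * distance_m ** 2)
    return s_w_m2 * 0.1

def check_power_recommendation(eirp_watts: float, min_public_distance_m: float) -> bool:
    """Hard constraint: reject any recommendation whose worst-case exposure
    at the closest publicly accessible point exceeds the MPE limit."""
    return power_density_mw_cm2(eirp_watts, min_public_distance_m) <= GENERAL_POP_LIMIT_MW_CM2

# 500 W EIRP at 10 m: S = 500/(4*pi*100) = 0.398 W/m^2 = 0.0398 mW/cm^2 -> compliant
assert check_power_recommendation(eirp_watts=500.0, min_public_distance_m=10.0)
# The same EIRP at 0.5 m fails the constraint
assert not check_power_recommendation(eirp_watts=500.0, min_public_distance_m=0.5)
```

The constraint runs on every candidate recommendation before it leaves the model, so a non-compliant configuration is never even surfaced to an operator.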
Required
Ensure the optimizer cannot recommend configurations that exceed 47 CFR 1.1307 / 2.1093 RF exposure limits.
Map incumbent / federal protection obligations
Why This Matters
CBRS under FCC Part 96 is the most automated example — every transmission requires a live grant from one of the certified Spectrum Access Systems (Google, Federated Wireless, CommScope), and the Environmental Sensing Capability network can revoke that grant within seconds to protect DoD radar. Post-auction 107 C-band carriers still have coordination obligations with fixed-satellite-service earth stations above 3.98 GHz. An optimizer that treats these as static license data rather than live, changing constraints will eventually recommend a configuration the SAS will refuse — or worse, one that causes incumbent interference.
Note prompts
+ Is the live SAS grant state a real-time input to the optimizer, or a cached assumption?
+ Have we documented every incumbent coordination obligation per band and mapped it to a model constraint?
+ What happens to in-flight recommendations when an ESC event revokes a CBRS grant?
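A sketch of treating the SAS grant state as a live input rather than cached license data: the optimizer refuses to queue work for a CBSD without an authorized grant, and an ESC-driven revocation flushes any in-flight recommendations. Class and method names are hypothetical, not a real SAS client API:

```python
class GrantAwareOptimizer:
    """Illustrative wrapper that gates recommendations on live grant state."""

    def __init__(self):
        self.active_grants = {}       # cbsd_id -> authorized flag from SAS heartbeat
        self.pending = []             # in-flight recommendations

    def on_heartbeat(self, cbsd_id: str, authorized: bool):
        self.active_grants[cbsd_id] = authorized
        if not authorized:
            # ESC / SAS revoked the grant: drop every queued recommendation
            # that assumed this CBSD could transmit.
            self.pending = [r for r in self.pending if r["cbsd_id"] != cbsd_id]

    def recommend(self, cbsd_id: str, change: str) -> str:
        if not self.active_grants.get(cbsd_id, False):
            return "rejected_no_grant"   # a cached assumption is never enough
        self.pending.append({"cbsd_id": cbsd_id, "change": change})
        return "queued"

opt = GrantAwareOptimizer()
opt.on_heartbeat("cbsd-17", authorized=True)
assert opt.recommend("cbsd-17", "raise_power") == "queued"
opt.on_heartbeat("cbsd-17", authorized=False)   # ESC revocation event
assert opt.pending == []                        # in-flight work flushed
assert opt.recommend("cbsd-17", "raise_power") == "rejected_no_grant"
```

The same gate generalizes to C-band FSS coordination windows and AFC-managed 6 GHz: any band whose permission state can change at runtime needs a live feed, not a config file.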
Required
Document the bands where federal or incumbent protection (NTIA, SAS, ESC, satellite) creates non-negotiable exclusion zones.
Select all that apply
CBRS — SAS grant enforcement (Google / Federated / CommScope)
CBRS — ESC coastal DoD radar protection
C-band — FSS earth-station coordination post-auction 107
2.5 GHz — educational / tribal priority windows
6 GHz — AFC for standard-power Wi-Fi coexistence
NTIA-coordinated federal sharing bands
International border coordination (Canada / Mexico)
No incumbent constraint applicable
Confirm international / ITU-R harmonization constraints
Recommended
Identify bands where ITU-R Radio Regulations or ETSI harmonization limit parameter choices.
Select all that apply
ITU-R Radio Regulations — regional footnotes applicable
ETSI EN 300/301 series — EU market conformity
3GPP Release 17 band definitions
3GPP Release 18 / 19 DSS and CA extensions
Cross-border coordination agreements
Purely domestic US deployment — no ITU impact
Define multi-vendor RAN scope
Why This Matters
Proprietary vendor optimization suites typically cost $200K–$500K per 1,000 sites per year and only manage their own vendor's equipment — which is why most tier-1 operators today run three or four parallel optimization stacks that cannot coordinate across a border cell. The open-RAN / SMO trend under O-RAN WG3 reframes optimization as a Near-RT RIC xApp that runs against any compliant vendor. Declaring multi-vendor scope up front is the single decision that determines whether your optimizer is a cost center or a capacity multiplier.
Note prompts
+ How many vendor-specific optimization suites are we paying for today, and what do they cost per site per year?
+ Are border cells between vendors a known capacity problem that a unified optimizer would close?
+ Do our RAN contracts allow third-party Near-RT RIC xApps, or do they lock optimization to the vendor SMO?
Required
Confirm which vendors' equipment the optimizer must manage — vendor lock-in cost is proportional to how narrowly this is scoped.
Select all that apply
Ericsson RAN
Nokia RAN
Samsung RAN
Huawei RAN (non-US)
Mavenir / Rakuten Symphony open RAN
ZTE (non-US)
Mixed vendor estate
Single-vendor deployment
Trinidy: Proprietary optimization suites (Ericsson Intelligent Automation Platform, Nokia MantaRay, Samsung SMO) manage only their own vendor's RAN. Trinidy runs operator-trained models against normalized O-RAN interfaces, so a single optimizer spans mixed Ericsson / Nokia / Samsung / Mavenir / Rakuten estates.
Specify inference deployment topology
Required
Select the physical deployment target for the optimization inference plane.
Single choice
On-site at the cell / sector (lowest latency)
Edge aggregation point / C-RAN hub
Near-RT RIC layer (O-RAN WG3)
Operator regional data center
Public cloud (AWS / Azure / GCP)
Hybrid — on-prem inference + cloud training
Trinidy: A 1-second inference cadence and sovereign control of spectrum telemetry rule out cloud inference, which cannot meet the latency or data-residency envelope. Trinidy deploys at the site, at the edge aggregation point, and at the Near-RT RIC layer — all on the same fabric, all operator-owned.
Define throughput / capacity target for the optimizer
Why This Matters
Industry case studies converge on 15–30% throughput improvement as the realistic capacity gain from AI-driven optimization on existing spectrum — Ericsson reported 30% average-user throughput uplift at T-Mobile US DSS cells, Elisa Finland reported 40% spectral efficiency on Nokia ReefShark RL beamforming, and Huawei MetaAAU produced a 4.2 dB SINR gain on China Mobile. Setting a measurable target early forces the program to be benchmarked against a dollar-denominated equivalent — the capacity either justifies deferring the next spectrum purchase or it does not.
Note prompts
+ What is our dollar-equivalent value of a 20% capacity gain vs. acquiring new spectrum in our top market?
+ Is the capacity target tracked alongside guardrail counter-metrics, or is it the only KPI?
+ Who owns the P&L line that counts deferred spectrum purchases as a benefit of the program?
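The dollar-equivalent framing in the first prompt can be made concrete with a back-of-envelope calculation. Every number below is a placeholder assumption, not market data:

```python
# Back-of-envelope sketch: dollar-equivalent of a capacity gain vs. buying
# spectrum. All inputs are placeholder assumptions, not auction results.

spectrum_cost_per_mhz_pop = 1.0     # $/MHz-PoP, placeholder; varies widely by auction
market_pops = 5_000_000             # covered population in the market
licensed_mhz = 100                  # spectrum depth the optimizer runs on
capacity_gain = 0.20                # 20% throughput gain target

# A 20% gain on 100 MHz delivers roughly the capacity of 20 additional MHz,
# so value it at what those MHz would cost at auction.
equivalent_mhz = licensed_mhz * capacity_gain
deferred_spend = equivalent_mhz * market_pops * spectrum_cost_per_mhz_pop
print(f"Equivalent spectrum: {equivalent_mhz:.0f} MHz")
print(f"Deferred purchase value: ${deferred_spend:,.0f}")
```

Under these placeholder inputs the gain is worth 20 MHz-equivalent, or $100M of deferred spectrum spend in one market, which is the benchmark the optimizer's capacity KPI should be read against.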
Recommended
Set the measurable capacity gain the program is accountable for delivering.
Single choice
5–15% spectral efficiency gain (conservative)
15–30% throughput gain on existing spectrum (industry baseline)
> 30% cell-edge throughput (aggressive, MIMO-driven)
Defer one auction cycle / 12–24 months of new spectrum
Reduce cell-edge dropped-call rate by measurable delta
Not yet measured at the program level