Projection traces and replay
Overview
When router replay captures routing records, each record can include a structured projection_trace field (JSON) in addition to projections (matched output names) and projection_scores (aggregated numeric scores).
The trace explains how partition reduction, weighted scores, and mapping thresholds behaved for that request—so operators and dashboard users can debug routing without inferring internals from scalar scores alone.
Key Advantages
- Replay records stay self-describing: the same persistence path carries both aggregate scores and structured explainability JSON.
- Partition contender lists, softmax winners, mapping boundary distance, and per-input score contributions surface in one object.
- A version field (currently 1) in the payload leaves room for additive fields without rewriting older consumers.
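To make these advantages concrete, the fragment below sketches what such a payload might look like. The exact schema is not documented here, so every field name in this example is hypothetical; only the version field and the general categories (partition contenders, softmax winners, mapping boundary distance, per-input contributions) come from the description above.

```json
{
  "version": 1,
  "partitions": [
    {
      "name": "intent",
      "contenders": [
        {"output": "billing", "score": 0.71},
        {"output": "support", "score": 0.64}
      ],
      "winner": "billing",
      "softmax_winner_prob": 0.52
    }
  ],
  "mapping": {
    "threshold_band": "high",
    "boundary_distance": 0.06
  },
  "per_input_contributions": {
    "utterance_embedding": 0.48,
    "metadata_match": 0.23
  }
}
```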
What Problem Does It Solve?
Matched projection names (projections) and numeric summaries (projection_scores) answer what was chosen, but they do not preserve why a partition picked a winner or how close a mapping was to the next threshold band.
projection_trace closes that gap for audits, support, and insights views without extra query-time inference.
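As a sketch of how a support or audit tool might answer that "why" question from a stored record: the snippet below parses a trace and reports each partition winner's margin over its runner-up, plus the mapping's distance to the next threshold band. The record shape and field names (partitions, contenders, boundary_distance) are assumptions for illustration, not a documented schema.

```python
import json

# Hypothetical replay record: projections/projection_scores as described
# in the docs, projection_trace as a JSON string (field names assumed).
record = {
    "projections": ["billing"],
    "projection_scores": {"billing": 0.71},
    "projection_trace": json.dumps({
        "version": 1,
        "partitions": [
            {"name": "intent", "winner": "billing",
             "contenders": [{"output": "billing", "score": 0.71},
                            {"output": "support", "score": 0.64}]}
        ],
        "mapping": {"boundary_distance": 0.06},
    }),
}


def explain(record):
    """Summarize why each partition picked its winner and how close the
    mapping was to the next threshold band."""
    trace = json.loads(record["projection_trace"])
    summaries = []
    for part in trace["partitions"]:
        winner_score = next(c["score"] for c in part["contenders"]
                            if c["output"] == part["winner"])
        runner_up = max(
            (c for c in part["contenders"] if c["output"] != part["winner"]),
            key=lambda c: c["score"], default=None)
        # Margin over the runner-up explains how decisive the win was.
        margin = None if runner_up is None else winner_score - runner_up["score"]
        summaries.append((part["name"], part["winner"], margin))
    return summaries, trace["mapping"]["boundary_distance"]


summaries, distance = explain(record)
```

A dashboard drill-down could render each tuple as one collapsible row, using the margin to flag near-ties.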
When to Use
- You run router replay (memory, Redis, or PostgreSQL) and want explainability columns on each record.
- You use the dashboard Insights drill-down for replay-backed flows and need collapsible projection detail.
- You are building tooling that validates projection behavior against real traffic—not only against static config.
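For the last scenario, validation tooling can cross-check traced winners against the static config rather than re-deriving scores. The sketch below assumes the hypothetical trace shape used above and an illustrative allowed-output map; neither is a documented API.

```python
import json

# Hypothetical static config: allowed outputs per partition (example values).
ALLOWED_OUTPUTS = {"intent": {"billing", "support", "sales"}}


def validate_trace(raw_trace: str) -> list[str]:
    """Return human-readable violations found in one projection trace."""
    trace = json.loads(raw_trace)
    violations = []
    for part in trace.get("partitions", []):
        allowed = ALLOWED_OUTPUTS.get(part["name"], set())
        if part["winner"] not in allowed:
            violations.append(
                f"partition {part['name']}: winner {part['winner']} "
                f"is not a configured output")
    return violations


# A trace whose winner drifted outside the configured output set.
sample = json.dumps({"version": 1, "partitions": [
    {"name": "intent", "winner": "refunds",
     "contenders": [{"output": "refunds", "score": 0.9}]}]})
issues = validate_trace(sample)
```

Running such a check over replayed traffic surfaces config drift that static validation alone would miss.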
Configuration
Explainability payloads are emitted when projections are evaluated; storage depends on replay backend configuration:
- Enable replay with the persistence settings described in Router replay configuration.
- For PostgreSQL, ensure migrations include a projection_trace (JSONB) column alongside projections and projection_scores.
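As a sketch of that migration step, assuming a replay table named router_replay_records (the actual table name depends on your deployment and migration tooling), the column addition might look like:

```sql
-- Hypothetical table name; adapt to your schema/migration framework.
ALTER TABLE router_replay_records
    ADD COLUMN IF NOT EXISTS projection_trace JSONB;
```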
There is no separate “trace on/off” switch—tracing is implicit whenever projections run and the recorder persists the enriched SignalResults/Record.