Window & cadence
The score is a rolling 30-day view updated nightly at 00:15 UTC. Each day we process operational telemetry from the trailing 30 days, store a snapshot, and publish the latest score to AvvA.aero.
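As a rough sketch of that cadence, the refresh is a nightly 00:15 UTC job that recomputes the trailing 30-day window. The snippet below uses node-cron and a placeholder `computeAndPublishScore` purely for illustration; the actual scheduler and pipeline entry point are not specified here.

```ts
import cron from "node-cron";

// Illustrative nightly job: recompute the rolling 30-day window at 00:15 UTC.
cron.schedule(
  "15 0 * * *",
  async () => {
    const end = new Date(); // snapshot time (UTC)
    const start = new Date(end.getTime() - 30 * 24 * 60 * 60 * 1000); // 30 days back

    // Hypothetical entry point standing in for the pipeline described below.
    await computeAndPublishScore(start, end);
  },
  { timezone: "UTC" }
);
```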
Inputs & weights
Six signals capture the reliability of our public experience and operations.
| Input | Source | Weight |
|---|---|---|
| Service uptime | External monitors ping avva.aero and /support every minute. | 0.25 |
| Messaging delivery success | OpenPhone/Twilio callback statuses. | 0.20 |
| Workflow success | Make.com scenario run outcomes. | 0.15 |
| DB/API availability | Supabase authentication & heartbeat checks. | 0.15 |
| SLA compliance | First-response time vs. CONFIG_SLA_MINUTES target. | 0.15 |
| Incident MTTR | Mean time to restore for tracked incidents. | 0.10 |
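For reference, the weights above can be kept as a single typed constant. The key names below are illustrative identifiers; only the weights themselves come from the table.

```ts
// Component weights from the table above; they sum to 1.00.
export const WEIGHTS = {
  uptime: 0.25,    // Service uptime (external monitors)
  messaging: 0.20, // Messaging delivery success (OpenPhone/Twilio)
  workflow: 0.15,  // Workflow success (Make.com scenario runs)
  dbApi: 0.15,     // DB/API availability (Supabase heartbeats)
  sla: 0.15,       // SLA compliance vs. CONFIG_SLA_MINUTES
  mttr: 0.10,      // Incident MTTR
} as const;

export type Component = keyof typeof WEIGHTS;
```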
Normalization & scoring
- Percent-based inputs (uptime, messaging, workflow, DB/API) use tiers: ≥99.9%→1.0, ≥99.5%→0.8, ≥99.0%→0.5, ≥98.0%→0.2, below 98%→0.0.
- SLA compliance uses: ≥95%→1.0, ≥90%→0.8, ≥80%→0.5, ≥70%→0.2, below 70%→0.0.
- MTTR minutes uses: ≤30→1.0, ≤60→0.8, ≤120→0.5, ≤240→0.2, >240→0.0.
- Score = 10 × Σᵢ(weightᵢ × normalizedᵢ), clamped to 0–10.
If data completeness for an input drops below 90%, we proportionally down-weight that component for the period and surface the adjustment in the published JSON.
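A minimal sketch of the normalization and scoring step, assuming the tiers are step thresholds (meet-or-exceed for percent inputs, at-or-below for MTTR) and reading "proportionally down-weight" as scaling a component's weight by its completeness ratio; function and field names are illustrative, not the production schema.

```ts
// [threshold, normalized value] pairs from the tier tables above.
const PERCENT_TIERS: Array<[number, number]> = [
  [99.9, 1.0], [99.5, 0.8], [99.0, 0.5], [98.0, 0.2],
];
const SLA_TIERS: Array<[number, number]> = [
  [95, 1.0], [90, 0.8], [80, 0.5], [70, 0.2],
];
const MTTR_TIERS: Array<[number, number]> = [
  [30, 1.0], [60, 0.8], [120, 0.5], [240, 0.2],
];

// Higher is better: first tier whose threshold the value meets or exceeds.
function normalizeHigh(value: number, tiers: Array<[number, number]>): number {
  for (const [threshold, score] of tiers) if (value >= threshold) return score;
  return 0.0;
}

// Lower is better (MTTR minutes): first tier the value fits under.
function normalizeLow(value: number, tiers: Array<[number, number]>): number {
  for (const [threshold, score] of tiers) if (value <= threshold) return score;
  return 0.0;
}

interface ComponentInput {
  weight: number;       // from the weights table
  normalized: number;   // 0.0–1.0 from the tier functions above
  completeness: number; // share of expected data points present, 0.0–1.0
}

// Score = 10 × Σ(weightᵢ × normalizedᵢ), clamped to 0–10, with components
// below 90% completeness down-weighted by their completeness ratio
// (one possible reading of "proportionally").
function score(components: ComponentInput[]): number {
  const raw = components.reduce((sum, c) => {
    const weight = c.completeness < 0.9 ? c.weight * c.completeness : c.weight;
    return sum + weight * c.normalized;
  }, 0);
  return Math.min(10, Math.max(0, 10 * raw));
}
```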
Audit & transparency
All underlying events are stored in append-only raw tables (uptime checks, messaging events, workflow runs, Supabase heartbeats, first-response events, and incidents). Aggregated daily snapshots and the published score are written to dedicated tables with service-role access. The public endpoint /api/reliability exposes only the latest score, window dates, component breakdown, and data completeness.
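For illustration, the public payload might take a shape like the one below; the field names are assumptions based on the list above, not the actual response contract.

```ts
// Hypothetical response shape for GET /api/reliability.
interface ReliabilityResponse {
  score: number;       // latest published score, 0–10
  windowStart: string; // ISO date, start of the 30-day window
  windowEnd: string;   // ISO date, end of the window
  components: Array<{
    name: string;         // e.g. "Service uptime"
    weight: number;       // from the weights table
    normalized: number;   // 0.0–1.0
    completeness: number; // 0.0–1.0 data completeness
  }>;
}

// Example read-only consumption of the endpoint.
const res = await fetch("https://avva.aero/api/reliability");
const latest: ReliabilityResponse = await res.json();
```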