What you're looking at
Your data centre sensors already collect temperature, humidity, and power readings every
30 seconds. Right now, you get an alert when a single reading crosses a fixed limit —
for example, when exhaust temperature exceeds 38°C.
The problem: by the time a single reading breaches a threshold,
the fault has been developing for hours or days. A failing fan doesn't suddenly hit 38°C
— it drifts upward slowly, half a degree per hour, while every individual reading stays
"within limits."
What Sentinel does: instead of watching individual readings, it learns
the normal pattern across all six sensors together. It understands that when
intake temperature rises, exhaust should rise proportionally. When one power phase
creeps up while others don't, something is wrong — even though no single reading has
breached any limit.
This dashboard shows four common failure modes on synthetic (simulated) data:
- Fan degradation — gradual thermal drift as airflow reduces
- PDU overload — one power phase creeping up (asymmetry)
- Hot spot — exhaust temperature rising while intake stays normal
- HVAC drift — humidity becoming unstable as cooling degrades
For each fault, the "Lead time" shows how many hours earlier Sentinel
detects the problem compared to a traditional single-channel threshold alert.
The bottom line
Sentinel gives you hours of advance warning before a problem
becomes an emergency. No new hardware — it works on the sensor data you already collect.
Software-only intelligence on your existing infrastructure.
How it works
Sentinel encodes multivariate sensor windows into sparse representations using a
stack of signal processing primitives developed by Sparse Supernova.
1. Windowing & Encoding
Six sensor channels (temp_in, temp_out, humidity, current L1/L2/L3) are sampled every
30 seconds. A sliding window of 10 samples (5 minutes) is flattened to a 60-dimensional
feature vector (10 samples × 6 channels), normalised per-channel against the site's
baseline statistics.
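The windowing step above can be sketched as follows. This is a minimal illustration, not Sentinel's actual API: the function name `encodeWindow` and the shape of the `baseline` statistics object are assumptions.

```javascript
// Illustrative sketch of windowing + per-channel normalisation.
const CHANNELS = 6;  // temp_in, temp_out, humidity, current L1/L2/L3
const WINDOW = 10;   // 10 samples x 30 s = 5 minutes

// samples: array of WINDOW readings, each an array of CHANNELS values.
// baseline: { mean: number[6], std: number[6] } from the site's history (assumed shape).
function encodeWindow(samples, baseline) {
  const vec = new Float64Array(WINDOW * CHANNELS); // 60-dimensional
  for (let t = 0; t < WINDOW; t++) {
    for (let c = 0; c < CHANNELS; c++) {
      // z-score each reading against its channel's baseline statistics
      vec[t * CHANNELS + c] =
        (samples[t][c] - baseline.mean[c]) / baseline.std[c];
    }
  }
  return vec;
}
```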
2. USL Layer Sizing
The Universal Saturation Law (FRAI = D / (D + 1/dim)) predicts the
optimal hidden layer width from the data's intrinsic drift parameter D. For typical
data centre sensors, D ≈ 0.19, yielding a hidden layer of ~128 units at 95% FRAI.
This replaces trial-and-error architecture search with a single analytical formula.
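Inverting FRAI = D / (D + 1/dim) for the width gives dim = FRAI / (D · (1 − FRAI)). With D = 0.19 and a 95% FRAI target this yields 100; rounding up to the next power of two is an assumption on my part, but it matches the ~128-unit figure quoted above. The function name below is illustrative.

```javascript
// Invert FRAI = D / (D + 1/dim) for dim at a target FRAI level.
function uslHiddenWidth(D, targetFrai = 0.95) {
  const rawDim = targetFrai / (D * (1 - targetFrai)); // analytical width
  return 2 ** Math.ceil(Math.log2(rawDim));           // next power of two (assumed convention)
}
```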
3. RBM Feature Extraction
A Restricted Boltzmann Machine (60 → 128 hidden units) learns the joint distribution
of normal sensor behaviour via contrastive divergence. Training is accelerated by
USAD gating — a conformal anomaly detector that skips CD updates on
already well-modelled samples, reducing training compute by 40–70%.
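A minimal CD-1 step with a gating hook might look like the sketch below, assuming a Bernoulli RBM. All names (`rbm`, `cd1Step`, the `gateThreshold` parameter) are illustrative, and the real USAD gate is a conformal test; it is modelled here as a simple reconstruction-error cutoff for brevity.

```javascript
// Sketch: one contrastive-divergence (CD-1) update with skip-gating.
function sigmoid(x) { return 1 / (1 + Math.exp(-x)); }

function hiddenProbs(rbm, v) {
  return rbm.hBias.map((b, j) =>
    sigmoid(b + v.reduce((s, vi, i) => s + vi * rbm.W[i][j], 0)));
}
function visibleProbs(rbm, h) {
  return rbm.vBias.map((b, i) =>
    sigmoid(b + h.reduce((s, hj, j) => s + hj * rbm.W[i][j], 0)));
}

// Returns true if an update ran, false if the gate skipped it.
function cd1Step(rbm, v, lr = 0.01, gateThreshold = 0.05) {
  const h0 = hiddenProbs(rbm, v);
  const v1 = visibleProbs(rbm, h0); // one Gibbs reconstruction
  const err = v.reduce((s, vi, i) => s + (vi - v1[i]) ** 2, 0) / v.length;
  if (err < gateThreshold) return false; // sample already well modelled: skip CD
  const h1 = hiddenProbs(rbm, v1);
  for (let i = 0; i < v.length; i++)
    for (let j = 0; j < h0.length; j++)
      rbm.W[i][j] += lr * (v[i] * h0[j] - v1[i] * h1[j]);
  for (let j = 0; j < h0.length; j++) rbm.hBias[j] += lr * (h0[j] - h1[j]);
  for (let i = 0; i < v.length; i++) rbm.vBias[i] += lr * (v[i] - v1[i]);
  return true;
}
```

The gate is what buys the compute saving: samples the model already reconstructs well contribute a near-zero gradient, so skipping them changes little while avoiding the full positive/negative phase.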
4. Anomaly Detection
At inference, each window's reconstruction error through the RBM is computed as a
nonconformity score. Scores are smoothed with an EMA (α=0.1) and compared against a
conformal threshold (75th percentile of calibration scores). Detection requires 30
consecutive anomalous windows (the window advances one 30-second sample at a time, so
15 minutes sustained) to avoid false positives from noise.
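The decision rule above can be sketched as a small stateful detector: EMA-smoothed scores compared against a fixed calibration quantile, with a sustained-run requirement. The class name and constructor wiring are illustrative, not Sentinel's API.

```javascript
// Sketch: EMA smoothing + conformal threshold + sustained-run rule.
class Detector {
  constructor(threshold, { alpha = 0.1, runLength = 30 } = {}) {
    this.threshold = threshold; // e.g. 75th percentile of calibration scores
    this.alpha = alpha;         // EMA smoothing factor
    this.runLength = runLength; // 30 windows x 30 s stride = 15 minutes
    this.ema = null;
    this.run = 0;
  }
  // Feed one window's nonconformity score; returns true once the smoothed
  // score has stayed above threshold for runLength consecutive windows.
  push(score) {
    this.ema = this.ema === null
      ? score
      : this.alpha * score + (1 - this.alpha) * this.ema;
    this.run = this.ema > this.threshold ? this.run + 1 : 0;
    return this.run >= this.runLength;
  }
}
```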
5. Classification
A softmax head (128 → 5 classes) classifies the fault type: normal, fan degradation,
PDU overload, hot spot, or HVAC drift. Trained on synthetic data; re-trains on
site-specific labelled incidents when available.
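A softmax head over the 128 RBM features reduces to a weighted sum per class plus a normalised exponential. The sketch below assumes row-major weights of shape 5 × 128; the function name and return shape are illustrative.

```javascript
// Sketch: softmax classification head (hidden features -> 5 fault classes).
const FAULTS = ["normal", "fan degradation", "PDU overload", "hot spot", "HVAC drift"];

function classify(hidden, W /* 5 x hidden.length */, b /* length 5 */) {
  const logits = W.map((row, k) =>
    b[k] + row.reduce((s, w, j) => s + w * hidden[j], 0));
  const m = Math.max(...logits);            // subtract max to stabilise exp()
  const exps = logits.map(z => Math.exp(z - m));
  const Z = exps.reduce((s, e) => s + e, 0);
  const probs = exps.map(e => e / Z);
  return { label: FAULTS[probs.indexOf(Math.max(...probs))], probs };
}
```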
Primitive Stack
| Primitive | Role |
| --- | --- |
| USL | FRAI = D/(D+1/dim) — architecture sizing from data complexity |
| USAD | Conformal anomaly detection — distribution-free finite-sample guarantees |
| SatConform | Governance envelope — monitors for representation phase transitions |
| KK V3 | Spike-ready encoding — neuromorphic deployment path via TTFS |
Zero npm dependencies · Pure ESM · Node ≥ 18 · ~69K parameters · <3ms encode latency