Every morning, employees pull up to their workplace charging stalls and face the same question: is it going to be five minutes or forty-five? VoltQueue applies modern machine learning to answer that question — accurately, per-site, in real time.
16 input features · 300 trees per inference · p10–p90 quantile outputs · < 4 min MAE at scale
The Problem
Workplace session durations vary wildly. Some employees charge for 20 minutes. Others for 90. Add overdue stalls, “on my way” notifications, priority users, and Monday morning rush — and you have a system that looks chaotic. But chaos has patterns. And patterns can be learned.
3× average variance in session duration at a typical workplace charging site
62% of queue users say uncertainty about wait time is their biggest frustration
< 4 min mean absolute error achieved by VoltQueue's ML model at mature workplace sites
Layer 1 · Data Collection
The moment an employee checks in to a workplace stall, VoltQueue opens a charging_session record — capturing their position in queue, how many stalls were available, and who was already charging. When they check out, the session closes with a precise duration.
In parallel, a queue_wait_outcome row records predicted vs actual wait for every queue entry — the ground truth dataset the model trains on.
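A minimal sketch of the two records, written as Python dataclasses. Field names beyond those mentioned above (for example `site_id`) are illustrative assumptions, not VoltQueue's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ChargingSession:
    """One charging_session row. Field names beyond the prose are assumptions."""
    site_id: str
    employee_id: str
    queue_position_at_checkin: int
    stalls_available_at_checkin: int
    active_sessions_at_checkin: int
    checked_in_at: datetime
    checked_out_at: Optional[datetime] = None  # set at check-out

    @property
    def duration_min(self) -> Optional[float]:
        if self.checked_out_at is None:
            return None  # session still open
        return (self.checked_out_at - self.checked_in_at).total_seconds() / 60

@dataclass
class QueueWaitOutcome:
    """One queue_wait_outcome row: predicted vs actual wait, the ground
    truth the model trains on."""
    site_id: str
    predicted_wait_min: float
    actual_wait_min: float
```

The precise duration only materializes at check-out, which is why open sessions report `None` rather than a partial figure.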
Layer 2 · Pattern Discovery
Workplace charging behavior follows your company's schedule — not national averages. Monday morning rush. Quiet Tuesday afternoons. Friday half-days. VoltQueue learns these rhythms from your site's own data.
The heatmap shows session frequency by hour and day of week. These patterns become features: hour_of_day, day_of_week, is_monday_morning, is_friday_afternoon — each contributing real signal to the prediction.
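The timing features above can be derived from a single timestamp. The cutoffs used here ("morning" before noon, "afternoon" from noon on) are illustrative assumptions, not VoltQueue's actual thresholds:

```python
from datetime import datetime

def time_features(ts: datetime) -> dict:
    # Morning/afternoon cutoffs (12:00) are assumed for illustration.
    return {
        "hour_of_day": ts.hour,
        "day_of_week": ts.weekday(),  # Monday = 0
        "is_monday_morning": ts.weekday() == 0 and ts.hour < 12,
        "is_friday_afternoon": ts.weekday() == 4 and ts.hour >= 12,
    }
```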
Layer 3 · Heuristic Baseline
Before a site accumulates enough data for ML, VoltQueue falls back to a deterministic release-schedule algorithm. It builds a timeline of when each stall is expected to free up, combines that with historical session averages, and maps queue positions to release waves.
[Interactive diagram: release schedule logic · release timeline (next 40 min)]
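A simplified sketch of the release-wave idea: sort each occupied stall's expected free time into a timeline (filling gaps with the historical average), then map queue position k to the k-th release. This ignores priority users and other refinements the real heuristic would handle:

```python
from datetime import datetime, timedelta
from typing import Optional, Sequence

def estimate_wait_min(
    queue_position: int,                             # 1 = next in line
    expected_free_at: Sequence[Optional[datetime]],  # one entry per occupied stall
    avg_session_min: float,                          # historical average for this site
    now: datetime,
) -> float:
    """Map queue positions to release waves on a stall-release timeline."""
    releases = sorted(
        t if t is not None else now + timedelta(minutes=avg_session_min)
        for t in expected_free_at
    )
    # Position k is served by the k-th stall to free up, wrapping through
    # the timeline in waves when the queue is longer than the stall count.
    idx = (queue_position - 1) % len(releases)
    wave = (queue_position - 1) // len(releases)
    eta = releases[idx] + timedelta(minutes=wave * avg_session_min)
    return max(0.0, (eta - now).total_seconds() / 60)
```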
Layer 4 · Machine Learning
Once a site accumulates 200 charging sessions and at least 30 days of history, VoltQueue trains a gradient-boosted tree ensemble using XGBoost. We use quantile regression — training three separate regressors, one per target quantile — to output p10, p50, and p90 predictions. That gives employees both a best guess and a calibrated confidence interval.
p10 · optimistic (alpha = 0.10)
p50 · median (alpha = 0.50)
p90 · pessimistic (alpha = 0.90)
Activation criteria: 200 charging sessions and at least 30 days of site history.
Weekly retraining
Every Sunday at midnight (site local time), VoltQueue retrains using the last 90 days of queue_wait_outcomes, compares MAE against the heuristic on a 20% holdout, and only activates the new model if it wins.
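The activation gate itself is simple to state. A minimal sketch, assuming both candidates are scored on the same 20% holdout:

```python
def mean_absolute_error(pred, actual):
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(actual)

def should_activate(model_pred, heuristic_pred, actual) -> bool:
    """Sunday-night gate: the retrained model ships only if its MAE beats
    the heuristic on the same held-out queue_wait_outcomes."""
    return mean_absolute_error(model_pred, actual) < mean_absolute_error(heuristic_pred, actual)
```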
Fallback chain: ML model → heuristic release schedule → 45-minute default.
Feature Engineering
At the moment someone joins the queue, VoltQueue snapshots 16 contextual features and passes them to the model. Every feature is stored in a features JSONB column — so each prediction is permanently traceable and auditable.
Queue signals
Where you are in line and how fast the queue is moving.
Stall state
Real-time supply: how many stalls are occupied, available, or overdue.
Historical averages
Rolling 30-day session duration statistics for this site and this employee.
Timing & context
When it is, and what that typically means for your specific workplace.
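One way the 16-feature snapshot could be assembled and serialized for the features JSONB column. Only the four timing names appear in the prose; the remaining feature names are illustrative guesses at what each category might contain:

```python
import json
from datetime import datetime

def snapshot_features(queue: dict, stalls: dict, history: dict, ts: datetime) -> str:
    """Snapshot the 16 contextual features at queue-join time and serialize
    them for the features JSONB column. Names beyond the prose are assumed."""
    features = {
        # Queue signals: where you are in line and how fast it moves
        "queue_position": queue["position"],
        "queue_length": queue["length"],
        "departures_last_hour": queue["departures_last_hour"],
        "avg_wait_last_hour_min": queue["avg_wait_last_hour_min"],
        # Stall state: real-time supply
        "stalls_occupied": stalls["occupied"],
        "stalls_available": stalls["available"],
        "stalls_overdue": stalls["overdue"],
        "min_expected_free_min": stalls["min_expected_free_min"],
        # Historical averages: rolling 30-day stats for site and employee
        "site_avg_session_min_30d": history["site_avg_min"],
        "site_p90_session_min_30d": history["site_p90_min"],
        "user_avg_session_min_30d": history["user_avg_min"],
        "user_sessions_30d": history["user_sessions"],
        # Timing & context
        "hour_of_day": ts.hour,
        "day_of_week": ts.weekday(),
        "is_monday_morning": ts.weekday() == 0 and ts.hour < 12,
        "is_friday_afternoon": ts.weekday() == 4 and ts.hour >= 12,
    }
    assert len(features) == 16  # the full snapshot, per the prose
    return json.dumps(features)
```

Storing the serialized snapshot alongside each prediction is what makes every estimate traceable after the fact.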
Continuous Improvement
As sessions accumulate, the model has more workplace data to learn from. MAE drops measurably over the first 10 weeks. The ML model only activates when it provably outperforms the heuristic on held-out data.
Admins can track prediction accuracy in the analytics dashboard — mean absolute error, p10–p90 capture rate, overprediction rate, and model type breakdown over time.
7.2 min · avg MAE (heuristic)
3.8 min · avg MAE (ML, mature)
~80% · p10–p90 capture rate
< 12% · overprediction rate
[Chart: MAE over time at a simulated workplace site]
Robustness
Real workplaces break in interesting ways. We designed explicit handling for every failure mode before shipping a single estimate.
No historical data
Fall back to 45-minute default. Confidence = low. Show range instead of point estimate.
All stalls out-of-service
Confidence = unavailable. UI shows "No stalls available" — no estimate is better than a wrong one.
Overdue stalls > 50% of avg
Drop expected_free_at timestamps. Fall back to rolling averages only. Confidence = low.
Stale expected_free_at timestamp
If expected_free_at is > 10 min older than last_updated_at, treat it as null to avoid anchoring on abandoned sessions.
Priority employee joins
Recompute position estimates for everyone behind them. Position accounts for priority slots ahead.
Stale cache (> 5 min)
Show "updated Xm ago". Beyond 15 minutes, return null estimate rather than serve stale data.
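Two of the guards above reduce to small timestamp checks. A sketch of both, with the 10-minute, 5-minute, and 15-minute thresholds taken directly from the rules:

```python
from datetime import datetime, timedelta
from typing import Optional, Tuple

def usable_expected_free_at(
    expected_free_at: Optional[datetime], last_updated_at: datetime
) -> Optional[datetime]:
    """Treat expected_free_at as null when it sits more than 10 minutes
    behind last_updated_at, to avoid anchoring on abandoned sessions."""
    if expected_free_at is None:
        return None
    if last_updated_at - expected_free_at > timedelta(minutes=10):
        return None
    return expected_free_at

def serve_estimate(
    estimate_min: float, cached_at: datetime, now: datetime
) -> Tuple[Optional[float], Optional[str]]:
    """Apply the cache rules: label estimates older than 5 minutes, and
    return a null estimate past 15 minutes rather than serve stale data."""
    age = now - cached_at
    if age > timedelta(minutes=15):
        return None, None
    if age > timedelta(minutes=5):
        return estimate_min, f"updated {int(age.total_seconds() // 60)}m ago"
    return estimate_min, None
```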
What Employees See
Sixteen features, 300 gradient-boosted trees, three quantile outputs, and a confidence tier — distilled into a single human-readable estimate. No jargon. Just: “~14 min wait.”
→ High confidence: point estimate — “~14 min”
→ Low confidence: range — “15 – 30 min”
→ No data: “Wait time unavailable”
→ Times > 15 min round to nearest 5. Cap at 3 hours (“Long wait”).
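The display rules above can be sketched as a single formatting function. The signature and the `confidence` values are assumptions; the rounding, range, and cap behavior follow the bullets:

```python
from typing import Optional

def format_wait(
    p50_min: Optional[float],
    p10_min: Optional[float] = None,
    p90_min: Optional[float] = None,
    confidence: str = "high",
) -> str:
    """Distill quantile outputs into the single string employees see."""
    def rnd(m: float) -> int:
        # Times over 15 minutes round to the nearest 5
        return round(m / 5) * 5 if m > 15 else round(m)

    if confidence == "unavailable" or p50_min is None:
        return "Wait time unavailable"
    if p50_min > 180:  # cap at 3 hours
        return "Long wait"
    if confidence == "low" and p10_min is not None and p90_min is not None:
        return f"{rnd(p10_min)} – {rnd(p90_min)} min"
    return f"~{rnd(p50_min)} min"
```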
[UI example: Your position #3 · Estimated wait ~14 min]
Setup takes minutes. The model starts collecting data from day one. By week eight, it knows your workplace better than a spreadsheet ever could.