AI Wait Intelligence

Nobody knows
how long they'll wait.
We fixed that.

Every morning, employees pull up to their workplace charging stalls and face the same question: is it going to be five minutes or forty-five? VoltQueue applies modern machine learning to answer that question — accurately, per-site, in real time.

16 · input features
300 · trees per inference
p10–p90 · quantile outputs
< 4 min · MAE at scale

The Problem

Workplace charging queues are unpredictable by nature. We taught a model to predict them anyway.

Workplace session durations vary wildly. Some employees charge for 20 minutes. Others for 90. Add overdue stalls, “on my way” notifications, priority users, and Monday morning rush — and you have a system that looks chaotic. But chaos has patterns. And patterns can be learned.

average variance in session duration at a typical workplace charging site
62% · of queue users say uncertainty about wait time is their biggest frustration
< 4 min · mean absolute error achieved by VoltQueue's ML model at mature workplace sites

Layer 1 · Data Collection

Every session, silently logged.

The moment an employee checks in to a workplace stall, VoltQueue opens a charging_session record — capturing their position in queue, how many stalls were available, and who was already charging. When they check out, the session closes with a precise duration.

In parallel, a queue_wait_outcome row records predicted vs actual wait for every queue entry — the ground truth dataset the model trains on.

started_at, ended_at → session duration
queue_length_at_start → demand snapshot
available_stalls_at_start → supply snapshot
started_via → queue | direct | override
actual_wait_mins → ground truth label
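The two records above can be sketched as lightweight Python dataclasses. This is a hypothetical shape, not the production schema; field names follow the columns listed.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ChargingSession:
    # Opened at check-in, closed at check-out
    started_at: datetime
    ended_at: Optional[datetime]    # None while still charging
    queue_length_at_start: int      # demand snapshot
    available_stalls_at_start: int  # supply snapshot
    started_via: str                # "queue" | "direct" | "override"

    @property
    def duration_mins(self) -> Optional[float]:
        if self.ended_at is None:
            return None
        return (self.ended_at - self.started_at).total_seconds() / 60

@dataclass
class QueueWaitOutcome:
    # Predicted vs actual wait: the ground-truth training pair
    predicted_wait_mins: float
    actual_wait_mins: float

    @property
    def abs_error(self) -> float:
        return abs(self.predicted_wait_mins - self.actual_wait_mins)
```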
Live · Ingesting sessions

[Heatmap: session frequency by hour (7a–6p) and day of week (Mon–Fri), shaded low → high]

Layer 2 · Pattern Discovery

Your office has a fingerprint.

Workplace charging behavior follows your company's schedule — not national averages. Monday morning rush. Quiet Tuesday afternoons. Friday half-days. VoltQueue learns these rhythms from your site's own data.

The heatmap shows session frequency by hour and day of week. These patterns become features: hour_of_day, day_of_week, is_monday_morning, is_friday_afternoon — each contributing real signal to the prediction.
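Deriving those calendar features is straightforward; a minimal sketch, assuming "morning" means before noon and "afternoon" means noon onward (the exact windows are not stated in this section):

```python
from datetime import datetime

def time_features(ts: datetime) -> dict:
    """Derive calendar features from a check-in timestamp (sketch)."""
    hour, dow = ts.hour, ts.weekday()  # Monday == 0
    return {
        "hour_of_day": hour,
        "day_of_week": dow,
        # Assumed windows: morning = before noon, afternoon = noon onward
        "is_monday_morning": dow == 0 and hour < 12,
        "is_friday_afternoon": dow == 4 and hour >= 12,
    }
```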

Layer 3 · Heuristic Baseline

Estimates from day one — no ML required.

Before a site accumulates enough data for ML, VoltQueue falls back to a deterministic release-schedule algorithm. It builds a timeline of when each stall is expected to free up, combines that with historical session averages, and maps queue positions to release waves.

Release schedule logic

  1. Collect expected_free_at for all in-use stalls
  2. For stalls missing EF, substitute 30-day median session length
  3. If overdue stalls exceed 50% of avg session → ignore EF, use avg only
  4. Sort releases chronologically to build a wave schedule
  5. Map queue position → release wave → estimated start time
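The five steps above can be sketched in a few lines of Python. The overdue threshold in step 3 reads ambiguously; this sketch interprets it as "more than half of the in-use stalls are overdue", and all function and parameter names are illustrative.

```python
from datetime import datetime, timedelta
from typing import List, Optional

def wave_schedule(now: datetime,
                  expected_free_at: List[Optional[datetime]],  # one entry per in-use stall
                  median_session_mins: float,
                  avg_session_mins: float,
                  overdue_count: int) -> List[datetime]:
    """Build the release wave schedule (steps 1-4, sketch)."""
    n = len(expected_free_at)
    # Step 3 (one reading): too many overdue stalls -> distrust all EF timestamps
    if n and overdue_count / n > 0.5:
        releases = [now + timedelta(minutes=avg_session_mins)] * n
    else:
        releases = [
            ef if ef is not None
            # Step 2: missing EF -> substitute the 30-day median session length
            else now + timedelta(minutes=median_session_mins)
            for ef in expected_free_at
        ]
    return sorted(releases)  # Step 4: chronological wave schedule

def estimated_start(now: datetime, schedule: List[datetime], position: int) -> float:
    """Step 5: queue position N starts at the Nth release wave (minutes from now)."""
    wave = schedule[min(position - 1, len(schedule) - 1)]
    return max((wave - now).total_seconds() / 60, 0.0)
```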

Release timeline (next 40 min)

A1 frees in ~8m · A2 ~18m · B1 ~4m · B2 overdue · C1 ~24m
Queue estimates: #1 ~4 min · #2 ~8 min · #3 ~18 min

Layer 4 · Machine Learning

After 200 sessions, the model wakes up.

Once a site accumulates 200 charging sessions and at least 30 days of history, VoltQueue trains a gradient-boosted tree ensemble using XGBoost. We use quantile regression — training three separate regressors simultaneously — to output p10, p50, and p90 predictions. That gives employees both a best guess and a calibrated confidence interval.
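The three-regressor setup can be sketched as follows. The product trains with XGBoost's quantile objective; scikit-learn's quantile loss is used here as a widely available stand-in with the same alpha semantics, and the helper names are illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# One alpha per output quantile, as described above
ALPHAS = {"p10": 0.10, "p50": 0.50, "p90": 0.90}

def train_quantile_models(X, y, n_trees=300):
    """Train one gradient-boosted regressor per quantile (three fits)."""
    return {
        name: GradientBoostingRegressor(
            loss="quantile", alpha=a, n_estimators=n_trees
        ).fit(X, y)
        for name, a in ALPHAS.items()
    }

def predict_wait(models, x_row):
    """Return {p10, p50, p90} wait estimates in minutes for one snapshot."""
    x = np.asarray(x_row, dtype=float).reshape(1, -1)
    return {name: float(m.predict(x)[0]) for name, m in models.items()}
```

The p50 output is the headline estimate; p10 and p90 bound the confidence interval shown to employees.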

XGBoost Quantile Regressor · ready

p10 · Optimistic · alpha 0.10
p50 · Median · alpha 0.50
p90 · Pessimistic · alpha 0.90

Tree ensemble: 300 trees

Activation criteria

≥ 200 charging sessions
≥ 30 days of history
Model trained < 14 days ago
ML MAE beats heuristic on 20% holdout
All features populated in snapshot
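The criteria above amount to a single gate check; a sketch with a hypothetical `site` object whose attribute names are illustrative:

```python
def ml_active(site) -> bool:
    """Mirror of the activation criteria listed above (sketch)."""
    return (
        site.session_count >= 200
        and site.history_days >= 30
        and site.days_since_training < 14
        and site.ml_holdout_mae < site.heuristic_holdout_mae  # beats heuristic on holdout
        and site.snapshot_features_complete
    )
```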

Weekly retraining

Every Sunday at midnight (site local time), VoltQueue retrains using the last 90 days of queue_wait_outcomes, compares MAE against the heuristic on a 20% holdout, and only activates the new model if it wins.
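The holdout comparison at the heart of the Sunday job can be sketched like this (function names are illustrative):

```python
import numpy as np

def mean_abs_error(pred, actual):
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(actual))))

def choose_model(ml_preds, heuristic_preds, actual_waits):
    """On the 20% holdout, activate the new ML model only if its MAE
    beats the heuristic's (sketch of the weekly retraining gate)."""
    ml_mae = mean_abs_error(ml_preds, actual_waits)
    h_mae = mean_abs_error(heuristic_preds, actual_waits)
    return ("ml" if ml_mae < h_mae else "heuristic"), ml_mae, h_mae
```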

Fallback chain

ML → Heuristic → 45 min default → Unavailable

Feature Engineering

16 features. One prediction.

At the moment someone joins the queue, VoltQueue snapshots 16 contextual features and passes them to the model. Every feature is stored in a features JSONB column — so each prediction is permanently traceable and auditable.
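Writing the snapshot to a JSONB column is just serialization of the feature dict at prediction time; a sketch showing an illustrative subset of the 16 keys:

```python
import json
from datetime import datetime

def snapshot_features(ctx: dict) -> str:
    """Serialize the feature snapshot for a `features` JSONB column
    (illustrative subset; keys follow the feature names in this section)."""
    features = {
        "queue_position": ctx["queue_position"],
        "available_stalls": ctx["available_stalls"],
        "avg_session_30d": ctx["avg_session_30d"],
        "hour_of_day": ctx["now"].hour,
        "day_of_week": ctx["now"].weekday(),
        # ... remaining features captured the same way
    }
    return json.dumps(features, sort_keys=True)
```

Because the exact inputs are stored alongside each prediction, any estimate can later be replayed against a newer model or audited field by field.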

queue_position · 91%
avg_session_30d · 83%
in_use_stalls · 74%
min_EF_delta · 68%
hour_of_day · 61%
overdue_count · 55%
p75_session · 48%
day_of_week · 44%
available_stalls · 41%
queue_velocity · 37%
on_my_way_count · 31%
avg_turnover · 28%
is_monday_morning · 22%
user_avg_session · 18%
is_friday_afternoon · 14%
is_priority · 9%

Queue signals

Where you are in line and how fast the queue is moving.

queue_position · on_my_way_count · queue_velocity · is_priority

Stall state

Real-time supply: how many stalls are occupied, available, or overdue.

available_stalls · in_use_stalls · overdue_count · pct_available

Historical averages

Rolling 30-day session duration statistics for this site and this employee.

avg_session_30d · p25_session · p75_session · avg_turnover · user_avg_session

Timing & context

When it is, and what that typically means for your specific workplace.

has_EF · min_EF_delta · hour_of_day · day_of_week · is_monday_morning · is_friday_afternoon

Continuous Improvement

The model gets better every week.

As sessions accumulate, the model has more workplace data to learn from. MAE drops measurably over the first 10 weeks. The ML model only activates when it provably outperforms the heuristic on held-out data.

Admins can track prediction accuracy in the analytics dashboard — mean absolute error, p10–p90 capture rate, overprediction rate, and model type breakdown over time.
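Those dashboard metrics are simple aggregates over stored predictions and outcomes. A sketch, assuming "capture rate" means the actual wait landed inside [p10, p90] and "overprediction" means the p50 estimate came in above the actual wait:

```python
import numpy as np

def accuracy_report(p10, p50, p90, actual):
    """Dashboard accuracy metrics (sketch): MAE, p10-p90 capture rate,
    and overprediction rate."""
    p10, p50, p90, actual = map(np.asarray, (p10, p50, p90, actual))
    return {
        "mae_mins": float(np.mean(np.abs(p50 - actual))),
        "capture_rate": float(np.mean((actual >= p10) & (actual <= p90))),
        "overprediction_rate": float(np.mean(p50 > actual)),
    }
```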

7.2 min · Avg MAE (heuristic)
3.8 min · Avg MAE (ML, mature)
~80% · p10–p90 capture rate
< 12% · Overprediction rate

MAE over time (simulated workplace site)

[Chart: MAE (min, ↓ better) across weeks W1–W10, heuristic vs ML model · ML activates at week 7 · MAE drops 47%]

Robustness

Edge cases aren't afterthoughts.

Real workplaces break in interesting ways. We designed explicit handling for every failure mode before shipping a single estimate.

No historical data

Fall back to 45-minute default. Confidence = low. Show range instead of point estimate.

All stalls out-of-service

Confidence = unavailable. UI shows "No stalls available" — no estimate is better than a wrong one.

Overdue stalls > 50% of avg

Drop expected_free_at timestamps. Fall back to rolling averages only. Confidence = low.

Stale EF timestamp

If expected_free_at is > 10 min older than last_updated_at, treat it as null to avoid anchoring on abandoned sessions.

Priority employee joins

Recompute position estimates for everyone behind them. Position accounts for priority slots ahead.

Stale cache (> 5 min)

Show "updated Xm ago". Beyond 15 minutes, return null estimate rather than serve stale data.
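Two of the guards above, the stale-EF rule and the stale-cache rule, can be sketched directly from their thresholds (helper names are illustrative):

```python
from datetime import datetime, timedelta
from typing import Optional

EF_STALENESS = timedelta(minutes=10)
CACHE_SOFT = timedelta(minutes=5)
CACHE_HARD = timedelta(minutes=15)

def usable_ef(expected_free_at: Optional[datetime],
              last_updated_at: Optional[datetime]) -> Optional[datetime]:
    """Treat an EF timestamp as null when it is >10 min older than its
    last update, to avoid anchoring on abandoned sessions."""
    if expected_free_at is None or last_updated_at is None:
        return None
    if last_updated_at - expected_free_at > EF_STALENESS:
        return None
    return expected_free_at

def cached_estimate(estimate, cached_at: datetime, now: datetime):
    """Serve a cached estimate with an age note up to 15 min old;
    beyond that, return None rather than stale data."""
    age = now - cached_at
    if age > CACHE_HARD:
        return None
    note = f"updated {int(age.total_seconds() // 60)}m ago" if age > CACHE_SOFT else None
    return estimate, note
```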

What Employees See

All of this becomes one number.

Sixteen features, 300 gradient-boosted trees, three quantile outputs, and a confidence tier — distilled into a single human-readable estimate. No jargon. Just: “~14 min wait.”

High confidence: point estimate — “~14 min”

Low confidence: range — “15 – 30 min”

No data: “Wait time unavailable”

→ Times > 15 min round to nearest 5. Cap at 3 hours (“Long wait”).
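The display rules above reduce to one small formatting function; a sketch (the function name and signature are illustrative):

```python
from typing import Optional

def format_wait(p50: Optional[float], confidence: str,
                p10: Optional[float] = None,
                p90: Optional[float] = None) -> str:
    """Render a prediction per the display rules above (sketch)."""
    if confidence == "unavailable" or p50 is None:
        return "Wait time unavailable"
    if p50 > 180:  # cap at 3 hours
        return "Long wait"

    def rnd(m: float) -> int:
        # Times over 15 min round to the nearest 5
        return round(m / 5) * 5 if m > 15 else round(m)

    if confidence == "low" and p10 is not None and p90 is not None:
        return f"{rnd(p10)} – {rnd(p90)} min"   # low confidence: show a range
    return f"~{rnd(p50)} min"                   # high confidence: point estimate
```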

Your position: #3
Estimated wait: ~14 min
Confidence range: 11 – 22 min
4 stalls · 6 in queue · High confidence

Ready to deploy this at your workplace?

Setup takes minutes. The model starts collecting data from day one. By week eight, it knows your workplace better than a spreadsheet ever could.