Fleet Intelligence — Apr 23, 2026

Fleet Token Report

Full fleet analysis — 1,941 turns — 66 JSONL files across all machines

362M total tokens today — top driver: tool-result loops (50%) — claude-peers mesh: 3.5% (not the bottleneck) — cache hit rate: 96.7%
Grand Total: 361.9M (all tokens)
Cache Hit Rate: 96.7% (of input tokens)
Cache Read: 349.9M (cheap reads)
Cache Write: 10.9M (context stored)
Direct Input: 11.6K (fresh uncached)
Output: 1.17M (tokens generated)
Turns: 1,941 (across 66 sessions)
Active Variants: 20 (fleet-wide)

Tokens by Trigger Category

| Category | Tokens | % of Total | Turns |
|---|---|---|---|
| tool-result — tool loop continuations | 180,853,314 | 50.0% | 1,041 |
| user-prompt — Wes input / boot prompts | 139,091,164 | 38.4% | 693 |
| telegram-inbound — Telegram messages | 29,515,740 | 8.2% | 139 |
| peer-inbound — claude-peers mesh | 12,505,305 | 3.5% | 68 |

The claude-peers mesh accounts for only 3.5% of tokens. Session context accumulation in long-running tool loops is the primary cost driver.

Variant Breakdown

| Variant | Machine | peer-inbound | telegram | tool-result | user-prompt | Total | % Fleet |
|---|---|---|---|---|---|---|---|
| clarvis-chat | clarvis | 7.0M | 21.4M | 81.3M | 40.4M | 150.1M | 41.5% |
| clarvis-tony (Clars) | clarvis | 3.7M | 5.4M | 56.0M | 72.8M | 137.9M | 38.1% |
| vault-agent (Gravel) | cheesegrater | 0.7M | | 12.2M | 3.0M | 15.9M | 4.4% |
| vaultmate (Flint) | cheesegrater | 0.5M | | 6.8M | 1.9M | 9.2M | 2.5% |
| clarvis-orchestrator | clarvis | | | | 8.0M | 8.0M | 2.2% |
| covenant-demo | clarvis | | 1.0M | 4.5M | 1.3M | 6.9M | 1.9% |
| vault-agent subagents | cheesegrater | | | 6.4M | 0.2M | 6.6M | 1.8% |
| clippy-main | pc | 0.5M | 0.5M | 3.5M | 1.2M | 5.7M | 1.6% |
| clarvis-blue | imac | | 0.4M | 3.7M | 1.5M | 5.7M | 1.6% |
| jarvis-hinesipedia | mac | | | 1.7M | 1.9M | 3.6M | 1.0% |
| clarvis-aleph | imac | | 0.7M | 1.1M | 1.4M | 3.3M | 0.9% |
| clarvis-work-2 | clarvis | | | 1.2M | 1.2M | 2.4M | 0.7% |
| mac (other) | mac | | | | 1.7M | 1.7M | 0.5% |
| clippy-work-2 | pc | 0.1M | 0.1M | 1.1M | 0.3M | 1.6M | 0.4% |
| prospecting-christian | cheesegrater | | | 0.4M | 0.5M | 1.0M | 0.3% |
| prospecting | cheesegrater | | | 0.3M | 0.4M | 0.7M | 0.2% |
| pc (other) | pc | | | 0.7M | | 0.7M | 0.2% |
| jiminy | imac | | | | 0.5M | 0.5M | 0.1% |
| kalshi-bot | cheesegrater | | | | 0.4M | 0.4M | 0.1% |
| clarvis-imessage | clarvis | | | | 0.1M | 0.1M | 0.04% |
| Total | | 12.5M | 29.5M | 180.9M | 139.1M | 361.9M | 100% |

clarvis-chat and clarvis-tony (Clars) together account for 288M tokens, 79.6% of the fleet total. Both sessions reached ~600K tokens of context per turn by midday — context saturation, not mesh overhead.

Per-Hour Breakdown (Central Time)

| Hour CT | peer-inbound | telegram | tool-result | user-prompt | Total |
|---|---|---|---|---|---|
| 04:xx | | | 1.2M | 1.6M | 2.8M |
| 05:xx (peak) | 1.0M | 2.1M | 49.6M | 21.3M | 74.0M |
| 06:xx | | | 6.8M | 0.8M | 7.6M |
| 07:xx (peak) | 0.7M | 6.4M | 25.2M | 28.8M | 61.1M |
| 08:xx | 0.4M | 3.2M | 22.0M | 8.8M | 34.4M |
| 09:xx | 0.3M | 0.9M | 8.4M | 4.2M | 13.8M |
| 10:xx (peak) | 2.2M | 2.3M | 25.8M | 16.4M | 46.7M |
| 11:xx | 0.2M | | 4.2M | 1.8M | 6.1M |
| 12:xx | | | 7.2M | 1.2M | 8.4M |
| 13:xx | 0.5M | 0.2M | 5.3M | 1.3M | 7.4M |
| 14:xx | 0.2M | 0.7M | 3.8M | 1.4M | 6.1M |
| 15:xx | 3.1M | 3.6M | 10.7M | 8.4M | 25.9M |
| 16:xx (peak) | 3.8M | 10.0M | 5.3M | 41.7M | 60.9M |
| 17:xx | | | 5.3M | 1.4M | 6.7M |

Peak hours: 05:xx (74M — early Clarvis wake), 07:xx (61M — morning fleet), 10:xx (47M), 16:xx (61M — 2pm fleet restart).

Key Finding

The claude-peers mesh is not the cost driver. At 3.5% of total tokens, it is the smallest trigger category; the going-in hypothesis that mesh traffic was driving today's spend was wrong.

The real lever is Clarvis session compaction cadence. Both clarvis-chat and clarvis-tony were accumulating ~600K tokens of context per turn by midday. Each tool call reads the entire context — a long session compounds the cost of every action it takes.
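A back-of-envelope model makes the compounding concrete. This is a minimal sketch: the per-turn context growth number is illustrative, not measured from today's logs.

```python
# Back-of-envelope model of per-session input tokens (illustrative only).
# Assumes each turn re-reads the whole accumulated context, then appends
# a fixed amount of new context; growth_per_turn is an assumed figure.

def session_input_tokens(turns: int, growth_per_turn: int = 3_000) -> int:
    """Total input tokens read across one session with no compaction."""
    context = 0
    total_read = 0
    for _ in range(turns):
        total_read += context        # each tool call re-reads everything so far
        context += growth_per_turn   # context keeps accumulating
    return total_read

# One 200-turn session vs the same work split into four 50-turn sessions:
print(f"{session_input_tokens(200):,}")     # 59,700,000 tokens
print(f"{4 * session_input_tokens(50):,}")  # 14,700,000 tokens
```

Under these assumed numbers, compacting three times cuts input volume roughly 4x for the same amount of work, which is why compaction cadence, not mesh traffic, is the lever.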

Cache infrastructure is working well: 96.7% of input tokens were cache reads, and only 11.6K tokens were fresh uncached input across the entire fleet today. Because cache reads are typically billed at a small fraction of the base input rate, actual billing cost is far lower than the raw token counts suggest.
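For rough intuition, here is a sketch of the effective input bill under assumed prompt-caching multipliers (cache reads at 0.1x and cache writes at 1.25x the base input rate; both the multipliers and the base rate are assumptions, so substitute your actual pricing):

```python
# Effective input cost for today's fleet totals, using ASSUMED multipliers:
# cache read = 0.1x base input rate, cache write = 1.25x base input rate.
BASE_RATE = 3.00  # assumed $/M input tokens; substitute your model's rate

cache_read_m, cache_write_m, direct_m = 349.9, 10.9, 0.0116  # millions of tokens

effective = (cache_read_m * 0.1 + cache_write_m * 1.25 + direct_m) * BASE_RATE
naive     = (cache_read_m + cache_write_m + direct_m) * BASE_RATE

print(f"effective input bill ${effective:,.0f} vs naive ${naive:,.0f}")
# roughly $146 vs $1,082: about 7x cheaper under these assumed rates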

Notes

Source: JSONL files across all fleet machines with mtime on Apr 23 CT.
Deduplication is by message.id. Trigger classification checks the parent user message for channel source tags and tool_result content (see the sketch below).
Clarvis transcripts cover pre-2pm-restart sessions only (sync gap after fleet restart).
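A minimal sketch of that dedup-and-classify pass, assuming a JSONL shape with a message.id, per-turn usage counts, and an attached parent user message; the field names, channel tags, and directory layout here are illustrative, not confirmed:

```python
import json
from pathlib import Path

def classify(turn: dict) -> str:
    """Bucket a turn by what triggered it (field names are assumptions)."""
    parent = turn.get("parent_user_message", {})
    if any(b.get("type") == "tool_result" for b in parent.get("blocks", [])):
        return "tool-result"
    text = parent.get("content", "")
    if "[telegram]" in text:   # hypothetical channel source tag
        return "telegram-inbound"
    if "[peer]" in text:       # hypothetical claude-peers source tag
        return "peer-inbound"
    return "user-prompt"

seen: set[str] = set()         # dedupe synced copies by message.id
totals: dict[str, int] = {}

for path in Path("transcripts").rglob("*.jsonl"):  # illustrative root dir
    with path.open() as fh:
        for line in fh:
            turn = json.loads(line)
            msg_id = turn.get("message", {}).get("id")
            if not msg_id or msg_id in seen:
                continue
            seen.add(msg_id)
            cat = classify(turn)
            tokens = turn.get("usage", {}).get("input_tokens", 0)
            totals[cat] = totals.get(cat, 0) + tokens
```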