How much time does Elon Musk spend on Twitter?

An estimate built from his public posts. Set the assumptions yourself — read‑time, think‑time, typing speed, what counts as one session — and see what the data implies.

⚠ Lower bound, not a measurement. The dataset is one community scrape — best available, not guaranteed exhaustive. Deleted posts and missed scrapes are gone forever. Pure scrolling that didn't end in a post is invisible to this method. Whatever number the page shows, the real one is at least that much.

How this works

We start with every public post Elon has sent going back to 2010, each tagged with the exact moment it went out. Posts that came in close succession get bundled into a single session — our shorthand for "he was on the app right then." If two posts are more than half an hour apart, that's two separate visits. Across his timeline that adds up to thousands of distinct sessions.

For each session we add up time from two angles, and take whichever is bigger:

  1. Wall‑clock evidence. If he posted at 9:00 and again at 9:25, he was on the app for at least 25 minutes. No way around it.
  2. Per‑post effort. Each post takes time on its own — reading the tweet you're replying to, deciding what to say, typing it out, hitting send. The sliders below let you set how long each step takes.

Add it up across every session and you get the daily, monthly, and lifetime totals in the cards.

Drag any slider and every chart redraws live — including the session‑gap rule that separates one visit from the next. Widen or tighten the gap and a session like "9:00 then 9:35" merges or splits, which shifts the estimate. Every setting produces a different (but always defensible) number.

The knobs — change these, every chart updates live

Presets:

Heads up: many of his sessions are anchored by wall‑clock evidence — if he posted at 9:00 and 9:25 he was on the app for 25 min regardless of how fast he types. So a few of these knobs barely move the headline number. The breakdown card below shows where the time actually comes from.

Headline numbers — with current knob settings

Where the time comes from — wall‑clock evidence vs. slider math

Hours on Twitter, per day — smoothed

Calendar of activity — color intensity = hours that day

Legend: 0h · 0–1h · 1–3h · 3–5h · 5–8h · >8h

When during the day? — circadian rhythm of the timeline

Time zone:

The dataset stores timestamps as UTC. Buttons apply a fixed offset — no DST handling. PDT (−7) is a reasonable default for most of the year in California; flip to −8 for winter.
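A minimal sketch of what the offset buttons do (Python for illustration; the page itself does this in client‑side JS). Note there is deliberately no DST logic — the offset is a constant:

```python
from datetime import datetime, timedelta, timezone

def to_local_hour(utc_iso: str, offset_hours: int) -> int:
    """Apply a fixed UTC offset (no DST handling) and return the local hour."""
    dt = datetime.fromisoformat(utc_iso).replace(tzinfo=timezone.utc)
    return (dt + timedelta(hours=offset_hours)).hour

# 16:30 UTC with the PDT-style -7 offset lands at 09:30 local.
```

Because the offset is fixed, winter timestamps shifted with −7 will read one hour later than true Pacific time — hence the suggestion to flip to −8.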

Tweets by hour of day (clock)

Hour in selected time zone. Length of bar = % of all tweets posted in that hour.

Day‑of‑week × hour heatmap

Darker = more posts. Mon top, Sun bottom. Selected time zone.

What kind of posts? — composition over time

Replies dominate — every reply implies he had to read the parent tweet first.

Monthly time on Twitter — hours per month, all years

Does he sleep? — activity by hour, year by year

Each row = one year. Cell darkness = share of that year's posts in that hour (selected time zone). Look for the dark band — the closest thing to sleep. The band shrinks after 2022.

Longest binge sessions — uninterrupted runs of posts

Started (UTC) · Length · Posts · Originals · Replies · Quotes · Posts/min · Chars typed

Highest tweet‑count days — top 30 single days

Methodology & data sources

1. Dataset

Posts come from a public, community‑maintained scrape of @elonmusk's X.com timeline, snapshot dated 2025‑08‑15. Source repo: MagdalenaRomaniecka/Decompiling‑MuskOS  CSV · 56 MB.

Raw rows: 67,978. After dedupe on id and dropping rows with no createdAt, the usable posts span 2010 through 2025‑04‑13 — anything more recent isn't in this build. Re‑run make fetch && make against a newer snapshot and the entire site rebuilds.

2. Schema actually used

From each row we keep:

  • id — primary key for dedupe
  • createdAt — ISO timestamp, UTC
  • fullText — the post body (we use length(fullText) for character count)
  • isReply, isRetweet, isQuote — boolean flags from the upstream scraper. A post is "original" iff all three are false.

Engagement counts (likeCount, replyCount, etc.) are stored but the time‑budget model doesn't use them.
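A sketch of the cleanup and labeling rules above (illustrative Python; the real loader is SQL in sql/load.sql, and the flag‑precedence order here is an assumption — the source only defines "original" as all three flags false):

```python
def classify(row: dict) -> str:
    """Label a post from the three upstream boolean flags.
    Precedence when several flags are set is an assumption."""
    if row.get("isReply"):
        return "reply"
    if row.get("isRetweet"):
        return "retweet"
    if row.get("isQuote"):
        return "quote"
    return "original"  # original iff all three flags are false

def clean(rows):
    """Dedupe on id and drop rows with no createdAt timestamp."""
    seen, out = set(), []
    for r in rows:
        if r.get("createdAt") and r["id"] not in seen:
            seen.add(r["id"])
            out.append(r)
    return out
```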

3. Sessionization

A session is a maximal run of posts where every consecutive gap is ≤ session_gap_min minutes. Default is 30 minutes. Larger gap → fewer, longer sessions; smaller gap → more, shorter sessions and probably a smaller estimate.

Sessionization runs entirely in your browser, on per‑tweet timestamps shipped with the page. The session_gap_min slider re‑groups all 55k posts each time you move it (~30 ms).
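The grouping rule is simple enough to sketch in a few lines (Python for illustration; the page does the equivalent in JS on the shipped timestamp array):

```python
def sessionize(timestamps, gap_min=30):
    """Group epoch-second timestamps into sessions: a new session starts
    whenever the gap to the previous post exceeds gap_min minutes."""
    sessions = []
    for t in sorted(timestamps):
        if sessions and t - sessions[-1][-1] <= gap_min * 60:
            sessions[-1].append(t)   # within the gap: same session
        else:
            sessions.append([t])     # gap exceeded: start a new session
    return sessions

# Posts at 9:00, 9:25, and 10:30 → two sessions with the default 30-minute gap.
```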

4. Per‑post time cost

Each post gets a minimum time cost based on its kind. With knob symbols matching the panel above:

cost(original) =                 think_original                  + chars/type_cps + send
cost(reply)    = read_context  + think_reply                     + chars/type_cps + send
cost(quote)    = read_context  + think_quote                     + chars/type_cps + send
cost(retweet)  =                 think_rt                                          + send
  • read_context — only on replies and quotes, since you have to read the parent first. Originals come from the user's own brain.
  • think_* — coming up with what to say. Originals are by far the heaviest; retweets are nearly free.
  • chars/type_cps — typing at the configured speed. type_cps = 3 matches mobile thumb‑typing; 5–7 is typical desktop.
  • send — tap, confirm, occasional edit. Same flat overhead per post.
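The four cost formulas above can be written out directly (a Python sketch using the default knob values from the reference table below; the live page computes this client‑side):

```python
# Defaults match the knob reference table (all values in seconds, except type_cps).
KNOBS = dict(read_context=8, think_original=30, think_reply=10,
             think_quote=15, think_rt=2, type_cps=3, send=3)

def cost(kind: str, chars: int, k=KNOBS) -> float:
    """Minimum seconds to produce one post of the given kind."""
    if kind == "retweet":
        return k["think_rt"] + k["send"]        # no reading, no typing
    typing = chars / k["type_cps"] + k["send"]  # type the body, hit send
    if kind == "original":
        return k["think_original"] + typing     # no parent tweet to read
    think = k["think_reply"] if kind == "reply" else k["think_quote"]
    return k["read_context"] + think + typing   # read parent first
```

For example, a 30‑character reply costs 8 + 10 + 30/3 + 3 = 31 seconds at the defaults.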

5. Per‑session time

Each session's time is built from three components:

  • Span — end − start. If you posted at 9:00 and again at 9:25, you were on the app for at least 25 minutes regardless of how fast you think.
  • Per‑post effort — Σ cost(post). For a tight burst (10 replies in 30 seconds), span is small but typing+thinking is long; this term dominates.
  • Edge padding — 2·edge_pad. Opening the app + glancing at notifications, applied once at start and once at end of every session, regardless of which of the other two terms is bigger.

The first two are alternatives — only the bigger one represents actual time. Edge pad is added unconditionally:

session_time = max( span , Σ cost(post) ) + 2·edge_pad

Daily total = sum of session_time for every session that started that day.
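Putting the pieces together (an illustrative Python sketch of the formula above):

```python
def session_time(post_times, post_costs, edge_pad=30):
    """Seconds on the app for one session: the larger of wall-clock span
    and summed per-post effort, plus edge padding at both ends."""
    span = max(post_times) - min(post_times)          # wall-clock evidence
    return max(span, sum(post_costs)) + 2 * edge_pad  # pad added unconditionally

# Two posts 25 min apart with 71 s of modeled effort → the span wins:
# max(1500, 71) + 60 = 1560 s.
```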

6. Knob reference

Knob             Default   What it represents
read_context     8 s       Time to read the parent of a reply or quote
think_original   30 s      Composing an original take from a blank slate
think_reply      10 s      Reacting to something you've already read
think_quote      15 s      Quote‑tweeting requires more setup than a plain reply
think_rt         2 s       See it, tap RT, done
type_cps         3 cps     Typing speed, characters per second
send             3 s       Tap, confirm, occasional edit
edge_pad         30 s      Buffer at start and end of every session

7. Where the "% of waking life" number comes from

We assume a 16‑hour waking day. career_avg = total_estimated_seconds / (calendar_days × 16 × 3600). The 2024‑specific number uses just 2024's posts and 2024's calendar days. Calendar days, not active days — empty days still count against the denominator.
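The arithmetic is one line (Python sketch of the formula above):

```python
def pct_waking(total_seconds: float, calendar_days: int, waking_hours=16) -> float:
    """Share of a 16-hour waking day spent posting, averaged over
    calendar days (empty days still count in the denominator)."""
    return total_seconds / (calendar_days * waking_hours * 3600)

# e.g. 4 hours/day sustained over a year → 4/16 = 0.25 of waking life.
```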

8. What this method misses (read this)

  • Pure consumption. Scrolling without posting is invisible to a post‑based estimate. If he opens the app for 20 minutes and posts nothing, this analysis sees zero seconds. The real number is higher than reported.
  • Deleted posts. Anything deleted before the snapshot date is gone. We have no way to recover it. The real number is higher.
  • DMs, Spaces, profile checks. Not in the dataset, not in the model.
  • Time‑zone variation. Timestamps are stored in UTC, and the hour‑of‑day plots apply only the fixed offset you select — no DST, and no way to know which zone he was actually in at any moment, since he travels constantly.
  • Multi‑device sessions. Posting from desktop while phone is open elsewhere counts only as one session.

Net: the headline number is a lower bound. Make every assumption optimistic for him (high typing speed, low think time, small edge pad) — and the lower bound is still huge.

9. Reproducibility

The whole pipeline lives in a Makefile:

make fetch       # re-download upstream dataset
make             # build/musk.db → build/*.json → site/index.html
make serve       # local server on :8765

Steps: sql/load.sql (CSV → DuckDB tables, dedupe) → sql/sessions.sql (initial sessionization) → sql/exports.sql (parquet archive + columnar JSON, including per‑tweet timestamps) → scripts/build_site.py (inject JSON into web/index.template.html).

All knobs above run client‑side — including session_gap_min, which re‑sessionizes from the per‑tweet data array on the fly. Move the sliders, every chart updates instantly.

10. Honest caveats

  • The upstream scrape is not the X firehose — it's almost certainly missing some posts, especially older ones.
  • Time costs are population‑level guesses, not measurements of one specific person. Heavy posters get faster with practice; the defaults are calibrated for a competent but not extraordinary mobile user.
  • "Session" is a heuristic. Two real sessions separated by a 35‑minute gap merge into one in the dataset if you turn the gap up to 60 minutes.
  • Hour‑of‑day "sleep" inference is suggestive, not diagnostic. He could be awake and not posting, or asleep with a scheduler.