AI × CEO

Is it possible to be a 100x CEO?

EVAN REISER / FEB 23, 2026 / 5 MIN READ

The whole gap between an AI-augmented CEO and a 100x CEO is context assembly. An engineering problem, not a cognition problem.

At the end of 2025, I dumped a thousand hours of meeting transcripts into Claude and asked it to grade my year as a CEO. I told it to strip away affirmation bias and give me the hard truths first. The number one verdict came back inside ninety seconds: "Evan, you preach all day about using AI and the power of AI transformation, yet you hardly use a single minute of your day to actually use any AI to do your job."

And I took that personally.

So I started asking the question I'd been avoiding. What does my job actually consist of? If AI is going to change how I work, I should probably know exactly what "how I work" is. When I broke it down honestly, the answer was simpler than I expected.

All CEO work is decisions

Responding to this email versus that one. Taking this meeting versus pushing it. Hiring the candidate versus passing. Firing the VP versus giving them another quarter. What I say in the all-hands, what I push back on with the board, where I spend Tuesday afternoon, whether I respond to that customer's note tonight or in the morning. Almost every minute of the job comes down to a decision, and the rest is the input I'm gathering for the next decision.

I sat down one morning and tried to count my decisions before lunch, and I gave up around forty. The "work" is the deciding, and everything else is the preamble that feeds it.

FORTY DECISIONS BEFORE LUNCH

Intelligence is the part that's already solved

For most of those decisions, AI is already at least as smart as I am. If not today, tomorrow. Anyone betting against that curve is betting against the most aggressive technology slope in human history.

I spent three weeks over the holidays in Claude Code building systems I'd assumed were theoretical, and watched them work. Most of the smart operators I talk to are still arguing about whether AI is smart enough to do real cognitive work, and that argument is about to be retired by reality the same way the "is the cloud secure enough for enterprise" argument retired itself.

Pretending intelligence is the bottleneck is comforting because it gives a CEO an excuse to keep doing things the old way. It's also wrong.

THE PROMPT IS A SENTENCE

The prompt is not the hard part

I don't believe in prompt engineering as a discipline that matters for executives. The prompt for most CEO decisions is a sentence. Should I fire X? Should I take this meeting? Is this a real risk or a vanity risk? Is this candidate the right hire for VP of Sales?

You don't need more than that, because the decision-language CEOs already speak is the prompt, and anyone selling executives on prompt-craft as the unlock is solving a problem you don't have. The thing inside your head when you walk into a meeting and ask "what should I do here" has been refined over your entire career and is already functional, so just type it in.

A THOUSAND NOVELS OF LIVED EXPERIENCE

The whole gap is context

Take superhuman intelligence and a one-sentence prompt and the only thing left is context. Which 500,000 tokens out of everything I've ever seen, every conversation I've ever had, every email I've sent, every customer dinner, every board call, every deal that closed and every deal that didn't, get loaded into the window for this specific decision?

My brain is a neural network trained on a lifetime of input, call it a hundred million tokens, which is something like a thousand novels' worth of experience. The largest context window I can give an AI today is a million tokens, and that holds maybe ten novels. So the shape of the challenge is recall: out of the thousand novels I've actually lived, which ten do I open for the decision in front of me right now?
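The back-of-the-envelope version of that recall problem, under the rough (and hypothetical) assumption that a novel runs about 100,000 tokens:

```python
# Rough assumptions, not measurements: a novel is ~100k tokens,
# and today's largest usable context window is ~1M tokens.
TOKENS_PER_NOVEL = 100_000
CONTEXT_WINDOW = 1_000_000
LIFETIME_NOVELS = 1_000          # the "thousand novels" of lived experience

novels_in_window = CONTEXT_WINDOW // TOKENS_PER_NOVEL
recall_ratio = novels_in_window / LIFETIME_NOVELS

print(novels_in_window)          # → 10
print(recall_ratio)              # → 0.01: keep 1% of everything, per decision
```

The 1% is the whole game: every decision is a bet on which 99% of experience is safe to leave on the shelf.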

That reframes the problem completely. We've been arguing about whether AI is intelligent enough to think like a CEO.

The harder problem is whether the right context can find its way to the right decision at the right time.

That's a retrieval and assembly problem, not a cognition problem.

What People Argue About → What Actually Matters
Is AI smart enough to think like a CEO? → Can the right context get to the right decision?
A cognition question → A retrieval and assembly question
A research problem with no clear endpoint → An engineering problem with a working architecture
An excuse to keep doing things the old way → A roadmap I can build against, this quarter
PRE-COMPUTED LIBRARIES, ASSEMBLED AT DECISION TIME

The books architecture

Real-time recall at the speed of human conversation is still out of reach for any system I can run on my laptop. So we work backward and do the retrieval upfront.

What decisions am I going to make next week, and what context would I want in front of me when each one arrives? An hour spent figuring that out before the week starts buys back a day of bad decisions during it, so we pre-compute books.

ONE BOOK PER PERSON, ONE PER DEAL, ONE PER QUARTER

A book is a 10,000-token compression of a much larger raw corpus, built ahead of time. A billion tokens about a single direct report (every transcript they're in, every email they've written, every customer conversation that mentions them, every Slack thread, every quarter of feedback) gets reduced through a map-reduce pipeline into a small, dense, structured artifact. We pre-compute one book per person, one book per role profile, one book per deal, one book per board member's last six months of concerns, and so on, all kept updated while I sleep.
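One way to picture that pipeline, as a minimal sketch: `summarize` stands in for a real LLM call (hypothetical here, as are the chunk and budget sizes), the map step compresses raw chunks independently, and the reduce step merges summaries pairwise until the book fits the budget.

```python
from typing import Callable, List

def chunks(corpus: str, size: int = 2_000) -> List[str]:
    """Split the raw corpus into fixed-size pieces (the map inputs)."""
    return [corpus[i:i + size] for i in range(0, len(corpus), size)]

def build_book(corpus: str, summarize: Callable[[str], str],
               budget: int = 4_000) -> str:
    """Map: summarize every chunk. Reduce: merge summaries pairwise
    until the whole book fits inside the character budget."""
    parts = [summarize(c) for c in chunks(corpus)]             # map
    while sum(len(p) for p in parts) > budget and len(parts) > 1:
        parts = [summarize("\n".join(parts[i:i + 2]))          # reduce pass
                 for i in range(0, len(parts), 2)]
    return "\n".join(parts)
```

In a real version, `summarize` would be an LLM call with a book-specific prompt, the budget would be measured in tokens rather than characters, and the nightly refresh would only re-run map steps over the chunks that changed since last night.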

Then a decision lands, something like "Should I promote Sarah to VP of Sales Ops?", and the system identifies the books needed: the person book, the role-requirements book, the team-dynamics book, my promotion-philosophy book, the peer-comparisons book. Eight to ten books, picked from a library of hundreds, get loaded into a single inference call alongside the question. At 10,000 tokens per book, that's roughly 100,000 tokens of the densest relevant context I've ever had in one place, with the rest of the million-token window free for actual reasoning and output. The picking is where the magic lives.
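The picking step can be sketched too. This toy librarian scores each book by keyword overlap with the one-sentence prompt; a real version would presumably use embeddings or an LLM router, and the book names and contents here are invented for illustration.

```python
from typing import Dict, List

def pick_books(question: str, library: Dict[str, str], k: int = 8) -> List[str]:
    """Return the k book names whose contents best overlap the question.
    Overlap = shared lowercase words, a crude stand-in for semantic search."""
    q_words = set(question.lower().replace("?", "").split())
    def score(item):
        _name, text = item
        return len(q_words & set(text.lower().split()))
    ranked = sorted(library.items(), key=score, reverse=True)
    return [name for name, _ in ranked[:k]]

library = {
    "person/sarah": "sarah sales ops director promotion feedback vp candidate",
    "deal/acme": "acme renewal pricing negotiation discount",
    "board/q3-concerns": "board churn margin hiring plan",
}
top = pick_books("Should I promote Sarah to VP of Sales Ops?", library, k=2)
# → ['person/sarah', 'deal/acme']
```

Keyword overlap is deliberately dumb; the point is the shape of the call, a question in, a shortlist of book names out, with the library itself already built offline.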

That's the architecture: a librarian and a library.

FROM RESEARCH PROBLEM TO ROADMAP

Engineering problems get solved

Put the four pieces together. The work is decisions, the intelligence to make them is solved, the prompt is a sentence, and the entire remaining gap is context assembly. The question stops being "can AI ever be smart enough to do my job" and becomes "can the right context get to the right decision at the right time." That second question is an engineering problem, not a research one. Engineering problems get solved.

If context assembly is the entire gap, and the gap is now an engineering problem with a working architecture, the ceiling on what an AI-augmented CEO can do isn't capped by intelligence and isn't capped by prompting. It's capped by three things: how much of my own input the system gets to see, how well we pre-compute the books, and how well we pick the right books for any given decision. All three have a clear path to better, and all three compound.

The honest follow-up question is whether 10x the work at 10x the quality in the same number of hours is a slogan or a roadmap. A year ago I would have said slogan. With the books architecture in place and the input pipelines getting wider every week, it stops sounding like a slogan and starts sounding like a set of solvable problems. Which decisions get books first? Which inputs feed which books? How does the system get smarter about which books to pull for a given decision? Which books need to be rebuilt nightly versus monthly?

That's why I'm leaning all the way into this, not as a side experiment but as the actual work. A 100x CEO isn't a thought experiment anymore, it's a sequencing problem with a clear architecture, and I'm now convinced it's the achievable shape of the job over the next few years rather than a slogan to put on a slide. The next two posts in this series cover the first two pieces of context I'm bringing into the system, email and then calendar, which are the natural starting points before the less obvious ones.

Next in AI × CEO: Rebuilding Gmail for the AI future

-Evan