Tokenmaxxing is the new Operating Model — Simon Taylor
Why read: Elite engineers are burning 60 trillion tokens to ship faster while most firms see zero gains.
Summary: Meta and other leaders treat tokens as fuel for "AI factories" rather than a software subscription cost. Productivity now equals tokens multiplied by your operating model. Most companies fail because they keep manual gates and slow deployment instead of moving toward autonomy. The gap between firms that can burn tokens at scale and everyone else will decide who wins the next decade.
Probabilistic engineering and the 24-7 employee — tim
Why read: Software is becoming non-deterministic and work happens in parallel while you sleep.
Summary: We are moving from writing code to managing probabilistic systems that work with a certain likelihood. The 24-7 employee is an operator managing a fleet of agents in parallel. This splits roles into elite architects and agent babysitters who handle low-value translation. Coordination is the new bottleneck. Success requires triaging what your agents did overnight instead of catching up on yesterday's tasks.
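The parallel-fleet claim has a simple probabilistic core. A minimal sketch (the success rate `p` and fleet size `n` are illustrative numbers, not figures from the article): if one agent attempt succeeds with probability `p`, then `n` independent parallel attempts succeed at least once with probability 1 - (1 - p)^n.

```python
def fleet_success(p: float, n: int) -> float:
    """Probability that at least one of n independent agent
    attempts succeeds, given per-attempt success probability p."""
    return 1 - (1 - p) ** n

# A 60%-reliable agent run 5 times in parallel overnight:
fleet_success(0.6, 5)  # → 0.98976
```

The independence assumption is doing real work here; correlated failures (a bad spec, a broken environment) do not average out this way.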
HTML is 100% better than .md for agents — Nathaniel Whittemore
Why read: Markdown limits agents because it hides the visual hierarchy and state of a project.
Summary: Human work has shifted from producing text to staging jobs for agents. Markdown is a poor fit for this because it treats every line of text as equally finished. HTML allows for "liminal assets" where CSS and SVG show which parts of a project are locked and which are still open. Higher information density in HTML helps agents stay aligned with human intent.
Meta-Meta-Prompting: The Secret to Making AI Agents Work — Garry Tan
Why read: Garry Tan uses "book mirrors" to map complex ideas to his life and founder notes.
Summary: AI is most useful when it acts as an operating system for your thoughts. Tan builds chapter-level summaries that link directly to his personal history and therapy goals. This creates a 30,000-word searchable graph that no consultant could build. The goal is building a system that compounds as more books are added rather than just playing with prompts.
Why read: Product marketers are reading code and Linear tickets to keep up with fast shipping.
Summary: PMMs are skipping the walkthrough and going straight to the repo. Agents can map out feature changes and sort Slack or Figma noise into priority buckets based on weekly goals. This lets operators focus on positioning and judgment rather than just refreshing their inbox. Agents refine the signals and humans make the calls.
Ramping Your Coding Output with OpenAI's Codex — goodalexander
Why read: How to get 12x ROI by treating agents like a 24/7 CTO.
Summary: Agents can handle 20-hour sessions without stopping. A single operator can monitor multiple screens to replace several human contractors. The trick is building a step-by-step guide for the agent to follow instead of micromanaging every turn. Start by using agents to research your blind spots before you let them execute.
Escape from agentic loop: HITL vs HOTL — David Hoang
Why read: How to stop "agentic doom scrolling" and get your focus back.
Summary: Human-in-the-loop (HITL) makes you a bottleneck because the agent waits for your approval on everything. Human-on-the-loop (HOTL) moves you to a supervisor role where you only step in when something breaks. Shifting to HOTL lets the system run at full speed while you focus on thinking. It turns AI work into a dashboard you supervise rather than a chain you have to pull.
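The HITL/HOTL distinction can be sketched as two control loops. This is a toy illustration, not Hoang's implementation; the function names and task model are hypothetical. In HITL the human gates every step; in HOTL the agents run freely and only failures are queued for human review.

```python
def hitl_run(tasks, approve, execute):
    """Human-in-the-loop: the human sits on the critical path,
    approving every single step before it runs."""
    done = []
    for t in tasks:
        if approve(t):
            done.append(execute(t))
    return done

def hotl_run(tasks, execute):
    """Human-on-the-loop: agents run at full speed; only
    failures are surfaced to the supervising human."""
    done, needs_human = [], []
    for t in tasks:
        try:
            done.append(execute(t))
        except Exception as exc:
            needs_human.append((t, str(exc)))
    return done, needs_human
```

In the first loop, throughput is capped by how fast the human can click approve; in the second, the human reads a short exception queue instead of a full activity feed, which is the "dashboard you supervise" framing.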
Why read: Why startups should use text and Slack threads instead of Jira.
Summary: Issue trackers often add more ceremony than value. Using Slack threads and Notion docs as the source of truth keeps the context intact. LLMs can then search these threads to answer technical questions with citations. Plain text is more "agent-ready" than a rigid ticket board because it records the actual tradeoffs and discussions.
SFT, RL, and On-Policy Distillation Through a Distributional Lens — wh
Why read: A technical map for how SFT and RL change model behavior.
Summary: Supervised Fine-Tuning (SFT) pulls a model toward a fixed target, which can cause it to forget its base training. Reinforcement Learning (RL) pushes the model toward higher rewards, which works best when you can verify the output. Think of these as different ways to reshape the model's output distribution. SFT is for starting a task and RL is for perfecting it.
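The distributional framing can be made concrete with a toy model: a softmax over four candidate answers, updated by (exact, full-batch) SFT and expected-REINFORCE gradients. This is a minimal sketch of the general idea, not code from the article, and the learning rate and step count are arbitrary.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr, steps = 1.0, 50

# SFT: gradient descent on cross-entropy toward a fixed target (answer 0).
# The cross-entropy gradient w.r.t. logits is (p - target), so every step
# pulls the whole distribution toward the demonstration.
logits = np.zeros(4)
target = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(steps):
    logits -= lr * (softmax(logits) - target)
p_sft = softmax(logits)

# RL: no fixed target, only a verifier that rewards answer 2. The exact
# policy gradient is p * (reward - baseline), which pushes mass toward
# above-baseline answers rather than toward a single labeled point.
logits = np.zeros(4)
reward = np.array([0.0, 0.0, 1.0, 0.0])
for _ in range(steps):
    p = softmax(logits)
    logits += lr * p * (reward - p @ reward)
p_rl = softmax(logits)
```

Both loops end up concentrating probability mass, but the mechanism differs: SFT chases a label it is given, while RL only moves where the reward signal points, which is why RL needs verifiable outputs to work well.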
Show Him How to Get Outcomes Instead of Demos — Jordan Crawford
Why read: Using Claude to build a scored lead list in 15 minutes.
Summary: Skip the prompt demos and focus on specific outcomes like a "Monday call list." Use sub-agents to scrape LinkedIn and conference data, then run it through a rubric file to score every row against your goals. This turns a directory of names into a prioritized spreadsheet. It is about getting the job done, not just showing what the tool can do.
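The rubric-scoring step reduces to a small, mechanical transform. A sketch under assumed inputs (the field names, weights, and lead rows below are invented for illustration; the article's version uses Claude sub-agents and real scraped data):

```python
# Hypothetical rubric: weighted criteria, each a predicate on one lead row.
RUBRIC = [
    ("founder_title", 3, lambda lead: "founder" in lead["title"].lower()),
    ("met_at_event",  2, lambda lead: lead["met_at_conference"]),
    ("right_size",    1, lambda lead: lead["employees"] < 200),
]

def score(lead):
    """Total weight of every rubric criterion the lead satisfies."""
    return sum(w for _, w, passes in RUBRIC if passes(lead))

leads = [
    {"name": "A. Chen",  "title": "Founder & CEO", "met_at_conference": True,  "employees": 40},
    {"name": "B. Ortiz", "title": "VP Sales",      "met_at_conference": False, "employees": 5000},
    {"name": "C. Patel", "title": "Co-founder",    "met_at_conference": True,  "employees": 120},
]

# The "Monday call list": highest-scoring leads first.
call_list = sorted(leads, key=score, reverse=True)
```

The point of keeping the rubric in its own file is that the scoring logic stays inspectable and editable, while the agents only do the scraping and row-filling.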
Consensus 2026: Institutional Crypto Maturity — Lorenzo Valente
Why read: The "degens" are gone and the institutional giants have taken over crypto.
Summary: The crypto industry is now dominated by suit-wearing allocators and mega-funds. TradFi firms are moving fast to avoid missing the tokenization trend. Small VCs are pivoting to AI as giants like Tether and Anchorage consolidate the market. Success now requires building permissioned DeFi that fits into the existing financial establishment.
Attention Debt and Hyperinflation of Essays — Venkatesh Rao
Why read: Why the internet feels like a flood of too many essays.
Summary: Every essay published creates "attention debt." We are currently printing more than people can read, causing the value of a single attentive read to drop. Even high-quality writing loses value because the attention economy is hyperinflated. Proof-of-work signals like graphs are attempts to fix this, but the underlying economy of nonfiction is heading for a correction.
How to build a company that withstands any era — Eric Ries
Why read: Why most founders lose control of their mission after scaling.
Summary: Most companies lose their way because they lack the governance to protect their core mission from financial pressure. Eric Ries argues for using Public Benefit Corporation status to shield founders from being ousted by venture interests. Success makes you a target for short-termism. Only rigid legal structures provide a real shield for long-term goals.
Why read: How an "AI VP" drives $10M in revenue and what it means for human jobs.
Summary: SaaStr is using an "AI VP" role to replace high-output marketing functions with machine speed. This isn't just a tool; it's a role that handles launches, copy, and campaigns at a fraction of the cost. It redefines VP-level work as setting strategy while letting agents handle the volume. Companies like Palantir are using this playbook to accelerate growth at massive scale.
SandHill #283: Agentic Commerce & The AI Stack — Ali Afridi
Why read: VC focus is moving from chat boxes to "Vertical AI" in boring industries.
Summary: Investors are looking for startups that "rewire" industries like insurance or housing from the inside out. The hype is shifting toward "Permissioning Machines" and "Agentic Commerce." The AI stack is still half-built, and a compute shortage is favoring the giants. The winners will be those who solve specific, durable infrastructure problems rather than just adding an AI chat interface.