1. Hidden and Under-utilized Features in Claude Code — Boris Cherny

  • Why read: Mastering the advanced syntax of the most powerful agentic CLI tool is a massive productivity multiplier for engineers.
  • Summary: Cherny highlights several "power user" features of Claude Code that go beyond basic chat, most notably `/loop` and `/schedule` for automating long-running tasks such as rebasing and babysitting code reviews for up to a week. He introduces "hooks" (e.g., `SessionStart`, `PreToolUse`) that deterministically inject context or log agent actions, effectively turning the CLI into a programmable agentic framework. For frontend work, he emphasizes giving Claude a way to verify its own output via the Desktop app's built-in browser or the Chrome extension. Strategic use of `git worktree` checkouts via `claude -w` allows dozens of parallel sessions to run without directory conflicts. Finally, he presents `/batch` as the ultimate tool for "fanning out" massive migrations across hundreds of automated agents simultaneously.
  • Link: https://twitter.com/bcherny/status/2038454336355999749/?rw_tt_thread=True
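  The hooks Cherny describes live in Claude Code's settings file. A minimal sketch of what a `SessionStart` context-injection hook plus a `PreToolUse` logging hook might look like in `.claude/settings.json` — the event names come from the thread, but the exact schema and matcher values may differ across Claude Code versions, so treat this as illustrative rather than canonical:

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "cat docs/agent-context.md"
          }
        ]
      }
    ],
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "echo \"$(date -u) bash tool invoked\" >> ~/.claude/tool-log.txt"
          }
        ]
      }
    ]
  }
}
```

  The first hook prints a project context file into the session at startup; the second appends a timestamped line to a log file before every Bash tool call, which is the "deterministic logging" pattern the summary refers to (`docs/agent-context.md` and the log path are hypothetical placeholders).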

2. The Last 4 Jobs in Tech & Claude Computer Use — AINews

  • Why read: Understand the transition from "Tiny Teams" to specialized agentic roles and the breakthrough of closed-loop UI verification.
  • Summary: This issue explores Yoni Rechtman’s "World of Warcraft" analogy for post-AI org charts, where roles shift into specialized functions like "Tanks" and "Healers" rather than traditional hierarchy. The major news highlight is Anthropic’s addition of "computer use" directly inside Claude Code, allowing the agent to open apps and click through UIs to verify its own builds. This solves the "missing link" of reliable app iteration by enabling a closed loop: code, run, inspect the UI, and fix, all without human intervention. The editorial notes that Meta has now formalized an "AI Engineer" organization, signaling the industry-wide shift toward agentic infrastructure. As teams shrink ("Tiny Teams"), the value of engineers who can orchestrate these computer-using agents grows exponentially.
  • Link: mailto:reader-forwarded-email/67427aaaaf9754e4a8f5fe6c457de94b

3. The Only Moats That Matter — Michael Bloch

  • Why read: A sobering filter for investors and operators on why software defensibility is dying while "hard to get" assets are thriving.
  • Summary: Bloch argues that AI is making anything that is "hard to do" (writing code, building integrations) worthless as a moat because AI compresses the time to build. True defensibility now rests in things that are "hard to get," such as compounding proprietary data generated through physical operations (e.g., Orchard AI's farm cameras). Network effects remain vital, but the "cold start" problem is getting harder as AI makes it trivial to launch 100 well-built competitors simultaneously. Regulatory permission (FDA, bank charters) and physical infrastructure (chip fabs, power plants) are the ultimate moats because they move at the speed of politics and atoms, not bits. Crucially, "Capital at Scale" is becoming a defining advantage, as the endgame of AI is increasingly physical and requires billions in deployable liquidity.
  • Link: https://twitter.com/michaelxbloch/status/2038753872890778029/?rw_tt_thread=True

4. Veblen & Jevon Walk Into a Data Center — Tomasz Tunguz

  • Why read: A look at the "Claude Mythos" leak and how frontier intelligence is shifting from a commodity to a luxury "Veblen good."
  • Summary: While Jevons Paradox suggests that cheaper tokens lead to more consumption, the rumored launch of "Claude Mythos" suggests a move toward Veblen goods—products where demand increases with price. Mythos is rumored to be 5-6x more expensive than current models (up to $150 per 1M tokens) but offers a "step change" in coding and reasoning. This creates a strategic divide: companies with the capital to pay for "premium intelligence" will ship features that competitors on "commodity models" cannot match. The token-maxxing era of optimizing for cheap inference is ending, replaced by a race to maximize capability at any cost. Balance sheets are becoming the primary moat, as the gap widens between those who can afford frontier models and those who cannot.
  • Link: mailto:reader-forwarded-email/d8893675ab192e3049b14ab514f6a3db

5. AI Applications and Vertical Integration — Tanay Jaipuria

  • Why read: Framework for understanding the "Full-Stack" evolution of AI companies and where the next competitive flywheels are.
  • Summary: Jaipuria posits that every successful AI application will eventually vertically integrate to become "full-stack," either moving "down" into the model layer or "up" into the human service layer. Companies like Cursor and Intercom are integrating "down," using proprietary usage traces to fine-tune their own domain-specific models for better performance and lower COGS. This creates a flywheel where more usage leads to better training data, which improves the model, which drives more usage. Conversely, moving "up" means selling true outcomes (e.g., "resolved tickets") rather than just software, effectively replacing the human service layer with agents. Both paths represent a move away from being a "thin wrapper" on top of frontier APIs.
  • Link: https://twitter.com/tanayj/status/2038769555745411300/?rw_tt_thread=True

6. The AI Labor Conversation is Framed Wrong — ry

  • Why read: Learn why the unit of value is shifting from "individual tasks" to "judgment-driven systems."
  • Summary: The core of the AI shift isn't about replacing workers; it's about replacing human-judgment-driven workflows with software. While deterministic work (flowcharts) was automated long ago, AI can now encode the "judgment" required for ambiguous tasks, such as triaging context or deciding what "good" looks like. Operationalization occurs when a human owns the outcome while the system handles the context assembly and execution in the background. Most companies are stuck in the "experiment trap," bolting copilots onto old workflows rather than redesigning systems from the ground up. The gap is now an organizational design problem, not a technology problem.
  • Link: https://twitter.com/rywalker/status/2038653841067974676/?rw_tt_thread=True

7. SpaceX Doesn't Buy Rockets. Why Buy Software? — Nick Co

  • Why read: A call for enterprises to shift from a "consumer" to a "builder" mindset as the cost of creating internal tools plummets.
  • Summary: Nick Co argues that the "SpaceX model" of vertical integration is coming to software, where companies will build their own internal tools instead of paying for expensive SaaS vendors. Historically, companies bought software because the build cost was high; now, AI allows teams to ship functional internal tools in hours, making the procurement process for vendors (weeks/months) the primary bottleneck. Most SaaS products lack a real moat beyond a well-designed interface, making them ripe for "vibecoding" into internal replacements. The structural advantage goes to organizations that retrain their teams to think like builders rather than consumers. This shift reduces cost, removes vendor lock-in, and allows for much faster iteration.
  • Link: https://twitter.com/nickco/status/2038748230817894551/?rw_tt_thread=True

8. Execution Speed is the Only Metric — David Cramer

  • Why read: A provocative take on why commit frequency is more important than lines of code in the age of agentic engineering.
  • Summary: Cramer argues that the biggest issue in most teams is execution speed, defined as the "frequency of change." He dismisses "lines of code" (LOC) as an irrelevant metric, especially when LLMs can generate "slop," and instead advocates for tracking daily commit frequency and total contributions. The goal is to maximize the number of "at-bats" a team gets, using AI to unblock small tasks and build MVPs faster. He critiques the idea that removing code is always a win, focusing instead on the net quantity of high-quality, stable contributions. A higher volume of small, isolated changes leads to better products and faster learning cycles.
  • Link: https://twitter.com/zeeg/status/2038705984194195664/?rw_tt_thread=True

9. The Barometer of Trust: Blameful Post-Mortems — Claire Vo

  • Why read: Essential leadership advice on managing high-stakes technical failures and maintaining customer trust.
  • Summary: Claire Vo argues that CPTOs must hand-edit Sev-0 incident reports to remove passive voice and "incident fairies" (e.g., "a bug occurred") in favor of direct ownership. Reports should clearly explain what happened, why, how it was fixed, and why it will never happen again in plain, non-technical language. For caching incidents, she notes the difficulty of scoping impact and recommends disclosing to all logged-in users if sensitive data was potentially exposed. True incident management includes executives personally calling key customers to offer follow-up via their personal cell numbers. Trust is built not by avoiding mistakes, but by how transparently and "blamefully" those mistakes are handled.
  • Link: https://twitter.com/clairevo/status/2038746825684136262/?rw_tt_thread=True

10. The Rise of the AI Guru — Aaron Levie / Alex Lieberman

  • Why read: A strategic career roadmap for early-career professionals to command C-suite attention through AI orchestration.
  • Summary: Levie and Lieberman highlight a massive opportunity for "resourceful talent" to reimagine organizational workflows for an agentic world. Becoming the "AI Guru" within a company means mapping manual processes (e.g., SDR inbound, growth marketing) and building automated AI systems to handle them end-to-end. This is not just about using ChatGPT; it requires setting up unstructured data, creating specific skills for agents, and validating outputs. For junior employees, this is a "leverage play" that earns instant visibility with leadership by solving high-value problems the C-suite doesn't yet know how to automate. Their advice: specialize in the "un-shortcuttable" work of connecting disparate systems and designing human-in-the-loop oversight.
  • Link: https://twitter.com/levie/status/2038816649927913834/?rw_tt_thread=True

Themes from yesterday

  • Vertical Integration as Strategy: Companies are moving "down" into custom models (Cursor/Intercom) and "up" into service delivery to avoid being thin API wrappers.
  • The Capital Moat: As frontier models like "Claude Mythos" become "Veblen goods," the ability to finance massive inference costs is becoming a primary competitive advantage.
  • Agentic Infrastructure: The focus has shifted from "chatting with AI" to "looping agents" with computer-use capabilities and specialized CLI workflows.
  • Organizational Speed: Success is being redefined by "commit frequency" and the ability to build internal software stacks rather than buying them.