1. This Karpathy interview might be the clearest glimpse of what’s coming — CG / Sarah Guo (No Priors)

  • Why read: A masterclass on the transition from "AI as a tool" to "AI as an autonomous research swarm."
  • Summary: Andrej Karpathy describes a phase shift where humans move from doing work to directing autonomous systems that write code and conduct research independently. He highlights the psychological shift of "working for the AI," noting how he finds himself seeking praise from Claude, signaling a change in the human-machine hierarchy. The interview explores "AutoResearch" and the potential for a SETI@home-style movement for distributed AI training. The practical implication is clear: the most valuable skill is no longer execution, but high-level orchestration and the ability to "collaborate" with self-improving agents.
  • Link: https://twitter.com/cgtwts/status/2035541308283244678/

2. Princeton built an AI that went from nearly useless to highly personalized — Robert Youssef

  • Why read: Introduces a breakthrough in Reinforcement Learning (OpenClaw RL) that personalizes models through natural conversation.
  • Summary: Princeton researchers have developed OpenClaw RL, a system that treats every user correction as a high-signal feedback loop for real-time personalization. By monitoring subtle conversational signals—a "re-ask" indicating failure, a "smooth reply" indicating success—the system achieves strong personalization in as few as 36 conversations, without engineering intervention or model retraining. This closes the "frustration gap," where AI systems repeatedly miss the point despite user corrections. For product builders, it marks a shift toward "living" software that adapts its tone and utility purely through usage.
  • Link: https://twitter.com/rryssf_/status/2035315863163912332/
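The feedback loop described above can be sketched in a few lines. This is a minimal illustration of the idea, not OpenClaw RL's actual method: the class name, the two-signal taxonomy (re-ask vs. smooth reply), and the additive update rule are all assumptions made for the example.

```python
# Minimal sketch of conversation-derived reward signals for
# personalization. Illustrative only: the class, signal values,
# and update rule are assumptions, not the paper's implementation.

from collections import defaultdict

class PreferenceTracker:
    """Accumulates per-user preference weights from implicit
    conversational feedback: a re-ask penalizes the last response
    style, a smooth reply reinforces it."""

    def __init__(self, learning_rate=0.1):
        self.lr = learning_rate
        # one weight per response style, e.g. "concise" vs "detailed"
        self.weights = defaultdict(float)

    def update(self, style, signal):
        # signal: +1 for a smooth reply (success), -1 for a re-ask (failure)
        self.weights[style] += self.lr * signal

    def preferred_style(self):
        # fall back to "default" until any feedback has arrived
        if not self.weights:
            return "default"
        return max(self.weights, key=self.weights.get)

tracker = PreferenceTracker()
tracker.update("detailed", -1)   # user re-asked: the detailed answer missed
tracker.update("concise", +1)    # smooth reply: the concise answer landed
print(tracker.preferred_style())  # -> concise
```

The point of the sketch is the summary's core claim: personalization becomes a byproduct of ordinary usage, because every turn of conversation already carries a success/failure signal that can update preferences online, with no retraining step.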

3. Tesla's Co-Signed Stack: Five Patents Behind AI5 — SETI Park

  • Why read: A deep dive into why vertical integration (hardware/software "co-signing") is the final frontier of performance.
  • Summary: Tesla’s AI5 chip "punches above its weight" because the hardware and software are "co-signed"—meaning the circuits are physically optimized for specific software pipelines like Voxel occupancy. The patents reveal a seven-stage vision pipeline where precision is prioritized at specific steps (Step 230) to prevent error propagation without wasting compute on less critical stages. This "hardware-software contract" allows Tesla to match dual-SoC performance on a single chip. Operators should view this as a blueprint for the future of edge computing: general-purpose hardware is becoming a bottleneck for specialized AI workloads.
  • Link: https://twitter.com/seti_park/status/2035348937775976505/

4. The Key AI Decision That Will Shape Your Revenue Org — Kyle Norton

  • Why read: A pragmatic guide to choosing between centralized and decentralized AI implementation within a company.
  • Summary: Kyle Norton argues for a "centralized" approach to AI transformation, where a specialized team (RevOps/BizOps) builds production-grade capabilities rather than letting individual reps experiment with low-power tools. He introduces a "sophistication ladder": basic chat -> custom GPTs -> automation (n8n/Make) -> Claude Code -> custom AI applications with proper evals. While decentralized models build "AI muscle," centralized models ensure that the AI outputs are consistent and actually "move the needle." For leadership, the takeaway is to focus on building "production-grade" workflows rather than just encouraging AI literacy.
  • Link: https://www.therevenueleadershippodcast.com/p/the-key-ai-decision-that-will-shape

5. We Have Learned Nothing — Colossus

  • Why read: A searing critique of commoditized startup advice and why the "Lean Startup" method might be failing today’s founders.
  • Summary: This piece argues that once a startup method (like Customer Development or Lean) becomes widely known, it causes founders to converge on the same answers, resulting in zero differentiation and inevitable failure. The author posits that "scientific" entrepreneurship is a paradox; if you follow a replicable process, you are by definition not building something unique. To win in a world of radical uncertainty, founders must reject the "punditry" and embrace non-linear, non-replicable strategies. The practical implication for operators is to beware of "best practices" that have become "common practices."
  • Link: https://colossus.com/article/we-have-learned-nothing-startup-pundits/

6. The world energy shock is coming — it will deepen inequality — Isabella M Weber

  • Why read: Essential macro-context on "Sellers' Inflation" and the geopolitical risk of the Strait of Hormuz.
  • Summary: Economic historian Isabella Weber warns that the crisis in the Strait of Hormuz—through which 20% of global LNG and 1/3 of crude oil passes—is triggering a massive energy shock. She explains the concept of "Sellers' Inflation," where dominant corporations use cost shocks as cover to hike prices and protect margins, further driving inequality. This creates a stagflationary environment where input costs rise while output is crushed by shortages. For strategic planners, this is a signal to prepare for rising input costs and a potential freeze in consumer spending as real wages are eroded again.
  • Link: https://twitter.com/IsabellaMWeber/status/2035349999165333966/

7. Paid marketing is a tax on your product's defensibility — Bill Gurley / Andrew Chen

  • Why read: A high-level warning for AI startups tempted to rent growth through performance marketing.
  • Summary: Venture legends Bill Gurley and Andrew Chen warn that heavy reliance on paid acquisition is an admission of lack of creativity and a failure of product defensibility. In the AI era, where competition is fierce, the moment you cannot outspend an incumbent, you die. The goal for any founder should be to build channels that get cheaper as you grow (network effects, viral loops) rather than "renting" growth at market rates. The strategic takeaway: if your LTV/CAC math is the only thing keeping you alive, you have no moat.
  • Link: https://twitter.com/bgurley/status/2035541014069612952/

8. On Maintenance and Restoration — IM—1776

  • Why read: A philosophical reset on the value of "caretaking" over the "move fast and break things" ethos.
  • Summary: Writing from a 14th-century French chateau, the author reflects on the "silent witness" of walls that survive revolutions only through constant maintenance. Using the 1968 solo sailboat race as an example, he contrasts sailors who were "caretakers" of their vessels (fixing leaks with Stockholm tar and navigation bulbs) against those who let small problems accumulate into disaster. For operators, this is a reminder that excellence is found in the "boring" work of repair and immediate resolution. A great "house" (or company) stays standing because its caretakers treat every failure as an immediate obligation, not an item for the backlog.
  • Link: https://im1776.com/on-maintenance/

9. Managing a team of AIs is like managing first-year analysts — Professor Campbell

  • Why read: A brilliant, tactical analogy for anyone currently struggling to manage agentic workflows.
  • Summary: Professor Campbell observes that AI agents share the exact traits of "high-ambition, low-judgment" junior quants: they are overconfident, distracted by "shiny" tasks, and perpetually confuse "goals" with "tasks." Managing them requires the same rigor as managing a junior human team: constant todo list reminders, strict synthesis of their work, and checking their insane "ambition to practicality" ratio. The practical implication for users is to stop treating AI as a "magic box" and start treating it as a brilliant but unreliable intern who needs a nap (and a better prompt).
  • Link: https://twitter.com/abcampbell/status/2035528714797056126/

10. Nabeel Qureshi: Understanding as a Virtue — Ben Springwater (Matter)

  • Why read: A curated intellectual toolkit for the AI era, focusing on deep "context" and the phenomenology of truth.
  • Summary: Entrepreneur Nabeel Qureshi shares a reading list that defines intelligence as a "virtue" rather than a fixed trait. He argues that the smartest people are not the fastest thinkers, but those who stubbornly refuse to accept answers they do not actually understand. The list covers Palantir’s culture (why it’s a "founder factory"), the "Bitter Lesson" of AI scaling, and the importance of thinking of "context" in its broadest possible sense. For the modern operator, this is a call to return to "first principles" and high-quality "literary criticism" of the systems we are building.
  • Link: https://words.getmatter.com/p/nabeel-qureshi-iliad-context-dead

Themes from yesterday

  • From Tools to Swarms: The conversation is shifting from "how to use an LLM" to "how to manage an autonomous research team" (Karpathy, Campbell).
  • The Personalization Breakthrough: New RL methods (OpenClaw) are making "customized AI" a byproduct of conversation rather than engineering.
  • Vertical Integrity vs. Macro Shock: Success is being defined by deep vertical integration (Tesla), even as macro shocks (Energy/Inflation) threaten to disrupt the broader economy.
  • Maintenance as Strategy: A resurgence of interest in long-term resilience, caretaking, and defensibility over the "rented growth" models of the past decade.