1. How to practically deploy Jack Dorsey's 'world intelligence' today — ericosiu

  • Why read: A tactical blueprint for implementing the "AI-native" organizational structure recently popularized by Jack Dorsey.
  • Summary: The author details a four-layer stack—Hardware, World Model, Intelligence, and Surfaces—currently powering their company's operations. By using a "Single Brain" vector database that ingests all company data every 15 minutes, they have replaced traditional coordination layers with autonomous agents. Moving inference to local hardware like the DGX Spark has reportedly cut costs by 70% while improving latency. This transition suggests that the "World Model" isn't the AI itself, but the data structure that allows AI to comprehend your specific business context.
  • Link: https://twitter.com/ericosiu/status/2040543007716553088/?rw_tt_thread=True
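The "Single Brain" pattern above can be sketched as a periodic sync into a toy in-memory vector store. This is a minimal sketch: the class, the hash-based embedding stand-in, and the document keys are assumptions for illustration, not the author's actual stack.

```python
import hashlib

class SingleBrainStore:
    """Toy 'Single Brain': each sync cycle (every 15 minutes in the
    thread's setup) re-ingests company documents as vectors so agents
    always read fresh state."""

    def __init__(self):
        self.docs = {}  # doc_id -> (text, vector)

    def _embed(self, text):
        # Stand-in embedding: hash bytes scaled to [0, 1]. A real
        # deployment would call a local embedding model here.
        digest = hashlib.sha256(text.encode()).digest()
        return [b / 255 for b in digest[:8]]

    def sync(self, documents):
        # Upsert everything each cycle rather than diffing, trading
        # compute for simplicity and guaranteed freshness.
        for doc_id, text in documents.items():
            self.docs[doc_id] = (text, self._embed(text))

    def search(self, query, k=3):
        # Rank stored documents by squared distance to the query vector.
        qv = self._embed(query)
        def dist(item):
            _, (_, v) = item
            return sum((a - b) ** 2 for a, b in zip(qv, v))
        return [doc_id for doc_id, _ in sorted(self.docs.items(), key=dist)[:k]]

brain = SingleBrainStore()
brain.sync({"crm:42": "Acme renewal at risk", "wiki:7": "Refund policy"})
hits = brain.search("Acme renewal at risk", k=1)
```

The interesting design choice is the full re-ingest: coordination moves out of meetings and into a store whose entire contents are at most one cycle stale.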

2. TBM 414: Legibility and Legitimacy — John Cutler

  • Why read: A critical philosophical counter-argument to the rush toward AI-driven organizational flattening and "world models."
  • Summary: Cutler warns that the drive for "legibility"—making a company fully readable by AI—is a tool for centralized control masquerading as decentralization. He argues that Dorsey’s "Hierarchy vs. Intelligence" thesis ignores the question of legitimacy: who holds the power once the system is transparent? The risk is that humans become "edge meat" operators in a system designed for ultimate corporate legibility rather than flourishing. Leaders should be wary of rhetorical tricks that repackage surveillance and control as "freedom" from middle management.
  • Link: https://cutlefish.substack.com/p/tbm-414-legibility-and-legitimacy

3. One Soul, Many Minds: Model-Specific Prompt Architecture for AI Agents — superada.ai

  • Why read: Essential tactical advice for developers managing agent behavior across different foundation models like GPT-5.4 and Opus 4.6.
  • Summary: Research shows GPT-5.4 is nearly 2x more verbose than Claude Opus on identical tasks, often hedging decisions or "proposing" instead of "doing." The solution is a three-layer architecture: a core "SOUL.md" for identity, model-specific "Overlays" to correct behavioral failure modes (like GPT's sycophancy), and "Provider Guides" for prompt editing. This approach ensures a consistent agent personality while optimizing for the unique quirks of each LLM. Practical benchmarks suggest that ignoring these differences leads to significant token waste and degraded UX.
  • Link: https://superada.ai/blog/one-soul-many-minds/
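The three layers can be illustrated as plain string assembly. The layer names (SOUL, overlays, provider guides) come from the post; the prompt contents and the model/provider keys below are illustrative assumptions, not superada.ai's actual prompts.

```python
# Layer 1: core identity, shared across every model ("SOUL.md").
SOUL = "You are Ada: decisive, concise, warm."

# Layer 2: model-specific overlays correcting known failure modes.
OVERLAYS = {
    "gpt-5.4": "Do not hedge or merely propose; act. Cap answers at 150 words.",
    "opus-4.6": "Keep the terse default, but state your reasoning once.",
}

# Layer 3: provider-level formatting conventions.
PROVIDER_GUIDES = {
    "openai": "Place instructions before context; use plain markdown.",
    "anthropic": "Wrap long context in XML-style tags.",
}

def build_system_prompt(model, provider):
    """Compose identity, overlay, and guide into one system prompt,
    skipping layers with no entry for this model/provider."""
    layers = [SOUL, OVERLAYS.get(model, ""), PROVIDER_GUIDES.get(provider, "")]
    return "\n\n".join(layer for layer in layers if layer)

prompt = build_system_prompt("gpt-5.4", "openai")
```

Because identity lives only in the first layer, swapping the underlying model changes the corrective overlay without touching the agent's personality.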

4. The $1B Rorschach Test — Kyle Harrison

  • Why read: A sobering analysis of the "vibe coding" trend through the lens of MEDVi, the AI startup claiming $1.8B in revenue.
  • Summary: Harrison argues that MEDVi is less a "healthcare company" and more a masterclass in modern outsourcing and "vibe coding." By using AI tools (ChatGPT, Claude, Grok) to wire together backend infrastructure and third-party platforms like OpenLoop, a single founder created a massive revenue engine with minimal overhead. However, the "Rorschach Test" lies in whether you see this as a wizard-like feat of efficiency or a fragile shell of a company. It highlights a future where "building" matters less than "orchestrating" existing high-leverage platforms.
  • Link: https://investing101.substack.com/p/the-1b-rorschach-test

5. Continual learning for AI agents — Harrison Chase

  • Why read: A foundational framework for understanding how AI agents improve over time beyond just fine-tuning.
  • Summary: Continual learning in agentic systems occurs at three layers: the Model (weights), the Harness (the code driving the agent), and the Context (instructions and skills). While updating model weights is the most discussed method, it suffers from "catastrophic forgetting." Optimizing the "Harness" through meta-optimization loops or updating the "Context" (memory) are often more practical for production systems. For developers, this means the most effective "learning" often happens in the configuration and tooling rather than the base model.
  • Link: https://twitter.com/hwchase17/status/2040467997022884194/?rw_tt_thread=True
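The "Context" layer, which Chase argues is often the most practical in production, can be sketched as a memory the harness updates between runs. The class and field names here are illustrative assumptions, not an API from the thread.

```python
class ContextMemory:
    """Continual learning without touching weights: the harness records
    lessons from failed runs and replays them as instructions."""

    def __init__(self):
        self.lessons = []

    def record(self, task, outcome, lesson):
        # Only failures produce new instructions; successes confirm
        # the existing context is adequate.
        if outcome == "failure":
            self.lessons.append(f"When doing '{task}': {lesson}")

    def render(self):
        # Serialized into the system prompt on the next run.
        if not self.lessons:
            return ""
        return "Lessons from past runs:\n" + "\n".join(
            f"- {lesson}" for lesson in self.lessons
        )

memory = ContextMemory()
memory.record("invoice parsing", "failure",
              "amounts may use comma decimal separators")
memory.record("invoice parsing", "success", "n/a")
system_prompt = "You are a back-office agent.\n\n" + memory.render()
```

Unlike weight updates, this loop cannot catastrophically forget: old lessons persist verbatim, and a bad lesson can simply be deleted from the list.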

6. How we built Steer, our interpretability playground — Ramp Labs

  • Why read: Technical insights into "activation steering," a method to control LLM behavior without expensive retraining.
  • Summary: Activation steering modifies internal representations at inference time by adding vectors to specific layers. Ramp's experiments with Qwen 2.5 revealed that over-steering causes models to "revert" to their pretraining distribution (e.g., Qwen suddenly outputting Mandarin). They also found that different concepts require unique "magnitude" calibrations; what works for "expense management" might break the model for "Dutch painting." This technique offers a lightweight, reversible way to bake specific obsessions or guardrails into an agent's logic.
  • Link: https://twitter.com/RampLabs/status/2039726632886235648/?rw_tt_thread=True
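The core operation, adding a scaled concept direction to a hidden state at inference time, can be shown with plain Python lists. This is a minimal sketch: the vectors and magnitudes are made up, and a real implementation would hook a specific transformer layer inside the model.

```python
def steer(hidden, concept, magnitude):
    """Shift a hidden-state vector toward a unit-normalized concept
    direction by a calibrated magnitude."""
    norm = sum(c * c for c in concept) ** 0.5
    unit = [c / norm for c in concept]
    return [h + magnitude * u for h, u in zip(hidden, unit)]

hidden = [0.2, -0.5, 0.1]               # stand-in activation
expense_concept = [1.0, 0.0, 0.0]       # stand-in "expense management" direction

mild = steer(hidden, expense_concept, 0.5)
# Over-steering: the added vector swamps the original state, which is
# the regime where Ramp saw models revert to pretraining behavior.
extreme = steer(hidden, expense_concept, 50.0)
```

The magnitude is the whole game, per Ramp's findings: each concept needs its own calibration, and because the edit is a pure addition at inference time, it is fully reversible by setting the magnitude back to zero.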

7. How We Built Our AI VP of Customer Success, “Qbee” — SaaStr

  • Why read: A case study in using vibe-coding to replace expensive SaaS tools and reduce human labor by 70%.
  • Summary: SaaStr replaced a static customer portal with an AI agent named Qbee, built entirely with Replit and no formal engineering team. Qbee manages 100+ sponsors, sends hyper-personalized emails, and tracks subtasks, resulting in a 10x increase in on-time task submissions. The project evolved from a simple project management tool into a full "VP of CS" once the team saw the granularity of data they could collect. It suggests that for many operational roles, a custom-coded agent is now both better and cheaper than off-the-shelf software.
  • Link: https://www.saastr.com/how-we-built-our-ai-vp-of-customer-success-qbee-and-how-you-can-build-yours/

8. Bets I'd put $1M behind right now — James Camp

  • Why read: A high-signal list of contrarian bets on AI, energy, and the future of work.
  • Summary: Camp predicts the end of the "specialist era," suggesting that generalists with taste will dominate as AI commoditizes labor. He argues that $200/month AI subscriptions will die as local models catch up to the cloud, and OpenAI’s business model is on a "clock." Key strategic bets include hardware, nuclear energy, and "defense tech" as the world enters the "Fourth Turning." For operators, the takeaway is clear: "Your resume is worthless; the only thing that matters is what you’ve built."
  • Link: https://twitter.com/JamesonCamp/status/2040598279784943879/?rw_tt_thread=True

9. Brain Food: Credibility is Expensive — Shane Parrish (FS)

  • Why read: Features Joe Liemandt’s $1B bet on AI-driven education and the concept of mastery-based learning.
  • Summary: Liemandt (Trilogy founder) argues that traditional schools are broken and that AI can help students learn 10x faster through individualized, mastery-based instruction. At Alpha School, kids spend only two hours on academics and the rest on "life skills" like leadership and entrepreneurship. The newsletter also highlights that "credibility is expensive because the bills never stop"—it is built in private conversations and lost in an afternoon. For leaders, this emphasizes the importance of internal character over external accomplishments.
  • Link: https://fs.blog/brain-food/april-5-2026/

10. The Case for Doing Real, Hard Things — Brad Stulberg

  • Why read: It’s a strong counterpoint to AI-mediated work: in a world of synthetic polish and AI slop, physically grounded, objectively constrained effort becomes more valuable psychologically and morally.
  • Summary: Stulberg argues that “autotelic” activities — lifting, crafting, gardening, running, building — matter because they expose you to reality that can’t be faked through rhetoric or abstraction. The piece is really about reclaiming contact with truth, friction, and earned competence in an increasingly mediated environment.
  • Link: https://twitter.com/BStulberg/status/2040416574721442160/?rw_tt_thread=True

Themes from yesterday

  • The AI-Native Org Debate: Heavy focus on Jack Dorsey's "World Model" vision, with split opinions on whether it represents operational freedom or centralized surveillance.
  • Vibe Coding as a Business Model: Real-world examples (SaaStr, MEDVi) of non-engineers achieving $1B+ revenue or 70% efficiency gains through low-code/AI orchestration.
  • Agentic Architecture Maturity: A shift from simple prompting to complex multi-layer systems involving activation steering, model-specific overlays, and continual learning harnesses.
  • The Commoditization of Labor: A recurring sentiment that specialist resumes are dying, replaced by the need for generalist "orchestrators" who own their data and hardware.