1. Productive Individuals Don’t Make Productive Firms — George Sivulka

  • Why read: Explains why individual AI tools don’t automatically create company-level gains.
  • Summary: Much like the shift from steam to electric power, where factories only saw real gains once they abandoned the central drive shaft and redesigned the floor around individual motors, the true gains of AI come from structural reorganization rather than from swapping old tools for faster ones. While individual "co-pilots" boost personal speed, they often create bottlenecks if the surrounding business processes remain analog. To unlock 10x returns, firms must move toward institutional agent systems in which workflows are redesigned from the ground up for autonomous execution. This shift requires moving beyond "AI-assisted humans" to "AI-first operations," where the model is the core logic engine. Practically, leaders should stop counting seats and start mapping which outcomes agents can own end to end.
  • Link: https://twitter.com/gsivulka/status/2031797989908627849/?rw_tt_thread=True

2. What’s 🔥 in Enterprise IT/VC #489 (The Sandwich Model) — Ed Sim

  • Why read: Practical model for enterprise AI rollout.
  • Summary: The "Sandwich Model" proposes a dual-path strategy for AI integration: top-down strategic mandates paired with bottom-up experimentation. While leadership sets the vision and security guardrails, the most impactful use cases are often discovered by the teams closest to the work. By encouraging every department to "own" their own automation, companies avoid the "innovation lab" trap where ideas never reach production. This approach builds internal muscle and ensures that AI initiatives are solving real operational pain points rather than just following trends. Organizations should focus on creating a "permissionless" internal environment for building small-scale agentic workflows.
  • Link: https://www.whatshotit.vc/p/whats-in-enterprise-itvc-489

3. BUILD YOUR OWN SOFTWARE FACTORY — Gokul Rajaram

  • Why read: Tactical playbook for internal agent-driven delivery.
  • Summary: The shift from using AI as a coding assistant to building persistent "Internal Software Factories" represents a massive leap in organizational capability. Instead of engineers simply using LLMs to write snippets, cross-functional teams should define standardized "specs" that agentic workflows can execute, test, and deploy autonomously (a minimal sketch of this loop follows below). This model treats internal software development as a high-throughput manufacturing process rather than a handcrafted art. By standardizing the inputs and verification steps, companies can drastically reduce the time from idea to shipped internal tool. Practically, this requires building a "platform layer" that manages agent state, memory, and tool access across different business units.
  • Link: https://twitter.com/gokulr/status/2032271386161684665/?rw_tt_thread=True
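
  The "spec in, verified tool out" loop is easier to see in code. Here is a minimal, hypothetical sketch in Python; every name in it (ToolSpec, run_factory, the stub generator) is my illustration of the idea, not anything from Rajaram's post.

  ```python
  # Hypothetical sketch of a spec-driven factory loop: standardized input,
  # agent-generated code, verification gates, then (and only then) shipping.
  from dataclasses import dataclass, field
  from typing import Callable

  @dataclass
  class ToolSpec:
      name: str
      owner_team: str
      requirements: list[str]   # plain-language acceptance criteria
      checks: list[Callable[[str], bool]] = field(default_factory=list)  # verification gates

  def run_factory(spec: ToolSpec, generate: Callable[[list[str]], str]) -> bool:
      """One pass through the factory: draft from the spec, gate on checks, ship."""
      code = generate(spec.requirements)           # an agent/LLM call in a real system
      if not all(check(code) for check in spec.checks):
          print(f"{spec.name}: failed verification, not shipped")
          return False
      print(f"{spec.name}: verified, shipping for team {spec.owner_team}")
      return True

  # Toy usage: a stub "agent" and one check standing in for a real test suite.
  spec = ToolSpec(
      name="csv-dedupe",
      owner_team="ops",
      requirements=["read a CSV", "drop duplicate rows"],
      checks=[lambda code: "drop_duplicates" in code],
  )
  run_factory(spec, generate=lambda reqs: "df.drop_duplicates()  # stub output")
  ```

  The point of the sketch is the shape, not the stubs: the spec and its checks are the standardized manufacturing inputs, and nothing ships without passing the gates.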

4. AI-native operating model (1–2 month sprint) — amirmxt

  • Why read: Strong change-management approach for adoption.
  • Summary: Transitioning to an AI-native operating model is primarily a challenge of habit and culture rather than technical implementation. A concentrated 1–2 month sprint focused on "AI fluency" can bridge the gap between having access to models and actually using them to transform daily work. Success is measured not by tokens consumed but by how many manual steps have been permanently removed from core workflows. Workshops should focus on co-building solutions with the most receptive teams to create "lighthouse" examples that others can follow. This practical, time-bound approach prevents "pilot fatigue" and creates the momentum needed for broad organizational change.
  • Link: https://twitter.com/amirmxt/status/2032777654663786931/?rw_tt_thread=True

5. Enterprise Weekly #552: NVIDIA’s $26B Bet — Work-Bench

  • Why read: Big strategic implications for AI stack value capture.
  • Summary: NVIDIA's aggressive move into the software and open-weight model space represents a major shift in the AI power dynamic. By offering high-performance models that run optimally on their hardware, NVIDIA is squeezing both foundation model labs and application-layer companies. This forces labs to decide whether they are utility providers or end-user platforms, as the "middle" of the stack becomes increasingly crowded. For enterprises, this means more power shifting back to infrastructure providers who can offer vertically integrated performance. Teams should be wary of deep lock-in with a single model provider if that provider's roadmap is being disrupted by hardware giants.
  • Link: mailto:reader-forwarded-email/ef7a411b391dc38cd2f7bfdbe3cf4cef

6. Hello, Claude? Are You There? — Tomasz Tunguz

  • Why read: Clear framing of inference scarcity risk.
  • Summary: As demand for high-reasoning models explodes, we are entering an era of "inference scarcity" driven by physical data-center and power constraints. This bottleneck suggests that AI capacity may be rationed or priced at a premium through at least 2028, making "limitless compute" a dangerous assumption. Organizations must prioritize their most valuable workloads for the highest-performing models while offloading routine tasks to smaller, optimized local models. This scarcity will likely drive a new wave of innovation in model distillation and hardware-specific optimization. Practically, engineering teams should build "model-agnostic" architectures that can dynamically swap providers based on current availability and cost (a routing sketch follows below).
  • Link: https://www.tomtunguz.com/what-if-we-run-out-of-capacity/
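
  To make the "model-agnostic" recommendation concrete, here is a minimal routing sketch; the provider names, capability tiers, and prices are all invented for illustration, not taken from Tunguz's post.

  ```python
  # Hypothetical sketch: route each workload to the cheapest provider that is
  # currently available and meets the task's required capability tier.
  from dataclasses import dataclass

  @dataclass
  class Provider:
      name: str
      capability: int        # higher = stronger reasoning (illustrative scale)
      price_per_1k: float    # $ per 1k tokens (made-up numbers)
      available: bool        # flips False when capacity is rationed

  def route(providers: list[Provider], needed: int) -> Provider | None:
      """Cheapest available provider at or above the required capability."""
      eligible = [p for p in providers if p.available and p.capability >= needed]
      return min(eligible, key=lambda p: p.price_per_1k, default=None)

  fleet = [
      Provider("frontier-api", capability=3, price_per_1k=0.030,  available=False),
      Provider("mid-hosted",   capability=2, price_per_1k=0.004,  available=True),
      Provider("local-distil", capability=1, price_per_1k=0.0005, available=True),
  ]
  choice = route(fleet, needed=2)
  print(choice.name if choice else "queue and retry")   # -> mid-hosted
  ```

  In practice the availability flag and prices would come from live capacity and billing signals, but the design choice is the same: the caller names a capability floor, never a vendor.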

7. Winners, Losers, and the Unknown — Ben Thompson

  • Why read: Contrarian take on model commoditization.
  • Summary: Contrary to the belief that foundation models are becoming interchangeable commodities, Thompson argues that deep product integration creates a lasting "moat." When a model is tightly coupled with proprietary data and user interfaces—like Google Search or Microsoft Office—the underlying intelligence becomes part of a larger, defensible system. Pure infrastructure providers may struggle with low margins, but those who control the "context" and the "user action" will capture the lion's share of the value. This implies that the winning strategy is not just having the best model, but having the model that is most deeply embedded in the user's workflow. For developers, this underscores the importance of focusing on "workflow gravity" rather than just raw model performance.
  • Link: https://stratechery.com/2026/winners-losers-and-the-unknown/

8. In Defense of Model Lab Profitability — Jamin Ball

  • Why read: Financial thesis on AI lab unit economics.
  • Summary: While critics point to the massive burn and low initial margins of AI labs, Ball suggests these businesses may follow the "S-curve" seen in early cloud and SaaS infrastructure. As utilization increases and the cost of inference continues to drop through optimization and scale, the unit economics should flip from negative to highly profitable. The key is surviving the current high-CapEx phase where "R&D" is effectively the cost of manufacturing. Investors and enterprises should look for signs of "operating leverage"—where revenue grows faster than the cost of compute. Practically, this means monitoring "cost-per-query" trends as a proxy for the long-term viability of different model providers (a toy calculation follows below).
  • Link: mailto:reader-forwarded-email/257d441f5d1dcc5146fdb4ef47f5cd79
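
  As a toy illustration of the operating-leverage test (all figures invented, not Ball's): margins flip positive once revenue compounds faster than compute spend, and falling cost-per-query is the leading indicator.

  ```python
  # Invented quarterly figures illustrating the operating-leverage flip:
  # revenue grows faster than compute cost, so margin turns positive while
  # cost-per-query falls (the distillation/scale effect).
  quarters  = ["Q1", "Q2", "Q3", "Q4"]
  revenue_m = [10.0, 14.0, 20.0, 29.0]   # $M, illustrative
  compute_m = [12.0, 14.5, 17.0, 20.0]   # $M, illustrative
  queries_b = [0.8, 1.3, 2.1, 3.4]       # billions of queries, illustrative

  for q, r, c, n in zip(quarters, revenue_m, compute_m, queries_b):
      margin = r - c                             # $M, gross of compute
      cpq_cents = (c * 1e6) / (n * 1e9) * 100    # cost per query, in cents
      print(f"{q}: margin {margin:+5.1f} $M | cost/query {cpq_cents:.2f}c")
  # Q1 runs at a loss (-2.0 $M at 1.50c/query); by Q4 it is +9.0 $M at 0.59c/query.
  ```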

9. AI Did Not Change GTM. It Exposed What Was Broken. — Maja Voje

  • Why read: Useful GTM architecture perspective.
  • Summary: AI is not a "magic pill" for broken Go-To-Market strategies; instead, it acts as an accelerant that reveals existing system flaws. If your sales and marketing loops are inefficient, simply automating them with AI will only lead to faster, more expensive failure. The real opportunity lies in using AI to redesign the GTM architecture itself—moving from volume-based outreach to high-precision, data-driven engagement. Teams must fix their underlying data structures and customer segments before trying to layer on autonomous agents. Practically, leaders should audit their "customer journey" and identify where AI can create 10x better experiences rather than just 10% more volume.
  • Link: https://knowledge.gtmstrategist.com/p/the-gtm-architecture-shift

10. 10 AI pricing moves you can steal this week — Rob Litterst

  • Why read: Tactical packaging/pricing examples you can apply quickly.
  • Summary: The pricing landscape for AI products is rapidly evolving away from simple per-seat licenses toward usage-based "credit" systems. Companies are increasingly using tier limits and "enterprise controls" to manage their own compute costs while still capturing the value delivered to the user. Successful pricing strategies align the cost to the customer with the actual utility gained—whether that's time saved or tasks completed. This shift requires product teams to be much more transparent about what consumes "tokens" or "credits" to avoid bill shock. For builders, the takeaway is to design pricing that scales with the output of your AI, not just the number of people with login access (a metering sketch follows below).
  • Link: mailto:reader-forwarded-email/b37d334fc18d84d6b7af789ed2077aa8
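
  A minimal sketch of the credit-metering idea, with itemization to head off bill shock; the event names and rates are assumptions for illustration, not from Litterst's list.

  ```python
  # Hypothetical credit meter: charge for output-side events, and itemize
  # every line so users can see what consumed credits before the bill lands.
  CREDIT_RATES = {                 # credits per unit of work (made-up rates)
      "draft_generated": 5,
      "document_summarized": 2,
      "tokens_1k": 1,
  }

  def meter(events: list[tuple[str, int]], tier_limit: int) -> dict:
      """Return an itemized bill and whether the plan's credit cap was hit."""
      items = [(kind, n, n * CREDIT_RATES[kind]) for kind, n in events]
      total = sum(cost for _, _, cost in items)
      return {"items": items, "total": total, "over_limit": total > tier_limit}

  bill = meter([("draft_generated", 12), ("tokens_1k", 340)], tier_limit=400)
  print(bill["total"], "credits | over limit:", bill["over_limit"])  # 400 | False
  ```

  Note that the meter charges for outputs (drafts, summaries), not seats, which is exactly the alignment the piece argues for.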

Themes this week

  • From Pilots to Factories: The “agent factory” era is replacing individual AI productivity hacks with organization-level workflow redesign and persistent automated systems.
  • Strategic Scarcity: Inference and power constraints are becoming critical strategic bottlenecks, forcing a shift toward model optimization, tiering, and agnostic architectures.
  • Vertical Compression: Value capture is shifting as hardware giants like NVIDIA move into software, blurring the traditional boundaries between infrastructure, models, and apps.
  • Systemic GTM Overhaul: Go-to-market and pricing models are being fundamentally rebuilt for AI-native economics, prioritizing precision and credit-based value alignment over raw volume.