1. How we built Stripe Projects in seven weeks — Rami Banna

  • Why read: A masterclass in how AI accelerates the journey from prototype to production for developer tools.
  • Summary: Stripe Projects moved from ideation to developer preview in just seven weeks by deeply integrating AI throughout their entire development lifecycle. The team recognized that while agents can quickly "vibecode" apps, the bottleneck has shifted to provisioning infrastructure like databases, auth, and hosting. To solve this, they built a permissionless CLI network allowing agents to securely provision and manage these surrounding services without brittle workarounds. With 32 live providers, this creates a standard path for AI to request services and receive credentials natively. The core takeaway is that infrastructure must become agent-ready, unlocking a new distribution surface for providers and a faster deployment path for builders.
  • Read more
2. Who brings in the money now? — Chandrika Maheshwari
  • Why read: Reveals how usage-based AI pricing is fundamentally rewiring the post-sales organizational chart.
  • Summary: Customer Success teams are fracturing into entirely new roles as AI products change how software delivers value. Because AI models require real-world context and workflow integration to function effectively, traditional Customer Success Managers are now functioning as solutions engineers or account executives. This shift closes the gaps around product complexity and value exchange while opening a new one around production iteration and data configuration. Consequently, the people driving adoption and value are now the primary revenue generators. Companies operating on outcome-based pricing must recognize that post-sales is no longer a cost center but the core engine of their commercial growth.
  • Read more
3. If AI is so great, why isn't it working? — vas
  • Why read: Explains why enterprise AI initiatives have an 80% failure rate despite rapidly improving models.
  • Summary: C-suite executives are spending millions on AI pilots only to see negligible changes in day-to-day business operations. The bottleneck is no longer the underlying foundation models, which are more than capable, but rather the way AI is deployed into legacy environments. While 95% of integrated enterprise AI pilots fail to generate ROI, software engineers are seeing massive productivity gains because their work relies the least on convoluted business logic. Effective enterprise agents require the simplest possible solutions—often mostly code with strategic model calls—rather than overly complex harnesses. To see real value, organizations need to stop blaming the models and start redesigning their workflows around the technology.
  • Read more
4. Agent Engineering Is Not the Hard Part — Getting Buy-In Is — Trace Cohen
  • Why read: A reality check on the organizational friction that kills enterprise automation efforts.
  • Summary: The vision of technical operators seamlessly wiring agents into enterprise systems ignores the messy reality of how work actually gets done. Inside large companies, workflows are rarely linear or well-documented; they exist as fragmented sequences of decisions distributed across different individuals and isolated systems. Before any code can be written, teams must spend months reverse-engineering tribal knowledge and navigating misaligned incentives among various stakeholders. The real hurdle is not technical implementation, but overcoming the organizational inertia and securing budget from leaders who only own partial fragments of the process. True automation requires treating adoption as an incentive problem rather than merely a product problem.
  • Read more
5. The $112 Billion Quarter — Tomasz Tunguz
  • Why read: A quantitative look at how vertical integration is driving explosive, highly profitable growth in hyperscaler cloud revenue.
  • Summary: Google Cloud's staggering 63% year-over-year growth in Q1 2026 dramatically outperformed AWS and Azure, driven largely by its vertically integrated AI strategy. Unlike competitors who must pay licensing fees, Google owns its models (Gemini) and hardware (TPUs) top-to-bottom, yielding structural margin advantages and 80% better performance per dollar. The demand is so intense that Google is entirely compute-constrained, with its backlog doubling to $460 billion as enterprises commit to massive, multi-year contracts. To meet this unprecedented demand, hyperscalers spent a combined $112 billion in Q1 capital expenditures. This signals a massive infrastructure supercycle where exponential post-deployment AI usage forces budgets far beyond initial estimates.
  • Read more
6. The Pulse: token spend breaks budgets – what next? — The Pragmatic Engineer
  • Why read: A look at how engineering leaders are reacting to runaway AI agent API costs.
  • Summary: In recent months, API token spending from internal AI tools and agent usage has jumped as much as 10x at many tech companies, prompting serious budgetary concerns from leadership. A perverse incentive structure called "tokenmaxxing" has even emerged, where developers run agents unnecessarily just to rank higher on internal usage leaderboards. In response, organizations are quietly implementing cost-control measures, such as defaulting background coding tools to cheaper models like Claude Sonnet. While companies are monitoring the heaviest users, they are currently hesitant to impose strict limits because the business cases and productivity gains are still proving out. Engineering teams must prepare for tighter governance as these exponential costs become unsustainable for bottom lines.
  • Read more
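The cost-control measure described above — defaulting background work to a cheaper model while tracking per-user spend — can be sketched as follows. The model names and per-token prices here are illustrative assumptions, not real price lists or a specific vendor's API.

```python
# A minimal sketch of the cost-control pattern described above: route
# background/batch work to a cheaper model by default and track spend
# per user. Model names and per-1K-token prices are illustrative
# assumptions, not real price lists.

PRICE_PER_1K_TOKENS = {"small-model": 0.003, "large-model": 0.015}

spend_by_user: dict[str, float] = {}

def pick_model(task_kind: str) -> str:
    # Reserve the expensive model for interactive, user-facing work;
    # everything else (background agents, batch jobs) gets the cheap one.
    return "large-model" if task_kind == "interactive" else "small-model"

def record_usage(user: str, model: str, tokens: int) -> float:
    # Accumulate spend so the heaviest users can be monitored.
    cost = tokens / 1000 * PRICE_PER_1K_TOKENS[model]
    spend_by_user[user] = spend_by_user.get(user, 0.0) + cost
    return cost

model = pick_model("background")
cost = record_usage("alice", model, 50_000)
print(model, round(cost, 4))  # small-model 0.15
```

Under these assumed prices, defaulting a 50K-token background run to the cheaper model cuts its cost 5x, which is the kind of lever the article says teams are quietly pulling before imposing hard limits.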
7. AI, tractors, and the productivity paradox — Technically
  • Why read: Provides historical context for why AI's massive technological leaps aren't showing up in economic productivity stats yet.
  • Summary: Despite the hype, AI has yet to create a measurable uptick in macroeconomic productivity, mirroring the "productivity paradox" seen during the IT revolution of the 1970s and 80s. Historically, transformative technologies like steam engines and computers began as malleable "kits" for tinkerers rather than finished products, requiring a decade of organizational restructuring before gains materialized. Current AI tools function much like these early kits, heavily reliant on talented amateurs and grassroots experimentation. The proliferation of cheap, open-ended AI tools signals a market ripe for revolution, even if formal business applications are lagging. Operators should embrace this kit-building phase, as the most profound innovations will emerge from unstructured tinkering before translating into systemic efficiency.
  • Read more
8. cursor's warchest, xai's redemption — Ethan Ding from mandates
  • Why read: Analyzes the strategic implications of Cursor's rumored $60B sale to xAI and Anthropic's competitive countermoves.
  • Summary: Cursor, the fastest-growing software business in history, reportedly sold to xAI for $60B despite hitting $2B ARR in just 13 months and dominating enterprise procurement. The decision signals the founders' hesitation to underwrite a path to $100B independently in a market highly vulnerable to the platform power of frontier model labs. Anthropic is actively fighting intermediaries like Cursor by launching Claude Code and pricing it at near-zero margins to crush the resale market. This dynamic reveals that AI model labs view developer interfaces as their own rightful profit pools, not partner ecosystems. Builders in the AI application layer must recognize the existential threat of margin compression from foundational labs moving up the stack.
  • Read more
9. Standard Intelligence's $75M Series A: Training General Intelligence in Pixel Space — Sonya Huang 🐥
  • Why read: Explores a contrarian approach to building AI agents by training foundation models purely on raw video and pixel data.
  • Summary: While the industry focuses on scaling language models and complex agent harnesses, Standard Intelligence is betting that general computer agents must be trained directly on screen pixels. They argue that full video pre-training is the only viable way to scale action data, essentially applying the Tesla FSD approach to knowledge work. To achieve this, the team built an 11-million-hour computer action dataset and a highly token-efficient video encoder capable of fitting two hours of video into a context window. Their first model, FDM-1, already demonstrates the ability to execute complex tasks like extruding CAD models and debugging software by exploring state spaces visually. This paradigm shift suggests that future agentic intelligence may emerge natively from raw computer streams rather than text.
  • Read more
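To get a feel for how aggressive "two hours of video in one context window" is, here is a back-of-envelope budget. The 1M-token context size is an assumption for illustration; the summary only states that two hours fit.

```python
# Back-of-envelope token budget for the video encoder described above.
# The 1M-token context window is an ASSUMPTION for illustration; the
# summary only says two hours of video fit in one window.
context_tokens = 1_000_000
video_seconds = 2 * 60 * 60          # two hours of screen recording
tokens_per_second = context_tokens / video_seconds
print(round(tokens_per_second))      # ~139 tokens per second of video
```

Under that assumption, the encoder has roughly 139 tokens to spend per second of screen activity — a tiny fraction of what naive per-frame image tokenization would require, which is what makes full video pre-training tractable at all.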
10. TBM 420: The AI Playbook Puzzle — John Cutler from The Beautiful Mess
  • Why read: A critical framework for distinguishing between AI applications that actually improve workflows versus those that just automate bad habits.
  • Summary: The rush to adopt AI is causing many organizations to blindly automate fundamentally broken processes, essentially using AI to make bad ideas execute faster. Static PRDs, siloed workflows, and strategy documents that lack decision heuristics were always ineffective, yet teams are now proudly generating them at scale with AI. However, AI can exponentially supercharge practices that were already good, such as continuous co-design, prototyping for shared understanding, and acting as an adversarial stress-tester in pre-mortems. Operators must treat the AI transition as a forcing function to question underlying operational models rather than just checking an adoption box. The true value of AI lies in enhancing iterative, high-context collaboration, not in generating false polish for broken systems.
  • Read more
11. The new underutilized sales trick you can steal — Jaryd from How They Grow
  • Why read: A tactical teardown of a brilliant, zero-cost growth hack that turns conversational AI into a personalized sales engine.
  • Summary: Innovative B2B and B2C websites are replacing traditional sales pages with embedded prompts that instantly open personalized product explanations in Claude or ChatGPT. By pre-populating a query, prospective buyers are fed a tailored pitch that perfectly aligns with their specific workflows and context. This bypasses the typical "browse and bounce" behavior by delivering a hyper-relevant, interactive experience without requiring human sales intervention. It proves that buyers are increasingly comfortable being sold to by AI if it reduces friction and accelerates discovery. Growth operators should steal this tactic to create immediate, personalized value propositions directly within consumer AI platforms.
  • Read more
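Mechanically, the tactic above comes down to building a link that opens a consumer AI chat with a pre-populated prompt. A minimal sketch: the `?q=` query parameter shown for each product is a commonly used convention but should be verified against each product's current behavior, and "AcmeCRM" is a hypothetical product name.

```python
from urllib.parse import quote

# Sketch of the embedded-prompt tactic described above: build a link
# that opens a consumer AI chat with a pre-populated pitch. The ?q=
# parameter is a commonly used convention for prefilling these chats,
# but verify it against each product's current behavior. "AcmeCRM" is
# a hypothetical product used for illustration.

def prefilled_link(base: str, prompt: str) -> str:
    return f"{base}?q={quote(prompt)}"

prompt = "Explain how AcmeCRM fits into a 5-person B2B sales team's workflow."
print(prefilled_link("https://chatgpt.com/", prompt))
print(prefilled_link("https://claude.ai/new", prompt))
```

Dropping such a link behind a "See how it fits your team" button is the whole hack: the prospect lands in a chat that is already pitching them in their own context, with zero human sales involvement.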
12. How to manage scope creep without saying no — Arnie Gullov-Singh
  • Why read: Essential advice on reframing custom feature requests to protect your product roadmap while keeping enterprise buyers happy.
  • Summary: Early-stage enterprise deals often suffer from scope creep when teams blindly agree to custom feature requests out of fear of losing the deal. Instead of saying no, operators should act like product managers by asking customers what underlying problem they are trying to solve, rather than discussing the proposed solution. Many "custom" requests are actually just workarounds for a lack of product understanding, which can be solved through targeted training. You can also benchmark their request against how the rest of your customer base operates, subtly grounding outliers and building credibility. By pressure-testing these demands with the customer's own data, you can gracefully close deals without committing to unsustainable technical debt.
  • Read more
13. 15 Agentic Plays Every RevOps Team Should be Running Using Your Client Interactions — The Signal, by Brendan Short
  • Why read: A tactical guide to transforming dormant call recordings into actionable data pipelines for your GTM teams.
  • Summary: Most revenue organizations treat their conversational intelligence platforms as passive video libraries rather than rich, first-party data sources. Every sales call contains crucial signals—buyer language, competitive intel, feature requests, and expansion cues—that currently go to waste. RevOps teams should be utilizing AI platforms to build proactive agents that trigger automated workflows based on specific mentions during client interactions. Shifting from manual coaching and summary generation to automated, intent-based action can exponentially scale sales efficiency. Operators must stop treating transcripts as archives and start treating them as the foundation for automated GTM motion.
  • Read more
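The trigger-on-mention pattern described above can be sketched in a few lines: scan a call transcript for signal phrases and emit the workflow categories that fired. The signal taxonomy and phrases below are illustrative assumptions, not any specific platform's API.

```python
# A minimal sketch of the transcript-to-workflow pattern described
# above: scan call transcripts for signal phrases and report which
# workflow categories should trigger. The signal names and phrases
# are illustrative assumptions, not a specific platform's schema.

SIGNALS = {
    "competitor": ["we also looked at", "compared to"],
    "expansion": ["other teams", "roll this out"],
    "feature_request": ["it would be great if", "do you support"],
}

def detect_signals(transcript: str) -> list[str]:
    # Case-insensitive substring match; a production system would use
    # an LLM or intent classifier rather than literal phrase lists.
    text = transcript.lower()
    return [name for name, phrases in SIGNALS.items()
            if any(phrase in text for phrase in phrases)]

transcript = "We also looked at a rival tool. It would be great if you had SSO."
print(detect_signals(transcript))  # ['competitor', 'feature_request']
```

Each detected signal would then feed an automated action — a competitive-intel note to product, a CRM field update, an expansion alert to the account owner — which is the shift from transcript-as-archive to transcript-as-trigger the article argues for.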
14. Scarce Assets — Packy McCormick
  • Why read: A strategic thesis on how the AI-driven abundance of content and code is creating a massive premium for truly scarce assets.
  • Summary: As AI drives the cost of replication to zero, the market is entering a supercycle that disproportionately rewards unique, uncopiable assets. Historically, fortunes are made by balancing imbalances—just as Joseph Duveen matched abundant European art with abundant American capital. In today's economy, as generated code, text, and digital goods become infinitely abundant, things like trusted brands, physical experiences, proprietary data, and distinct human perspectives become drastically more valuable. Builders must aggressively lean into differentiation and authenticity because easily replicable features are entering a permanent bear market. The winning strategy is to stop competing on volume and start accumulating assets that AI cannot commoditize.
  • Read more
15. .@Collision is bullish on two types of people: high-agency individuals... — TBPN
  • Why read: John Collison's mental model for the types of talent that will thrive over the next two decades in an AI-leveraged world.
  • Summary: Stripe's co-founder argues that AI makes high-agency individuals and multi-disciplinary thinkers more valuable than ever. High-agency people who deeply understand customer problems now have the tools to independently build and execute solutions without waiting on large teams. Similarly, "double majors"—those who understand both software and another domain like finance or marketing—can now single-handedly optimize entire funnels that previously required dozens of specialists. By combining functional understanding across disciplines with AI leverage, individuals can drastically scale their impact. The future belongs to those who use AI to bridge functional silos and actively push organizations forward.
  • Read more

Themes from yesterday

  • The shift from copilots to agents: Organizations are realizing that true ROI comes from fully integrated agents rather than just faster text generation, but adoption is severely bottlenecked by organizational friction and legacy enterprise systems.
  • The exploding cost of AI compute: Hyperscalers are betting hundreds of billions on infrastructure while engineering teams are already scrambling to govern runaway API token costs and internal usage inflation.
  • Abundance creates new scarcity: As AI drives the marginal cost of code and digital workflows to zero, human agency, cross-disciplinary expertise, and truly uncopiable assets are becoming the most valuable economic resources.