1. Getting on the right side of the ice — Eric Glyman (Ramp)

  • Why read: Hard data showing that high AI adoption now correlates directly with massive revenue outperformance.
  • Summary: Ramp's internal data shows a growing "K-shaped" divide: the top quartile of AI spenders has doubled its revenue since 2023, while laggards remain flat. This trend extends beyond tech into traditional sectors like construction and roofing, where AI-driven automation for estimates and paperwork is driving 20-60% growth. Glyman warns that revenue is a lagging indicator; by the time a slowdown hits, the market gap may already be too wide to jump. To survive, companies must move from "calm" traditional operations to aggressive AI leverage immediately.
  • Link: https://twitter.com/eglyman/status/2036477278394138772/?rw_tt_thread=True

2. What I Learned Sending 22,961 AI Written Emails — Aman Azad (Vercel)

  • Why read: A technical blueprint for high-performing AI outbounding that solves the "mid-quarter slump."
  • Summary: Leveraging Claude 4.6 and Vercel Workflows, Azad built an "intent-based" system that achieves 1.2% to 6% reply rates—outperforming manual human outreach. The secret lies in "last-mile" engineering: instead of a new dashboard, the system injects highly contextual, researched drafts directly into Salesforce or Outreach for SDRs. The system synthesizes data from Exa, Common Room, and first-party product usage to ensure messaging is timely and relevant. Success requires treating GTM as a software engineering problem rather than just a prompting task.
  • Link: https://twitter.com/realamanazad/status/2036471315264315451/?rw_tt_thread=True
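The "intent-based" routing idea can be sketched in a few lines. This is a minimal illustration, not Azad's actual system: the `Signal` and `Prospect` types, the weights, and the threshold are invented stand-ins for the Exa, Common Room, and first-party product-usage feeds the article names.

```python
from dataclasses import dataclass, field

@dataclass
class Signal:
    source: str   # e.g. "product_usage", "exa", "common_room"
    detail: str
    weight: float # how strongly this signal suggests buying intent

@dataclass
class Prospect:
    name: str
    account: str
    signals: list = field(default_factory=list)

def intent_score(prospect):
    """Sum signal weights; higher-intent prospects get drafts first."""
    return sum(s.weight for s in prospect.signals)

def build_draft(prospect):
    """Assemble a context-rich draft for an SDR to review inside the
    CRM they already use, rather than in a new dashboard."""
    context = "; ".join(
        s.detail for s in sorted(prospect.signals, key=lambda s: -s.weight)
    )
    return f"Hi {prospect.name}, noticed {context}. Worth a quick chat?"

def drafts_for_crm(prospects, threshold=1.0):
    """Only prospects above an intent threshold get a draft queued."""
    return [
        {"account": p.account, "draft": build_draft(p)}
        for p in sorted(prospects, key=intent_score, reverse=True)
        if intent_score(p) >= threshold
    ]
```

The design choice mirrors the article's "last-mile" point: the output is a draft payload destined for Salesforce or Outreach, not a standalone UI.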

3. Compound Engineering 2.51.0 — Trevin Chow (Every)

  • Why read: A look at the next generation of AI-native engineering tools that automate the PRD-to-Review lifecycle.
  • Summary: The latest update to the Compound Engineering plugin introduces `ce:brainstorm`, an interactive agent that helps flesh out ideas into requirements before coding starts. It also features a new structured review pipeline that uses specialized "reviewer personas" to provide high-signal feedback rather than vague comments. For frontend work, the tool now detects existing design systems and verifies UI changes with actual screenshots. This represents a shift toward "autopilot" modes where AI makes low-level decisions while maintaining a transparent log for the human lead.
  • Link: https://twitter.com/trevin/status/2036524884277493832/?rw_tt_thread=True

4. Build vs buy: how to deploy coding agents at scale — Zach Lloyd (Warp)

  • Why read: A strategic framework for leaders deciding how to integrate AI coding agents into their enterprise stack.
  • Summary: Lloyd argues that companies should focus on building "Intelligence Infrastructure" rather than just picking a tool. This infrastructure must include a control plane for model governance, context architecture for persistent institutional memory, and secure sandboxing for agent execution. While an MVP "prompt box" can be built in days, the long-term value lies in how agents integrate with proprietary data and internal triggers. For most, the "buy" option for orchestration (like Warp's Oz) is becoming more attractive as the surface area of compliance and observability grows.
  • Link: https://twitter.com/zachlloydtweets/status/2036509756404158559/?rw_tt_thread=True
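As a rough illustration of what a "control plane for model governance" might check before an agent run, here is a hypothetical policy lookup. The repo names, model names, and `authorize` helper are all invented for the sketch; a real control plane would sit in front of agent execution rather than in application code.

```python
# Hypothetical per-repo policy: which models an agent may use,
# and whether execution must be sandboxed.
POLICY = {
    "prod-repo":    {"allowed_models": {"model-a"}, "sandbox": True},
    "sandbox-repo": {"allowed_models": {"model-a", "model-b"}, "sandbox": True},
}

def authorize(repo, model):
    """Gate an agent run on the repo's governance policy."""
    rule = POLICY.get(repo)
    if rule is None:
        raise PermissionError(f"no policy registered for {repo}")
    if model not in rule["allowed_models"]:
        raise PermissionError(f"{model} not approved for {repo}")
    return {"repo": repo, "model": model, "sandboxed": rule["sandbox"]}
```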

5. The operating principles we use to run Clay — Varun Anand (Clay)

  • Why read: Two powerful mental models ("Negative Maintenance" and "Non-Attached Action") for building high-velocity teams.
  • Summary: Clay prioritizes hiring "negative maintenance" individuals—those who solve problems upstream and reduce future work for others without being asked. This is paired with "non-attached action," the ability to work diligently on a feature while remaining willing to rewrite or scrap it as the market provides better signals. They intentionally built "throwaway" integrations to gain speed, knowing they would later consolidate them. These principles prioritize momentum and system health over individual ego or "perfect" initial designs.
  • Link: https://twitter.com/vxanand/status/2036447670424740274/?rw_tt_thread=True

6. The PLG vs. SLG debate is over — Rob Litterst (Good Better Best)

  • Why read: New data on how AI is forcing the collapse of Product-Led Growth and Sales-Led Growth into a "Hybrid-led" model.
  • Summary: Analysis of 3,847 pricing changes shows that AI-native tools are shifting toward 7-day trials and credit-based expansion because users hit "value" much faster. Credits are becoming the universal currency, allowing for self-serve consumption that automatically triggers sales outreach when high-usage thresholds are met. This hybrid model replaces the binary choice of "Free" or "Enterprise" with a multi-path expansion playbook. Pricing is no longer about seats, but about the specific output and utility provided by AI agents.
  • Link: mailto:reader-forwarded-email/c1c787af8adb40e66e1b92f24028374d
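The usage-threshold handoff described above can be sketched as a simple routing function. The 80% threshold and the action labels are illustrative assumptions, not figures from the pricing analysis.

```python
def expansion_action(credits_used, credit_limit, sales_threshold=0.8):
    """Hybrid-led routing: self-serve until credit usage crosses a
    threshold, then flag the account for sales outreach."""
    utilization = credits_used / credit_limit
    if utilization >= 1.0:
        return "hard_cap: prompt self-serve upgrade"
    if utilization >= sales_threshold:
        return "trigger_sales: route to account executive"
    return "self_serve: no action"
```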

7. Why There Is No "AlphaFold for Materials" — Latent.Space

  • Why read: Evidence of AI discovering novel quantum mechanical effects to create materials 4x tougher than human-designed ones.
  • Summary: Professor Heather Kulik details how AI designed new polymers by identifying building blocks that break in novel, non-intuitive ways. While LLMs excel at academic chemistry, they still lack the "intuition" for lab-ready synthesis, making domain expertise the critical bottleneck. The interview highlights that "AI for Science" is moving beyond simple prediction into active discovery of physical properties that human scientists missed for decades. For operators, it underscores the need to integrate LLMs with physics-based modeling rather than relying on text alone.
  • Link: mailto:reader-forwarded-email/28dd7aa2b2049453a3257a686e93640e

8. The Cannonball Guide to Claude Code GTM Development — Cannonball GTM

  • Why read: A "Ladder of Escalation" framework for building cost-effective, high-quality data pipelines.
  • Summary: Real GTM pipelines are chains of dependent steps where one error ruins the entire output. This guide suggests a disciplined approach: start with free local scraping, move to cheap marketplace APIs, and only use expensive LLM calls for the "last mile" of cleaning and validation. It identifies "Data Source Discovery" as the most critical step—using authoritative, niche sources like government filings instead of generic paid databases. This modular "source-stack" mindset ensures higher match quality and lower costs at scale.
  • Link: mailto:reader-forwarded-email/83ae1b893c4eeb7b0a76868a9f99630a
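The "Ladder of Escalation" can be sketched as a cost-ordered fallback chain. The three rung functions below are hypothetical stand-ins: the hardcoded lookup simulates a cheap marketplace API, and `llm_validate` stands in for the expensive last-mile LLM call.

```python
def free_local_scrape(domain):
    """Rung 1: free local sources (simulated as always missing here)."""
    return None

def marketplace_api(domain):
    """Rung 2: cheap marketplace API (hypothetical lookup table)."""
    known = {"acme.com": {"employees": 120, "state_filing": "DE-12345"}}
    return known.get(domain)

def llm_validate(record):
    """Rung 3: reserve the expensive LLM call for last-mile
    cleaning and validation of a record already found cheaply."""
    record["validated"] = True
    return record

def enrich(domain):
    """Try each rung in cost order; escalate only on a miss."""
    for source in (free_local_scrape, marketplace_api):
        record = source(domain)
        if record:
            return llm_validate(record)
    return None
```

The point of the structure is that the LLM never runs per-row as a primary source; it only polishes records the cheaper rungs have already matched.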

9. "How to be a 10x engineer" — interview with a standout dev — Gergely Orosz

  • Why read: A profile of an elite engineer with zero public footprint, proving that referrals and "product-mindedness" trump social proof.
  • Summary: The interview features "Sam," a top 3% engineer at Uber who hasn't committed to a public GitHub in 10 years and has no social media presence. His job searches consist entirely of reach-outs from former colleagues who value his ability to deeply understand product goals and set clear technical boundaries. Sam's success highlights that for senior roles, internal reputation and the ability to solve complex business problems are far more valuable than "slop" applications or public vanity metrics.
  • Link: mailto:reader-forwarded-email/bb98ffee9000f055bf9c72d635e1b7bb

10. How to Finally Make Something — Scott Stevenson

  • Why read: A psychological breakdown of why people fail to ship and how to avoid "Fantasy Games."
  • Summary: Creative blocks often stem from "Fantasy Games"—activities like Learning Syndrome (too many books, no code) or Tool Syndrome (searching for the perfect gear) that feel like progress but avoid the anxiety of creation. Stevenson argues for "just-in-time learning" and sticking to a fixed set of tools to force movement through ambiguity. The key is recognizing that "unstructured tasks" create anxiety, so we subconsciously trade the real game of shipping for more comfortable, structured "prep" work.
  • Link: https://blog.scottstevenson.net/p/how-to-finally-make-something-a16c8db7ba2a

Themes from yesterday

  • The Intelligence Infrastructure Shift: Companies are moving from "using AI tools" to building enterprise-wide "intelligence infrastructure" (governance, memory, and orchestration).
  • Death of the Cold Application: Both in GTM (outbounding) and Careers (hiring), AI-generated "slop" is being rejected in favor of high-context, referral-driven, or personalized video interactions.
  • Cognitive/Comprehension Debt: As AI accelerates code generation, the bottleneck has shifted to comprehension and review, leading to a new class of "review agents" and "PRD agents."
  • The Revenue Divide: Hard data is beginning to show a permanent split between companies that use AI as structural leverage and those that view it as a discretionary cost.