1. [AINews] NVIDIA GTC: Jensen goes hard on OpenClaw, Vera CPU, and announces $1T sales backlog in 2027 — AINews
- Why read: Essential briefing on the shift from AI models to AI agent ecosystems and the hardware powering them.
- Summary: NVIDIA’s GTC 2026 keynote signaled a massive pivot toward "NemoClaw," their secure enterprise response to the open-source OpenClaw agent framework. Jensen revealed a staggering $1T sales backlog for 2027, proving that the demand for Blackwell and Rubin chips remains insatiable. A technical highlight is Moonshot’s "Attention Residuals" paper, which replaces fixed residual accumulation with input-dependent attention for a 1.25x compute advantage. This marks a shift toward more efficient, dynamic model architectures that better support autonomous agents.
- Link: mailto:reader-forwarded-email/8062dea41531979e8f2337f16f2c4e54
2. The Collapse Of Terminal Value - What Happens If AI Makes Every Moat Temporary? — Chamath Palihapitiya
- Why read: A provocative framework for how AI disruption fundamentally breaks traditional discounted cash flow (DCF) valuations.
- Summary: Chamath argues that if AI lowers the cost of disruption to the point where no company can credibly project cash flows beyond five years, equities must be repriced as current-year multiples rather than future streams. If a business faces a 20% annual probability of AI obsolescence, its rational valuation drops to roughly 3.9x FCF. This "Disruption Repricing Framework" suggests that traditional moats (brand, network effects) are being liquidated by the pace of innovation. Investors should shift their focus from long-term "terminal value" to businesses that can generate massive cash immediately.
- Link: https://twitter.com/chamath/status/2033385903520129161/?rw_tt_thread=True
3. 🎙️ This week on How I AI: From Figma to Claude Code and back — Lenny's Newsletter
- Why read: Real-world tactical guide on creating a bidirectional "design ↔ code" loop using AI agents and MCPs.
- Summary: Figma engineer Alex Kern and designer Gui Seiz demonstrate how to pull production code directly into Figma as editable frames, modify them, and push changes back via Claude Code. A key takeaway is the death of the "handoff"; instead, teams use Model Context Protocol (MCP) to keep design files in sync with live production states. Alex also suggests turning internal engineering wikis and SOPs into executable "skills" for agents to automate pre-flight checks and PR deployments. Structuring codebases specifically for AI legibility is now a core engineering responsibility, reportedly yielding 20-30% efficiency gains.
- Link: mailto:reader-forwarded-email/729072c6c9062dbf67da020d698ac8ad
4. The Vertical SaaS Org of the Future (And Why Yours Isn’t It) (Yet) — Cannonball GTM
- Why read: Critical warning for GTM leaders about the collapse of search-based buyer journeys.
- Summary: Traditional marketing funnels are failing as "blue link" organic search traffic declines by over 60% due to AI Overviews. Buyers are increasingly conducting vendor research directly inside ChatGPT and Claude, where they receive "the answer" rather than a list of sites to visit. This shift means brand reputation and public sentiment are more critical than ever, as they form the training data for these models. GTM organizations must pivot from "assembly-line" marketing to influencing the data sets and "AI reputation" that prospects encounter during chat-based research.
- Link: mailto:reader-forwarded-email/6f08cdba315e34632effdf436adc0437
5. You Are Responsible for Your Agent — Tomasz Tunguz
- Why read: Vital perspective on the legal and operational liabilities of deploying AI agents in production.
- Summary: Amazon recently faced four "Severity 1" incidents and a 99% order volume drop contributed to by its AI coding assistant, leading to a 90-day "safety reset." Legally, the "hallucination defense" is dead; the Utah AI Policy Act clarifies that companies are fully liable for any violative statements or acts made by their GenAI tools. Since AI-generated code currently creates 70% more issues than human code, "bring your own agent" to work is becoming as risky as unmanaged mobile devices in 2009. Companies must implement mandatory human-in-the-loop reviews for all agent-deployed code and contracts.
- Link: mailto:reader-forwarded-email/9ceb2ada8ea50ed49fa91a7b251dee2d
6. From Identity to Intent: The Rise of Intent-Aware Access Fabrics — Francis (Software Analyst)
- Why read: Defines the next generation of enterprise security required to manage autonomous AI agents.
- Summary: Traditional security (RBAC/Zero Trust) is human-centric and verifies who is accessing a resource, which is insufficient for machine-speed AI agents. The new paradigm is "Intent-Aware Access," which continuously evaluates why an entity is accessing data to ensure it aligns with a legitimate operational purpose. This requires an "Access Fabric"—an architectural mesh that unifies context across identity systems, AI pipelines, and APIs. As agents gain the ability to sign contracts and move data, validating intent becomes the only way to prevent authorized but malicious "workflow abuse."
- Link: mailto:reader-forwarded-email/575b970cb47049ff6397815136211bfa
7. AI skill for strategic thinking — George from 🕹prodmgmt.world
- Why read: A practical, high-value skill for Claude Code that enforces the Pyramid Principle for problem-solving.
- Summary: George shares "Find the Strategic Crux v2," an AI skill designed to guide PMs through a 5-step sequential process to identify the core blocker in messy strategy problems. Unlike standard prompts that jump to options, this skill uses "lightweight evals" to ensure the user defines the "gap" and "crux" before the AI suggests moves. It demonstrates a shift toward "analytical structure" agents that coach the human through hard thinking rather than just generating low-quality summaries. The skill is compatible with Claude Code and Cursor, focusing on Minto’s Pyramid Principle for disciplined diagnosis.
- Link: https://twitter.com/nurijanian/status/2033481144055026079/?rw_tt_thread=True
8. The Shorthand Guide to Everything Agentic Security — cogsec
- Why read: Deep dive into current vulnerabilities (CVEs) found in popular agentic developer tools.
- Summary: Widespread adoption of OpenClaw and Claude Code has drastically increased the attack surface for prompt injection, which can now lead to shell execution and secret exposure. Recent disclosures (CVE-2025-59536 and CVE-2026-21852) showed that agents could be tricked into executing code or leaking API keys before a user even accepts a "trust" dialog. Attack vectors are expanding to include malicious GitHub PR comments, PDF attachments, and even WhatsApp messages that agents read as instructions. Security teams must treat any agent with root or broad filesystem access as a high-risk lateral movement vector.
- Link: https://twitter.com/affaanmustafa/status/2033263813387223421/?rw_tt_thread=True
9. Fuse — Nikhil Basu Trivedi (@nbt)
- Why read: Case study of an AI-native startup successfully displacing legacy systems-of-record in "laggard" industries.
- Summary: Fuse has launched an AI-native Loan Origination System (LOS) specifically for credit unions and regional banks, industries typically locked into 10-year legacy software cycles. By using agentic workflows to automate underwriting and account opening, Fuse demonstrates a much faster "time-to-value" than traditional incumbents. They are aggressively targeting the market with a $5M "Rescue Fund" to help credit unions transition before their existing contracts expire. This illustrates the trend of "agentic displacement" where AI-first platforms don't just add features but reinvent the entire core stack.
- Link: mailto:reader-forwarded-email/e236bd70b4713c573be4159214001f2d
10. Are We Already Building a Piecemeal AI Data Royalty Model? — Byrne Hobart @ The Diff
- Why read: Analyzes the emerging economic model for compensating data creators in the age of LLMs.
- Summary: As LLMs package the "zeitgeist" into commercial products, creators are increasingly concerned about their creative output rendering their own jobs obsolete. Hobart compares the current situation to the early cable TV era, where retransmission royalties eventually became more stable than ad revenue for local stations. While common law protects those who offer "better products at lower prices," the political pressure for an AI data royalty model is mounting as model weights achieve "partial immortality" using public tokens. The strategic takeaway is that the "right to the data" may soon become as important as the model itself.
- Link: mailto:reader-forwarded-email/61a2785b26ef12b37749e08fe8937253
Themes from yesterday
- The End of Durability: A recurring argument that AI accelerates the disruption cycle so much that traditional "moats" and long-term equity valuations (terminal value) are collapsing.
- Agentic Liability & Security: A sharp pivot toward the legal and technical risks of agents, moving from "funny hallucinations" to production-breaking CVEs and corporate liability for agent actions.
- Workflow Convergence: The blurring lines between design and code (Figma/Claude) and the transformation of static SOPs into executable AI "skills."
- GTM Crisis: The realization that the search-based web is dying, forcing brands to optimize for AI training data and LLM chat-based research rather than "blue links."
