- A New Token Rule For Engineering Leadership — Alfred Lin
- Why read: A practical management framework for ensuring engineering leaders stay connected to the reality of AI-assisted development.
- Summary: Chainguard implemented a new rule requiring engineering managers to maintain AI token usage at the 50th percentile of their direct reports. Leaders with usage too low lack the firsthand context needed to scope work or coach their teams effectively through the AI transition. Conversely, leaders indexing too high need to shift their focus from being power users to enabling their teams. This creates a balanced forcing function where management remains deeply familiar with the tools without bottlenecking execution. The approach treats AI adoption as a leadership imperative rather than an organic, bottom-up trend.
- Read more
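The percentile rule is straightforward to operationalize. A minimal sketch, using hypothetical weekly token counts and an illustrative tolerance band (the article does not specify exact thresholds):

```python
from statistics import median

# Hypothetical weekly AI token usage for a manager's direct reports
report_usage = [120_000, 340_000, 90_000, 510_000, 260_000]
manager_usage = 180_000

target = median(report_usage)  # 50th percentile of the team

if manager_usage < target:
    status = "below target: build firsthand context with the tools"
elif manager_usage > 2 * target:
    status = "well above target: shift from power user to enabler"
else:
    status = "in band"

print(f"team median: {target}, manager: {manager_usage} -> {status}")
```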
- The Three Questions in AI Sales — Tomasz Tunguz
- Why read: A strategic reframe for selling AI software by targeting labor budgets instead of traditional SaaS budgets.
- Summary: The conventional software sales motion focuses on capturing a slice of the existing software budget, but AI sales must pivot to address the total labor budget. Tunguz outlines three critical questions for buyers: what is your software budget, what is your labor budget, and what do you want their ratio to be in three years? Because AI collapses the labor side of the equation, the software budget acts as a floor rather than a ceiling. This shift turns a standard software sale into a strategic organizational planning conversation. Sales teams must first land by justifying the cost against current software spend, then expand by capturing the resulting labor savings.
- Read more
- The Model Is Not the Agent — Hiten Shah
- Why read: A reality check on why impressive AI demos often translate into fragile, high-maintenance production systems.
- Summary: While the intelligence of the underlying model is the most visible part of an AI agent, it is rarely the component that breaks in production. The true burden of an agent lies in its operating design: waking up at the right time, managing state across runs, using tools safely, and recovering gracefully from failures. A model provides raw capability, but the agent wrapper creates an ongoing operating burden that compounds with every tool integration. Demos prioritize reach and autonomy, but real-world utility requires narrow, dependable boundaries where leverage is highest. Building trust requires reliable triggers, appropriate memory decay, and systems that fail without causing cascading damage.
- Read more
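The operating burden the summary describes — durable state across runs, progress only on success, failing without cascading — can be made concrete in a few lines. The state file and loop structure here are illustrative assumptions, not anything from the original post:

```python
import json
from pathlib import Path

STATE_FILE = Path("agent_state.json")  # hypothetical durable state store

def load_state():
    # State must survive across wake-ups; the model itself remembers nothing.
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"cursor": 0, "failures": 0}

def save_state(state):
    STATE_FILE.write_text(json.dumps(state))

def run_once(call_model, items, max_failures=3):
    """One wake-up of the agent: resume from the cursor, call the model
    per item, and stop cleanly rather than cascade on repeated failure."""
    state = load_state()
    for i in range(state["cursor"], len(items)):
        try:
            call_model(items[i])        # the 'intelligent' part
            state["cursor"] = i + 1     # advance only on success
            state["failures"] = 0
        except Exception:
            state["failures"] += 1
            if state["failures"] >= max_failures:
                break                   # fail closed, not loudly
        finally:
            save_state(state)           # durable after every step
    return state
```

Nearly everything above is harness, not model — which is the post's point.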
- AI startups will eat services, but will not provide services — hari raghavan
- Why read: A sharp critique of the "AI-enabled services" startup thesis and why venture-scale returns require selling software, not human-in-the-loop services.
- Summary: Despite renewed investor interest in AI-native service companies, startups that package AI as a traditional professional service face significant structural headwinds. Customers paying for services expect human-level SLAs and support, making it difficult to maintain premium pricing as the offering becomes more software-driven. These businesses risk being squeezed between high-end boutique firms offering true human touch and low-cost agentic software delivering the same outcomes. Using services as a temporary wedge to build software often results in subscale consulting shops with declining ACVs that repel venture funding. The most successful companies will build agentic software that attacks the services TAM directly, rather than masquerading as service providers.
- Read more
- Agents broke the security stack and it's costing you a lot — Shruti Gandhi (Array VC)
- Why read: A wake-up call regarding the massive security vulnerabilities introduced by non-human AI identities in the enterprise.
- Summary: The enterprise security stack built over the last two decades fundamentally assumes a human is somewhere in the loop, leaving it blind to autonomous AI agents. Attackers are exploiting this by compromising third-party AI assistants, stealing OAuth tokens, and silently exfiltrating data without ever triggering traditional endpoint or DLP alerts. Enterprises now have vastly more non-human identities than human users, and these agents possess tokens, admin permissions, and the ability to move data autonomously. Single-layer security tools are failing because they cannot monitor the cross-layer trust gaps where agentic API calls and third-party code execute. Protecting the modern enterprise requires new security paradigms designed specifically for process-driven actors rather than human behavior.
- Read more
- Canals are a better metaphor for modern defensibility than moats — Trace Cohen
- Why read: A modern framework for understanding competitive advantage in an era of interoperability and agentic coordination.
- Summary: Traditional software defensibility relied on moats—closed ecosystems, proprietary data, and high switching costs designed to lock customers in and keep competitors out. In today's API-driven, agentic landscape, the strongest companies are instead building canals that connect disparate participants. A canal compounds value through flow: the more data, models, and workflows that route through it, the harder the system becomes to displace. We see this across the stack, from Nvidia acting as a canal for intelligence to Palantir and Salesforce sitting in the flow of enterprise decisions. Advantage now comes from being the essential routing infrastructure rather than an isolated fortress.
- Read more
- The Harness Is the Backend — Mike Piccolo
- Why read: A technical exploration of how AI agent infrastructure must evolve to merge natively with traditional backend systems.
- Summary: Current AI architectures treat the "agent harness"—the orchestration loop, tools, and memory—as a separate layer distinct from the traditional backend of queues, databases, and HTTP routing. This artificial separation forces developers to debug highly stochastic LLM behaviors across disjointed deterministic systems, causing exponential increases in complexity. Frameworks like LangGraph encode rigid logic, while Anthropic relies on thin loops, but both still treat the agent as an external entity triggering backend events. The next evolution of infrastructure requires deconstructing the traditional backend to natively support and trace stochastic agent operations. True reliability will only emerge when agentic workflows are seamlessly integrated into the core execution infrastructure.
- Read more
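One way to read the argument: the LLM call should be just another step inside ordinary backend plumbing, covered by the same queue, retry, and tracing machinery as any other job. A toy sketch under that assumption, with `call_llm` as a deterministic stand-in for a real model call:

```python
import queue

def call_llm(task: str) -> str:
    # Stand-in for the stochastic step; deterministic here for the sketch.
    return f"handled:{task}"

def worker(jobs: "queue.Queue[str]", results: list) -> None:
    # An agent step as a plain queue consumer: same failure handling
    # and requeue semantics as any other backend job.
    while not jobs.empty():
        task = jobs.get()
        try:
            results.append(call_llm(task))
        except Exception:
            jobs.put(task)      # requeue like any failed job
        finally:
            jobs.task_done()

jobs = queue.Queue()
for t in ("scope_ticket", "draft_reply"):
    jobs.put(t)
out: list = []
worker(jobs, out)
print(out)  # ['handled:scope_ticket', 'handled:draft_reply']
```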
- Software Is Eating the World (But Actually This Time) — Siddharth
- Why read: An explanation of how AI is finally automating the core logic of human work rather than just the digital interfaces.
- Summary: For the past fifteen years, software successfully digitized interfaces—giving us apps for banking and dispatching—while humans continued to perform the actual cognitive labor behind those screens. AI inference is now actively consuming the work itself, transforming roles like claims adjusters and support reps into autonomous agent loops. These jobs are essentially state transitions and exception handling in a human costume: reviewing inputs, applying rules, and executing actions. Because the input can be captured digitally and the output is an API call or database update, these workflows are ripe for end-to-end automation. The limiting factor for deeper automation is no longer capability, but the speed and cost of verification in any given domain.
- Read more
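The "state transitions and exception handling in a human costume" framing can be made concrete. This claims-routing sketch uses invented fields and a made-up payout rule purely for illustration:

```python
RULES = {"max_auto_payout": 1_000}  # assumed business rule

def process_claim(claim: dict) -> str:
    # 1. Review inputs (digitally captured)
    if "amount" not in claim or "policy_id" not in claim:
        return "escalate:missing_fields"    # exception handling
    # 2. Apply rules
    if claim["amount"] <= RULES["max_auto_payout"]:
        # 3. Execute: in production, an API call or database update
        return "approve"
    return "escalate:manual_review"

print(process_claim({"policy_id": "P-17", "amount": 420}))    # approve
print(process_claim({"policy_id": "P-17", "amount": 5_000}))  # escalate:manual_review
```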
- Workday's Last Workday? — Joe Schmidt IV
- Why read: An analysis of why legacy enterprise software monoliths are uniquely vulnerable to the AI platform shift.
- Summary: Workday dominates the enterprise HRIS market not because users love it, but because its deep technical wiring and proprietary configuration layer make leaving nearly impossible. It originally won by riding the platform shift from on-premise client-server architectures to multi-tenant cloud subscriptions. Now, a new platform shift toward AI-native software threatens to unravel Workday's complex moat of certified consultants and convoluted workflows. Modern AI challengers can bypass proprietary integration studios and rigid reporting syntaxes by leveraging natural language and dynamic, agentic orchestration. The immense friction and frustration that currently lock customers into legacy systems will become the exact wedge AI-first competitors use to unseat them.
- Read more
- The Brave New World of AI Markets — Rachel Park
- Why read: A guide to properly sizing the Total Addressable Market (TAM) for AI by looking beyond current human labor costs.
- Summary: Sizing AI markets by simply mapping them to existing human labor spend relies on flawed assumptions about price and fixed output. Just as Uber unlocked a latent market vastly larger than the legacy taxi industry, AI will create entirely new categories of demand. AI disrupts the traditional Price x Quantity equation because the marginal cost of machine intelligence trends toward zero, unlike human labor burdened by taxes, benefits, and scarcity. As the cost per unit of work plummets, the quantity demanded will expand exponentially into use cases previously considered economically unviable. Analysts must build bottom-up models based on this abundant intelligence rather than artificially constraining AI to the boundaries of human limitations.
- Read more
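The Price x Quantity point can be illustrated with made-up numbers: if a task gets done only when its value exceeds its cost, then cutting the unit cost of work expands the quantity of tasks worth doing — demand that a labor-anchored TAM never counts:

```python
def tasks_worth_doing(cost, n=1_000_000, max_value=100.0):
    # Assume the values of n candidate tasks are spread uniformly
    # from $0 to max_value; only tasks worth more than `cost` happen.
    return round(n * (1 - cost / max_value))

human_cost_per_task = 50.0   # fully loaded labor cost (assumed)
ai_cost_per_task = 0.50      # marginal inference cost (assumed)

print(tasks_worth_doing(human_cost_per_task))  # 500000 tasks at human prices
print(tasks_worth_doing(ai_cost_per_task))     # 995000 tasks at AI prices
```

Even this toy (with linear, not exponential, demand) nearly doubles the quantity of economically viable work when the unit cost collapses.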
- How AI Actually Remembers (Full Guide) — Siddharth
- Why read: A deep dive into the mechanics of the KV cache and why agents silently drop critical context.
- Summary: Most memory failures in AI agents are not flaws in the RAG pipeline, but rather token-level evictions deep within the model's KV cache. As a transformer processes text, it caches Key and Value vectors for every token, a store that grows linearly with sequence length and quickly exhausts hardware memory budgets. When the cache fills up, the attention mechanism quietly drops tokens with low attention scores, permanently deleting that context from the model's working memory. Early solutions like StreamingLLM simply dropped older tokens, causing agents to forget crucial initial instructions during long tasks. Building reliable agents requires understanding that recency does not equal importance, and that managing the KV cache budget is a fundamental design constraint.
- Read more
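The linear growth is easy to quantify. The dimensions below are illustrative — loosely in the range of a ~7B model with grouped-query attention — not figures from the article:

```python
def kv_cache_bytes(n_tokens, n_layers=32, n_kv_heads=8, head_dim=128,
                   bytes_per_value=2):
    """KV cache size for a decoder-only transformer (fp16 values).
    2x: one Key and one Value vector cached per token, layer, and head."""
    return 2 * n_layers * n_tokens * n_kv_heads * head_dim * bytes_per_value

for tokens in (4_096, 32_768, 131_072):
    gib = kv_cache_bytes(tokens) / 2**30
    print(f"{tokens:>7} tokens -> {gib:5.2f} GiB per sequence")
```

At these assumed shapes every token costs 128 KiB of cache, so a 131K-token context consumes 16 GiB per sequence — which is why eviction happens at all.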
- Introducing Mesa: the most powerful filesystem ever built, designed specifically... — Oliver
- Why read: A look at new foundational infrastructure being built specifically to handle agentic data workflows.
- Summary: As enterprise AI agents scale, they outgrow ephemeral sandboxes and standard S3 buckets that cannot handle concurrent, autonomous file manipulation. Mesa has been introduced as a POSIX-compatible filesystem with native version control built specifically for the demands of AI agents. It allows agents to read and write files normally while automatically versioning, branching, and making every change rollback-able. The system features sparse materialization for massive document sets and fine-grained access controls to support secure, parallel agent operations. This represents a critical evolution in tooling, shifting from ephemeral chat histories to durable, auditable artifacts.
- Read more
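The version-on-every-write idea can be sketched as a toy copy-on-write store. This illustrates the concept only — it is not Mesa's actual API:

```python
import hashlib

class VersionedFile:
    """Toy sketch: every write creates an immutable version, so any
    agent change is rollback-able. Not Mesa's real interface."""
    def __init__(self):
        self._versions = []          # append-only history of contents

    def write(self, content: bytes) -> str:
        self._versions.append(content)
        return hashlib.sha256(content).hexdigest()[:12]  # version id

    def read(self, version: int = -1) -> bytes:
        return self._versions[version]

    def rollback(self, version: int) -> None:
        # Rolling back is itself a new write, preserving full history.
        self.write(self._versions[version])

f = VersionedFile()
f.write(b"draft 1")
f.write(b"agent overwrote this")
f.rollback(0)
print(f.read())  # b'draft 1'
```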
- Why I Gave Up on AI Meeting Notes — Taylor Pearson
- Why read: A thoughtful reflection on the hidden cognitive costs of automating seemingly mundane tasks like note-taking.
- Summary: While AI scribes offer obvious efficiency gains by eliminating administrative documentation, completely outsourcing note-taking can hollow out professional effectiveness. The act of writing notes is not just recording data; it forces crucial cognitive reflection, orientation, and synthesis of the information. When an AI perfectly captures a meeting, it removes the necessary friction required to internalize the outcomes and plan strategic next steps. Operators must distinguish between load-bearing friction that drives understanding and mere overhead that can be safely automated. The optimal use of AI involves a hybrid approach where machines capture the raw data, but humans actively engage in the summarization and contextualization.
- Read more
- Maker Taste — Jorn van Dijk
- Why read: A nuanced differentiation between consuming good design and the hard-earned intuition required to build it.
- Summary: Taste is not an innate trait; it is recognized by others over time and must be earned through exposure and execution. "Viewer taste" is developed by consuming products, movies, and art to understand what feels good. "Maker taste," however, is forged by grappling with shortcuts, trade-offs, and the invisible details that make software feel magical in use. While AI excels at generating multiple options and turning design into a frictionless process of picking, picking is merely an exercise in viewer taste. True maker taste requires understanding the problem deeply enough to know exactly what should exist, what to discard, and what is almost—but not quite—right.
- Read more
- A costume called conviction — Adam Shuaib
- Why read: An insider perspective on why venture capital firms struggle to back the polarizing founders who actually drive outsized returns.
- Summary: The venture capital industry brands itself on conviction, but institutional incentives overwhelmingly reward consensus investing. Truly fund-returning investments often feature unusual founders and unproven markets, requiring an investor to stake their reputation against the skepticism of their partners. However, the requirement to make every deal legible to LPs, partners, and associates acts as a tax that dilutes genuine conviction throughout the translation process. Consequently, funds often transition from underwriting bold bets to merely curating broadly recognized taste to protect their brand. The most iconic companies are routinely passed on because the metabolic and political cost of championing a polarizing founder is simply too high for most consensus-driven committees.
- Read more
Themes from yesterday
- The Shift from Software to Services: AI is fundamentally moving from selling productivity tools to executing the work itself, attacking legacy labor budgets and threatening traditional SaaS and service-based business models.
- The True Bottlenecks of Agentic AI: The primary challenges in AI adoption are no longer base model capabilities, but rather operating burdens, specialized backend architectures, KV cache limits, and novel security vulnerabilities.
- Redefining Defensibility and Value: Value capture in the AI era relies on building "canals" of workflow orchestration rather than traditional walled-garden moats, requiring new TAM calculations that account for abundant, near-zero marginal cost intelligence.
- The Human Cost of Frictionless Work: As AI absorbs cognitive tasks like note-taking and coding, operators must be careful not to automate away the "load-bearing friction" required for strategic orientation and the development of deep maker intuition.
