New post w/ random thoughts on AI (thread) — Elad Gil
Why read: A high-level strategic look at how AI is actively reshaping company headcounts, turning compute into a currency, and changing the value of artisanal engineering.
Summary: AI revenue is likely to soon account for 1-2% of US GDP, but its impact will be felt profoundly inside organizations where compute budgets may become as important as financial ones. Traditional corporate growth will no longer correlate directly with headcount growth as companies use AI to hold team sizes flat while scaling revenue. This shift will particularly affect "artisanal" software engineers, who may find less satisfaction as AI generates the bulk of the code, shifting the premium toward systems and product thinkers. Furthermore, the actual value of AI will be in selling labor units rather than software seats, unlocking drastically larger total addressable markets for startups. Overall, these macro and micro shifts suggest an impending and highly disruptive transition period for both tech culture and global economies.
🎙️ This week on How I AI: How Intercom 2x’d their engineering velocity with Claude Code — Lenny's Newsletter
Why read: An empirical case study on how a mature engineering organization successfully doubled its throughput using AI without sacrificing code quality.
Summary: Intercom managed to double their merged PRs per R&D employee in just nine months by treating their engineering organization like a product and fully instrumenting the adoption of Claude Code. They didn't just deploy AI; they built custom guardrails like a "Create PR" skill that forces the AI to write context-rich descriptions instead of merely regurgitating code. Crucially, they realized that AI only magnifies existing strengths and weaknesses, meaning high-trust cultures with mature CI/CD pipelines and comprehensive test coverage are prerequisites for success. The velocity gains actually improved code quality because the time saved allowed engineers to finally tackle technical debt and fix flaky tests. This demonstrates that scaling AI adoption requires deep telemetry and a solid engineering foundation, not just a tool rollout.
Why read: A counterintuitive breakdown of how newer, smarter AI models are becoming cheaper per task despite higher per-token costs.
Summary: As AI models like Claude Opus 4.5 get smarter, they can solve complex problems in significantly fewer steps, reducing redundant exploration and verbose reasoning. Even though Opus 4.5 costs 67% more per token than Sonnet, it uses 76% fewer tokens to achieve the same outcome, effectively lowering the overall cost of the task. However, the introduction of new tokenizers breaks text into smaller pieces, forcing the model to pay closer attention to detail and improving accuracy at the expense of generating more tokens. This sets up a fascinating economic tradeoff: smarter models will inherently require fewer conversational turns, but the increased token density for higher accuracy means generating those turns will cost more. Ultimately, developers must optimize for task completion efficiency rather than just looking at base API pricing.
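The per-task arithmetic in the summary is worth making explicit. A minimal sketch, using only the two ratios cited above (67% higher per-token price, 76% fewer tokens); the baseline price and token count are hypothetical placeholders:

```python
# Illustrative task-cost comparison using the article's two ratios.
# Only the "+67% price" and "-76% tokens" figures come from the piece;
# the baseline numbers below are arbitrary placeholders.

def task_cost(price_per_token: float, tokens_used: float) -> float:
    """Total cost of completing one task."""
    return price_per_token * tokens_used

sonnet_price = 1.0           # normalized baseline price per token
sonnet_tokens = 100_000      # hypothetical tokens to finish the task

opus_price = sonnet_price * 1.67     # 67% more per token
opus_tokens = sonnet_tokens * 0.24   # 76% fewer tokens, same outcome

sonnet_cost = task_cost(sonnet_price, sonnet_tokens)
opus_cost = task_cost(opus_price, opus_tokens)

# 1.67 * 0.24 ≈ 0.40: the pricier model is ~60% cheaper per task.
print(f"Opus/Sonnet cost ratio: {opus_cost / sonnet_cost:.2f}")
```

This is the core of the "optimize for task completion, not base API pricing" argument: the per-token premium is swamped by the reduction in tokens needed.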
Why read: A critical analysis of why the traditional seed investing model is breaking down in the face of exploding AI valuations and cap table mechanics.
Summary: The 95th percentile for seed-stage post-money valuations has surged from $65M in early 2022 to over $173M in 2026, fundamentally altering venture capital mathematics. Even when early investors hit a generational outlier like OpenAI, the massive capital requirements and subsequent dilution compress returns—angel investors in OpenAI saw a 140x return on an $852B outcome, which is insufficient to carry a standard venture fund. The era of the 1000x seed return is likely over, not for lack of great companies, but because capital structures and massive funding rounds place a hard ceiling on early-stage multiples. Instead of retreating to lower-valuation pre-seed deals, investors need to internalize this new ceiling and adjust their portfolio construction and underwriting models. This structural shift requires operators and founders to understand that the bar for acceptable capital efficiency is higher than ever.
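The claim that even a 140x return "is insufficient to carry a standard venture fund" follows from simple fund math. A back-of-the-envelope sketch; the fund size and check size below are hypothetical illustrations, while the 140x multiple is the figure cited above:

```python
# Why a 140x seed return can still fail to "return the fund."
# Fund size and check size are hypothetical; 140x is from the article.

fund_size = 500e6    # hypothetical $500M fund
check_size = 2e6     # hypothetical $2M check into the generational outlier
multiple = 140       # the article's cited angel return on the $852B outcome

proceeds = check_size * multiple       # dollars back from the one winner
fund_coverage = proceeds / fund_size   # fraction of the fund returned

print(f"Proceeds: ${proceeds / 1e6:.0f}M")
print(f"Fund coverage from the outlier alone: {fund_coverage:.2f}x")
```

Under these assumed numbers the single best-case outcome returns roughly half the fund, far short of the 3x-plus a venture fund typically targets across its whole portfolio — which is the structural ceiling the article describes.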
THE LEISURE INFRASTRUCTURE PLAYBOOK — moneyfetishist
Why read: A compelling thesis on where the economic surplus generated by AI productivity will flow, pointing toward fixed-supply physical leisure assets.
Summary: Drawing parallels to Keynes's 1930 prediction about productivity solving the economic problem, this analysis argues that humanity will consume the AI-generated surplus rather than working fewer hours. Just as previous technological revolutions drove urbanization and consumer electronics, the current AI boom will funnel excess capital into physical experiences and status-signaling assets. The true beneficiaries of this wealth transfer won't just be software platforms, but irreplaceable physical assets like members clubs, marinas, and wellness infrastructure built on fixed-supply land. Because these assets cannot be replicated or competed away by AI, they represent the ultimate defensive investment in an age of digital abundance. Operators and investors should look beyond digital agents and focus on building integrated leisure platforms that capture the massive upcoming wave of physical consumption.
After advising 50+ consumer companies over the last year, the... — Nikita Bier
Why read: A stark reminder that successful product development relies on the immediate, high-resolution visualization of ideas, necessitating a full-time designer.
Summary: Despite raising millions or even billions, many consumer companies hamstring their execution by failing to keep a full-time designer in the room during product discussions. Without real-time visualization, teams cannot have constructive debates about ideas, leading to vague alignment and poor execution. Consequently, experiments often yield inconclusive results simply because users can't discover features or misunderstand fundamental navigation and copywriting. The role isn't just about making things pretty; it is about shepherding the fundamental building blocks of product comprehension so that the team can rally around a tangible vision. Ultimately, products live and die in the pixels the user interacts with, and relying entirely on non-visual thinkers to dictate user experience is a fast track to failure.
Topology aware GPU compute | Composable and distributed systems study group — Yak Collective
Why read: A reality check on the physical and architectural constraints of deploying AI at scale, moving beyond simple software-layer intuitions.
Summary: Discussions around AI compute often ignore the brutal realities of data center engineering, hardware allocation, and power usage effectiveness (PUE). Deploying large open-source models isn't just about securing GPUs; it requires deep understanding of interconnects, cooling, and the massive energy overheads that differentiate a hyperscaler from a sloppy enterprise deployment. The useful compute extracted from a gigawatt-scale facility depends entirely on how the infrastructure is organized, making heterogeneous setups highly inefficient compared to homogeneous, specialized architectures. As AI transitions into a nation-state level investment, operators must reason about constraints that exist several abstraction layers below what application developers typically see. Understanding these hardware and energy topologies is critical for accurately forecasting deployment costs and system performance.
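The PUE point can be made concrete with the standard definition: PUE is total facility power divided by power delivered to IT equipment, so useful compute power is facility power divided by PUE. A sketch under assumed numbers (the gigawatt facility size and the two PUE values are illustrative, not from the talk):

```python
# How PUE changes the useful compute extracted from a fixed power envelope.
# Facility size and PUE values are hypothetical illustrations.

def it_power(facility_mw: float, pue: float) -> float:
    """IT (compute) power available from a facility.

    PUE = total facility power / IT equipment power,
    so IT power = facility power / PUE.
    """
    return facility_mw / pue

facility_mw = 1000.0  # a gigawatt-scale site

hyperscaler = it_power(facility_mw, pue=1.2)  # tight, homogeneous build
enterprise = it_power(facility_mw, pue=1.8)   # sloppier deployment

print(f"Hyperscaler IT power: {hyperscaler:.0f} MW")
print(f"Enterprise IT power:  {enterprise:.0f} MW")
```

Same gigawatt of grid power, roughly 280 MW less compute in the sloppy deployment — before interconnect topology and allocation inefficiencies are even considered.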
[AINews] Moonshot Kimi K2.6: the world's leading Open Model refreshes to catch up to Opus 4.6 (ahead of DeepSeek v… — AINews
Why read: An update on the aggressive pace of open model development in China, showcasing Moonshot's ability to compete directly with frontier models.
Summary: Moonshot's Kimi K2.6 demonstrates staggering progress in the open model ecosystem, solidifying its position as the leading Chinese AI lab and directly challenging top-tier Western models. The K2.6 release shows a massive leap in capability over just three months, pushing beyond merely mimicking frontier models to actually beating Gemini 3.1 Pro in frontend design benchmarks. Furthermore, they are aggressively scaling out their Agent Swarm RL capabilities, now rebranded as "Claw Groups," showcasing real imagination in post-training techniques. This rapid iteration cycle highlights that the competitive moat for Western labs is continually shrinking as international players execute with relentless drive. For operators, this signals that the baseline for open-source AI capabilities is rising faster than anticipated, unlocking new possibilities for local and agentic deployments.
Change Management Requires Surround Sound — Ibrahim Bashir from Run the Business
Why read: A practical framework for leaders on why strategic shifts fail to land and how to effectively drive organizational change.
Summary: Leaders frequently confuse the announcement of a strategy with its successful integration, assuming a single all-hands presentation is sufficient to change behavior. In reality, landing a strategic shift requires a sustained, multi-channel campaign—"surround sound"—that reaches the organization from various angles until the new strategy becomes the default way of working. This means translating the core message across different altitudes and modalities, as the implications of a shift for a CEO are vastly different than for an engineering manager or an individual contributor. If the strategy isn't actively reflected in prioritization criteria, hiring decisions, and daily product reviews, teams will naturally default to their old habits. Effective change management demands treating internal communication with the same rigor and repetition as a multifaceted marketing campaign.
Agents are actors — Gordon Brander from Squishy Computer
Why read: A clarifying mental model that maps the chaotic landscape of multi-agent AI systems back to the battle-tested principles of the actor model in computer science.
Summary: As the industry debates how AI agents should coordinate—whether as swarms, hierarchies, or unified large models—the most robust framework is simply treating agents as actors. In the actor model, an entity receives messages, updates its internal state, spawns other actors, and generates responses, which perfectly describes the lifecycle of an AI agent. This approach mirrors biological systems, where cells encapsulate their environment behind a membrane and communicate strictly through signaling, thereby managing massive complexity. By adopting this Object-Oriented, late-binding philosophy, developers can design more resilient and modular multi-agent architectures. Understanding agents through the lens of actor-based messaging provides a practical blueprint for building scalable, cooperative AI systems without overcomplicating the coordination layer.
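The actor lifecycle described above — receive a message, update private state, message other actors — can be sketched in a few lines. This is an illustrative toy, not code from the article; the actor names and message shapes are invented:

```python
# Minimal actor-model sketch: each actor owns private state, drains a
# mailbox one message at a time, and interacts only via message passing.
# Names and behaviors are illustrative.
from collections import deque

class Actor:
    def __init__(self, name: str):
        self.name = name
        self.mailbox = deque()  # messages waiting to be processed
        self.state = {}         # private state, never touched from outside

    def send(self, message: dict) -> None:
        self.mailbox.append(message)

    def step(self, system: dict) -> None:
        """Process one message: update state, maybe message other actors."""
        if not self.mailbox:
            return
        msg = self.mailbox.popleft()
        self.state["seen"] = self.state.get("seen", 0) + 1
        reply_to = msg.get("reply_to")
        if reply_to in system:
            system[reply_to].send({"from": self.name, "ack": msg["body"]})

# A two-actor exchange: a planner delegates, a worker acknowledges.
system = {"planner": Actor("planner"), "worker": Actor("worker")}
system["worker"].send({"body": "summarize repo", "reply_to": "planner"})
system["worker"].step(system)   # worker handles the task, acks the planner
system["planner"].step(system)  # planner handles the ack
print(system["planner"].state)
```

The membrane analogy maps directly: `state` is the encapsulated interior, `send` is the only signaling channel across it, which is what makes the composition scale without shared-state coordination headaches.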
I’ve become obsessed with the idea of “talent engineering” as... — David Booth
Why read: A strategic reframing of the recruiting function into a systemic, engineering-driven process powered by AI agents and personalization.
Summary: The role of recruiting is undergoing a transformation similar to what sales development experienced five years ago, shifting from manual relationship-building to systematic "talent engineering." This new discipline treats the candidate funnel like a growth engineering problem, utilizing public and private signals for discoverability and leveraging AI for extreme personalization at scale. Talent engineers focus on structured assessment and high-conversion closing tactics by deploying hyper-targeted landing pages, voice agents, and custom outreach to attract top-tier talent. This approach replaces sprawling, manual CRM curation with agentic systems designed to identify and capture the exact MVP hires a startup needs. Operators should view talent acquisition not as an HR function, but as a critical, automated growth engine that demands deep technical and systemic thinking.
Are You Ready to go "from Hierarchy to Intelligence"? — Boundaryless - Platform Design & Orgs
Why read: An exploration of how AGI and advanced coding assistants are fundamentally dissolving coordination costs and reshaping organizational design.
Summary: The arrival of highly capable AI is not just changing software development; it is collapsing the coordination costs that have traditionally justified strict corporate hierarchies. As Jack Dorsey recently noted, this technological shift is pushing firms away from top-down management toward "intelligence," effectively turning platform organization concepts like Rendanheyi into unavoidable realities. With AI capable of abstracting the complexities of organizing work, companies must transition toward AI-native operations, portfolio rationalization, and scaling without the burden of bureaucracy. This requires a profound reimagining of how accountability is structured and how new markets are opened when small, autonomous nodes can execute at the scale of large departments. Leaders must aggressively adopt these decentralized, intelligence-driven operating models to remain competitive in a rapidly flattening corporate landscape.
Ad-Based Platforms Have Mostly Solved Optimal Taxation — Byrne @ The Diff
Why read: A fascinating economic perspective comparing the business models of ad-based tech giants to theoretical optimums in taxation and public goods funding.
Summary: Economists generally agree that the ideal tax system minimizes behavioral distortion while funding public goods, yet technocratic tax policies are notoriously difficult to implement politically. Interestingly, massive ad-based platforms operate similarly to an optimal tax engine: they levy a consumption tax on attention, offer credits for producing positive externalities (like high-quality content), and reinvest the proceeds into making incredibly complex services universally free. This model successfully captures and redistributes output without relying on traditional income or property taxes, effectively subsidizing global digital infrastructure. By framing monetization as a form of optimal taxation, operators can better understand the immense leverage and societal integration that ad-supported ecosystems command. It also highlights the structural brilliance behind why these platforms have become some of the most profitable and indispensable entities in history.
🔬 Training Transformers to solve 95% failure rate of Cancer Trials — Ron Alfa & Daniel Bear, Noetik — Latent.Space
Why read: A look into how autoregressive transformers are being deployed to solve massive biological matching problems, unlocking value in existing medical treatments.
Summary: A staggering 95% of cancer treatments fail in clinical trials, but this high failure rate is largely a matching problem rather than a lack of effective drugs. Cancer is not a single disease but thousands of unique biological variations, meaning success relies heavily on pairing the right patient tumor with the right existing treatment. Noetik is utilizing autoregressive transformers, like their TARIO-2 model, to deeply analyze cellular data and solve this intricate mapping challenge, prompting a $50M licensing deal with GSK. By better predicting which tumors will respond to which interventions, AI can dramatically improve trial success rates and save millions of lives without necessarily needing to invent new drugs from scratch. This highlights a profound shift in biotech, where AI's greatest immediate impact lies in hyper-personalized disease mapping rather than purely novel drug discovery.
Why read: A sobering cultural critique on why public and insider sentiment around AI is turning negative, highlighting massive strategic missteps by major labs.
Summary: There has been a systemic failure to craft a compelling, positive narrative about the future of AI, leaving the public to rely on dystopian sci-fi and current economic anxieties. This fear has even permeated tech culture, where a pervasive meme of rapid human capital depreciation is causing intense status panic and a rush to cash out before automation takes over. The major AI labs have exacerbated this volatility through confusing rhetoric, massive private valuations that exclude public participation, and tone-deaf infrastructure buildouts that actively antagonize local communities. By allowing venture capital to capture all the upside while socializing the infrastructural friction, the industry has provoked a generationally bad cultural backlash. If the tech community itself does not viscerally believe in a positive future, it will be impossible to prevent severe societal and regulatory resistance.
The decoupling of growth and headcount: AI is fundamentally shifting how companies scale, keeping team sizes flat while revenue grows and reframing software from selling "seats" to selling "labor units."
Capital and physical reality checks: The limits of AI are increasingly physical and financial, from data center topology constraints and power efficiency needs to seed valuations hitting hard mathematical ceilings.
Workflow integration over raw capability: Focus is shifting toward how AI fits into real systems, emphasizing token-efficient task completion, actor-based agent architectures, and strict engineering guardrails to achieve massive velocity gains.
Cultural and organizational backlash: Waning AI optimism stems from poor narrative control and infrastructural friction, forcing a shift away from traditional structures toward decentralized, intelligence-driven operating models.