#1 The part of Project Glasswing nobody is really talking about — Ben Pouladian
- Why read: A compelling thesis on why Anthropic's new Mythos model is highly restricted: a severe lack of compute.
- Summary: Anthropic recently unveiled Project Glasswing and the highly capable Claude Mythos model, but restricted its access to a select group of partners. The author argues this isn't just a product strategy, but a glaring admission of compute constraints, as serving a model this token-hungry at scale is currently unaffordable. Every new frontier capability is now gated by power, HBM, and silicon, shifting the AI economy's unit of account from dollars to joules and bandwidth. The gap between what labs have internally and what they can publicly serve is widening, creating massive structural advantages for early enterprise partners.
- Link: https://twitter.com/benitoz/status/2041604165911376078/?rw_tt_thread=True
#2 Emerging from the Mythos — Tomasz Tunguz
- Why read: Explores how AI's new collateral capability of autonomously finding decades-old security flaws will invert software security postures.
- Summary: Anthropic's massive new 10-trillion-parameter model, Mythos, discovered a 27-year-old vulnerability in OpenBSD purely as a downstream consequence of general improvements in reasoning. This capability fundamentally changes enterprise security, as systems not protected by this level of analysis will become porous by default. The competitive advantage moves from reselling GPU hours to having exclusive access to these extreme hardening capabilities. Ultimately, a large portion of software engineering budgets will shift toward security, as buyers begin to demand this new standard of enterprise-grade resilience.
- Link: https://www.tomtunguz.com/mythos-glasswing/
#3 How to get your company AI pilled — Geoff Charles
- Why read: A tactical playbook on how Ramp achieved a 6,300% increase in internal AI usage and transformed their engineering velocity.
- Summary: Instead of overthinking AI strategy with formal change management, Ramp focused on embedding AI into their core culture of velocity. They built an internal Claude-powered agent called Glass, hosted a massive 700-person company-wide hackathon, and set explicit AI-usage expectations from leadership down. By treating AI proficiency as a learning curve and removing friction, non-engineers now account for 12% of all human-initiated pull requests. The key takeaway is to build the infrastructure for employees to teach themselves, driving compounding productivity gains across the entire organization.
- Link: https://twitter.com/geoffintech/status/2042002590758572377/?rw_tt_thread=True
#4 Where Are You in the Context Supply Chain? — Educated Guess
- Why read: A sharp organizational framework that explains why so many white-collar jobs are vulnerable to AI replacement.
- Summary: Every business operates on a "context chain" made up of Innovators (creators), Marketers (sellers), and Context Carriers (everyone in between). The carriers exist solely to repackage and transport context from one department to another, representing a massive "coordination tax" for the company. As AI becomes highly adept at summarizing, formatting, and coordinating information, businesses will inevitably ask why they are still paying this tax. Operators must recognize whether their role produces actual results or merely carries context, as the latter is rapidly becoming automated.
- Link: https://educatedguesser.substack.com/p/where-are-you-in-the-context-supply
#5 AI Adoption by the Numbers: Where Enterprise AI is Actually Working — Kimberly Tan
- Why read: Hard data cutting through the hype to show exactly where and how Fortune 500 companies are deploying AI.
- Summary: Contrary to reports claiming high failure rates for AI pilots, actual data shows that 29% of the Fortune 500 and 19% of the Global 2000 are already live, paying customers of leading AI startups. This represents an unprecedented speed of enterprise penetration, driven heavily by use cases in coding, customer support, and search. The tech, legal, and healthcare sectors are currently leading this adoption curve. This data provides a clear roadmap for founders and operators on which use cases are actually crossing the chasm into widespread enterprise ROI.
- Link: https://twitter.com/kimberlywtan/status/2041896368877531158/?rw_tt_thread=True
#6 *how to predict virality* — fckgrowth
- Why read: A fascinating look at using open-source neuro-simulation models to optimize video editing for human attention.
- Summary: Meta’s FAIR team open-sourced TRIBE v2, a model trained on over 1,000 hours of fMRI brain scans that can predict neural engagement to video and audio. A creator fed a video draft into the model, identified exactly where human brain activity spiked or flatlined, and re-edited the pacing to keep neural response elevated. The optimized video garnered over 220,000 views, demonstrating how biological prediction tools can directly drive engagement. This signals a wild new frontier for content creation where performance is simulated and optimized before publishing.
- Link: https://twitter.com/fuckgrowth/status/2041580077826371733/?rw_tt_thread=True
#7 On the Political Economy of Language Models — Will Manidis
- Why read: A macro-level analysis of how AI automation is reshaping political coalitions by threatening administrative support jobs.
- Summary: The largest occupational group in the US economy consists of millions of administrative and office support workers who form a significant portion of the non-college working class. Projections indicate this category will face sharp employment declines due to the integration of AI and automation into corporate workflows. This economic pressure is creating an unusual alignment of interests between the capital class and the labor class against the credentialed professional managerial class and regulatory state. Understanding these structural labor shifts is crucial for anticipating the downstream political and regulatory environments for AI.
- Link: https://twitter.com/WillManidis/status/2041855464904827006/?rw_tt_thread=True
#8 I joined DeepMind in 2017 and I remember in my... — Andrew Trask
- Why read: A great historical lesson on how geographic positioning can create a massive structural moat for talent acquisition.
- Summary: When DeepMind was acquired by Google, Demis Hassabis made the brilliant strategic decision to keep the lab in London rather than relocating to Silicon Valley. This created a 5-to-7 year window where DeepMind faced virtually zero competition for top European AI researchers who wanted to work at a major lab without moving 5,000 miles. By zagging against conventional wisdom, DeepMind secured an incredible talent monopoly at a discount, leading to extremely high retention and foundational breakthroughs. It's a reminder that talent strategy can be just as critical as compute or data advantages.
- Link: https://twitter.com/iamtrask/status/2042021297627262979/?rw_tt_thread=True
#9 GTM Weekly #3: The Opt-Out POC — Work-Bench
- Why read: A highly actionable sales tactic that drastically improves enterprise software conversion rates.
- Summary: Instead of offering standard opt-in free trials that attract uncommitted buyers, B2B startups should switch to an opt-out Proof of Concept (POC) model. This requires prospects to sign the full contract upfront with a 90-day free evaluation period that automatically converts to paid unless explicitly cancelled. This structure filters out tire-kickers who refuse to do the procurement work upfront, saving months of false pipeline. It also shifts the psychological default from having to convince the customer to stay, to forcing them to actively decide to leave.
- Link: mailto:reader-forwarded-email/0a58b6bb338ccddaf3fbda0475c5a268
#10 the best and simplest engineering advice ive ever gotten was... — Tyler Angert
- Why read: A sharp reminder that the best way to speed up software is to fundamentally eliminate unnecessary work.
- Summary: When building complex systems, engineers often default to processing everything upfront, leading to sluggish onboarding and poor user experiences. The most effective optimization strategy is simply "not doing the work" by determining the minimum viable compute needed to be useful. For example, instead of processing a user's entire 100,000-photo library, an app can process the first 1,000 photos in seconds to provide immediate value, handling the rest asynchronously. Prioritizing immediate signal over exhaustive processing is a key principle for building fast, AI-native products.
- Link: https://twitter.com/tylerangert/status/2041929514960216396/?rw_tt_thread=True
Themes from yesterday
- The Physical Limits of Frontier AI: New models like Anthropic's Mythos are proving so capable and token-hungry that inference is now bottlenecked by raw compute, shifting the strategic landscape toward extreme hardware and power constraints.
- AI's Attack on the "Coordination Tax": White-collar administrative and middle-management roles are being explicitly identified as massive friction points in the corporate "context chain," making them primary targets for the next wave of AI automation.
- Enterprise Integration Moves to Production: Companies are pushing past theoretical AI strategies to practical adoption—using internal tools and culture shifts to drive massive productivity gains, with nearly a third of the Fortune 500 already utilizing leading AI startups in production.
- Predictive Media & Neuromarketing: The release of open-source neuro-simulation models like Meta's TRIBE v2 is allowing creators to map content pacing directly to predicted biological brain activity, unlocking a wild new frontier for audience engagement.
