1. After “AI-First” Comes “AI-Only” — Daniel Schreiber

  • Why read: A provocative look at the transition from humans in the loop to humans as system stewards in an AI-driven organization.
  • Summary: The author, cofounder of Lemonade, argues that "AI-first" is just the beginning. While AI-first focuses on replacing humans in an existing org chart, "AI-only" reimagines the workflow entirely without humans in the execution loop. Humans shift to setting goals, values, and constraints rather than executing tasks. The current limitation is that AI workflows are still bottlenecked by the speed of human review and approval. Removing the "pilot" unlocks the true capabilities of the technology.
  • Read more

2. The Dossier: Where AI GTM Begins — Jordan Crawford

  • Why read: A crucial framework for structuring customer data before implementing any AI reasoning or go-to-market motions.
  • Summary: Most companies have fractured customer data spread across CRM, billing, product analytics, and support—systems that each tell a different story. The CRM is often wrongly treated as the source of truth, whereas actual customer actions and voice should sit higher in the trust hierarchy. A "customer dossier" is a single, sorted timeline per account that combines all systems and surfaces conflicts without trying to predict or score them. Building this unified, descriptive artifact is mandatory before layering on any AI reasoning models. Keeping it descriptive is the point: if scoring logic were baked into the dossier, a single prompt change in the scoring model would force a full rebuild of every one.
  • Read more
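The dossier idea above is easy to make concrete: merge each system's events into one chronologically sorted, purely descriptive timeline, with no scoring layer on top. A minimal sketch in Python — all field names and sample events are illustrative, not from the article:

```python
from datetime import datetime

# Hypothetical event records from three systems; field names are illustrative.
crm_events = [{"ts": "2024-03-01", "source": "crm", "fact": "stage=expansion"}]
billing_events = [{"ts": "2024-03-05", "source": "billing", "fact": "invoice 30 days overdue"}]
support_events = [{"ts": "2024-03-03", "source": "support", "fact": "3 escalations open"}]

def build_dossier(*event_streams):
    """Merge all systems into one sorted, descriptive timeline per account.

    No scoring or prediction happens here: the dossier just records what
    each system said and when, so conflicts stay visible side by side.
    """
    timeline = sorted(
        (e for stream in event_streams for e in stream),
        key=lambda e: datetime.fromisoformat(e["ts"]),
    )
    return [f'{e["ts"]} [{e["source"]}] {e["fact"]}' for e in timeline]

for line in build_dossier(crm_events, billing_events, support_events):
    print(line)
```

Because the artifact only describes, any downstream scoring prompt can change freely without touching the dossiers themselves.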

3. No harness, no moat — Pranjali Awasthi

  • Why read: A compelling argument that the actual product and competitive advantage of AI agents lies in the harness, not the underlying model.
  • Summary: In the age of capable foundation models, the model itself is no longer a durable moat. The harness—the environment, tool access, memory systems, and constraints you build around the model—determines how an agent behaves and feels. Outsourcing the harness to general-purpose wrappers means giving up the ability to dictate natural behaviors and trajectories specific to your domain. For an AI product to be excellent, builders must deeply customize the tools and the domain model exposed to the agent. Ultimately, the quality of your agent is a direct function of the harness you construct.
  • Read more
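To see what "owning the harness" means in practice: the agent can only act through whatever tools and constraints the builder registers, so those definitions are the product. A toy sketch — the `Harness` class and tool names are hypothetical, not any real framework's API:

```python
# Minimal sketch of a domain-specific harness: the agent sees only the
# tools the builder registers. All names here are illustrative.

class Harness:
    def __init__(self):
        self.tools = {}

    def tool(self, name, description):
        """Register a domain-specific tool the agent is allowed to call."""
        def wrap(fn):
            self.tools[name] = {"description": description, "fn": fn}
            return fn
        return wrap

    def call(self, name, **kwargs):
        """Execute a registered tool; anything unregistered is off-limits."""
        if name not in self.tools:
            raise PermissionError(f"tool '{name}' is not exposed to the agent")
        return self.tools[name]["fn"](**kwargs)

harness = Harness()

@harness.tool("lookup_policy", "Fetch a customer's policy by id")
def lookup_policy(policy_id: str) -> dict:
    return {"id": policy_id, "status": "active"}  # stub backend for the sketch

print(harness.call("lookup_policy", policy_id="P-123"))
```

A generic wrapper would expose generic tools; encoding your domain into the registry is where the moat lives.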

4. Agent Memory Engineering — Nicolas Bustamante

  • Why read: An inside look at why transferring memory between different AI agents fails and how memory architecture actually shapes agent behavior.
  • Summary: Memory is not just a file you can copy-paste between agents like Claude Code and Codex because models are post-trained specifically on their own harness's memory UI and file taxonomy. The way an agent reads, trusts, and applies memory is deeply fused with how it was trained to interpret that specific data structure. When transferring between agents, the bytes may land, but the behavioral discipline differs wildly. The winning memory architecture turns out to be surprisingly simple: LLMs, markdown files, and a bash tool. Ultimately, memory relies on strict operational discipline rather than complex vector databases or knowledge graphs.
  • Read more
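The "LLMs, markdown files, and a bash tool" architecture the article lands on can be sketched in a few lines: append dated notes to a single markdown file and recall them with plain substring search. The file name and note format are assumptions for illustration; the discipline lives in the convention, not the storage:

```python
from datetime import date
from pathlib import Path

MEMORY = Path("memory.md")  # one markdown file; path is illustrative

def remember(note: str) -> None:
    """Append a dated bullet to the memory file."""
    with MEMORY.open("a", encoding="utf-8") as f:
        f.write(f"- {date.today().isoformat()}: {note}\n")

def recall(keyword: str) -> list[str]:
    """Plain substring search; no vector database or knowledge graph."""
    if not MEMORY.exists():
        return []
    return [ln.strip() for ln in MEMORY.read_text(encoding="utf-8").splitlines()
            if keyword.lower() in ln.lower()]

remember("deploys must go through staging first")
print(recall("staging"))
```

The article's point is that this only works for an agent post-trained to read and trust that exact file convention—copying the bytes to a different agent doesn't copy the discipline.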

5. Everyone wants to be AI-pilled. Most Companies Are Still Level 1 — Ann Miura-Ko 🦖

  • Why read: A practical maturity model for evaluating how deeply an organization has actually integrated AI autonomy into its operations.
  • Summary: Just like autonomous vehicles, AI integration in companies exists on a spectrum of levels rather than a simple binary of being "AI-pilled." True AI nativeness requires shifting from personal productivity tools to systems where AI can see structured data, act on systems of record, and be extended by non-engineers. A company hasn't really transformed if its org chart, handoffs, and dependence on managers remain identical to past years. Real progress is measured by what the AI is allowed to see, do, and autonomously execute. Leaders must focus on building organizational autonomy rather than just deploying individual AI tools.
  • Read more

6. Keep Your People. Replace Your Software. — Sudheesh

  • Why read: A contrarian take on the classic build-vs-buy debate, arguing that AI makes custom enterprise software cheaper than licensing SaaS.
  • Summary: The cost of producing maintainable, well-architected enterprise software is approaching zero thanks to modern AI coding assistants. Because generating the code is now the easy part, the real value lies in a company's proprietary business logic—the rules, margins, and thresholds that live in the heads of your senior people. Instead of renting generic SaaS products built for the masses, companies can now affordably build custom systems that exactly match their unique operational soul. This shift turns the traditional build-versus-buy calculation completely on its head. Teams should prepare to replace their rented software with bespoke tools managed by their own top talent.
  • Read more

7. Manager Mode -> Founder Mode -> Native Mode — Brian Halligan

  • Why read: A quick checklist to determine if your company is actually operating as an AI-native organization.
  • Summary: Moving beyond just "Founder Mode," companies must now enter "Native Mode" to remain competitive in an AI-driven market. Signs you are not there yet include planning on an annual basis, maintaining traditional bell-curve compensation reviews, and treating budget allocation as equivalent to headcount allocation. If your CFO isn't stressing over API token bills and your internal systems aren't heavily accessed via AI, you are still operating with outdated legacy frameworks. True native mode requires restructuring your entire operational cadence around AI capabilities. Founders need to abandon rigid corporate structures in favor of fluid, AI-enabled execution.
  • Read more

8. Deep Dive: Where Value Accrues in the AI Stack — Chamath Palihapitiya

  • Why read: A high-level strategic mapping of the AI stack, modeled on the OSI layers, and where the defensible "fulcrum assets" are located.
  • Summary: The AI technology stack is forming across six distinct layers: infrastructure, chips, data, models, execution, and application. Value is highly concentrated at the bottom in infrastructure, with massive global bottlenecks in power, cooling, and critical minerals. At the chip layer, the stack forks into software AI—where intelligence costs are collapsing—and physical AI, which remains constrained by energy storage and actuation. Winning the next era of computing requires positioning companies at the non-obvious boundaries and chokepoints of this new stack. Investors and builders must identify these fulcrum assets to capture long-term compounding value.
  • Read more

9. AI Value Capture - The Shift To Model Labs — Daniel Nishball, Dylan Patel, Cheang Kang Wen

  • Why read: An economic analysis of how value capture in the AI ecosystem has rapidly shifted from hardware infrastructure to the model providers.
  • Summary: While 2023 and 2024 saw hardware infrastructure companies capture most of the AI boom's value, the pendulum has swung sharply toward AI model labs in 2026. Anthropic's ARR has exploded as gross margins widen and the cost of generating tokens plummets due to major hardware advancements. End users are experiencing massive ROI from consuming tokens, driving an insatiable demand for agentic AI workflows. The AI labs are currently reaping the financial rewards of this productivity bonanza while hardware pricing power begins to plateau. This shift marks a critical transition in where the ecosystem's ultimate value is accruing.
  • Read more

10. The technology sector in 2026 is the single most asymmetric... — Silicon Salvage

  • Why read: A sharp financial thesis on why the market's obsession with AI capex has created a massive mispricing in traditional software stocks.
  • Summary: The market is entirely focused on hyperscalers pouring hundreds of billions into AI data centers—a capex cycle that will likely outrun actual revenue and inevitably correct. Meanwhile, publicly traded, highly profitable software companies are trading at single-digit multiples of free cash flow because the market wrongly assumes AI will destroy them. In reality, these companies will embed AI to improve workflows and charge more while their sticky customer base seamlessly renews. This dynamic has created a heavily undervalued time-arbitrage play in traditional tech stocks. Investors willing to look past the AI hype cycle can find massive asymmetric upside at the bottom of the software market.
  • Read more

11. Stop Investing in AI. Start Investing in What AI Needs. — George Kikvadze

  • Why read: A clear-eyed thesis on how to invest in the physical bottlenecks constraining the AI boom.
  • Summary: The real AI opportunity lies not in picking the winning model, but in supplying the physical infrastructure that the broader system desperately lacks. With nearly half of US data center projects currently delayed due to power shortages, capital is misallocating into chips while the actual constraint has moved to the energy grid. Hyperscalers are on track to spend hundreds of billions, but their growth is completely bottlenecked by a lack of megawatts, transformers, and cooling capacity. Companies that can provide continuous, baseload power and critical grid solutions are the true, mispriced enablers of the AI boom. Smart investors should pivot their focus toward the physical dependencies required to keep the system running.
  • Read more

12. The Next Bloom Energy? - AI is no longer waiting for the grid — Nutty

  • Why read: Explores how data centers are bypassing slow grid upgrades by moving to localized, onsite power generation.
  • Summary: Upgrading the global power grid to support AI data centers will take years, forcing companies to find immediate solutions to energy bottlenecks. The narrative is rapidly shifting from long-term hopes for Small Modular Reactors (SMRs) to faster, deployable onsite power solutions like fuel cells. The most critical question in AI infrastructure right now is simply "who can turn the power on first?" Companies that offer onsite, fast-deploying energy are moving to the center of the AI conversation. Operators must evaluate infrastructure investments based on proven backlog and deployment speed rather than theoretical gigawatt announcements.
  • Read more

13. Palantir may be actually the best illustration of the issues... — Steve Hou

  • Why read: A reality check on why enterprise AI adoption is struggling to show ROI and where the real opportunity lies.
  • Summary: Despite millions spent on AI software licenses and token usage, most large enterprises have seen zero impact on their actual day-to-day operations. The bottleneck isn't the AI models—which are now highly capable—but the immense friction of integrating AI into complex, legacy corporate workflows. Leadership teams buzz about going "AI-first," yet core business metrics and operational speeds remain entirely unchanged. True enterprise value will only be captured by AI-native solutions that act as robust harnesses to do real work within existing systems of record. Vendors must focus on seamless workflow integration rather than simply dropping raw models into enterprise environments.
  • Read more

14. Working on vs. Solving — Anastasia Gamick

  • Why read: A vital mental model for operators and founders on the difference between making progress and actually reaching a finish line.
  • Summary: Most organizations and philanthropic efforts are designed to simply "work on" a problem indefinitely, self-perpetuating their own existence through ongoing research and grants. Working on a problem means doing great work within a dynamic system; solving it means systematically dismantling the issue until it no longer exists. Solving therefore requires a fundamentally different operational structure: picking a specific, measurable finish line and completely owning the outcome. The world needs more "general managers" who feel personally responsible for delivering a final result rather than just doing excellent work within the problem space. Operators must honestly assess whether their daily work is driving toward a definitive conclusion or just managing the status quo.
  • Read more

15. Emotional Churn — Ibrahim Bashir from Run the Business

  • Why read: Identifies a silent killer in B2B SaaS products where users are psychologically checked out before their contract ends.
  • Summary: Emotional churn happens when users remain active on paper but have entirely disengaged from a product's core value, often shopping for alternatives in the background. Dashboards might show healthy logins, masking deep-rooted issues like poor onboarding, workflow friction, or integration gaps. To combat this silent killer, operators must obsess over time-to-value and actively re-onboard disengaged users. Closely monitor core workflows for silence, as a complete lack of user feedback or support tickets is a massive warning sign. Building proactive channels to empower power users can reverse emotional churn before it translates into a lost contract.
  • Read more

Themes from yesterday

  • The Shift from Models to Harnesses: Competitive advantage in AI is moving away from foundation models toward customized harnesses, memory systems, and domain-specific workflow integrations.
  • AI's Physical Bottlenecks: The true constraints on AI scaling are no longer compute, but physical infrastructure—specifically localized power generation, grid capacity, and hardware cooling.
  • Enterprise AI Reality Check: Despite the hype, most companies are still at "Level 1" of AI adoption, struggling to translate raw model capabilities into autonomous, ROI-generating business processes.
  • The End of Generic SaaS: The collapsing cost of software creation through AI is making custom, logic-driven internal tools more viable and cost-effective than renting standard enterprise software.