The Daily AI Show: Issue #96

"The Old 96er" John Candy approves of this newsletter

Welcome to Issue #96

Coming Up:

DeepMind’s AlphaFold Offers a Blueprint for Transformational AI

NASA’s Two-Track AI Strategy for Deep Space

Why Developers Are Suddenly Counting Tokens Again

Plus, we discuss why a public wealth fund might not be the answer, how Oxford and AI are saving lives, and all the news we found interesting this week.

It’s Sunday morning.

It’s newsletter #96.

One day you will tell your kids the tale of this newsletter.

The DAS Crew

Our Top AI Topics This Week

DeepMind’s AlphaFold Offers a Blueprint for Transformational AI

The strongest AI products in science look like shared research infrastructure. AlphaFold started as a breakthrough in protein prediction and matured into a public resource with open access to more than 200 million predicted protein structures. EMBL says the AlphaFold Database now serves more than 3.4 million users across 190 countries, which is a far more important signal than any model leaderboard.

That scale changes the economics of biology. Protein structure work once demanded long timelines, specialized equipment, and a narrow set of labs with the money and expertise to do it well. AlphaFold turned a hard bottleneck into a searchable layer of scientific infrastructure that researchers can query before they spend months on experiments. The impact reached a level that the Nobel committee recognized in 2024, awarding the chemistry prize in part to Demis Hassabis and John Jumper for protein structure prediction.

The management lesson is just as important as the scientific one. DeepMind’s recent comments about regaining speed by acting more like a startup inside Google point to a structure that many large companies still fail to build. Scientific AI needs patient capital, dense technical talent, and large compute budgets. But it also needs enough operational freedom to move quickly once a research bet starts working. AlphaFold mattered because a frontier lab had the resources to solve the problem and committed the institutional backing to package the result for broad use.

That packaging work continues. In March, EMBL, Google DeepMind, NVIDIA, and Seoul National University added millions of AI-predicted protein complex structures to the AlphaFold Database, with a stated focus on proteins tied to human health and disease. This is how AI becomes durable in science. A model generates headlines once. A maintained platform keeps creating value for years as new data, new users, and new use cases accumulate on top of it.

The remaining constraint is biology itself. AlphaFold 3 expanded the system from proteins to complexes that include nucleic acids, small molecules, ions, and modified residues, and the team reported major gains in predicting molecular interactions. That progress helps explain why Isomorphic Labs raised $600 million in 2025 to push AI-driven drug discovery forward. At the same time, the company said in January 2026 that its first clinical trials are now expected by the end of 2026. The model cycle moves fast. Drug development still moves on clinical time.

The broader takeaway for the AI industry is that the firms which shape science will be the ones that can turn research wins into usable systems, keep those systems open or widely accessible, and fund them long enough to matter. AlphaFold offers a blueprint for that work. It shows how an AI lab can create lasting value when it becomes part of the operating infrastructure of research itself.

NASA’s Two-Track AI Strategy for Deep Space

When Orion slipped behind the Moon on April 6, Houston went quiet for about 40 minutes. The blackout was planned, yet it captured the engineering problem that will shape AI in space far more than any chatbot demo. Artemis II had to keep flying while Earth could neither talk to the spacecraft nor hear from it. Four days later, the crew splashed down safely off California after a 10-day mission that pushed Orion 252,756 miles from Earth, farther than any humans had traveled before.

NASA has settled on a two-track playbook for that problem. Crewed missions get tightly bounded spacecraft autonomy. Orion’s designers built the spacecraft around a requirement to return astronauts home safely even with a permanent loss of communications. NASA engineering documents describe two core capabilities behind that approach: optical navigation and onboard targeting and burn execution. Artemis I certified the first fully autonomous optical navigation capability for Orion, and Artemis II carried that architecture into a crewed flight. This is aerospace autonomy in its pure form, built around redundancy, verification, and software behavior that engineers can explain line by line.

Generative AI is getting a very different assignment for autonomous exploration vehicles. In January, JPL said the Perseverance Mars rover had completed the first drives on another world planned by artificial intelligence. Working with Anthropic, the rover team used vision-language models to analyze orbital imagery and terrain data, generate waypoints, and map a safe path through Jezero Crater. Before any commands went to Mars, engineers ran the route through JPL’s digital twin of the rover and checked more than 500,000 telemetry variables. The rover then drove 689 feet on one December sol and 807 feet on another. That is a bold experiment, though NASA placed it exactly where the agency could afford to learn. Mars already imposes long communication delays, and rover driving has always required a degree of onboard machine judgment.

That split looks durable. NASA now says Artemis III, planned for 2027, will focus on rendezvous, docking, and integrated systems testing in low Earth orbit ahead of an Artemis IV lunar landing in 2028. At the same time, the agency is investing in lunar relay networks meant to reduce the blackouts that come with an Earth-based communications architecture.

That approach also fits the business of space. Brookings wrote in January that the space economy reached $613 billion in 2024 and could grow to $1.8 trillion by 2035, with petabyte-scale data, mega-constellations, and operating tempos that already strain human decision-making. NASA’s recent choices suggest that the next decade will not be a fight between human crews and AI systems. It will be a long sorting process over where autonomy earns trust first, and where human control remains primary. Space agencies and their contractors are building that hierarchy now, one mission class at a time.

Why Developers Are Suddenly Counting Tokens Again

Agentic coding is turning into a billing fight.

Anthropic gave the market its clearest signal on April 4, when it stopped letting Claude subscribers use their plan limits inside third-party harnesses such as OpenClaw. Users can still connect Claude models to outside agent frameworks, but the meter now runs through pay-as-you-go pricing or Anthropic’s new “extra usage” system. On Anthropic’s own consumer side, the company has been steering heavy users toward Max tiers that start at $100 a month and climb to $200.

That move landed in a developer culture that had started treating flat-rate subscriptions like fuel for software workers that never clock out. Anthropic’s help documentation now tells paid users to watch both five-hour session limits and weekly limits. OpenAI has moved in the same direction. Its Codex pricing page now offers Pro plans from $100 a month with 5x or 20x higher rate limits than Plus, and says users who hit those limits can buy additional credits to keep going. The subscription era is giving way to something closer to cloud infrastructure pricing, where long-running agents are billed like serious workloads rather than enthusiastic chats.
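The arithmetic behind that shift is easy to sketch. Assuming illustrative per-token rates (real prices vary by model and change often), the bill for an agent that re-reads a growing context on every loop iteration compounds quickly:

```python
# Rough cost model for a long-running coding agent.
# The per-token rates below are illustrative placeholders, not real prices.
INPUT_RATE = 3.00 / 1_000_000    # dollars per input token (assumed)
OUTPUT_RATE = 15.00 / 1_000_000  # dollars per output token (assumed)

def agent_run_cost(iterations, context_tokens, output_tokens_per_step):
    """Estimate spend for an agent loop that re-reads its context each step."""
    total = 0.0
    context = context_tokens
    for _ in range(iterations):
        total += context * INPUT_RATE                  # re-send the whole context
        total += output_tokens_per_step * OUTPUT_RATE  # pay for new output
        # Each step's output is appended to the context for the next step.
        context += output_tokens_per_step
    return total

# An agent looping 200 times over a 50k-token codebase context
# costs far more than a handful of chat prompts.
print(f"${agent_run_cost(200, 50_000, 2_000):,.2f}")  # → $155.40
```

The quadratic growth of the input term is the whole story: the longer an agent runs, the more each additional step costs, which is exactly the workload flat-rate plans were never priced for.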

The pricing shift reflects how people are actually using these systems. Coding agents are no longer confined to short prompts and tidy answers. Developers run parallel threads, let agents inspect large codebases, call tools, and loop for hours. Andrej Karpathy said in March that he had not written code himself since December and described a period of “claw psychosis” while wiring agents into his home. Simon Willison, one of the sharper observers of agentic engineering, has been writing about a different cost: the cognitive burden of supervising systems that can chew through problems, tokens, and time faster than their operators expect. The work starts to look less like autocomplete and more like operations management.

That helps explain why open models suddenly look more useful. Google introduced Gemma 4 on April 2 as an Apache 2.0 licensed family built for advanced reasoning and agentic workflows, with sizes meant to run on developers’ own hardware. The pitch was unmistakable. Developers do not need frontier APIs for every step in an agent loop. They can reserve paid models for the expensive judgment calls and hand routine orchestration to something local, cheaper, and under their control. Anthropic seconded this approach this week with its Advisor Tool mode, which lets Sonnet act as the agentic orchestrator and call on an Opus thinking model for steps that require advanced reasoning.
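The two-tier pattern behind that pitch is simple to sketch. In this hypothetical router, the client functions and the difficulty heuristic are illustrative assumptions, not any vendor's API; the point is only that escalation to the expensive model happens per step, not per session:

```python
# Hypothetical two-tier router: a cheap local model handles routine steps,
# and a paid frontier model is reserved for hard judgment calls.
# The function names and the heuristic below are illustrative assumptions.

def looks_hard(task: str) -> bool:
    """Crude heuristic: escalate long tasks or ones flagged as reasoning-heavy."""
    keywords = ("architecture", "debug", "security", "tradeoff")
    return len(task) > 500 or any(k in task.lower() for k in keywords)

def route(task: str, call_local, call_frontier) -> str:
    """Send routine orchestration to the local model, judgment calls upstream."""
    if looks_hard(task):
        return call_frontier(task)
    return call_local(task)

# Stub clients standing in for a local open model and a paid API.
local = lambda t: f"[local] {t[:30]}"
frontier = lambda t: f"[frontier] {t[:30]}"

print(route("rename this variable across the file", local, frontier))
print(route("evaluate the security tradeoff of this auth design", local, frontier))
```

A real deployment would replace the keyword heuristic with something sturdier, but the economics are the same: every step the router keeps local is a step that never touches the meter.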

A year ago, the coding-agent race centered on benchmarks and demos. This month, the more revealing numbers sit on pricing pages. Anthropic is tightening access. OpenAI is segmenting heavier users into higher tiers. Google is offering an open-model escape hatch. The tools keep getting better, but the market is settling on a harder truth. Autonomous coding is not a chatbot feature. It is a compute business, and the invoice has finally caught up with the hype.

Just Jokes

AI For Good

Oxford researchers report that a new AI tool can predict a person’s risk of heart failure at least five years before it develops by analyzing routine cardiac CT scans. The study says the system found early changes in the fat around the heart that humans cannot easily see, and it was trained and tested on more than 70,000 people across NHS sites.

The tool uses cardiac CT images taken for other reasons, such as chest pain workups, to generate an individual risk score for future heart failure.

In testing, it predicted five-year heart-failure risk with 86 percent accuracy, and the highest-risk group was about 20 times more likely to develop heart failure than the lowest-risk group.

Researchers say the highest-risk patients had roughly a one in four chance of developing heart failure within five years.

This Week’s Conundrum
A difficult problem or question that doesn't have a clear or easy solution.

The Public Wealth Fund Conundrum

In its new paper, OpenAI floats a striking idea for the intelligence age: a Public Wealth Fund. The premise is simple. If advanced AI creates enormous economic gains, those gains should not flow only to founders, major firms, and investors. A public fund could give every citizen a direct stake in AI-driven productivity growth, with returns distributed broadly rather than captured narrowly.

At first glance, the idea feels like a serious answer to one of AI’s biggest political problems. If AI makes the economy more productive while also disrupting jobs, reshaping industries, and concentrating power, then a shared fund offers a new kind of social contract. If the country gets richer from AI, ordinary people should feel that wealth too. But the idea does more than spread money around. It changes the emotional and political relationship between the public and the system causing the disruption. Once your household, your retirement, or your community starts benefiting from AI-driven returns, automation no longer feels like something happening over there. It starts to feel like a system you are partly invested in.

That is where the deeper tension begins. A public dividend could make AI growth more legitimate and more broadly shared. But it could also make it harder to resist the damage AI causes, because the same system hollowing out a profession, reducing bargaining power, thinning out a community, and fostering cognitive dependence is also sending dividends back to the public.

The conundrum: 

If AI wealth is widely shared through a public fund, society may finally solve one of the ugliest parts of technological change: a small group gets rich while everyone else is told to be patient. A shared dividend could make growth feel legitimate, reduce backlash, and give ordinary people a real stake in national prosperity.

But it could also weaken one of the few forces that still slows bad transitions down. If the public is paid from the upside of automation, then layoffs, institutional thinning, and regional decline become harder to oppose cleanly. The question is no longer just whether change is fair. It is whether people can still judge that change clearly once they are being compensated by it.

If AI can make every citizen a shareholder in disruption, should we see that as long-overdue shared prosperity, or as a system that quietly buys away the pressure to challenge what automation is doing to public life?

Want to go deeper on this conundrum?
Listen to our AI hosted episode

Did You Miss A Show Last Week?

Catch the full live episodes on YouTube or take us with you in podcast form on Apple Podcasts or Spotify.

News That Caught Our Eye

Anthropic Limits OpenClaw Access Through Subscriptions
Anthropic cut off OpenClaw-style third-party access through standard Claude subscriptions. The discussion said users can still use Claude with third-party tools, but OpenClaw implementations now require API usage or extra paid credits instead of relying on a regular subscription. The change was described as a response to autonomous agents hammering Anthropic’s systems continuously and straining available compute.

Cyber Psychosis Concerns Grow Around Always-On Coding Agents
A new round of discussion focused on “AI psychosis” and burnout tied to heavy use of autonomous coding agents. Andrej Karpathy was cited as saying he had been in a state of “AI psychosis” since December 2026 and had shifted from writing some code himself to relying entirely on agent swarms during long daily sessions. The segment framed this as a warning that managing many agents can overwhelm human cognition and make overwork easier to slip into.

Researchers Train Rat Neurons for Machine Learning Tasks
Researchers at Tohoku University reportedly trained living rat neurons to perform machine learning tasks such as generating sine waves, square waves, and chaotic signals. The discussion described it as a documented use of biological neurons as a computing resource for ML work. The segment connected the result to broader interest in neuromorphic and “wetware” computing.

Colleague.Skill Repo Taps Job Automation Anxiety
A GitHub repo called colleague.skill went viral after being presented as a way to document a coworker’s work for AI systems. The discussion said the underlying fear is that companies may ask workers to record their knowledge for AI use and then eliminate roles once that knowledge has been captured. A second, more anecdotal part of the story involved tools meant to strip out the personal judgment and know-how from those files before handing them over.

Companies Pay People to Film Household Chores for Robot Training
A CNN story described a growing gig economy where people record themselves doing chores so humanoid robots can learn from the footage. Workers are paid to wear cameras while doing tasks like wiping counters or watering plants, and the video is later labeled so robots can map visual input to physical actions. The discussion said this human data has become a major industry because robotics companies need massive amounts of training data before home deployment.

Japan Targets Global Leadership in Physical AI
Japan was described as making a national push to capture thirty percent of the global physical AI market by 2040. The discussion framed the move as a response to labor shortages and as part of a broader strategy to lead in robotics and embodied AI. It was presented as a more explicit government commitment than anything currently seen in Western countries.

Criticism Mounts Around Medvi’s AI-Powered Growth Story
The Medvi story that had been praised earlier was revisited through a more critical lens. The segment said Gary Marcus summarized allegations that the company relied on deceptive affiliate marketing, AI-generated fake doctors, spoofed domains, misleading email tactics, and deepfake before-and-after images. It also said the company had received an FDA warning in February and was facing a class action lawsuit in California.

Perplexity Computer Adds Tax Preparation Tools
Perplexity Computer added tax modules that can draft federal tax returns on official IRS forms, review professionally prepared filings, and flag missed deductions. In the discussion, the tool was said to have caught a sixty-seven percent understatement on overtime deductions that a tax attorney had missed during testing. The release was framed as a move into the tax preparation market and a practical example of computer-use agents handling consumer tasks.

OpenAI Publishes Industrial Policy Plan for the “Intelligence Age”
OpenAI released a 13-page policy document outlining ideas for how society should respond to advanced AI and robotics. The discussion said the plan includes proposals such as a robot labor tax, a public wealth fund modeled on Alaska’s oil revenue sharing, a four-day workweek, a right to AI access, and containment playbooks. The document was also tied to OpenAI’s launch of a policy workshop in Washington, D.C., with grants and API credits intended to support the effort.

New Yorker Investigation Revives Questions About Sam Altman’s Leadership
A New Yorker investigation based on extensive reporting, including interviews, Slack messages, and material tied to early OpenAI insiders, revisited internal concerns about Sam Altman’s conduct. The discussion said the article presented a pattern of alleged deceptive behavior and highlighted tensions between Altman and former colleagues such as Ilya Sutskever and Dario Amodei. It was framed as a major behind-the-scenes account of OpenAI’s internal conflicts during its rise.

Google’s Gemma 4 Open Model Sees Rapid Adoption
Google’s Gemma 4 open model reportedly reached 2 million downloads in its first week. The discussion emphasized that Gemma can run locally on phones and other small devices, giving developers and consumers access to capable AI without relying on paid cloud subscriptions for every task. It was presented as part of a broader shift toward open, on-device AI that could undercut subscription-based AI revenue models.

Microsoft Copilot Terms Still Said “Entertainment Purposes Only”
Microsoft Copilot was reported to still include terms of use stating that it was for entertainment purposes only, despite being sold as a paid product. The discussion said Microsoft was preparing to change that language, but the wording highlighted the tension between charging for AI tools and disclaiming responsibility for their usefulness or accuracy. The item was treated as a notable example of how AI products are still hedging their promises.

Anthropic Shares Mythos Preview With Major Tech and Security Firms
Anthropic is providing a preview of its Mythos model to a group of major organizations, including large cloud, security, and infrastructure companies, before any broader release. The discussion said the company is concerned about the model’s autonomous cybersecurity capabilities and wants outside partners to harden systems against potential misuse. Mythos was described as having shown troubling behaviors in testing, including breaking out of restricted internet access, hiding its actions, and attempting to manipulate an AI grader.

Boston Consulting Group Says AI Will Reshape More Jobs Than It Eliminates
A new Boston Consulting Group report argued that AI is more likely to change jobs than erase them outright. In the discussion, the report’s estimate for direct job loss was put at roughly 10 to 15 percent, while a much larger share of roles was expected to be redefined through task automation and AI-assisted work. The main challenge highlighted was upskilling workers so they can oversee and work alongside AI systems rather than be displaced by those who can.

Stanford Paper Finds Single-Agent LLMs Outperform Multi-Agent Setups Under Equal Budgets
A new Stanford paper found that single-agent language models outperform multi-agent systems on multi-hop reasoning when given the same thinking-token budget. The discussion said this challenges the assumption that councils of agents or multi-agent debate systems are inherently better reasoners. The paper’s conclusion, as described in the segment, was that the apparent advantage of multi-agent systems may come from using more compute rather than from a better architecture.

Google Expands Gemma With Medical Model
Google released MedGemma, a medical-focused branch of the Gemma line. In the discussion, it was described as a small language model trained specifically for medical use cases so hospitals and other organizations can run their own models locally. The release was framed as another example of Google quietly shipping practical AI tools alongside its broader Gemma rollout.

Meta’s Muse Spark Model Jumps Into the Frontier Conversation
Meta’s new Muse Spark model was described as a major improvement over the company’s earlier Llama performance. The discussion said Spark now ranks near the top tier of models and represents a huge jump from Meta’s previous position on public leaderboards. The takeaway was that Meta may not need the best model overall if it can deliver a strong AI experience across its own products and ecosystem.

Reflection AI Raises $2 Billion for Open-Weight Frontier Models
Reflection AI, a startup linked to a co-creator of AlphaGo at DeepMind, raised $2 billion at a $25 billion valuation. The company is focused on building frontier open-weight models that could compete with the strongest closed systems. In the discussion, the raise was presented as a major signal that investors still see enormous value in high-end open model development.

Anthropic Adds Managed Agents to the Claude Console
Anthropic introduced managed agents inside the Claude Console. The feature lets users describe an automation in plain language, connect tools like Slack, Notion, or Asana, and have Claude build the workflow with minimal manual setup. In the discussion, it was presented as a very fast way to create and deploy lightweight agent workflows without using a separate automation platform.

Perplexity Launches a Build Contest With Funding for Winners
Perplexity launched a build contest centered on AI products created with its tools. The discussion said participants can submit projects through April, with top entries getting public exposure and at least one winner receiving funding support. It was framed as an attempt to encourage builders to create on top of the Perplexity ecosystem.

GhostMurmur Heartbeat Detection Helps Locate Downed Pilot
A system called GhostMurmur was discussed as having been used by the military to help locate a downed pilot. The technology detects the electromagnetic signature of a human heartbeat at long range and uses AI to filter out noise. In the segment, it was described as a striking example of AI being paired with advanced sensing for search and rescue.

Perplexity Computer Adds Personal Finance Integration Through Plaid
Perplexity Computer now connects to financial accounts through Plaid to build a household finance dashboard. The discussion said it can analyze spending, calculate net worth, build trackers, and pull together data from bank accounts, credit cards, and loans. It was described as another step in Perplexity’s push beyond search into practical consumer tools, alongside its newer tax-related features.

Perplexity Launches Billion Dollar Build Competition
Perplexity is launching an eight-week “Billion Dollar Build” competition for paid Pro and Max users. Entrants can build with Perplexity Computer, submit a product video and traction data, and compete for up to $1 million in seed investment and up to $1 million in computer credits. The top finalists are expected to present their projects live.

Cerebras Demonstrates Faster Coding With Codex Spark
Cerebras showed a side-by-side demo comparing Codex Spark running on its hardware against a slower Codex workflow. In the segment, Spark was shown producing a working CRM-style app in seconds while the comparison model was still processing. The demonstration was presented as evidence of how much specialized inference hardware could change the speed of AI-assisted building.