The Daily AI Show: Issue #98

"Take your protein pills and put your helmet on." - Ground Control

Welcome to Issue #98

Coming Up:

The New AI Stack Is Built on Protocols and Skills

Reese Witherspoon Touched a Raw Nerve in the AI Debate

AI Just Gave Climate Forecasting a Reality Check

Plus, we discuss Canva’s new living memory, listening for Orcas, the AI tradeoffs in space exploration, and all the news we found interesting this week.

“Yeah, it’s time to move on, time to get going. What lies ahead I have no way of knowing. But under my feet, baby, the grass is growing. It’s time to move on. Time to get going.”  
- Tom Petty

We couldn’t agree more, Tom.
AI waits for no one.

The DAS Crew

Our Top AI Topics This Week

The New AI Stack Is Built on Protocols and Skills

AI agents are becoming part of real automation infrastructure. The important shift is happening below the demo layer, where companies are standardizing how models reach data, tools, and workflows, then packaging expert instructions into reusable skills. That combination is turning agents from clever assistants into something closer to enterprise software.

Model Context Protocol sits at the center of that change. When Anthropic introduced MCP in November 2024, it framed the protocol as a universal way to connect AI systems to the places where work actually lives, from content repositories to business tools and developer environments. By March 2026, the maintainers were describing a very different stage of maturity. The roadmap says MCP has moved beyond its early role wiring up local tools and now runs in production at companies large and small, with active work on transport scalability, agent communication, governance, and enterprise readiness. That matters because infrastructure standards gain power when they stop being experiments and start absorbing the messy requirements of production systems.
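Under the hood, MCP is built on JSON-RPC 2.0: a client discovers the tools a server exposes and invokes them with a `tools/call` request. The sketch below builds such a request in plain Python; the tool name `lookup_account` and its arguments are hypothetical, chosen only to illustrate the message shape.

```python
import json

# MCP rides on JSON-RPC 2.0. A client invoking a tool that an MCP server
# exposes sends a "tools/call" request shaped like the one built here.
def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialize a JSON-RPC 2.0 request for an MCP tool invocation."""
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)

# Hypothetical call: ask a CRM-backed MCP server to look up an account.
msg = make_tool_call(1, "lookup_account", {"account_id": "A-1042"})
print(msg)
```

The transport (stdio, HTTP) and authentication layers sit around messages like this one, which is exactly where the roadmap's "transport scalability" and "enterprise readiness" work applies.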

The second layer is where the real leverage shows up.

Anthropic’s Agent Skills push organizations to capture know-how as structured instructions that can be shared, updated, and reused across Claude.ai, Claude Code, the Claude Agent SDK, and the developer platform. Anthropic has been explicit that skills can complement MCP servers by teaching agents how to carry out more complex workflows involving external tools and software. The MCP community is now formalizing that direction with a Skills Over MCP working group focused on how these skills are discovered, distributed, and consumed through the protocol. In practical terms, companies are starting to separate access from judgment. MCP gives an agent the keys to the system. Skills teach it how the company wants those keys used.
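As Anthropic describes them, a skill is just a folder whose SKILL.md file pairs a short YAML frontmatter (a name and a description the agent uses to decide when the skill applies) with plain-language instructions. The example below is entirely hypothetical, a sketch of what "judgment" captured as a skill might look like:

```markdown
---
name: quarterly-report
description: How to assemble the quarterly revenue report from CRM exports.
---

# Quarterly report workflow

1. Pull the latest opportunity export using the CRM's MCP tool.
2. Reconcile totals against finance's spreadsheet before charting anything.
3. Build the final document from the house template in `templates/report.md`.
```

The division of labor is visible even in this toy: the MCP tool grants access to the CRM, while the skill encodes the company's rules for using that access.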

Salesforce’s new Headless 360 announcement shows what this looks like when a major enterprise platform commits. Salesforce says everything on its platform is now exposed as an API, MCP tool, or CLI command, and that agents can use all of it. The release includes more than 60 new MCP tools and over 30 preconfigured coding skills designed to give coding agents live access to data, workflows, and business logic inside tools developers already use. That is a meaningful change in posture. Instead of treating AI as a chatbot bolted onto the edge of a system, Salesforce is rebuilding the system so agents can operate directly inside it. Once that happens, the competitive question shifts. Model quality still matters, but workflow design, permissions, evaluation, and operational trust move much closer to the center.

This is why the current wave of agent building feels more durable than last year’s frenzy of autonomous demos. The market is converging on a stack. Open protocols handle connectivity. Skills capture institutional memory. Extensions like MCP Apps add user interfaces directly inside the agent experience. The hard work ahead is less about inventing a magical general agent and more about building reliable, governed pathways between agentic models and the systems companies already depend on.

Reese Witherspoon Touched a Raw Nerve in the AI Debate

When Reese Witherspoon posted an Instagram video this week urging women to learn more about artificial intelligence, the response was swift and hostile. Some critics saw another celebrity lending her platform to a technology industry already under fire for displacing creative work, consuming enormous amounts of energy, and outpacing public safeguards. A few days later, Witherspoon answered the backlash in a follow-up post, saying no one had paid her to speak, acknowledging concerns about jobs and the environment, and adding, “I don’t believe computers should replace humanity.”

The episode drew attention because it touched a live fault line in the labor market. Artificial intelligence is arriving unevenly, and women are positioned awkwardly inside that shift. LinkedIn’s latest research found that women are more likely than men to work in occupations categorized as disrupted by generative AI, and less likely to hold roles expected to be augmented by it. In January 2025, LinkedIn found that 25.8% of women worked in occupations that may be augmented by generative AI, compared with 31.6% of men. Its earlier global analysis found that in 93% of countries, women held a higher share of disrupted roles than men.

That imbalance helps explain why a vague call to “learn AI” landed badly. For many women, the issue is already concrete. The most exposed jobs are often administrative, clerical and support roles, exactly the parts of the labor market where automation is easiest to introduce and hardest to negotiate from below. The International Labour Organization has found that women face a higher risk than men of seeing their work transformed by generative AI, especially in higher-income economies where office work makes up a larger share of employment.

At the same time, the support structure around that transition remains weak. Lean In and McKinsey reported this year that women are less likely than men to want the next promotion, a gap the organizations tie closely to lower levels of sponsorship and manager advocacy. Only 31% of entry-level women report having a sponsor, compared with 45% of men at the same level. Those figures are not about AI specifically, but they describe the workplace conditions in which AI adoption is unfolding. New tools reward experimentation, visibility and institutional backing. Workers who receive less of all three start the race behind.

That is why the Witherspoon dispute resonated beyond celebrity culture. It was never only about whether one actress had the standing to talk about technology. It was about a broader frustration with how AI is being introduced to the public: as a mix of invitation, warning and inevitability, often without clear terms about who benefits and who bears the cost.

The practical question is narrower than the online argument made it seem. Women do not need a slogan about AI. They need time to learn it, room to question it and enough institutional support to decide how it will be used in their work. The labor market is already shifting. The people most exposed to that shift will need more than encouragement to meet it.

AI Just Gave Climate Forecasting a Reality Check

For years, climate politics has revolved around a familiar split. Governments announce emissions targets. Energy analysts publish scenarios showing what it would take to meet them. The scenarios are challenged and the emissions targets are diluted. Then the real world moves unevenly, with policy bursts, price shocks and long stretches of delay.

A research team in Sweden is trying to narrow that gap.

In a paper published this month in Nature Energy, researchers at Chalmers University of Technology built a machine learning model to estimate how quickly countries are likely to expand wind and solar power, based on how those technologies have actually spread in more than 200 countries. The conclusion from this large Monte Carlo exercise is cautiously optimistic: the world remains on a plausible path to limit warming to about 2 degrees Celsius, but the more ambitious 1.5 degree target would require a faster acceleration in clean energy deployment than current trends suggest.

Under the Paris climate agreement, governments pledged to hold the increase in global average temperatures to well below 2 degrees Celsius above preindustrial levels, and to pursue efforts to limit warming to 1.5 degrees. Climate scientists have warned for years that each fraction of a degree raises the risks and the likely severity of heat, drought, flooding, food disruption and displacement.

What sets the new model apart is its starting assumption about how countries behave during the transition to renewables. Many energy forecasts assume smooth growth curves. The Chalmers team argues that is not how renewable energy spreads. Countries often move in bursts, driven by policy changes, falling costs or national targets, followed by periods of slower growth. To capture that pattern, the researchers generated 13,000 simulated worlds, trained a model on those trajectories and then tested it against real-world deployment data. In backtesting, the model outperformed older forecasting methods at predicting when and where countries would reach deployment milestones, and the researchers compared those results against past projections from the International Energy Agency.
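The burst-then-plateau idea is easy to see in a toy simulation. The sketch below is not the Chalmers model, just a minimal illustration of the approach: each simulated country flips between a slow regime and a policy-driven burst, many simulated worlds yield a distribution of outcomes, and an ambitious target can then be located on that distribution (the paper puts the COP28 tripling pledge near the 95th percentile). All regime probabilities and growth rates here are invented.

```python
import random

# Toy sketch: capacity grows in bursts, not along a smooth curve.
def simulate_trajectory(years: int = 25, seed: int = 0) -> list[float]:
    """Return one simulated deployment path (arbitrary capacity units)."""
    rng = random.Random(seed)
    capacity = 1.0
    in_burst = False
    path = []
    for _ in range(years):
        if in_burst:
            growth = rng.uniform(0.20, 0.40)   # rapid, policy-driven buildout
            if rng.random() < 0.30:            # bursts eventually fade
                in_burst = False
        else:
            growth = rng.uniform(0.00, 0.05)   # slow regime
            if rng.random() < 0.15:            # a policy shift starts a burst
                in_burst = True
        capacity *= 1 + growth
        path.append(capacity)
    return path

# Many simulated "worlds" give a distribution of final outcomes. A target
# sitting near the 95th percentile is possible, but would need unusually
# strong performance across many countries at once.
worlds = sorted(simulate_trajectory(seed=s)[-1] for s in range(1000))
p95 = worlds[int(0.95 * len(worlds))]
```

Training a model on trajectories like these and backtesting it against observed deployment is the part of the method the toy omits.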

The paper arrives as the underlying numbers have started to shift in renewables’ favor. Ember, the energy think tank, reported this week that clean electricity growth in 2025 was enough to meet all new global electricity demand, keeping fossil-fuel generation essentially flat. Its review found that renewable power surpassed coal in global electricity generation last year, with solar posting a record annual increase and wind continuing to expand.

The IEA, in its latest renewables outlook, projects that renewable electricity generation will rise from 32% of global generation in 2024 to 43% by 2030, with solar expected to provide more than half of that increase and wind about 30%.

Even so, the Swedish researchers found that the global pledge made at COP28 to triple renewable energy capacity by 2030 sits near the outer edge of what appears likely under current patterns. In the model, that outcome lands around the 95th percentile, meaning it is possible, but would require unusually strong performance across many countries at once.

That leaves governments with a narrower question than the one climate debates usually pose. The issue is no longer whether wind and solar can scale. They already are. The issue is whether large economies can move fast enough, and consistently enough, to turn a strong buildout into a decisive one.

The Chalmers paper does not claim to predict the future. It cannot account for abrupt breakthroughs, wars, political reversals or technological surprises. But it does offer something that climate debates often lack: a baseline rooted in observed behavior rather than aspiration.

That may be its clearest contribution. The transition to cleaner power is happening faster than many forecasters expected. It is also happening too slowly to make the hardest climate target feel secure. Both facts can be true at the same time.

Just Jokes

Canva AI 2.0 Gets a Living Memory

AI For Good

A new AI system called OrcaHello is helping protect the endangered Southern Resident (salmon-eating) orcas by listening for their calls underwater and alerting people when the whales are nearby. The project uses hydrophones placed in the Salish Sea, around Vancouver Island, to capture underwater sound, then runs those audio streams through a machine learning model trained to recognize Southern Resident orca vocalizations in real time. When the system detects the whales, it sends alerts that can help nearby ships, ferries, and industrial operators slow down or reduce noise in the area. Mongabay reported the story this week and noted that only 76 Southern Resident orcas remained as of December 2025.

What makes OrcaHello useful is that it turns passive acoustic monitoring into something people can act on right away. Instead of recording whale sounds and reviewing them much later, the system listens continuously and identifies likely Southern Resident calls as they happen. That matters because vessel noise is one of the major pressures on this population. If marine operators know the whales are in the area in the moment, they have a better chance to change behavior while it still helps. The system was developed through work involving the nonprofit Oceans Initiative and researchers focused on reducing human disturbance to the whales in one of the busiest marine corridors in the Pacific Northwest.
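The pattern described above, which is continuous scoring of an audio stream with alerts when a classifier's confidence crosses a threshold, can be sketched in a few lines. This is a schematic of the idea, not OrcaHello's actual pipeline; the window scores, threshold, and function name are all invented for illustration.

```python
# Schematic detection loop: a model scores short windows of a continuous
# hydrophone stream, and high-confidence windows trigger an alert so humans
# can act while the whales are still in the area.
def detect_calls(scores, threshold=0.8):
    """scores: per-window model confidences in [0, 1]; returns alert indices."""
    alerts = []
    for i, score in enumerate(scores):
        if score >= threshold:
            alerts.append(i)  # in production this would notify reviewers/ships
    return alerts

# Simulated confidences from one model pass over six audio windows.
windows = [0.12, 0.34, 0.91, 0.88, 0.40, 0.05]
print(detect_calls(windows))  # prints [2, 3]
```

The real system's value is in the always-on part: the same loop runs around the clock, so detections arrive while a course or speed change still matters.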

This Week’s Conundrum
A difficult problem or question that doesn't have a clear or easy solution.

The Chosen Anomaly Conundrum

Space exploration has always depended on scarcity. There is never enough time, bandwidth, human attention, or instrument capacity to examine everything. That was manageable when the stream of possible discoveries was still small enough for scientists to review by hand. But that era is ending. Telescopes now generate oceans of data. Rovers see more terrain than teams on Earth can parse in real time. Future missions will only widen that gap.

AI looks like the obvious answer. It can scan signals, rank targets, flag strange patterns, and decide what deserves a closer look before the moment passes. Without that help, science teams risk drowning in their own data and missing discoveries simply because no human got to them in time. In that sense, AI does not just make exploration faster. It makes modern exploration possible.

But once AI becomes the system that filters what humans notice first, exploration starts to change in a subtler way. The universe we study is no longer just the universe our instruments capture. It is the universe that survives a machine’s first pass. That may be a huge advantage when the model catches weak patterns no person would have spotted. It may also mean the frontier gradually bends toward what machine systems are best at recognizing, while the truly strange, noisy, low-confidence anomalies get pushed aside because they look too messy to trust.

The conundrum: 

If AI becomes the first judge of what in space deserves human attention, then the tradeoff is no longer just about efficiency. It is about what kind of explorers we are willing to become.

One path says we should embrace that filter. Discovery at scale now depends on machine triage, and refusing it would mean letting extraordinary signals die unseen in overwhelming data. In that view, AI expands human curiosity by helping us notice more of the universe than we ever could alone.

The other path says the cost is deeper than it appears. Some of the most important discoveries in history looked ambiguous, inconvenient, or easy to dismiss at first. If AI becomes the layer that decides what gets surfaced, then humanity may get better at finding the patterns it already knows how to value while getting worse at noticing the anomalies that force it to rethink reality.

So as exploration moves deeper into a universe too large for human attention alone, what should matter more: using AI to ensure we miss less, or protecting room for the kinds of strange signals that a machine might be least prepared to recognize?

Want to go deeper on this conundrum?
Listen to our AI hosted episode

Did You Miss A Show Last Week?

Catch the full live episodes on YouTube or take us with you in podcast form on Apple Podcasts or Spotify.

News That Caught Our Eye

Salesforce Opens Its Platform to AI Agents Through MCP
Salesforce has opened its platform to AI agents with what it calls Headless 360. In the discussion, this was described as using MCP as the main way for agents to connect to enterprise data stores and tools. The move was framed as a major validation of MCP for enterprise use.

Meta Plans Broad Layoffs and an AI Reorganization
Meta is reportedly laying off 8,000 people on May 20 across divisions including Reality Labs, Facebook, recruiting, sales, and global operations. Those functions were described as being reorganized into AI pods under Superintelligence Labs. The change was presented as part of Meta’s shift into a more AI-centered company structure.

Senior OpenAI Leaders Depart as the Company Narrows Focus
Three senior OpenAI executives have left the company, including the heads of science, Sora, and enterprise. The departures were discussed alongside a broader push to narrow OpenAI’s priorities and de-emphasize some side efforts. One of the exits was also described as being related to family needs.

Humanoid Robot Completes Beijing Half Marathon at Record Pace
A humanoid robot called Honor Flash was described as running the Beijing half marathon in 50 minutes and 26 seconds. The discussion said it beat the human world record by more than six minutes, while using a dry ice cooling pack and making battery-change pit stops. The event was highlighted as a sign of how quickly robot mobility is advancing.

Anthropic Reopens CLI Use With OpenClaw
Anthropic has reversed an earlier restriction and is again allowing CLI use with OpenClaw. The change was discussed alongside continued rapid updates to Claude tools, including Claude Design and model improvements. The reopening was notable because it restores a higher access path for users working directly in the terminal.

Tim Cook to Step Down as Apple CEO
Tim Cook is reportedly stepping down as Apple CEO, with hardware chief John Ternus named as his replacement. The discussion said Cook would remain involved as chairman. The leadership change was framed as a major handoff after a long tenure that was described as highly successful.

Sergey Brin Personally Pushes Google to Improve Gemini Coding
Sergey Brin is reportedly leading a new internal effort at DeepMind to close Gemini’s coding gap with Claude. The discussion said the team’s mandate is to improve Google’s coding performance and that internal researchers have been rating Claude Code more highly. The move was presented as a sign that Google sees coding as a critical path in the broader AI race.

SpaceX and XAI Strike an Unusual Deal With Cursor
SpaceX and xAI are reportedly partnering with Cursor in a deal tied to access to GPU compute and a future acquisition option. The discussion described it as different from a standard acquisition, with SpaceX gaining the right to buy Cursor later while Cursor gains access to large-scale compute. The arrangement was framed as a new kind of AI consolidation, where startups can remain nominally independent while becoming deeply tied to a larger partner.

OpenAI Rolls Out a New ChatGPT Image Model
OpenAI released a new ChatGPT image model, referred to in the discussion as Image 2. It was described as a major upgrade in realism, text rendering, editability, and image consistency, with stronger performance on tasks like complex prompts and aspect-ratio changes. The model was also compared favorably to prior generations that struggled with accurate text and detailed layouts.

Meta Reportedly Tracks Employee Activity Ahead of Layoffs
Meta is reportedly using an internal system called the Model Capability Initiative to log employee keystrokes, mouse activity, and screenshots across work apps. The discussion said the system is active in the United States and cannot be opted out of, while similar tracking is not allowed in the EU because of privacy rules. It was presented as part of a broader concern that companies may be capturing worker knowledge in order to reproduce workflows after employees leave.

Sam Altman Criticizes Anthropic’s Mythos Messaging
Sam Altman publicly criticized Anthropic’s messaging around Mythos, calling it fear-based marketing. In the discussion, he was quoted comparing the approach to building a bomb, warning people about it, and then selling them the shelter. The remarks were framed as part of the escalating back-and-forth between major AI companies as they compete for influence and positioning.

Anthropic Introduces Live Artifacts in Claude
Anthropic’s new Live Artifacts feature was discussed as a major update to Claude, letting users build interactive tools such as dashboards that stay connected to live data and inputs. In the conversation, it was described as powerful for creating visual, working interfaces directly inside Claude rather than static outputs. The feature was highlighted as especially useful for building personalized dashboards and other dynamic artifacts that update as users work.

Unauthorized Access Reported in Anthropic’s Mythos Program
A group reportedly gained unauthorized access to Mythos through a third-party vendor setup tied to Discord. The discussion described it as a case of people figuring out the URL pattern used to reach the system, rather than a direct break of the model itself. The incident was framed as a bad look for Anthropic, especially because Mythos is supposed to be a highly restricted and security-sensitive release.

Anthropic’s Claude Code Access Change Sparks Confusion
Anthropic briefly appeared to remove Claude Code from its first two subscription tiers, prompting backlash online. According to the discussion, the company later said it was only an A/B test affecting a small percentage of users, and existing users who already had access were not broadly cut off. The episode was presented as another sign of Anthropic struggling with rollout decisions and communication.

OpenAI Launches ChatGPT Agents for Team and Enterprise Users
OpenAI released ChatGPT Agents, expanding beyond custom GPTs into more capable agent workflows with tool connections and memory. In the discussion, the new agents were described as able to build and run more structured automations, including workflows that connect to services like Gmail, Slack, Notion, and Asana. The launch also came with a pricing caveat: agents are free to use until May 6, after which usage will be billed by tokens rather than included in a standard plan.