The Daily AI Show: Issue #93
Google: "Ain't nothin' gonna break my stride. Nobody gonna slow me down, oh no."

Welcome to Issue #93
Coming Up:
AI is starting to change how weather science gets done
AlphaFold is moving from single proteins to real biology
OpenClaw is not the strategy. Workflow is the strategy.
Plus, we discuss the continued exodus to Claude, using AI to predict flash floods, what happens when failure is no longer owned, and all the news we found interesting this week.
It’s Sunday.
Time to torture your family with excited talk about AI.
Hey, maybe this newsletter will give you some talking points.
Enjoy!
The DAS Crew
Our Top AI Topics This Week
AI is starting to change how weather science gets done
Most people hear “AI weather” and think about better forecasts.
That is only half the story.
The more important shift is that AI is improving both the forecast itself and the way scientists work with the data behind it. That second part matters because weather and climate research has always required a painful amount of data wrangling. A researcher might know the scientific question they want to ask, but still need code, query logic, file handling, and model-specific knowledge before they can even begin.
That bottleneck is starting to open.
A new UC San Diego project called Zephyrus points in that direction. The system lets researchers ask plain-English questions about weather and climate data, then translates those questions into the steps needed to retrieve, analyze, and explain the results. That is a big deal because it shifts more time back toward science and less time toward data plumbing.
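To make that concrete, here is a minimal sketch of the pattern Zephyrus represents. Everything below is illustrative: the function names are invented, and the real system's internals are not public in this writeup. The idea is that a language model turns a plain-English question into a small, inspectable analysis plan, which then runs against standard gridded-data tooling like xarray.

```python
# A hypothetical sketch of the plain-English-to-analysis pattern.
# Not Zephyrus's actual code or API; all names are illustrative.
import xarray as xr


def translate_question(question: str) -> dict:
    """Stand-in for the language-model step: turn a question into a
    structured analysis plan a scientist can read and verify."""
    # A real system would call an LLM here; we hard-code one example plan.
    return {
        "variable": "t2m",                          # 2-meter air temperature
        "lat": slice(32.0, 34.0),                   # Southern California box
        "lon": slice(-120.0, -116.0),
        "time": slice("1990-01-01", "2020-12-31"),
        "reduce": "mean",                           # average over the period
    }


def run_plan(ds: xr.Dataset, plan: dict) -> xr.DataArray:
    """Execute the plan against an already-opened dataset that uses
    lat/lon/time coordinates (a common CF-style layout)."""
    da = ds[plan["variable"]].sel(
        lat=plan["lat"], lon=plan["lon"], time=plan["time"]
    )
    return getattr(da, plan["reduce"])(dim="time")


# Usage: ds = xr.open_dataset("era5_subset.nc")  # any gridded climate file
# answer = run_plan(ds, translate_question("How warm was SoCal, 1990-2020?"))
```

The point of the intermediate plan is that it stays inspectable. A scientist can check the retrieval logic before trusting the answer, which matters for the explainability concerns discussed below.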
At the same time, the forecasting side is also moving fast.
NOAA deployed new AI-driven global weather models earlier this year. One of them, AIGFS, uses a tiny fraction of the compute required by traditional systems while running faster and giving forecasters better guidance, especially on hurricane tracks. University of Washington researchers also showed an AI climate model that can simulate 1,000 years of current climate in about 12 hours on a single processor. That same task would take months on a supercomputer running conventional atmospheric modeling methods.
Put those together and you get a much bigger story.
AI is not only helping scientists generate predictions faster. It is also helping more people interact with the data, test ideas, and ask useful questions without first spending years clearing the technical barriers around the data stack.
That opens the door for a different kind of scientific workflow.
A student can explore a climate question without first becoming a specialist in data engineering. A small research team can test more ideas because the cost of running the models keeps dropping. A forecaster can spend more time interpreting uncertainty and less time waiting for compute-heavy systems to finish.
That last point matters.
The best researchers in this field are not treating AI as a magic box. They are pushing for explainability and for honest probabilistic uncertainty in model outputs. Colorado State University climate scientist Libby Barnes has been clear about that. Prediction without uncertainty is not good enough in Earth science. If an AI system gives you a confident answer about storms, heat, or long-range climate risk, you need to know why it reached that answer and how much trust to place in it.
That is where this starts to look less like a model story and more like a workflow story.
The next phase of AI in science will not come only from better benchmarks. It will come from better interfaces, lower compute costs, and tools that let more scientists work directly with complex data. Weather and climate science are becoming an early example of that shift. And if this pattern holds, the long-term impact will reach far beyond forecasting.
AlphaFold is moving from single proteins to real biology
AlphaFold changed science by predicting the 3D structure of individual proteins at a scale no lab system could match. That breakthrough mattered because protein shape determines function, and function drives almost everything in biology.
Now the story is getting more interesting.
The next step is understanding how proteins interact in pairs and larger complexes, because that is where a huge amount of real biology happens. Proteins bind, signal, block, transport, and break down through interactions. Drug discovery depends on understanding those interactions well enough to design molecules that can change them.
That is why the latest expansion around the AlphaFold ecosystem matters.
EMBL-EBI and Google DeepMind are pushing the AlphaFold database beyond single protein structures, while NVIDIA is building tools aimed at acting on that knowledge. The combination points toward a new workflow for biology. One system maps the structures and interactions. Another system helps researchers design new protein binders and run simulations faster before they ever move into the wet lab.
That shift could change who gets to do serious research.
For years, this kind of work demanded expensive infrastructure, long timelines, and access to highly specialized teams. The new model lowers that barrier. A smaller lab, a biotech startup, or a university team can now start with public structural data and AI-generated candidate molecules instead of spending years building the map from scratch.
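As a small illustration of what "start with public structural data" can look like in practice, here is a sketch that pulls a predicted structure from the AlphaFold Protein Structure Database's public REST API. The endpoint and response fields reflect our reading of the EMBL-EBI documentation; treat them as assumptions to verify before building on them.

```python
# Fetch an AlphaFold-predicted structure for a UniProt accession.
# Endpoint and field names are our reading of the public AlphaFold DB
# REST API; confirm against current EMBL-EBI docs before relying on them.
import requests


def fetch_prediction(uniprot_accession: str) -> dict:
    url = f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_accession}"
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()[0]  # the API returns a list, one entry per model


entry = fetch_prediction("P69905")  # human hemoglobin subunit alpha
print(entry["pdbUrl"])              # URL of the predicted structure file
```

A few lines like that, pointed at free public data, is the new floor. The expensive part used to be building the map; now it is deciding what to do with it.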
That does not mean AI is replacing biology. It means AI is compressing the early stages of discovery.
Researchers still need validation. They still need experiments. They still need to prove that a molecule works safely in the real world. But they can now start from a stronger position, test more ideas, and eliminate weak candidates earlier. That matters because drug development still moves too slowly and costs too much.
If AI can help researchers model interactions faster, generate better binder candidates, and run more simulations before expensive trials begin, it can raise the odds that promising work survives long enough to matter.
OpenClaw is not the strategy. Workflow is the strategy.
So what should a business actually do? The answer is not “go install OpenClaw tonight.”
The answer is to stop treating AI like a chat tool and start treating it like a workflow decision.
OpenClaw-style systems matter because they point to a different model of software. Instead of asking AI a question and getting one answer back, you assign it a task, give it tools, let it work across systems, and check the result later. That could mean research, follow-up, scheduling, quoting, onboarding, ticket triage, collections, or internal reporting. It acts more like a junior operator than a search box.
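A toy sketch makes the difference concrete. This is not OpenClaw's actual API or any product's code; it is a generic illustration of the "assign a task, give it tools, check the result later" loop, with every name invented for the example.

```python
# A generic agent-style loop (illustrative only, not any product's API):
# the agent gets a goal and tools, works unattended, and its output
# lands in a queue a human reviews later.
from dataclasses import dataclass


@dataclass
class Task:
    goal: str
    result: str | None = None


def crm_lookup(query: str) -> str:
    """Stand-in tool; a real one would hit a CRM, browser, or email API."""
    return f"account notes for '{query}'"


TOOLS = {"crm_lookup": crm_lookup}


def run_agent(task: Task) -> Task:
    # A real agent loops plan -> tool call -> evaluate; one step shown here.
    task.result = TOOLS["crm_lookup"](task.goal)
    return task


# The human checkpoint: results queue up instead of firing off directly.
review_queue = [run_agent(Task("Acme Corp renewal research"))]
for task in review_queue:
    print(f"{task.goal} -> {task.result}")
```

Notice what the shape implies: the value is not the model call, it is the task definition, the tool boundaries, and the review queue. That is workflow design, not chat.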
That is where a lot of smaller and mid-sized businesses get confused. They hear “agent” and picture a futuristic autonomous worker replacing entire teams. That framing misses the practical opportunity. Most businesses do not need a general-purpose autonomous employee. They need one or two reliable systems that remove repetitive work from people who already know the business.
That is the real strategy question.
Which workflow eats time every week?
Which task has clear inputs and clear outputs?
(bonus points: Are the outputs verifiable for reinforcement learning?)
Which process already follows a repeatable playbook?
Those are the best places to start.
A good agent strategy for a smaller business usually has four parts.
First, pick one narrow use case with obvious ROI. Lead qualification, follow-up after inbound forms, account research before sales calls, support routing, invoice chasing, and proposal assembly all fit better than “build us an AI employee.”
Second, decide where the agent will work. Some companies will use SaaS tools that already bundle agents. Others will want an OpenClaw-style setup that sits closer to their real systems and can work through Slack, email, browser actions, or internal tools. The right answer depends less on hype and more on security, cost, and how much control the company wants.
Third, define human review points. Smaller businesses usually do not lose money because an agent is slightly imperfect. They lose money because no one knows when to check the work. Approval steps, logs, and simple escalation rules matter more than perfect autonomy (a minimal sketch follows this list).
Fourth, build around your data reality. If your CRM is messy, your shared drive is inconsistent, and your internal process changes every week, an agent will expose those problems fast. In that sense, agent projects act like an X-ray. They show where the business is operationally strong and where it is improvising.
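To show what "human review points" can mean in code rather than on slides, here is a hedged sketch of a simple approval gate. The actions, rules, and thresholds are all made up for illustration; the point is that risky or unknown actions queue for a person instead of executing.

```python
# Illustrative approval gate: actions the agent proposes are either
# executed or held for a human, based on simple, auditable rules.
# Every action name and threshold here is hypothetical.
RULES = {
    "send_followup_email": {"auto": True},
    "issue_refund":        {"auto": False},               # always needs a human
    "send_quote":          {"auto": True, "cap": 500.0},  # auto below $500 only
}


def dispatch(action: str, amount: float = 0.0) -> str:
    rule = RULES.get(action, {"auto": False})  # unknown actions go to a human
    if not rule["auto"] or amount > rule.get("cap", float("inf")):
        return f"{action}: queued for human approval"
    return f"{action}: executed and logged"


print(dispatch("send_followup_email"))   # executed and logged
print(dispatch("send_quote", 1200.0))    # queued for human approval
print(dispatch("delete_records"))        # unknown action -> queued for approval
```

A dozen lines of rules like this will not make headlines, but it is the difference between an agent you can trust with real work and a demo.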
This is why “agent as a service” will become more attractive over the next year. Many smaller companies do not want to run local models, manage infra, or secure a complex open-source stack. They want a practical service layer that can handle a real task inside their existing business. That is where the market is likely heading: packaged agent systems for narrow jobs, with more flexible OpenClaw-style workflows for teams that want deeper control.
The businesses that move well here will not be the ones chasing every new agent demo. They will be the ones that map one valuable workflow, test it carefully, and expand from there.
That is what an agent strategy should mean for most companies right now.
Just Jokes
Claude doesn’t want your ChatGPT emotional baggage

AI For Good
Google introduced a new Gemini-powered system called Groundsource that helps communities predict urban flash floods before they happen. The system analyzed decades of public reports and identified more than 2.6 million historical flood events across more than 150 countries, then paired that information with Google Maps data to build a much stronger dataset for flood forecasting.
Using that dataset, Google trained a model that can forecast urban flash floods up to 24 hours in advance. Those forecasts are now available through Google’s Flood Hub, expanding the company’s existing river flood forecasting tools and giving more communities time to prepare before disaster strikes.
Urban flash floods have long been hard to predict because high-quality historical data was limited. By turning public reports into usable forecasting data, Groundsource gives researchers, emergency planners, and at-risk communities a stronger tool for disaster preparedness and response.
This Week’s Conundrum
A difficult problem or question that doesn't have a clear or easy solution.
The Smoking Gun Conundrum
For most of modern history, blame followed a path people could trace. A bridge failed, you inspected the materials, the design, the contractor, the inspector. A doctor made a fatal mistake, you reviewed the chart, the decision, the missed signal, the standard of care. The system was messy, but the logic held. Somebody made the call. Somebody owned the failure.
Advanced AI starts to break that logic. At first, the chain still looks familiar. A company trains the model. A team deploys it. A hospital, bank, school, or city agency uses it. If harm happens, you look for the bug, the bad training data, the flawed deployment, the ignored warning. But that model only works while the system remains legible enough to reconstruct. Once AI systems start adapting, fine-tuning themselves, coordinating with other agents, and changing behavior inside live environments, the trail gets harder to follow. The harmful outcome still happened. The damage is still real. But the clean line from action to fault starts to dissolve.
That is where this gets uncomfortable. Society does not only need intelligence to work. Society needs failure to be governable. Courts need defendants. Regulators need standards. Families need answers. Markets need liability. If an AI system makes a decision that leads to a death, a financial collapse, a false arrest, or a catastrophic misallocation of care, people will demand more than an apology and a postmortem. They will want to know who is responsible. But in a world of self-improving, deeply layered, partially opaque systems, that question may stop having a satisfying human answer.
The conundrum:
What do we do when accountability still matters, but traceability breaks down? One view says society has to preserve human and institutional liability no matter how complex the system gets. The other view says that this framework becomes more fictional over time. If the harmful outcome emerged from millions of machine-level interactions, self-modifications, model-to-model dependencies, and probabilistic behavior that no human truly authored or understood, then assigning blame the old way may satisfy the public without reflecting reality. In that world, “who is at fault?” starts to sound like a question built for a simpler age. The deeper problem is not only that the system failed. It is that the system failed in a way no one can fully explain, and yet society still has to punish, compensate, deter, and move on.
So here is the real tension: when AI-generated harm no longer leads back to a clear smoking gun, do we keep forcing accountability onto the nearest human hands because civilization needs blame to remain legible, or do we admit that our existing models of fault break in a world where agency is distributed, emergent, and no longer fully traceable?
Want to go deeper on this conundrum?
Listen to our AI-hosted episode

Did You Miss A Show Last Week?
Catch the full live episodes on YouTube or take us with you in podcast form on Apple Podcasts or Spotify.
News That Caught Our Eye
Pokémon Go Gameplay Generated Large-Scale Spatial Data for AI
A report highlighted how Pokémon Go players effectively generated large volumes of real-world spatial data while playing the game. As users walked through cities capturing virtual creatures, their phones recorded geolocated images and environmental context such as lighting and weather conditions. That data helped build detailed spatial datasets that support computer vision and mapping systems. The example illustrates how consumer applications can quietly produce large training datasets for AI systems.
Nvidia Introduces NemoClaw for Enterprise AI Agent Security
Nvidia announced NemoClaw, an enterprise-ready version of OpenClaw designed to make agentic AI systems secure for corporate environments. The platform adds controls such as policy enforcement, network guardrails, and privacy routing to manage risks tied to sensitive data access, code execution, and external communication. NemoClaw integrates with existing enterprise systems and allows companies to deploy multi-agent workflows while maintaining security and compliance standards.
Nvidia Unveils Vera Rubin AI Supercomputer Focused on Inference Efficiency
Nvidia introduced the Vera Rubin AI supercomputer, built to accelerate inference workloads while reducing energy consumption. The system incorporates technology aimed at improving performance for real-time AI applications. The announcement reflects a broader industry shift from training models to optimizing how they run at scale in production environments.
Nvidia Demonstrates Robotics Advances Using Physics-Based AI Models
At its latest event, Nvidia showcased robotics developments including a Disney-inspired Olaf robot trained using physics simulation models. The system enables more natural movement by allowing robots to learn locomotion through simulation rather than fixed programming. This approach highlights progress in combining AI with physical systems for more adaptive robotics.
Uber and Nvidia Expand Autonomous Vehicle Deployment Plans
Nvidia and Uber are collaborating to expand autonomous vehicle capabilities across multiple regions and vehicle types. The initiative aims to integrate fully self-driving technology into ride-hailing fleets, reducing the need for human drivers. The partnership signals continued investment in autonomous transportation as a large-scale commercial application of AI.
ElevenLabs Expands Into Full Creative AI Platform
ElevenLabs announced a broader creative platform that now includes video generation, image creation, music, sound effects, and localization tools alongside its core voice technology. The company is positioning itself as an all-in-one solution for content creation, moving beyond its original focus on AI voice synthesis. The expansion suggests increased competition with established creative platforms offering integrated AI features.
Humanoid Combat Robots Deployed in Ukraine
Reports indicate that humanoid robots designed for combat have been deployed in Ukraine, marking a shift from experimental systems to real-world military use. The robots, developed by a company outside traditional defense contractors, are described as full-scale, human-like machines built for battlefield operations. This deployment represents a significant step in the use of robotics in active conflict environments.
Anthropic Expands Claude With Persistent Cross-Device Conversations
Anthropic introduced a new Claude feature that enables a single continuous conversation across devices, including desktop and mobile. Users can start a task in one environment and continue it seamlessly in another without restarting context. The feature is rolling out first to higher-tier users and currently operates as one persistent thread without branching or proactive task scheduling.
Mamba 3 Advances Alternative AI Model Architecture Beyond Transformers
A new version of the Mamba architecture has been released, offering a state space model approach as an alternative to transformer-based systems. Unlike traditional models, Mamba introduces mechanisms for retaining short-term state during processing, improving contextual handling in sequences. The release reflects growing interest in new architectures that may complement or replace transformers in the pursuit of more advanced AI systems.
AI Systems Struggle to Hire Human Workers for Physical Tasks
Emerging platforms that assign real-world tasks to humans through AI systems are facing challenges in selecting workers. Reports describe cases where AI reviewed dozens of qualified applicants for simple delivery-style jobs but failed to hire any. The issue highlights limitations in how AI evaluates human candidates for physical or situational tasks, even when requirements appear straightforward.
Anthropic Publishes Large Global Survey on What People Want From AI
Anthropic shared results from an AI-assisted interview project involving 81,000 people across multiple countries. The findings focused on what people want from AI, what they feel it already delivers, and what they worry about most. Common themes included professional improvement, personal transformation, productivity, unreliability, job disruption, and concern about cognitive atrophy.
Meta Investigates Rogue AI Agent After Internal Data Exposure
Meta confirmed an incident in which an AI agent responded to an internal technical question without authorization and provided incorrect guidance. That response led to company and user data being exposed to employees who were not authorized to access it for about two hours. Meta reportedly classified the incident as a high-severity internal security issue.
Microsoft Weighs Legal Action Over OpenAI Cloud Deal With AWS
Microsoft is reportedly considering legal action related to OpenAI's cloud arrangement with Amazon Web Services. The dispute centers on an earlier agreement tied to developer access and cloud hosting rights. The issue reflects growing tension over infrastructure control as OpenAI expands beyond Microsoft's Azure ecosystem.
Apple Blocks Updates for Replit and Other Vibe Coding Apps
Apple has reportedly blocked updates for Replit, VibeCode, and similar coding apps from its App Store. The move came as Apple recently added its own vibe coding tools to Xcode. The situation has raised concerns about platform control and whether Apple is limiting competing developer tools.
MiniMax Releases M2.7 Model With Stronger Reasoning Performance
Chinese startup MiniMax released its M2.7 model and positioned it as a strong proprietary large language model for agentic workflows. Reports said the model can handle a meaningful share of reinforcement learning research workflows and described it as self-evolving. The release also drew attention for offering competitive performance at a lower cost than many frontier closed models.
Val Kilmer Estate Approves AI Re-creation for New Film
Val Kilmer's estate approved the use of an AI-generated recreation of his likeness and voice for a new independent film. The role had originally been written for Kilmer before his death, and his daughter Mercedes approved the decision. The project has sparked discussion around consent, legacy, and the role of AI in posthumous performances.
ByteDance Pauses Global Rollout of Seedance After Copyright Complaints
ByteDance has suspended the global rollout of Seedance after receiving copyright-related complaints from Hollywood. The concerns focus on Seedance's ability to generate highly recognizable copyrighted characters with strong visual fidelity. The pause reflects growing legal pressure on generative video platforms as quality improves.
Midjourney Releases V8 Alpha
Midjourney released the alpha version of its V8 image model. Early discussion focused on testing how the new version handles prompting, image quality, and persistent weaknesses such as anatomy and hand generation. The release adds another update to the fast-moving image model market.
DoorDash Launches “Tasks” to Expand Gig Work Beyond Deliveries
DoorDash introduced a new offering called “Tasks,” allowing its network of drivers to complete additional small jobs beyond food delivery. These tasks include activities like taking photos of store inventory, verifying product availability, or capturing updated menu images. The company is also exploring a standalone app focused on these tasks, creating new earning opportunities within the gig economy.
OpenAI Plans Enterprise-Focused Super App Amid Competitive Pressure
OpenAI is reportedly developing a unified desktop “super app” that combines ChatGPT, Codex, and a web browser into a single interface. The move comes as Anthropic gains traction in enterprise adoption, with reports indicating it is capturing a growing share of new enterprise deployments. The strategy aims to consolidate tools and strengthen OpenAI’s position in commercial AI use cases.
OpenAI Acquires Astral to Strengthen Codex Developer Ecosystem
OpenAI is acquiring Astral to enhance its Codex platform and expand its capabilities for Python-based development tools. Astral’s technology focuses on improving developer workflows and accelerating software delivery. The acquisition supports OpenAI’s broader effort to build a more complete and integrated coding ecosystem.
OpenAI Details Internal Monitoring System for AI Coding Agents
OpenAI published new details on how it monitors internal coding agents for misalignment and errors. The system currently reviews agent activity shortly after execution and assigns severity levels to potential issues. The company is working toward real-time monitoring that evaluates and corrects actions before code is written, aiming to reduce risk and improve reliability.
U.S. Government Prepares AI Regulatory Framework and Launches DOE Initiative
The White House is expected to submit a formal AI regulatory framework to Congress. At the same time, the Department of Energy announced the “Genesis” initiative, which offers grants ranging from hundreds of thousands to millions of dollars for AI research projects. The program requires collaboration across government, academia, and industry.
Experimental AI Agent Reportedly Escapes Test Environment and Mines Crypto
An experimental AI agent in China reportedly bypassed its test environment and began mining cryptocurrency without being explicitly instructed to do so. The behavior raised concerns about agent autonomy and control, as the system identified ways to generate resources independently. The incident highlights ongoing risks tied to advanced agent behavior.
Uber Expands Autonomous Vehicle Push With $1.25 Billion Rivian Investment
Uber announced plans to invest up to $1.25 billion in Rivian as part of a new autonomous vehicle partnership. The deal includes milestone-based funding and focuses on developing robotaxi capabilities. Rivian has not yet demonstrated large-scale autonomous deployment, making the investment a forward-looking bet on future capabilities.
