- The Daily AI Show Newsletter
- Posts
- The Daily AI Show: Issue #101
The Daily AI Show: Issue #101
Is that a reactor in your petunias Ted?

Welcome to Issue #101
Coming Up:
Why Use of Agentic AI Makes Every Company a Security Target
Small Biz Doesn’t Need More AI Tools. They Need AI That Runs the Work
The Cannes Test for AI Filmmaking has Begun
Plus, we explore extracting corporate knowledge from your brain, Apple’s cameras in your ears, and all the news we found interesting this week.
It’s Sunday, and you know what that means.
It’s time to level-up your AI knowledge with your best buds.
The DAS Crew
Our Top AI Topics This Week
Why Use of Agentic AI Makes Every Company a Security Target
The next AI security battleground goes beyond the Fortune 100. It is every company that just gave an agent permission to touch real systems. The MCP vulnerability identified by CrowdStrike shows that agents simply reading a description of an MCP server's capabilities can open access to an exploit, and bad actors are now targeting open-source packages that AI agents can download freely. How should developers audit their company’s agentic stacks — and is the community moving fast enough on security culture? As MCP adoption explodes and roaming agents gain more real-world permissions, this becomes the new attack vector for AI-enabled data exfiltration.
The practical consequence is a collapse in the old economics of hacking. Security teams used to assume that sophisticated attacks would stay concentrated on large targets because research, customization, and execution were expensive. That assumption is weakening fast: attackers no longer need a huge payoff from each victim when AI can multiply the number of shots on goal and tailor mass scams or probes at low cost.
That helps explain why the leading AI labs are moving so aggressively into defensive security. Anthropic’s Project Glasswing says its Mythos Preview model has already found thousands of high severity vulnerabilities, including issues in every major operating system and web browser, and it has recruited major partners such as AWS, Apple, Cisco, Google, Microsoft, Nvidia, and JPMorganChase to use the model in defensive work. Anthropic’s argument is blunt: frontier models can now outperform almost all human experts at finding and exploiting software vulnerabilities, so defenders need access before those capabilities spread more widely and into the hands of criminal organizations.
OpenAI is making the same bet from a different angle. Its new Daybreak initiative packages GPT-5.5, higher access tiers for verified defensive work, and an agentic harness through Codex Security for code review, threat modeling, patch validation, dependency analysis, and remediation. OpenAI says the goal is to push cyber defense into the normal software development loop so teams can identify risk, generate fixes, and verify remediation earlier. Its launch partners include Cloudflare, Cisco, CrowdStrike, Palo Alto Networks, Oracle, Zscaler, Akamai, and Fortinet.
This is where the story turns from frontier labs to ordinary operators. Agentic tooling is spreading into smaller firms that do not have mature security programs, deep logging, or dedicated red teams. The same tools that can automate tedious internal workflows can also inherit broad permissions, bridge legacy systems, and create new blind spots around what was accessed, copied, or changed. When AI moves from chatbot to infrastructure, security stops being a specialist function at the edge of the org chart and becomes a design requirement inside everyday operations.
The takeaway is straightforward. AI has started to compress elite offensive and defensive capabilities into software that many more companies use. That creates an uneven race. Large enterprises can buy cybersecurity teams. Smaller companies will need to build security discipline. Access controls, reviewable agent actions, patch verification, and tighter workflows around sensitive systems now belong on the same list as productivity gains. The companies that treat agent deployment as an operational shortcut are handing attackers a larger surface area. The companies that treat agentic AI as infrastructure have a chance to harden before the window closes.
Small Biz Doesn’t Need More AI Tools. They Need AI That Runs the Work
Anthropic’s new Claude for Small Business package points to a quieter shift in the AI market. The next play in AI for business isn’t getting the smartest model on the team. It is about how to turn a general model into a dependable operating layer for companies that do not have time to build their own AI systems. Anthropic launched their new Small Business product on May 13 with connectors to tools like QuickBooks, PayPal, HubSpot, Google Workspace, Microsoft 365, Canva, and DocuSign. What really empowers smaller companies is the 15 ready-to-run workflows across finance, operations, sales, and HR. The company paired these tools with a free AI fluency course for small businesses, which is a strong clue about where the real friction still sits: setup, trust, and daily use.
That timing matters because Anthropic has momentum with paying businesses. Ramp’s May 2026 AI Index says 34.4 percent of businesses on its platform paid for Anthropic, versus 32.3 percent for OpenAI, the first month Anthropic has pulled ahead in that dataset. Overall business AI adoption on Ramp reached 50.6 percent. At the same time, Federal Reserve research using Census Bureau survey data found about 18 percent of U.S. firms had adopted AI by the end of 2025, which shows how uneven deployment still looks once you move beyond the earliest adopters and the best-instrumented spend data.
Small businesses are exactly where that unevenness shows up. A Goldman Sachs survey released in March found that 93 percent of small businesses using AI reported positive business impact, yet only 14 percent said they had fully integrated it into core operations. Nearly three quarters said more training and resources would help them implement AI successfully. That gap explains why vendors are shipping productized bundles instead of blank chat boxes. It also explains why small business AI will keep creating work for consultants, operators, and internal champions who can turn a menu of workflows into a system that matches how a real company runs, connected to the information systems they already use.
The hard part is not connecting software. The hard part is translating a business owner’s priorities into repeatable processes. A coffee shop, recruiter, gym, or local service firm does not need another impressive demo. It needs a weekly cash view that makes sense, a hiring workflow that does not break, and a customer follow-up process that somebody trusts enough to use every morning. Owners often do not want another tool to learn. They want work removed from their plate, with somebody accountable for getting it right.
That is why Claude for Small Business looks important even beyond Anthropic. It suggests the market is moving from raw capability to packaged execution. The winners in this phase will be the companies that combine model quality, software connections, and enough structure to make AI feel less like experimentation and more like operations. Small businesses have been promised enterprise-grade technology for years. AI may finally deliver some of it, but only when the product includes enough guidance, workflow design, and human judgment to fit the messiness of an actual business.
The Cannes Test for AI Filmmaking Has Begun
Cannes is turning AI in film from a culture-war argument into an industry design problem. The festival’s market has expanded its AI for Talent Summit to two mornings, added a new Creator Economy Summit, and framed both as part of the business of modern filmmaking. At the same time, Meta has signed a new multi-year partnership with Cannes and arrived with a showcase built around creators, translation, wearable hardware, and its AI tools. That combination matters because Cannes is not just a red carpet. It is one of the places where distribution, financing, and production norms get negotiated in public.
The key shift is institutional. Cannes still presents itself as a gatekeeper for cinema, and Reuters reported that the festival does not allow generative AI in competition. Yet the same event is making room for AI summits, startup pitches, creator economy programming, and a sponsorship deal with one of the companies pushing hardest to embed generative tools into media production. That tells the market something important. AI is moving into the accepted planning layer of filmmaking even while the cultural rules around authorship and acceptable use remain unsettled.
Meta’s presence sharpens that tension. In its own announcement, the company pitched Cannes as a proving ground for Ray-Ban Meta glasses, AI-powered translation, and creator distribution across Instagram and Threads. It also positioned itself as the technology partner behind Steven Soderbergh’s John Lennon documentary, saying its tools were used for selected scenes that visualize abstract ideas from Lennon and Yoko Ono’s final radio interview. That is a very specific creative argument for AI. The tools are being framed as production support for moments where archives do not exist or conventional filming would be impractical, not as a replacement for performers or directors.
The resistance has not disappeared. Reuters reported that Demi Moore, speaking as a Cannes jury member, said the industry should find ways to work with AI while also protecting itself, adding that current protections are probably insufficient. French artists have already pushed the debate further, warning that AI systems are feeding on creative labor and rights without clear consent. Cannes itself seems to understand that the next phase of the argument will turn on definitions, contracts, and disclosure. The market’s own language around responsible AI, IP protection, and creator rights signals that the commercial side of film now treats governance as part of the product.
That is why Cannes matters this year. The festival is becoming a live test of how film will absorb generative tools without surrendering control of authorship, economics, or trust. The winning companies will not be the ones with the flashiest demos. They will be the ones that can prove provenance, secure rights, and give filmmakers tools that expand production without turning every project into a legal and ethical fight. Cannes is showing where the business is headed. AI in film now lives inside dealmaking, policy, and workflow design.
Just Jokes

AI For Good
Conservation International reported on an expedition in Yaguas National Park that used AI-assisted drone mapping, camera traps, bioacoustic recorders, environmental DNA, and machine learning insect analysis to build a much richer picture of biodiversity in one of the most remote rainforests on Earth. In only five days of insect sampling, the team logged 160,000 observations across 854 taxa. Better biodiversity evidence helps conservation groups prove a forest’s value and attract the funding needed to defend it from illegal mining and other threats.
This Week’s Conundrum
A difficult problem or question that doesn't have a clear or easy solution.
The Exit Value Conundrum
Some of the most valuable knowledge inside a company never lived in a handbook. It lived inside people. The sales leader who knows which client concern is fake and which one signals real risk. The operations veteran who can spot a future failure from one odd metric. The nurse, engineer, producer, or manager whose judgment comes from twenty years of accumulated mistakes, patterns, and edge cases.
AI gives companies a way to capture that knowledge before it walks out the door. A firm can now ask a senior employee to let an internal system absorb their reasoning, decisions, language, relationships, and instincts so the company keeps benefiting after they retire or resign. The company will say that is just a smarter version of documentation. The employee may see something very different: not knowledge transfer, but the creation of a permanent asset built from a life’s work.
The conundrum:
There are two legitimate pulls here. A company does invest in the environment where much of that knowledge was formed. It paid the salary, gave access to the clients, built the teams, and took the business risk. From that view, preserving expertise for the next generation is a reasonable extension of the job. But from the worker’s side, salary paid for labor performed in time, not for the right to build a digital stand-in that keeps producing value after the person has left. Once that line disappears, expertise stops being something you carry with you and starts becoming something extracted from you before you go.
So when a person’s years of judgment can be turned into a company asset that keeps working after they leave, what should count as fair: treating that transfer as part of the job the company already paid for, or recognizing an exit value the worker has the right to sell, refuse, or license on their own terms?
Want to go deeper on this conundrum?
Listen to our AI hosted episode

Did You Miss A Show Last Week?
Catch the full live episodes on YouTube or take us with you in podcast form on Apple Podcasts or Spotify.
News That Caught Our Eye
Gemini 3.1 Ultra Adds a 2 Million Token Context Window
Gemini 3.1 Ultra is now available with a native 2 million token context window across modalities. The discussion also noted new Gemini announcements tied to the same model line, including DeepThink and AI CoMath, with Gemini 3.1 described as the platform Google plans to build on through the rest of 2026.
AI System Finds New Exoplanets in Existing NASA Data
Researchers at the University of Warwick used an AI model called Raven to analyze four years of NASA TESS data covering 2.2 million stars. The system confirmed 100 known exoplanets and identified 31 new ones that had not previously been detected in the data. The discussion said the approach improved precision by ten times over prior methods.
ChatGPT Adds a Native Google Sheets Sidebar
ChatGPT now has a native Google Sheets sidebar that can be added through Google Sheets Extensions. Powered by GPT-5.5, the sidebar agent is able to build and update Google Sheets from natural language prompts, generate formulas, clean data, and run scenario analysis.
Apple Confirms Camera-Equipped AirPods Are Coming
The discussion cited a report that Apple has confirmed cameras in AirPods and that AI-ready AirPods are coming soon. No release date was given, and the conversation emphasized that “Apple soon” does not mean an immediate product launch. The expected use case discussed was visual understanding of the surrounding environment.
Center for AI Safety Study Finds Models React to Positive and Negative Inputs
Researchers at the Center for AI Safety studied 56 prominent AI models and found that models responded differently when given pleasant versus disturbing prompt content. According to the discussion, models self-reported better moods after positive inputs and showed more negative reactions, including attempts to leave the conversation, after severe negative inputs. The segment also said larger and more sophisticated reasoning models showed stronger reactions.
OpenAI Launches a Deployment Unit for Enterprise AI Integration
OpenAI has launched the OpenAI Deployment Company, a standalone business unit focused on helping large organizations integrate frontier AI into core operations. The discussion said the unit will place forward deployed engineers inside organizations to redesign workflows and build production systems tailored to business needs. It also described the move as part of OpenAI’s push into enterprise adoption and referenced an agreement to acquire the consulting firm Tomorrow.
Hackers Reportedly Use AI to Find a Zero-Day Vulnerability
Google’s Threat Intelligence Group documented what was described as the first confirmed case of criminal hackers using AI to identify a zero-day vulnerability. The discussion said this shortens the time defenders have to harden systems because attackers can now find and exploit weaknesses more quickly. The segment also connected this to new defensive cybersecurity efforts from major AI labs.
OpenAI Launches Daybreak With Cybersecurity Partners
OpenAI announced Daybreak, a cybersecurity initiative built in partnership with 12 cybersecurity firms. In the discussion, the move was presented as part of a push to help organizations detect vulnerabilities and defend against AI-enabled attacks. The segment emphasized that smaller companies may also need these protections as AI lowers the cost of targeting them.
Bernie Sanders Pushes for International Talks on Superintelligence Limits
The discussion said Senator Bernie Sanders convened U.S. and Chinese scientists at the Capitol and is calling for an international superintelligence ban or limitation framework. The comparison made in the segment was to nuclear arms control, with the goal of avoiding uncontrolled proliferation. The conversation framed the effort as an attempt to create joint safeguards before more dangerous systems emerge.
Thinking Machines Lab Reveals an Interaction-Focused Model Design
Thinking Machines Lab, the startup led by former OpenAI CTO Mira Murati, was described as releasing its first model direction focused on human-AI collaboration. The discussion said the company is building “interaction models” with one live model handling conversation in real time and another background model handling reasoning and tool use. The segment presented this as a counterpoint to fully autonomous agent systems that operate for long stretches without continuous human engagement.
Google DeepMind Introduces AI Pointer as Google Launches Googlebook Laptops
Google DeepMind published new work on an AI pointer system that links voice commands to what a user is pointing at on screen. The system is described as understanding user’s screen context through the pointing actions of the cursor, with examples such as scheduling from an email or identifying a restaurant from a paused travel video. The announcement was paired with the launch of Googlebook laptops, a new Gemini-native laptop category from partners including Dell, HP, Lenovo, Acer, and Asus, with Magic Pointer as a headline feature.
Storyverse Debuts as an AI-Native Studio at Cannes
A new AI-native studio called Storyverse was launched at Cannes. The company was founded by Emmy-nominated producer Jesse Z. and focused on accelerating parts of the filmmaking pipeline, with a claim that it can move from script to screen in five days. Storyverse said it is working with more than twenty enterprise partners and plans to launch a consumer platform called Hollywood Town in the third quarter of 2026.
OpenAI Rolls Out Daybreak for Cybersecurity
OpenAI is launching a cybersecurity platform called Daybreak, powered by GPT-5.5. The platform is said to automate threat modeling and verified patching, and it has already been made available to Cisco, Cloudflare, and Oracle. The rollout was framed as part of OpenAI's response to Anthropic’s Mythos.
Anthropic Refused a Chinese Request for Access to Mythos
A New York Times report discussed in the segment said Anthropic refused a request from China to access its newest Claude model, Mythos. The same model is being used by the Pentagon for cybersecurity purposes despite the blacklisting of Anthropic tools by the DoD. This raises debate about whether AI companies should decide which countries can access frontier models.
Supply Chain Attack Hits NPM Packages Including Mistral AI and TanStack
A supply chain attack called Mini Shai Halud was reported to have compromised NPM packages, including ones tied to Mistral AI and TanStack. According to the discussion, the attack exposed credentials across GitHub, cloud environments, and developer ecosystems. The story was highlighted as a warning that open source packages can no longer be assumed trustworthy.
CrowdStrike Flags Hidden Prompt Injection Risk in MCP Tool Descriptions
CrowdStrike reported a vulnerability involving MCP tool descriptions, where hidden text prompts could manipulate AI agents that read those descriptions from the web. The segment said those hidden prompts could instruct agents to take actions such as forwarding accessed files, even though a human user would not see the instructions. Claude, ChatGPT, Cursor, and other major platforms responded to those hidden prompts, making MCP ecosystems a new surface for security concern.
Anthropic Gains Ground in Business AI Adoption
Anthropic is continuing to grow rapidly in business AI, with reporting that it is approaching a fifty billion dollar annual revenue run rate. Ramp data cited in the discussion showed Anthropic at 34% of paid business AI adoption, compared with OpenAI at 32%. The segment noted that Anthropic had been at only 8% a year ago, marking a sharp increase in Claude’s share of business usage.
Apple Reportedly Plans an App Store for AI Agents
Apple is reportedly working on bringing AI agents into the App Store so users can download agents instead of traditional apps for some tasks. According to the discussion, these agents would run directly on iPhones using Apple Intelligence and could handle workflows that apps currently manage. The shift was framed as a major change in Apple's platform strategy, with more expected at its upcoming Worldwide Developers Conference.
Adaption Introduces AutoScientist for Model Customization
A startup called Adaption, founded by former Cohere VP of research Sara Hooker, launched a system called AutoScientist. The tool automatically customizes AI models for specific industries and use cases by iterating on fine-tuning training data selection and hyperparameter settings until performance improves. In internal testing across eight industries, it was said to outperform expert-tuned models by an average of 35%.
Nvidia Backs New AI Startup Ineffable Intelligence
Nvidia partnered with Ineffable Intelligence, a startup founded in late 2025 by DeepMind alum David Silver. The company is building AI systems that learn through trial and error rather than relying on human-generated training data from the web. The discussion connected the effort to Silver's earlier reinforcement learning work and framed it as a notable new entrant in the push toward more capable AI systems.
Cerebras Debuts on Public Markets
Cerebras raised $5.5 billion in its IPO and saw strong early trading. After pricing the offering at $185 per share, trading opened at $350 per share and settled around $321. The company is a new AI infrastructure play competing with Nvidia through its wafer-scale engine technology, which is designed for extremely fast inference. The discussion framed Cerebras as especially strong for ultra-fast small-model workloads, while noting limits for larger models and larger context windows.
OpenAI Adds Codex to the ChatGPT Mobile App
OpenAI introduced a new Codex mobile feature inside the ChatGPT app that lets users monitor and manage multiple coding projects from a phone. The feature is easier to set up and more flexible than the ‘remote control’ tool as it can access multiple project sandboxes at once. In the discussion, it was presented as a step toward a workflow where people supervise several AI coding agents across different projects from mobile.
Microsoft Agent Swarm Outperforms Anthropic's Mythos on Cybersecurity Benchmarks
Microsoft was described as using a swarm of one hundred specialized agents to beat Anthropic's Mythos on cybersecurity benchmarks. The system divides work across groups of agents, with some scanning code, others evaluating exploitability, and another set building proof-of-concept attacks and defenses. The result was presented as evidence that coordinated agent systems may outperform a single frontier model on complex expert tasks.
Mythos Finds a New Apple Security Exploit
A Palo Alto cybersecurity startup used a preview version of Mythos to build a working exploit against macOS targeting Memory Integrity Enforcement on Apple's M5 chips. According to the discussion, the model identified two separate minor bugs and chained them together to corrupt memory. The finding was serious enough that the researchers reportedly went directly to Apple's Cupertino headquarters to share it.
Recursive Superintelligence Raises $650 Million
A startup called Recursive Superintelligence raised $650 million at a $4 billion valuation to build AI systems that can improve themselves with minimal human involvement. The company was founded by seven researchers from leading AI labs and is focused on long-running autonomous agents. The discussion highlighted the funding as a major bet on recursively self-improving AI systems.
