The Daily AI Show: Issue #72

The "Real" Thing

Welcome to Issue #72

Coming Up:

How GenSpark Makes Models Think Together

The App Store Inside ChatGPT

When AI Brings the Dead Back to Life

Lovable Cloud Turns Good Planning Into Great Products

Google’s Computer Use Model: Agents That Can See and Click

Plus, we discuss AI’s role in protecting wildlife, city-sized digital twins, AI avatars, and all the news we found interesting this week.

It’s Sunday morning.

With Q4 in full swing, it's time to start thinking about where you want to end 2025 on your AI journey.

There are still roughly 90 days, which is a ton of time to go from zero to hero.

This week’s issue can be your official kickoff.

The DAS Crew - Andy, Beth, Brian, Eran, Jyunmi, and Karl

Our Top AI Topics This Week

How GenSpark Makes Models Think Together

One of the most promising new ideas in AI isn’t about bigger models. It’s about smarter collaboration. GenSpark’s new Mixture of Agents feature lets multiple large language models tackle the same task at once, then compare, reflect, and deliver a single, synthesized answer.

Instead of relying on one model’s judgment, GenSpark distributes the same prompt to several top systems like GPT-5, Claude Sonnet, and Gemini. It reviews each response, identifies where they agree or disagree, and generates a refined version that captures the strongest points from all. The goal is quality through consensus: an AI that double-checks itself before you even see the result.

This approach mirrors “mixture of experts” systems used in research, where different models specialize in reasoning, creativity, or precision. By blending them, GenSpark reduces the odds of error or hallucination while producing responses that feel more balanced and complete. It is also fast, since all models run in parallel before being merged.
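Under the hood, the pattern is simple enough to sketch. The snippet below is our own minimal illustration of the fan-out-and-synthesize idea, not GenSpark's actual code: the ask() helper stands in for whatever provider APIs you would call, and the model names are placeholders.

import asyncio

async def ask(model: str, prompt: str) -> str:
    # Placeholder for a real provider call (OpenAI, Anthropic, Google, ...).
    return f"[{model}] draft answer to: {prompt}"

async def mixture_of_agents(prompt: str, models: list[str], synthesizer: str) -> str:
    # 1. Fan the same prompt out to every model in parallel.
    drafts = await asyncio.gather(*(ask(m, prompt) for m in models))
    # 2. Ask one model to compare the drafts and merge the strongest points.
    candidates = "\n\n".join(f"--- {m} ---\n{d}" for m, d in zip(models, drafts))
    synthesis_prompt = (
        f"Question: {prompt}\n\nCandidate answers:\n{candidates}\n\n"
        "Note where the candidates agree or disagree, then write one refined answer."
    )
    return await ask(synthesizer, synthesis_prompt)

if __name__ == "__main__":
    answer = asyncio.run(
        mixture_of_agents(
            "Summarize the tradeoffs of multi-agent reflection.",
            models=["gpt-5", "claude-sonnet", "gemini"],
            synthesizer="gpt-5",
        )
    )
    print(answer)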

In many ways, this signals where generative AI is headed: smaller models working together rather than one massive one trying to do everything. As compute becomes cheaper, multi-agent reflection could become standard across AI tools, quietly boosting accuracy without users ever realizing how many systems worked behind the scenes.

The App Store Inside ChatGPT

OpenAI’s new Apps SDK transforms ChatGPT from a standalone assistant into a connected operating system. Instead of switching tabs between tools like Canva, Figma, Coursera, or Spotify, users can now access them directly within a chat. The goal is to make ChatGPT a single workspace for everything: creation, communication, and transactions.

The SDK lets developers at companies such as Figma integrate their fully interactive apps right inside the ChatGPT interface. A user can now design a presentation with Canva, browse Zillow listings, book a flight on Expedia, or register for a Coursera class, all without leaving the OpenAI window. Each app provides its own interface, allowing images, widgets, and buttons to appear inside conversations.
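OpenAI has said the Apps SDK builds on the open Model Context Protocol (MCP), so an app is essentially a server exposing actions the chat can call and render. As a rough illustration only (not the official Apps SDK surface), here is what a single action might look like using the community Python MCP SDK; the "donations" app, tool name, and fields are invented for this sketch.

from mcp.server.fastmcp import FastMCP

# Hypothetical mini "donations" app exposed to the chat as an MCP server.
mcp = FastMCP("donations-app")

@mcp.tool()
def create_donation(nonprofit: str, amount_usd: float) -> dict:
    """Start a donation the chat client can render and ask the user to confirm."""
    # A real app would call its own backend here and return whatever its
    # in-chat widget needs to display (confirmation link, receipt id, ...).
    return {"nonprofit": nonprofit, "amount_usd": amount_usd, "status": "pending"}

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default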

For application providers, this creates an entirely new distribution channel. Instead of driving users to websites, brands can meet customers where they already work. Imagine a donation app that lets you give to a nonprofit directly in ChatGPT, or a SaaS company’s customer service tool that handles billing questions in real time without sending users to a separate CS portal on the web.

The challenge will be adoption and trust. Some apps, like Canva or Figma, make more sense on their own platforms where users get access to full creative controls. Others, like booking or education apps, benefit from the simplicity of one-click integration in a popular platform. Over time, developers will learn which experiences belong inside the chat and which should stay outside it.

Google’s Computer Use Model: Agents That Can See and Click

Google’s new Gemini 2.5 Computer Use model marks a major step toward fully autonomous digital agents. Instead of relying on APIs or prebuilt integrations, this system can view a screen, interpret what it sees, and take real actions — clicking, typing, dragging, and confirming inputs in real time.

That visual understanding is powered by pixel-level precision. Unlike earlier systems that relied on simulated page maps, Gemini 2.5 actually sees what a user sees, right down to layout, color, and spacing. It can identify buttons, handle pop-ups, and navigate complex sites that lack developer APIs. For developers, it works inside Google AI Studio or Vertex AI, and it’s compatible with Chromium-based browsers like Chrome and Brave.

In testing, the model demonstrated a major leap in reliability. It not only interpreted screen states correctly but managed multiple tasks at once — comparing tickets on different sites, cross-referencing prices, and filling forms simultaneously. These capabilities move closer to true digital autonomy, where agents act like employees using standard web tools instead of custom automations.
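The working pattern behind these agents is a loop: screenshot the page, ask the model for the next action, execute it, and repeat. Below is a heavily simplified sketch of that loop using Playwright to drive a Chromium browser; plan_next_action is a stand-in for a call to the computer-use model through Google AI Studio or Vertex AI, and the action format shown is our assumption, not Google's actual schema.

from playwright.sync_api import sync_playwright

def plan_next_action(goal: str, screenshot: bytes) -> dict:
    # Stand-in for the computer-use model. A real implementation would send the
    # goal plus the screenshot and get back an action such as
    # {"type": "click", "x": 412, "y": 230} or {"type": "type", "text": "..."}.
    return {"type": "done"}  # placeholder so the sketch runs and exits cleanly

def run_agent(goal: str, url: str, max_steps: int = 20) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url)
        for _ in range(max_steps):
            action = plan_next_action(goal, page.screenshot())
            if action["type"] == "done":
                break
            elif action["type"] == "click":
                page.mouse.click(action["x"], action["y"])  # pixel coordinates
            elif action["type"] == "type":
                page.keyboard.type(action["text"])
        browser.close()

run_agent("Find the cheapest Tuesday ticket", "https://example.com")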

The next leap will come from surpassing this ‘visual’ interaction with painted screens by accessing structured data behind the visual presentation. Future systems will likely shift from pixel interpretation to the Document Object Model (DOM), allowing direct reading and writing of web content in code without the intermediary visual layer. When that happens, websites will need to adapt their designs for AI data readability, just as they once optimized for mobile.

When AI Brings the Dead Back to Life

Sora 2’s latest updates do more than generate lifelike motion. They recreate the movements, expressions, and speech patterns of public figures and private individuals alike. The results are striking and unsettling. Within days of release, social media filled with realistic clips of Robin Williams, Martin Luther King Jr., and Bob Ross speaking lines they never said or acting in ways that rewrite their legacies.

For families, this can be deeply personal. Zelda Williams, Robin’s daughter, has publicly asked people to stop sending her AI videos of her father, saying it feels like a violation, not a tribute. The technology blurs the line between honoring memory and exploiting likeness. What happens when historical figures are edited into new speeches that reshape what people believe they stood for? Or when a loved one’s likeness becomes content for entertainment?

The ethical challenge is not just about consent but control. Even if OpenAI or other major developers enforce strict rules, open-source systems and smaller startups can replicate the same abilities with fewer guardrails. Once a voice or face can be cloned, it becomes nearly impossible to contain. This creates real risks, from emotional manipulation to political propaganda, in a world where seeing is believing.

Still, the potential for good exists. Families could use AI responsibly to preserve memories, historians could reconstruct lost footage for education, and museums could bring context to history in immersive ways. The tension lies between preservation and performance and between remembering and rewriting.

Lovable Cloud Turns Good Planning Into Great Products

AI development is shifting from complex code to clear thinking. The most powerful skill now may not be writing syntax, but writing structure. That begins with a solid Product Requirements Document (PRD). Tools like Lovable Cloud show what happens when the discipline and specifications of a PRD meet the speed of no-code AI development.

A PRD forces clarity. It defines what a product should do, who it serves, and how success is measured. When paired with Lovable Cloud, those details become cogent instructions for AI coding agents. Within hours, a builder can go from concept to live prototype without writing a single line of code. Databases, APIs, authentication, and vector search are generated automatically, all guided by the structures laid out in the PRD.
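As a concrete, entirely hypothetical illustration, here is the kind of structure worth pinning down in a PRD before handing it to Lovable or any other AI builder. The product, fields, and wording are invented; Lovable itself takes this as prose in a prompt, but the same details need to exist either way.

# Hypothetical PRD captured as structured data, then flattened into a build prompt.
prd = {
    "product": "Client portal for a small design studio",
    "users": ["studio staff", "clients reviewing deliverables"],
    "must_have": [
        "email/password authentication",
        "per-project file gallery",
        "comment threads on each deliverable",
    ],
    "out_of_scope": ["payments", "native mobile app"],
    "success_metric": "a client can review and comment on a deliverable in under two minutes",
}

build_prompt = "Build a web app to this spec:\n" + "\n".join(
    f"- {key}: {value}" for key, value in prd.items()
)
print(build_prompt)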

That combination turns planning into more immediate execution. Instead of jumping into a vague build and correcting along the way, the AI has a map to help save time, reduce credit costs, and minimize frustration. It also teaches better product thinking, since each iteration toward a final PRD spec can be refined directly in conversation with ChatGPT or another frontier model before development and deployment in Lovable.

The result is accessible creation. Anyone with a clear idea and a few well-written pages can now build complex tools: knowledge hubs, lead generators, client portals, or even subscription apps. Lovable Cloud lowers the barrier to building, while the PRD ensures you build something worth finishing.

Just Jokes

The “Real” Thing

Did you know?

The SMART Partnership and EarthRanger, a conservation product built by Ai2, launched the SMART–EarthRanger Conservation Alliance on October 10, 2025: a free, global support system that unites the world’s leading protected-area management tools to help rangers, park managers, and conservation teams protect wildlife and habitats.

The alliance pairs EarthRanger’s real-time data platform, which pulls together ranger patrol reports, camera traps, GPS collars, and remote sensing, with SMART’s field-tested monitoring and training resources. Together the tools let teams spot threats faster, prioritize patrols where they matter most, and share best practices across countries. By giving frontline teams better data and training, the program aims to speed responses to poaching, reduce human-wildlife conflict, and boost the odds that protected areas meet conservation goals.

This Week’s Conundrum
A difficult problem or question that doesn't have a clear or easy solution.

The Mirror World Conundrum

Cities are beginning to run on intelligent digital twins. These are AI systems that could one day absorb traffic data, social media, local news, environmental sensors, even neighborhood chat threads. These twins don’t just count cars or track power grids; they interpret mood, predict unrest, and simulate how communities might react to policy changes. City leaders use them to anticipate problems before they happen: water shortages, transit bottlenecks, or public outrage.

Over time, these systems stop being just tools and start feeling like advisors. They model not just what people do, but what they might feel and believe next. And that’s where trust begins to twist. When an AI predicts that a tax change will trigger protests that never actually occur, was the forecast wrong, or did its quiet influence on media coverage prevent the unrest? The twin becomes part of the city it’s modeling, shaping outcomes while pretending to observe them.

The conundrum
If an AI model of a city grows smart enough to read and guide public sentiment, does trusting its predictions make governance wiser or more fragile? When the system starts influencing the very behavior it’s measuring, how can anyone tell whether it’s protecting the city or quietly rewriting it?

Want to go deeper on this conundrum?
Listen to our AI hosted episode

Did You Miss A Show Last Week?

Catch the full live episodes on YouTube or take us with you in podcast form on Apple Podcasts or Spotify.

News That Caught Our Eye

OpenAI’s Dev Day Dominates AI Headlines
OpenAI’s long-awaited Dev Day featured the debut of Agent Builder, ChatKit, and a new Apps SDK allowing external tools like Canva, Spotify, and Zillow to operate directly inside ChatGPT. Sam Altman described the rollout as “a lot of ships today,” signaling an aggressive expansion of OpenAI’s ecosystem.

Deeper Insight:
This transforms ChatGPT from a chat interface into a full application platform. Developers can now build interactive, AI-native experiences directly within ChatGPT, blurring the line between assistant and operating system.

Abacus AI Pushes for Prosumer Market Share
Abacus AI announced new “Super Agent” features and pricing at just $10 per month, targeting individual and small business users. The move positions Abacus as a budget-friendly option in a market dominated by OpenAI and Anthropic.

Deeper Insight:
By undercutting competitors on price while offering comparable features, Abacus is going after the “prosumer” tier, a strategic play to capture creators and startups priced out of enterprise tools.

SoftBank Acquires ABB’s Robotics Unit for $5.4 Billion
SoftBank expanded its AI robotics portfolio with the purchase of ABB’s industrial robotics division, citing embodied AI as the next computing revolution.

Deeper Insight:
This signals deepening interest in physical artificial intelligence. By investing in both hardware and AI brains for robots, SoftBank is betting that humanoid and task-specific robots will become the next generation of hi-tech productivity tools.

Figure AI Deploys Robots in BMW Factories
Figure AI confirmed its humanoid robots are operating in BMW production lines and unveiled its newest model, Figure 3, featuring improved dexterity and reasoning.

Deeper Insight:
This milestone marks the move from proof-of-concept to real-world industrial deployment. Factory work is likely the first domain where humanoid robots achieve true commercial adoption.

China Restricts AI and Robotics Exports
China’s Ministry of Commerce announced new export controls covering advanced AI models, robotics components, and semiconductor tools, citing national security.

Deeper Insight:
The policy mirrors U.S. export limits and deepens the global AI trade divide. Nations are locking down their compute and hardware stacks, the new currency of geopolitical power.

Anthropic Names Rahul Patil as CTO
Anthropic appointed Rahul Patil, formerly chief technology officer at Stripe, where he led large-scale engineering and infrastructure, as its new Chief Technology Officer to guide engineering, infrastructure, and related technical divisions.

Deeper Insight:
This hire underscores Anthropic’s standing in the top tier of AI companies and is further evidence of the fluid movement of senior technical talent into and among the frontier labs.

Google Unveils Gemini 2.5 Computer Use Model
Google released the Gemini 2.5 Computer Use model, bringing full browser automation. The model can read, click, type, and drag on websites without needing custom APIs, offering precise pixel-level control and parallel task handling.

Deeper Insight:
This puts Google in direct competition with OpenAI’s Agent Mode and Perplexity’s Comet. Its focus on safe automation marks a turning point for enterprise-grade AI agents that can navigate the web environment with human-like facility.

IBM and Anthropic Partner on AI-First IDE
IBM integrated Anthropic’s Claude into its AI-first IDE, bringing Claude’s reasoning abilities to enterprise software development.

Deeper Insight:
This collaboration gives Anthropic access to IBM’s massive enterprise developer base and cements Claude’s reputation as the “safer” enterprise assistant.

DeepMind Launches CodeMender to Fix Security Bugs Automatically
DeepMind introduced CodeMender, a model that finds, tests, and patches vulnerabilities in codebases.

Deeper Insight:
By merging reasoning with automation, DeepMind is positioning AI as an invisible cybersecurity layer that maintains code integrity without human review.

Google Quantum AI Researchers Win Nobel Prize in Physics
Three scientists affiliated with Google Quantum AI received the Nobel Prize for work proving quantum behaviors can be engineered, not just observed. This is foundational to today’s quantum computing research.

Deeper Insight:
This recognition validates Google’s long-term quantum research investment and could boost investor confidence in Google and the next wave of quantum startups.

MrBeast Warns AI Could Disrupt Creators
YouTube’s biggest creator, MrBeast, publicly voiced concern that AI-generated video content could threaten millions of creative jobs. He called the moment “scary times” for independent creators.

Deeper Insight:
When top influencers start sounding alarms, the conversation shifts from tech hype to livelihood risk. This foreshadows a looming cultural collision between AI efficiency and human creativity.

Anthropic Ordered to Pay $1.5 Billion in Copyright Case
A landmark settlement requires Anthropic to pay $1.5 billion over its use of copyrighted books as training data, marking one of the largest AI copyright payouts to date.

Deeper Insight:
The case highlights the growing “ask forgiveness, not permission” culture among AI firms and the rising financial stakes of unlicensed training.

Amazon Launches Quick Suite for Enterprise AI
Amazon unveiled Quick Suite, a comprehensive AI workspace for business users. The suite bundles Amazon Q (its generative AI assistant) with enterprise-ready productivity tools, allowing users to analyze data, generate content, and automate workflows directly inside AWS and Amazon WorkDocs. Quick Suite also introduces “Q Flow,” a feature that connects data across internal systems for instant insights.

Deeper Insight:
Quick Suite is Amazon’s most aggressive move yet into enterprise productivity, going head-to-head with Microsoft Copilot, Google Gemini Enterprise, and OpenAI’s business tools. By unifying AWS analytics, document creation, and task automation under one umbrella, Amazon is positioning itself as the go-to AI provider for companies that already rely on its cloud infrastructure.

Americans Say AI Shouldn’t Replace Certain Jobs
A new Pew Research report found that most Americans draw a clear line on where AI should and shouldn’t take over. While respondents supported AI in fields like customer service, finance, and logistics, strong opposition emerged for AI replacing teachers, nurses, therapists, and childcare workers. Many said these roles depend on empathy, trust, and moral judgment that AI can’t replicate.

Deeper Insight:
The results underscore the emotional boundaries people place around automation. While productivity tools are being embraced, public resistance to AI in care and education shows that human connection remains a defining line.

Air Street Capital Report Sparks Debate Over AI Leadership
Air Street Capital’s annual State of AI Report continued to drive discussion. It showed that 44% of U.S. companies now pay for AI tools, up from just 5% a year ago, and that DeepMind is currently doubling performance per dollar faster than OpenAI. The report also emphasized that open-weight models in China are growing rapidly, now accounting for 40% of fine-tuning activity on Hugging Face.

Deeper Insight:
The report reframes AI competition around efficiency rather than just size. Open-weight collaboration is accelerating innovation, while U.S. labs remain focused on proprietary frontier models, setting up a philosophical and economic divide that will shape the next phase of the AI race.

Google Expands Gemini Enterprise Integrations
Google rolled out expanded integrations for Gemini Enterprise, including tighter links between Gemini and Gmail, Docs, Sheets, and Slides. The company is positioning Gemini as a “workforce multiplier,” capable of reading company data, summarizing meetings, and generating analysis across files and emails.

Deeper Insight:
Gemini’s focus on context-rich enterprise automation shows Google is doubling down on productivity AI. It’s a direct challenge to Microsoft 365 Copilot, but with the advantage of Google’s search and contextual understanding already built in.

OpenAI and DeepMind Comparison Heats Up
Following Air Street’s report, debate grew over which company, OpenAI or DeepMind, is advancing faster. The study found that DeepMind’s performance per dollar doubles every 3.4 months compared to OpenAI’s 5.8 months, implying that DeepMind’s research pace is outpacing OpenAI’s despite less hype.

Deeper Insight:
If the trend holds, DeepMind may quietly take the lead in sustainable AI progress. While OpenAI focuses on scale and ecosystem reach, DeepMind’s disciplined, data-driven R&D could prove the more durable long-term strategy.
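For a rough sense of what those cadences imply, here is our own back-of-the-envelope compounding check, assuming performance per dollar really does double each cycle:

# Implied year-over-year multiplier if performance per dollar doubles every N months.
for lab, months in [("DeepMind", 3.4), ("OpenAI", 5.8)]:
    print(f"{lab}: {2 ** (12 / months):.1f}x per year")
# prints roughly 11.5x for DeepMind and 4.2x for OpenAI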