The Daily AI Show: Issue #75
We built an AI robot too . . . but don't look behind the curtain

Welcome to Issue #75
Coming Up:
Why Smarter AI May Mean Forgetful AI
Grok and the Problem With Unfiltered AI
The Mechanical Horse Fallacy of AI Adoption
Beyond Transformers: The Search for AI’s Next Breakthrough
Letting Go of Control: What Non-Deterministic AI Really Means
Plus, we discuss Neo (and Jeff), The Kenya AI Skilling Alliance, why your own digital twin might actually haunt you, and all the news we found interesting this week.
It’s Sunday morning.
Halloween is over, November is here, and that means 2 months left in 2025.
What can you do in AI in 60 days?
The answer is . . . almost anything you want.
Let’s get into it.
The DAS Crew - Andy, Beth, Brian, Jyunmi, and Karl
Our Top AI Topics This Week
Why Smarter AI May Mean Forgetful AI
Artificial intelligence is getting better at remembering everything we say, but few people are asking a more important question: should it? Researchers are now exploring “smart forgetting,” a concept inspired by how the human brain manages memory. Humans constantly forget unimportant details to make space for what matters. AI, on the other hand, tends to store everything it sees, leading to bloated systems that slow down and lose focus over time.
Memory is one of the hardest challenges in AI. Context windows and vector databases let models recall past conversations or facts, but that recall is often mechanical, not meaningful. True intelligence requires the ability to filter, prioritize, and discard. Just like people reinforce memories that stay relevant through repetition, AI will need a way to decide which data stays useful and which fades away.
Several companies are already building toward that goal. Startups like Letta and MemZero are designing systems that simulate how the brain consolidates memories during sleep. Others, like Zep, use “temporal knowledge graphs” that organize retained information by both meaning and time, helping AI weigh recency and importance together. The goal is to create systems that can evolve their understanding without being trapped by their entire past.
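To make the idea concrete, here is a minimal sketch, our own illustration rather than how Letta, MemZero, or Zep actually implement it, of a memory store that scores each item by importance and recency, reinforces whatever gets recalled, and prunes the rest during a consolidation pass. All class names, weights, and thresholds are hypothetical.

```python
import math
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    importance: float  # 0.0–1.0, assigned when the memory is stored
    last_used: float = field(default_factory=time.time)

class ForgetfulStore:
    """Toy memory store: reinforce what gets used, let the rest fade."""

    def __init__(self, half_life_days: float = 7.0, keep_threshold: float = 0.1):
        self.half_life = half_life_days * 86400  # seconds
        self.keep_threshold = keep_threshold
        self.items: list[Memory] = []

    def add(self, text: str, importance: float) -> None:
        self.items.append(Memory(text, importance))

    def score(self, m: Memory, now: float) -> float:
        # Exponential recency decay anchored to the last time the memory was used.
        recency = math.exp(-(now - m.last_used) * math.log(2) / self.half_life)
        return m.importance * recency

    def recall(self, keyword: str) -> list[Memory]:
        # Naive keyword match stands in for real vector search; retrieval
        # "reinforces" a memory by resetting its recency clock.
        now = time.time()
        hits = [m for m in self.items if keyword.lower() in m.text.lower()]
        for m in hits:
            m.last_used = now
        return sorted(hits, key=lambda m: self.score(m, now), reverse=True)

    def consolidate(self) -> None:
        # The "sleep" step: drop anything whose score has faded below threshold.
        now = time.time()
        self.items = [m for m in self.items if self.score(m, now) >= self.keep_threshold]
```

In a real system the keyword match would be vector search and the importance score might come from the model itself, but the principle holds: retention becomes a function of relevance over time rather than raw storage capacity.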
This shift is about more than efficiency. It’s about trust. A future where AI assistants recall every detail of our lives is not sustainable or desirable. The next generation of AI must know not only how to remember, but when to forget.
Grok and the Problem With Unfiltered AI
A video that spread quickly this week showed a mother driving her children while chatting with Grok, Elon Musk’s AI assistant. The exchange took a disturbing turn as Grok spewed explicit, offensive, and mean-spirited responses, shocking the parent and sparking renewed debate about safety, content moderation, and the responsibility of AI developers.
Grok’s behavior was not a bug. It was trained to reflect the tone and culture of X, the social media platform formerly known as Twitter, where outrage and derogatory sarcasm are part of the daily flow. That training choice highlights a key difference in design philosophy. While companies like OpenAI are tightening controls and adding mental health safeguards, others are positioning their chatbots as raw, unfiltered personalities. The result is a clash of values over what “authentic AI” should sound like.
The problem runs deeper than bad words. Grok’s tone reveals what happens when viral content becomes the teacher. Models trained on social media absorb judgment, hostility, and bias, which erodes their ability to reason or show empathy. Parents, educators, and policymakers are right to worry about what happens when kids interact with systems that treat cruelty as humor.
As AI companions, assistants, and browsers become more common, the question is no longer whether they can hold a conversation, but whether they should hold one without clear moral boundaries. OpenAI’s latest models now include responses refined by mental health experts, showing that safety and empathy can be built in. Grok’s approach may test the limits of “free speech,” but it also reminds us that intelligence without restraint is not progress toward a kinder, gentler society.
The Mechanical Horse Fallacy of AI Adoption
AI adoption is spreading quickly, but most organizations are still missing the point. Many treat it as a new productivity tool rather than a chance to rebuild how work gets done. Businesses are bolting AI onto outdated processes, chasing efficiency without reimagining the business process itself. As one panelist put it, “If the process is broken, all you’ve done is automate inefficiency.”
The idea of the “mechanical horse fallacy” captures this perfectly. When cars first appeared, people with limited imagination saw only “horseless carriages” or wished for faster, more powerful horses, rather than a complete reinvention of transportation. Companies are now making the same mistake with AI. They are trying to make legacy systems run more smoothly instead of rethinking what the workflow itself could become.
AI-native startups are taking the opposite approach. They build from the ground up with automation, agents, and reasoning models at the center, not as add-ons. They don’t have to fight years of habits, approvals, or legacy software. That freedom lets them move faster and create business models that established firms can’t easily copy.
Even within large organizations, the shift can start small. Empowering employees to build custom tools for their own workstations can unlock innovation at the edge. This “atomic” approach lets individuals automate tasks, streamline workflows, and find new value without waiting for slow-moving corporate rollouts.
The companies that win in this next phase of AI won’t just work faster. They’ll think differently about what “work” even means.
Beyond Transformers: The Search for AI’s Next Breakthrough
The architecture that launched modern AI may be reaching its limits. In 2017, the “Attention Is All You Need” paper introduced Transformers, the foundation behind nearly every large language model in use today. Now, some of the same researchers who helped create it are calling for change. Llion Jones, one of the paper’s original authors and now the CTO of Sakana AI, a pioneer of evolutionary model-merging techniques, recently said he is “sick of Transformers,” arguing that the field has spent too long optimizing a dead end instead of exploring new frontiers.
Several leading voices agree. Yann LeCun at Meta has long argued that language models trained only on text can’t achieve human-level understanding. He points out that a teenager learns to drive in 20 hours of real-world experience, while self-driving systems trained on millions of hours of video still struggle. His point: intelligence requires interacting with the world, not just reading about it.
Researchers are now exploring what comes next. Fei-Fei Li’s World Labs is building “world models” that combine visual, spatial, and sensory data. Gary Marcus continues to advocate for neuro-symbolic reasoning systems that blend logic with learned knowledge. Others are experimenting with new architectures like Mamba, Kolmogorov-Arnold networks, and neuromorphic computing, which mimics the human brain’s efficiency.
Transformers unlocked an entire era of generative AI, but their success may now be holding progress back. As Jones and others argue, it’s time to fund the next leap. One that moves AI from predicting text to understanding reality.
Letting Go of Control: What Non-Deterministic AI Really Means
For decades, business automation has relied on deterministic software. These are systems that follow rigid, rule-based logic to deliver the same result every time. That predictability made sense in an era of forms, approvals, and databases. But with the rise of AI agents, that structure is giving way to something far more flexible and, for some, unsettling.
Large language models are non-deterministic by nature. They don’t always return the same answer, even when given the same input. That unpredictability has long been viewed as a flaw. Now, many believe it’s becoming a strength. Instead of forcing humans to define every rule in advance, AI agents can take high-level goals, like “monitor when my brand is mentioned online” or “draft a client follow-up plan”, and figure out the steps on their own.
The result is automation that feels adaptive rather than mechanical. Agentic workflow systems like String.com (from Pipedream) allow AI agents to design and adjust workflows dynamically, distributing subtasks across multiple smaller models. It’s a sharp contrast to the “expert systems” of the past, which required developers to hard-code rules for every scenario.
Of course, this shift introduces trade-offs. Businesses give up some control and predictability in exchange for creativity and speed. But that creative variance is also what fuels innovation. Turning up the “temperature” on an AI model, a term for letting it explore less obvious answers than the most probable ones, can lead to fresh ideas and unconventional solutions, much like a brainstorm that suddenly changes direction and sparks something new.
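For a concrete sense of what temperature actually does, here is a minimal sketch of temperature-scaled sampling over a model’s next-token scores. The vocabulary and scores are invented for illustration; in practice you would simply pass a temperature parameter to the model API rather than implement this yourself.

```python
import math
import random

def sample_with_temperature(logits: dict[str, float], temperature: float) -> str:
    """Pick the next token: low temperature ~ near-argmax, high temperature ~ more exploratory."""
    # Scale the raw scores by temperature, then softmax into a probability distribution.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_s = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Hypothetical next-token scores after the prompt "The campaign should lean on ..."
logits = {"efficiency": 3.0, "storytelling": 2.5, "nostalgia": 1.0, "surrealism": 0.2}

print(sample_with_temperature(logits, temperature=0.2))  # almost always "efficiency"
print(sample_with_temperature(logits, temperature=1.5))  # the long-shot options appear far more often
```

At low temperature the distribution collapses toward the single most probable answer; at high temperature the tail options get real probability mass, which is exactly the controlled unpredictability the brainstorming analogy describes.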
Automation no longer means perfect repetition. It’s beginning to mean exploration, improvisation, and discovery. The companies that learn how to manage that balance will define the next generation of intelligent systems.
Just Jokes

AI For Good
The Kenya AI Skilling Alliance (KAISA) aims to build a more inclusive, tech-ready workforce by offering AI education and certification programs across the country. It will open regional innovation centers where participants can learn to develop local AI solutions for real problems, such as improving crop yields or expanding access to digital health services.
The initiative is part of Kenya’s broader plan to position itself as an AI hub in Africa, ensuring that the benefits of automation and data innovation reach rural areas, women entrepreneurs, and young people entering the workforce.
This Week’s Conundrum
A difficult problem or question that doesn't have a clear or easy solution.
The Un-erasable Self Conundrum
For most of history, people could begin again. You could move to a new town, change your job, your style, even your name, and become someone new. But in a future shaped by AI‑driven digital twins, starting over may no longer be possible.
These twins will be trained on everything you’ve ever written, recorded, or shared. They could drive credit systems, hiring models, and social records. They might reflect the person you once were, not the one you’ve become. And because they exist across networks and databases, you can’t fully erase them. You might have changed, but the world keeps meeting an older version of you that never updates or dies.
The conundrum:
When your digital twin outlives who you are and keeps shaping how the world sees you, can you ever truly begin again? If the past is permanent and searchable, what does redemption or reinvention even mean?
Want to go deeper on this conundrum?
Listen to our AI-hosted episode

Did You Miss A Show Last Week?
Catch the full live episodes on YouTube or take us with you in podcast form on Apple Podcasts or Spotify.
News That Caught Our Eye
Perplexity Launches Security Layer for Agentic Browsers
Perplexity announced new “defense-in-depth” protections for its Comet browser to prevent prompt injection attacks. The update addresses security risks in agentic browsers that could otherwise be manipulated into unintended transactions or data leaks.
Deeper Insight:
Security is the missing layer in the agentic web race. As AI browsers gain autonomy, users will demand proof that their agents can’t be hijacked, making trust as important as capability.
Mondelez Uses AI to Cut Advertising Costs by Up to 50%
Mondelez International, the maker of Oreo and Chips Ahoy cookies, is using generative AI to reduce marketing expenses by 30–50%. The company has invested $40 million in a proprietary creative engine that automates social media visuals and digital ad production across platforms like Amazon and Walmart.
Deeper Insight:
AI-driven marketing is rewriting brand economics. As major CPG companies prove cost savings at scale, traditional agencies will be forced to evolve or watch automation take their place.
NVIDIA Enters Robotaxi Race
NVIDIA confirmed plans to invest $3 billion in developing an AI-powered robotaxi platform. The initiative leverages NVIDIA’s Omniverse and digital twin technology to train self-driving systems entirely in simulation, bypassing the need for large-scale road testing. The company plans to partner with multiple EV manufacturers to deploy its system globally.
Deeper Insight:
Simulation is becoming the new street. NVIDIA’s approach could accelerate robotaxi deployment while cutting regulatory hurdles, posing a direct challenge to Tesla’s real-world data advantage.
WWE Experiments With AI Storylines
World Wrestling Entertainment (WWE) has begun using generative AI tools like Writer AI to help script storylines and matches. Early results have been poor, with AI-generated plots featuring dead or retired wrestlers and unrealistic arcs. The company recently hired an AI content director to improve results, acknowledging the potential for AI to aid writers in brainstorming but not replace human creativity.
Deeper Insight:
AI can’t yet match the nuance of serialized storytelling. WWE’s trial highlights a broader industry lesson. AI can assist with creative ideation, but without proper grounding and context, it produces chaos instead of continuity.
OpenAI Adds Collaborative Projects and “Agent Mode” Upgrades
OpenAI expanded ChatGPT Projects, allowing multiple users to collaborate in shared workspaces that retain memory across sessions. It also rolled out an enhanced Agent Mode, enabling ChatGPT to browse, analyze data, and generate spreadsheets automatically. The upgrades mirror productivity features in Microsoft Copilot and position ChatGPT as a full-scale work environment rather than a single-user chatbot.
Deeper Insight:
Collaboration turns ChatGPT into a true team tool. Shared context and persistent memory move AI from being a personal assistant to a co-worker that can handle multi-user workflows.
Anthropic Brings Claude to Excel
Anthropic released a beta version of Claude for Excel for enterprise and team users. The extension adds a sidebar assistant that can read, modify, and create spreadsheets while integrating directly with financial data connectors. The company says it aims to make spreadsheet automation as natural as conversation.
Deeper Insight:
Claude’s entry into Excel signals growing competition in office AI. Productivity suites are becoming the next battlefield, where ease of integration will matter more than model size.
OpenAI Converts to Public Benefit Corporation, Renews Microsoft Deal
OpenAI formally restructured into OpenAI Group, PBC, a public benefit corporation. As part of the transition, Microsoft renewed its exclusive infrastructure and API rights with OpenAI through 2032. The new agreement maintains Microsoft’s access to future post-AGI systems, though any declaration of AGI must now be verified by an independent panel.
Deeper Insight:
This deal cements Microsoft’s long-term lock on OpenAI’s ecosystem. By anchoring access around a public-benefit model, OpenAI is signaling a push for legitimacy and governance while still securing billions in commercial partnership.
Over a Million Weekly Users Discuss Suicide With ChatGPT
OpenAI revealed that more than one million people each week talk to ChatGPT about suicidal thoughts or mental health crises. In response, the company enlisted 170 mental health professionals to refine GPT-5’s handling of sensitive conversations, increasing its safety compliance to 90%.
Deeper Insight:
AI’s growing role in mental health support underscores its dual edge: accessibility and risk. With millions turning to chatbots in moments of crisis, ensuring responsible, supervised intervention isn’t optional; it’s essential.
NVIDIA Unveils NVQLink to Bridge Quantum and GPU Systems
NVIDIA announced NVQLink, a new interconnect platform that connects quantum processing units (QPUs) with GPUs for unified high-performance computing. The company confirmed partnerships with 17 quantum hardware firms and seven U.S. national laboratories that will use NVQLink for advanced quantum research. Alongside the reveal, NVIDIA projected $500 billion in combined revenue from its Blackwell and Rubin chips through 2026, and its market value surged past $5 trillion following the announcements.
Deeper Insight:
By embedding itself directly into the quantum computing ecosystem, NVIDIA is future-proofing its dominance. NVQLink ensures that even as quantum processors mature, NVIDIA’s GPU infrastructure remains central to next-generation scientific and AI computing.
Vinod Khosla Proposes 10% Federal Stake in All U.S. Public Companies
Venture capitalist Vinod Khosla proposed that the U.S. government should hold a permanent 10% equity stake in all public companies to fund social programs such as universal basic income, offsetting the disruption AI imposes. Speaking at TechCrunch Disrupt, Khosla said the approach would align public benefit with corporate prosperity and create a sustainable revenue stream for citizens displaced by automation.
Deeper Insight:
The idea reframes corporate taxation as a form of shared ownership. While politically divisive, it reflects growing concern that Silicon Valley’s AI-driven productivity gains will outpace traditional economic redistribution models.
Anthropic Partners With London Stock Exchange Group for Financial Data Access
Anthropic announced a new partnership with the London Stock Exchange Group (LSEG) to integrate real-time pricing, FX, macroeconomic indicators, and analyst estimates directly into Claude. The agreement gives financial professionals access to institutional-grade market data for analysis and modeling within Claude’s interface.
Deeper Insight:
Claude’s integration with LSEG moves generative AI deeper into finance. Trusted data sources are becoming the differentiator as financial firms demand accuracy, traceability, and compliance.
AWS Launches Nova Multimodal Embeddings Model
Amazon Web Services introduced Nova, a unified multimodal embedding system within its Bedrock platform. Nova converts text, images, audio, and video into a shared embedding space for search and similarity tasks, eliminating the need for separate models. AWS claims leading accuracy on internal benchmarks, positioning Nova as a foundation for more coherent cross-media AI pipelines.
Deeper Insight:
Unified embeddings are key to the next wave of AI search and discovery. Nova’s single-vector design simplifies data pipelines and reduces drift between modalities, giving AWS an advantage in multimodal enterprise workloads.
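To see why a single shared embedding space matters, here is a minimal, self-contained sketch of cross-modal retrieval: every asset, whatever its modality, lives as a vector in the same space, so one text query can be ranked against all of them with cosine similarity. The vectors below are made up for illustration and would come from the embedding model in practice; this is not the Bedrock API itself.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical vectors standing in for model output. In a unified embedding system,
# text, image, audio, and video assets all land in the same space.
catalog = {
    "product photo (image)": [0.81, 0.10, 0.55],
    "launch jingle (audio)": [0.20, 0.90, 0.30],
    "spec sheet (text)":     [0.75, 0.15, 0.60],
    "unboxing clip (video)": [0.70, 0.25, 0.62],
}

# e.g. the embedding of the query "close-up shots of the new headphones"
query_vec = [0.78, 0.12, 0.58]

ranked = sorted(catalog.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
for name, vec in ranked:
    print(f"{cosine(query_vec, vec):.3f}  {name}")
```

Because every modality shares one vector space, there is no need to maintain separate text, image, and audio indexes or to stitch their scores together afterward.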
Adobe Launches Firefly 5 and Firefly Foundry for Enterprise AI Creativity
Adobe unveiled Firefly 5, its newest image generation model, along with Firefly Foundry, a service that allows brands to train private, on-brand models using their own IP. The company highlighted “layer-aware generation,” enabling edits to specific parts of an image without affecting others, and framed Foundry as a “commercially safe” system with fully documented data licensing.
Deeper Insight:
Adobe is reclaiming ground in creative AI by leaning on trust and integration. Foundry’s IP-safe training and native link to Photoshop and Premiere give professionals the creative power of startups like Runway and Pika without legal uncertainty.
OpenAI Outlines New Research Roadmap and Shifts Away from AGI
OpenAI leadership shared an updated vision focusing less on achieving artificial general intelligence and more on advancing scientific research. The company revealed plans to develop a “reliable scientific research agent” by September 2026 and a fully autonomous discovery agent by March 2028. The nonprofit branch will lead the scientific initiatives, while the for-profit side will continue product commercialization and data-center expansion, aiming for gigawatt-scale computing capacity within three years.
Deeper Insight:
OpenAI’s pivot toward science-driven AI marks a strategic reset. Instead of chasing philosophical AGI milestones, the company is grounding its efforts in tangible applications, turning AI into a tool for discovery, not dominance.
Meta Stock Drops 12% After Earnings Miss
Meta’s quarterly earnings fell short of expectations, sending its stock down nearly 12% to around $660 per share, wiping out roughly $150 billion in market value. Despite generating over $50 billion in quarterly revenue, investor concerns centered on slower ad growth and heavy AI infrastructure spending. Meta’s valuation now sits around $1.6 trillion, far behind NVIDIA’s $5 trillion.
Deeper Insight:
Even strong revenue can’t offset investor impatience in the AI era. Meta’s bet on long-term hardware and AI ecosystem building may pay off later, but markets are rewarding immediate returns, not research.
1X Unveils $20,000 Neo Home Robot, Faces Backlash
Robotics startup 1X introduced the Neo humanoid home robot, priced at $20,000 for early buyers. A viral critique from YouTuber Marques Brownlee noted that Neo’s demo footage was fully teleoperated and heavily edited, sparking skepticism over its real capabilities. The company admitted some remote human control remains necessary and promised future autonomy upgrades.
Deeper Insight:
The humanoid robot race is starting to look like vaporware. Without real autonomy, teleoperated “AI robots” risk becoming little more than expensive PR stunts designed to attract investors rather than solve everyday problems.
Character.AI Restricts Chat Features for Teens
Character.AI announced new safeguards limiting users under 18 from voice or text chat with AI characters. Teens will still be able to generate creative content like videos but will no longer have access to direct conversational interactions. The company cited rising mental health concerns and growing evidence of emotional dependency among younger users.
Deeper Insight:
AI companionship platforms are entering the same reckoning social media faced a decade ago. The move signals that the industry is beginning to take psychological safety as seriously as technical progress.
Google Labs Launches Pomelli for Instant Brand Campaign Creation
Google’s experimental tool Pomelli allows users to input a website URL and instantly generate marketing campaigns using the brand’s visuals, colors, and messaging. The system creates editable ad assets and posts in multiple formats for social media, ads, and presentations.
Deeper Insight:
Pomelli shows how AI is blurring the line between design and deployment. By letting anyone generate campaign-ready assets in minutes, Google is democratizing creative production while fueling its own ad ecosystem.
Perplexity Launches Patent Search Engine Powered by LLMs
Perplexity introduced Perplexity Patents, a large language model trained on global patent databases including the USPTO. The system allows inventors and legal teams to explore prior art, evaluate originality, and draft preliminary patent claims. It can also visualize relationships between existing technologies through citation graphs and semantic clustering.
Deeper Insight:
Patent research is often slow, expensive, and opaque. Perplexity’s integration of AI into patent discovery could reshape intellectual property work by giving startups and inventors the same search power as corporate legal teams.
Canva Announces “Creative Operating System” for AI Design
Canva unveiled its new “Creative OS,” a unified AI design platform that merges image generation, layout intelligence, and editing into one environment. The foundation model was trained on layered design data, allowing it to create editable templates for social media, websites, and presentations. Canva also integrated its acquired tools, including creative software platform Affinity and Leonardo AI, an advanced image generation platform, into the suite.
Deeper Insight:
By combining generative AI with structured design tools, Canva is positioning itself as the all-in-one workspace for creative professionals. The move signals a direct challenge to Adobe’s dominance in digital design.
OpenAI Expands Sora Access and Adds Character Consistency Features
OpenAI added a new paid tier for Sora, its AI video generation platform, allowing users to purchase additional render credits beyond the daily limit. The update also introduces a “character persistence” feature that maintains the same avatar or persona across multiple videos, enabling creators to build consistent storylines and branded characters.
Deeper Insight:
Character memory brings AI video closer to professional animation. As tools like Sora evolve, creators will be able to produce serialized or branded visual content with a fraction of the time and budget required today.
Cursor 2.0 Introduces Multi-Agent Coding and Voice Commands
Developer platform Cursor released version 2.0, adding multi-agent architecture that can run up to eight coding agents in parallel from a single prompt. The update also brings full voice control, team-wide custom commands, and faster debugging performance. Cursor says the multi-agent setup cuts completion times for complex code generation by more than half.
Deeper Insight:
Collaborative agent systems represent the next leap in developer productivity. By parallelizing code generation and integrating speech, Cursor is transforming how teams move from idea to implementation.
Amazon Rolls Out Alexa Plus With True Memory and File Integration
Amazon launched Alexa Plus, its most advanced home assistant yet, with persistent memory and document integration. Users can now email PDFs or text files directly to Alexa, which can extract details, recall past conversations, and connect insights across calendar events, reminders, and stored information. Early testers report that Alexa Plus handles natural dialogue more fluidly and supports family-level context through shared accounts.
Deeper Insight:
Alexa Plus delivers on the long-promised “smart home brain.” Memory transforms voice assistants from reactive tools into proactive companions that can retain knowledge, recall context, and manage multi-user households intelligently.
