The Daily AI Show: Issue #35
Anguilla is making money from what?

Welcome to #35.
In this issue:
The Hottest AI Skill in 2025? It’s Not Prompting.
Shadow AI Is Already in Your Business: The Question Is, Now What?
What DeepSeek’s Success Says About the Future of AI
Would You Trust an AI Version of Yourself?
Plus, we discuss Sam’s finger-pointing at DeepSeek, Brian’s AI assistant clone, Karl setting Operator loose on Copilot tasks, Eran’s own surprised reaction to an AI tool, the reason Anguilla is loving AI more than most, AI’s role after death, and all the news we found interesting this week.
It’s Sunday morning.
February is here, and AI is politely ignoring the fact that your ‘New Year, New Me’ plan is already on pause.
It’s OK!
We are here to fill you up with all the AI good vibes.
The DAS Crew - Andy, Beth, Brian, Eran, Jyunmi, and Karl
Why It Matters
Our Deeper Look Into This Week’s Topics
The Hottest AI Skill in 2025? It’s Not Prompting
AI adoption is accelerating, but the most valuable skill isn’t learning how to code or writing the perfect prompt; it’s understanding how to effectively integrate AI into real-world work. The ability to manage AI tools, structure workflows, and ensure outputs are accurate and useful is what separates those who thrive in this AI-driven economy from those who struggle to keep up.
The conversation around AI skills has moved beyond just prompt engineering. Professionals who can identify the right AI tools, optimize their usage, and seamlessly blend AI with human expertise are becoming indispensable. Whether in sales, marketing, research, or operations, those who understand how to direct AI toward meaningful outcomes will have a massive advantage.
WHY IT MATTERS
AI Is Not a Replacement, It’s an Accelerator: Knowing how to combine AI with human expertise is the key to increasing efficiency without sacrificing quality.
Tool Overload Creates Opportunity: Businesses are flooded with AI tools but lack clear strategies for implementation. Those who can cut through the noise and apply AI effectively will be highly valued.
Beyond Prompting, Process Matters More: Knowing how to frame problems, structure workflows, and oversee AI-generated results is more important than simply writing good prompts.
Communication and Critical Thinking Set You Apart: AI can produce content, but professionals who can interpret, refine, and apply AI-generated insights will stay ahead.
Adaptability is the Real Superpower: AI tools change constantly. The ability to learn, experiment, and adjust is what keeps professionals competitive in this evolving landscape.
Shadow AI Is Already in Your Business: The Question Is, Now What?
Employees are using AI tools to increase efficiency, automate tasks, and optimize workflows, often without their employers knowing. Shadow AI refers to the unapproved, unmonitored use of AI in the workplace, a trend that is growing as individuals adopt AI to manage workloads and streamline processes. While this can boost productivity, it also presents risks for companies that fail to track and manage these tools effectively.
Shadow AI isn’t inherently good or bad.
Its impact depends on whether businesses take a proactive or reactive stance. Companies that embrace AI literacy and transparent policies can harness these innovations while ensuring compliance and security. Those that ignore it risk losing not just an employee when someone leaves, but entire workflows, automation systems, and institutional knowledge that have been quietly built in the background.
WHY IT MATTERS
Companies May Be Losing More Than Employees: When employees leave, they may take custom AI workflows with them, leading to unexpected productivity losses.
AI Literacy Gaps at the Top: Many executives are unaware of how employees are using AI, leading to missed opportunities and poor AI governance.
Security and Compliance Risks: Unapproved AI tools could introduce data privacy concerns or security vulnerabilities, creating risks companies don’t even realize exist.
A Culture Shift Is Needed: Instead of banning AI, businesses should focus on transparency, structured implementation, and clear guidelines to encourage responsible usage.
Workplace AI Adoption Is Inevitable: Employees will continue to bring in AI tools, whether sanctioned or not. The best strategy is to engage, educate, and integrate AI properly rather than fight against its adoption.
What DeepSeek’s Success Says About the Future of AI
DeepSeek’s rapid ascent to the number one spot in the App Store in just three days has sparked conversations about AI adoption, open-source competition, and the broader impact on the AI landscape. With over 2.6 million downloads in those first days, DeepSeek’s success raises critical questions.
Was this purely a curiosity-driven phenomenon, or does it signal a shift in user behavior away from dominant platforms like ChatGPT?
Beyond its rapid adoption, DeepSeek represents a significant milestone for China’s AI ambitions. The model has reached near-parity with top Western AI systems at a fraction of the compute cost, showcasing major advancements in efficiency. However, questions remain about whether DeepSeek’s momentum will lead to sustained user adoption or if it’s simply the latest novelty in AI’s fast-moving landscape.
WHY IT MATTERS
Challenging the AI Status Quo: DeepSeek’s success highlights a growing appetite for alternatives to dominant Western AI platforms, potentially reshaping the competitive landscape.
Reasoning Models Enter the Mainstream: Users unfamiliar with deep reasoning models are now engaging with them for the first time, creating demand for AI that "thinks" rather than just responds.
China’s AI Efficiency Gains: DeepSeek achieved near-parity with top Western AI models while operating on lower-cost infrastructure, suggesting a shift in global AI competitiveness.
Data Privacy and Security Concerns: With rising concerns over data governance, Western users may hesitate to fully embrace AI systems developed outside established regulatory frameworks.
Implications for Open-Source AI: DeepSeek’s release could fuel innovation in open-source AI, forcing competitors like OpenAI and Anthropic to accelerate their own model advancements.
Would You Trust an AI Version of Yourself?
AI voice cloning technology is more accessible than ever, allowing anyone to create digital versions of their own voice for assistants, content creation, or business automation. Tools like ElevenLabs, Resemble AI, and OpenAI’s voice models make it easy to generate realistic voice clones, but this raises ethical and security concerns. The ability to create near-perfect voice replicas has implications for fraud, personal identity protection, and even workplace automation.
While AI voice assistants can increase efficiency, reduce manual workloads, and improve accessibility, they also introduce risks. Companies are exploring voice cloning for training, customer service, and personal branding, but without clear safeguards, cloned voices could be misused.
The conversation is shifting from “Can we do it?” to “Should we do it?” and “How do we control it responsibly?”
WHY IT MATTERS
Voice Authentication is No Longer Secure: Scammers can now replicate voices convincingly enough to bypass traditional voice verification systems, making security measures like safe words or multi-factor authentication critical.
Ownership and Licensing Questions: When a company clones an employee’s voice for training materials or marketing, it raises legal and ethical concerns about who controls that digital likeness.
AI Assistants Are Becoming More Personalized: Voice cloning can make AI assistants sound like real people, increasing engagement but also blurring the line between human and machine interactions.
Industry Use Cases Are Expanding: Businesses are exploring AI voices for automated customer service, sales calls, and content creation, reducing costs but raising questions about transparency and consent.
Trust and Public Perception: If AI-generated voices become indistinguishable from real ones, society may need new digital watermarking or verification systems to maintain trust in audio communications.
Just Jokes

OpenAI Warns DeepSeek ‘Distilled’ Its AI Models
Did you know?
The Caribbean island of Anguilla is experiencing a significant economic boost from the surge in demand for .ai domain names, driven by the artificial intelligence boom. Control of the .ai country-code domain was allocated to Anguilla in the 1990s.
Since ChatGPT's debut around two years ago, the interest in AI-related domain names has increased substantially. This rise in demand has led to a fourfold increase in Anguilla's earnings from web domain registration fees, reaching $32 million last year, which now constitutes about 20% of its government revenue.
The government collects fees both for renewing and registering new .ai domains. Anguilla has recently contracted Identity Digital, a U.S.-based company, to manage the growing domain registrations, enhancing revenue potential and domain security.
The island’s Premier, Ellis Webster, notes that while this income supports key government projects and infrastructure, Anguilla cannot rely solely on it, since industry trends may shift.

HEARD AROUND THE SLACK COOLER
What We Are Chatting About This Week Outside the Live Show
Brian Cloned Himself
On Friday’s show, Brian demoed an AI assistant you could talk to, Advanced Voice Mode-style, about past show topics.
But the process actually started earlier in the week.
The first hurdle was getting an ElevenLabs premier voice clone, which required 3 hours of training audio and another 6 hours to process. From there, he had to wait another 48 hours for ElevenLabs to approve his voice for public use.
Meanwhile, he built an AI assistant in Synthflow, which was a fairly straightforward process but ended on a sour note when he couldn’t get his ElevenLabs voice to sync the way it was supposed to.
The final piece was deploying the iframe widget on the DAS website, which worked, but not while streaming live for the show.
Overall, Brian said he wouldn’t use Synthflow again, but the experiment was a success because it showed what is possible... for good and for bad.
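For the curious, here is roughly what driving a cloned voice programmatically looks like, as a minimal sketch against ElevenLabs’ documented text-to-speech REST endpoint. The API key, voice ID, and model choice below are placeholders, not Brian’s actual setup.

```python
import requests

# Placeholders: your ElevenLabs API key and the voice ID of an
# approved voice clone (both found in the ElevenLabs dashboard).
API_KEY = "YOUR_ELEVENLABS_API_KEY"
VOICE_ID = "YOUR_CLONED_VOICE_ID"

url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"
payload = {
    "text": "Welcome back to The Daily AI Show!",
    # Assumption: any current ElevenLabs TTS model ID works here.
    "model_id": "eleven_multilingual_v2",
    "voice_settings": {"stability": 0.5, "similarity_boost": 0.75},
}
headers = {"xi-api-key": API_KEY, "Content-Type": "application/json"}

response = requests.post(url, json=payload, headers=headers)
response.raise_for_status()

# The endpoint returns raw MP3 audio bytes.
with open("cloned_voice.mp3", "wb") as f:
    f.write(response.content)
```

A platform like Synthflow essentially layers call handling and conversation logic on top of a voice generated this way, which is where Brian’s syncing trouble came in.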
Eran Was Pleasantly Surprised by Office 365 Copilot
Eran mentioned how easy it was to create a slide deck in PowerPoint. He said it took 60 seconds when it would normally have taken him more than an hour.
He ended by saying he felt it did 95% of the work and allowed him to focus more on the last 5% to take it from good to great.
Karl Used Operator to Run Copilot Commands
In an attempt to see if he could “break” Operator, Karl created several tasks in Office. In his words: “OMG, it was pretty good. Not only can it get the task done, but it can use Microsoft CoPilot 365 quite well. A personal AI Agent that can operate in-platform agents!! CRAZY!”
This Week’s Conundrum
A difficult problem or question that doesn't have a clear or easy solution.
The AI Afterlife Conundrum:
AI is making it possible for people to create lifelike digital avatars of themselves, trained on their speech, memories, and personality to continue interacting with loved ones after they pass away. These AI clones could provide comfort, preserve wisdom, and keep family connections alive across generations. However, as the world changes, these avatars will face questions and challenges their human counterparts never experienced. Over time, they may evolve beyond who the person truly was, raising deep ethical and emotional questions about identity, agency, and control.
The conundrum: If AI avatars allow loved ones to "live on" after death, who decides how they evolve in a future they never existed in? Should the deceased have absolute control over their AI replica’s behavior, locking it in time forever? Or should the living be able to update and shape these digital ghosts, even if it means rewriting a person’s legacy into something they never were?
News That Caught Our Eye
OpenAI Releases o3-mini and o3-mini-high, Raising the Bar for Compact AI Models
OpenAI has officially launched o3-mini, a new lightweight model designed to balance efficiency and performance, alongside o3-mini-high, which runs the same model at a higher reasoning-effort setting. o3-mini is optimized for cost-effective general use, while o3-mini-high offers enhanced reasoning and contextual understanding, making the pair a strong competitor in the compact AI model space.
Deeper Insight:
This release signals OpenAI’s shift toward making powerful AI more accessible at lower costs. By offering a compact model with competitive capabilities, OpenAI is positioning itself against emerging challengers like DeepSeek and Mistral. The o3-mini-high option, in particular, suggests OpenAI is refining model differentiation, ensuring that users who need enhanced reasoning don’t have to upgrade to full-scale GPT-4-class models. If performance benchmarks hold, this could also accelerate AI adoption in enterprise environments where cost and efficiency are top priorities.
NVIDIA’s Stock Plunge and Recovery
NVIDIA’s stock took a hit amid concerns over DeepSeek’s disruptive potential and ongoing U.S.-China trade tensions. However, the stock has started to rebound, reflecting confidence in NVIDIA’s continued dominance in AI chips despite emerging competition. It closed Friday at $120.07, down from its all-time high of $149 on January 6. Keep in mind that one year ago the stock was at $61.53.
Deeper Insight:
This event underscores the volatility of the AI market and how geopolitical shifts directly impact major tech companies. The AI hardware race isn’t just about innovation, it’s about supply chains, government policies, and international competition. NVIDIA’s ability to recover signals investor faith in its long-term positioning, but competitors like AMD and domestic Chinese chipmakers could challenge its stronghold.
OpenAI Launches ChatGPT Gov to Strengthen Public Sector Presence
OpenAI announced ChatGPT Gov, a version tailored for U.S. government agencies. The model emphasizes security and compliance, with usage already reported in military research labs and state governments.
Deeper Insight:
This move is a strategic counter to concerns about AI’s role in national security. By aligning itself with the U.S. government, OpenAI not only reinforces its credibility but also distances itself from foreign AI developments like DeepSeek. The endorsement from federal agencies could pave the way for OpenAI’s deeper integration into critical government operations.
Hugging Face and Hyperbolic Make DeepSeek R1 More Accessible
Hugging Face and Hyperbolic have stepped in to provide controlled access to DeepSeek R1 via cloud services, allowing users to run the model on U.S. servers with added security. This effort seeks to address privacy concerns surrounding the Chinese-developed model.
Deeper Insight:
This move highlights the growing demand for open AI models while also reflecting industry concerns over data privacy. If DeepSeek gains traction in enterprise settings, it could force regulatory discussions on AI data sovereignty and international AI collaboration.
Microsoft and OpenAI Investigate DeepSeek’s Alleged Use of OpenAI Data
Reports indicate that Microsoft and OpenAI are investigating whether DeepSeek trained its models using OpenAI’s outputs, potentially violating service agreements. The investigation focuses on whether OpenAI data was used for distillation, a process where a smaller model learns from a larger one.
Deeper Insight:
If proven, this case could set a precedent for AI intellectual property disputes. It also highlights the challenges in tracking data usage in AI development. As companies push toward AI transparency, proving data lineage will become a key regulatory issue.
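For background, distillation in its textbook form trains a small “student” model to imitate a larger “teacher.” A minimal PyTorch sketch of the classic distillation loss follows; the temperature and weighting values are illustrative defaults, not anything DeepSeek or OpenAI has disclosed.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soft targets: nudge the student's softened output distribution
    # toward the teacher's, scaled by T^2 as in Hinton et al. (2015).
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

The allegation against DeepSeek is a different flavor of the same idea: training on text sampled from another model’s API rather than on its internal probability distributions, which is far harder to prove.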
Brussels AI Lab Explores Child-Like Learning for Language Models
Researchers in Brussels are studying ways to train AI in a manner similar to how children learn language. By integrating environmental interactions and a more holistic understanding of meaning, this approach aims to reduce hallucinations and biases while making AI more energy-efficient.
Deeper Insight:
This could represent a paradigm shift in AI development. Instead of just predicting the next token, models trained this way would have deeper contextual awareness, potentially solving long-standing issues like misinformation and ambiguous responses. If successful, this could challenge the transformer-based dominance in AI.
Figure AI Creates Humanoid Safety Standards Initiative
Figure AI announced the Center for the Advancement of Humanoid Safety, aimed at establishing industry benchmarks for safety in embodied AI systems. Unlike existing AI safety standards, which focus on language models, this initiative targets robots and autonomous systems.
Deeper Insight:
As humanoid robots become more advanced, concerns over real-world risks, such as malfunctioning or unintended actions, are growing. Establishing clear safety metrics early could prevent regulatory roadblocks in the future. If widely adopted, these standards could shape how robotics companies develop and deploy AI-powered machines.
Reid Hoffman’s Manas AI Raises $24 Million for AI Drug Discovery
Reid Hoffman’s new venture, Manas AI, secured $24 million to focus on AI-driven drug discovery, particularly targeting aggressive cancers. The company aims to speed up the identification of promising compounds using AI.
Deeper Insight:
This is yet another sign that AI is transforming biotech. Drug discovery is a complex, expensive process, and AI’s ability to analyze massive datasets could significantly reduce costs and time-to-market. If successful, this could be a milestone in AI-assisted medical breakthroughs.
Pika 2.1 Brings Higher-Quality AI Video Generation
AI video generation platform Pika has released Pika 2.1, boasting 1080p resolution, improved character consistency, and smoother motion. The update brings Pika closer to industry leaders like OpenAI’s Sora and Google’s VideoFX.
Deeper Insight:
As AI-generated video becomes more realistic, industries like marketing, filmmaking, and social media are being reshaped. The battle for AI video supremacy is heating up, and companies that master character consistency and seamless motion will gain an edge in mainstream adoption.
Google Expands Free Access to Thinking Experimental Model
Google’s Gemini 2.0 Flash Thinking Experimental model is now free for users in AI Studio, allowing the public to test its advanced reasoning capabilities. Early feedback highlights its improved logical consistency and structured explanations.
Deeper Insight:
This move reflects Google’s strategy to regain its AI credibility after previous missteps with Bard. By making cutting-edge reasoning tools widely available, Google is positioning itself as a serious competitor to OpenAI and Anthropic in the reasoning model space.
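If you want to go beyond the AI Studio web interface, a free API key from AI Studio also works with Google’s google-generativeai Python SDK. A minimal sketch follows; the experimental model identifier is an assumption based on what AI Studio listed at the time and may have been renamed since.

```python
import google.generativeai as genai

# A free API key is available from Google AI Studio.
genai.configure(api_key="YOUR_AI_STUDIO_API_KEY")

# Assumption: this experimental model ID may have changed;
# call genai.list_models() to see what is currently available.
model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")

response = model.generate_content(
    "A bat and a ball cost $1.10 together. The bat costs $1.00 more "
    "than the ball. How much does the ball cost? Show your reasoning."
)
print(response.text)
```

Reasoning-tuned models like this one tend to shine on exactly these trick questions, where the fast pattern-matching answer ($0.10) is wrong and the worked answer ($0.05) takes a couple of explicit steps.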
Did You Miss A Show Last Week?
Enjoy the replays on YouTube or take us with you in podcast form on Apple Podcasts or Spotify.