The Daily AI Show: Issue #36
Ted Lasso smells potential with AI

Welcome to #36.
In this issue:
AI Overload: Sorting Through the Model Boom of 2025
o3 Minis and Deep Research: Smarter AI or Just Slower Answers?
From Search to Structured Analysis: How AI Research Tools Compare
Plus, we discuss an AI tutor takeover, OpenAI’s rumored sales agent, Perplexity’s push for dominance, an AI-created religion, and all the news we found interesting this week.
It’s Super Bowl Sunday morning.
AI doesn’t watch football, but it has analyzed every stat, ad strategy, and the best time to grab snacks.
Mmm, nachos!
The DAS Crew - Andy, Beth, Brian, Eran, Jyunmi, and Karl
Why It Matters
Our Deeper Look Into This Week’s Topics
AI Overload: Sorting Through the Model Boom of 2025
The AI landscape has never been more crowded. Over the past few months, new models have launched at an unprecedented pace, making it difficult for even the most engaged users to keep up. From DeepSeek’s rise to OpenAI’s expanding lineup, and Google’s relentless push with Gemini, the rapid development has led to an overwhelming number of choices.
The LLM Arena leaderboard highlights just how competitive the space has become. Gemini 2.0 Flash Thinking currently sits at the top, followed closely by DeepSeek R1, GPT-4o, and various iterations of OpenAI’s latest models. Despite the flood of new options, mainstream adoption still favors ChatGPT, with businesses and everyday users defaulting to OpenAI’s ecosystem out of convenience and familiarity.
While competition fuels innovation, the sheer number of models raises questions about differentiation. Are these releases truly groundbreaking, or is the industry becoming oversaturated with models that offer marginal improvements? More importantly, for the average user, do these models have clear, distinct use cases that justify switching platforms?
WHY IT MATTERS
The Race for the Best Model is Heating Up: New releases from OpenAI, Google, DeepSeek, and others are constantly reshuffling AI rankings, but user adoption is not keeping pace with the volume of models being introduced.
Open Source is Closing the Gap: DeepSeek R1’s performance against proprietary models shows how open-source AI is becoming more competitive, challenging the dominance of major tech companies.
Confusion is Growing: With so many options available, users are struggling to determine which models best suit their needs. Feature overlap and inconsistent naming make it harder to compare models effectively.
Specialization is Key: While general-purpose AI remains dominant, niche models optimized for research, reasoning, and creative applications are gaining traction, pushing AI adoption beyond simple chat interactions.
The Future of Model Differentiation: Companies will need to focus on more than just raw performance. The success of a model will depend on accessibility, integration, and real-world utility rather than leaderboard rankings alone.
o3 Minis and Deep Research: Smarter AI or Just Slower Answers?
The release of OpenAI’s o3-mini, o3-mini-high, and o3 Deep Thinking models marks a shift in AI reasoning capabilities. Unlike previous models, these versions don’t just process text: they engage in simulated reasoning, pausing, reflecting, and adjusting their thought processes in real time. This allows for more dynamic decision-making, a critical step toward more advanced agentic workflows.
Deep Research, a new framework layered onto multiple OpenAI models, further enhances these capabilities by integrating deeper source validation and more structured outputs. The ability to turn Deep Research on or off provides flexibility, but questions remain about how these models evaluate their sources and whether their self-directed reasoning aligns with human expectations.
WHY IT MATTERS
Simulated Reasoning Sets a New Standard: Unlike past models that relied on explicit chain-of-thought prompts, o3 models can self-correct and refine their reasoning mid-process.
Deep Research Enhances Transparency: Users can now access more detailed citations and structured results, though source validation still lacks the robustness of traditional search engines.
Agentic AI Moves Closer to Reality: These models lay the groundwork for AI systems that can plan, adapt, and execute tasks without requiring human intervention at every step.
Speed vs. Depth Trade-Offs: More reasoning isn’t always better. Some tasks benefit from quicker responses, while others require the depth of o3 Deep Thinking.
Business and Research Applications Expand: o3 models, paired with Deep Research, open up new possibilities for competitive analysis, content strategy, and technical research.
From Search to Structured Analysis: How AI Research Tools Compare
AI-driven research tools are redefining how students, professionals, and businesses gather, analyze, and synthesize information. Deep research models from OpenAI, Google’s Gemini, Stanford’s Storm, and Perplexity all offer different approaches to AI-powered research, each with distinct strengths. While OpenAI’s Deep Research model focuses on reasoning and analysis, Gemini’s Deep Research excels in retrieving a vast number of sources. Perplexity provides fast, citation-backed summaries, and Stanford’s Storm emphasizes academic rigor with structured insights.
The conversation around these models highlights a shift in how AI is being used, not just as a search tool, but as an active research assistant. Whether it’s drafting papers, summarizing complex topics, or generating insights from large datasets, AI is moving beyond information retrieval into structured reasoning and iterative analysis.
WHY IT MATTERS
AI Research is Becoming More Sophisticated: Models now go beyond simple search results, offering structured, multi-source analysis tailored to academic and business needs.
Customization is the Key Differentiator: OpenAI’s model iterates on research questions, Google provides broad source retrieval, and Perplexity delivers fast, citation-backed answers.
Academic and Business Applications Diverge: Students and researchers benefit from iterative reasoning, while businesses favor AI tools that integrate with existing workflows like Google Docs and Perplexity Spaces.
The Cost Factor: OpenAI’s Deep Research remains exclusive to high-tier users, while Gemini and Perplexity offer lower-cost access to research-focused AI tools.
Job Market Disruptions: AI’s ability to perform mid-level analyst work raises concerns about automation replacing research-heavy roles, accelerating shifts in employment structures.
Just Jokes

Did you know?
A 2024 study from Harvard University found that AI tutoring is twice as effective as active classroom learning. Researchers tested three different teaching methods: passive lectures, active classroom participation, and AI-powered tutoring.
The students who learned with AI tutors significantly outperformed the others, showing higher retention rates, better understanding of concepts, and improved problem-solving skills.
One of the key findings was that AI tutoring was just as effective as one-on-one instruction from a human tutor. The AI adapted to each student’s learning pace, identified weak spots, and provided personalized feedback. This level of individualized instruction is difficult to achieve in traditional classrooms.
The study suggests that AI-driven education could help close learning gaps and make high-quality tutoring available to a wider audience.
With more schools integrating AI tools into their curriculums, this research highlights how AI could transform education in the near future.

HEARD AROUND THE SLACK COOLER
What We Are Chatting About This Week Outside the Live Show
OpenAI Sales Agent
Karl shared a video from OpenAI’s presentation in Tokyo that demos a new sales agent. It takes contact-form information, enriches the data, and drafts a follow-up email, all within the system.
This is just a demo, and no other information has come out about a release, but it shows where AI agents are headed and how quickly they could disrupt business processes like sales and marketing.
Brian Thinks Perplexity Isn’t Leaving Enough Awesomeness For the Rest of Us
Perplexity has been on fire recently. It added both DeepSeek R1 and OpenAI o3-mini as the reasoning models orchestrating Perplexity Pro Search, made Gemini 2.0 Flash a selectable model alongside GPT-4o, Claude 3.5 Sonnet, and Grok-2, and released updated versions of its own models, Sonar and Sonar Pro, through the API.
Days later, they moved forward again with the release of Sonar Reasoning Pro through the Perplexity API; this model combines reasoning skills trained with R1 plus real-time web data.
Brian has been playing with all of them in Make, and is particularly impressed with the big leap Perplexity made from their best previous API model.
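For anyone curious what calling these models looks like outside a tool like Make, here is a minimal sketch of the request body for Perplexity’s OpenAI-style chat completions API. The endpoint and the "sonar-reasoning-pro" model name match Perplexity’s published API at the time of writing, but treat the details (system prompt, parameters) as assumptions to verify against the current docs before use.

```python
import json

# Perplexity's API follows the OpenAI chat-completions format.
# Endpoint per Perplexity's docs; double-check before relying on it.
API_URL = "https://api.perplexity.ai/chat/completions"

def build_sonar_request(question: str, model: str = "sonar-reasoning-pro") -> dict:
    """Assemble the JSON body for a single-turn research query.

    The system prompt here is illustrative, not required by the API.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "Be precise and cite sources."},
            {"role": "user", "content": question},
        ],
    }

payload = build_sonar_request("What changed in Perplexity's API models this week?")
print(json.dumps(payload, indent=2))
```

In practice you would POST this body to the endpoint with an `Authorization: Bearer <api-key>` header; the response comes back in the familiar chat-completions shape, with citations included for the Sonar models.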
Beth says “Yes, Please!” to Google’s Recent Smart Glasses Patent With Adaptive Optics
This Week’s Conundrum
A difficult problem or question that doesn't have a clear or easy solution.
The AI-Created Religion Conundrum
AI has the capacity to analyze all human belief systems, extract common patterns, and even generate entirely new philosophies or religions. Some people might find deep meaning in an AI-designed spiritual framework, one that eliminates contradictions, adapts to new knowledge, and provides a structured path to purpose. Unlike traditional religions, which evolved over centuries through human experience and interpretation, an AI-generated belief system could be built with logic, balance, and inclusivity from the start.
But if people begin to follow an AI-created religion, questions arise about its legitimacy. Can something created by an artificial system hold the same spiritual weight as faiths rooted in human history, tradition, and personal revelation?
Does the source of a belief matter if it provides meaning and moral guidance?
Or would such a movement signal the ultimate loss of human-driven spirituality, replacing organic faith with a synthetic construct?
The conundrum: If an AI-generated religion brings people comfort, purpose, and ethical guidance, should it be considered as valid as human-founded spiritual traditions? Or does faith require something beyond computational logic, making an AI-created belief system fundamentally hollow?
News That Caught Our Eye
Hugging Face Releases Open Research Assistant with OpenAI’s Assist
Hugging Face introduced Open Deep Research, an open-source alternative to proprietary AI research tools. It makes advanced AI search and synthesis available to independent researchers, universities, and businesses that prefer open-source solutions. Open Deep Research currently calls OpenAI’s large language models, such as GPT-4o, o1, and o3-mini, through an API; that access lets the tool browse the web and generate research reports. Hugging Face has designed the tool to eventually support open-weight models as well.
Deeper Insight:
By decentralizing deep research capabilities, this initiative reduces dependency on major AI firms. It could drive innovation by giving startups and smaller organizations access to research-grade AI. If it gains traction, open-source deep research could change the balance of power in AI research and analysis.
ByteDance Unveils OmniHuman, a New Deepfake Generator
ByteDance introduced OmniHuman, an AI system capable of generating hyper-realistic deepfake videos. Unlike previous models, OmniHuman can modify body proportions, adjust movement styles, and blend different video formats, including cartoons and real footage. The system, trained on 19,000 hours of video, can also alter movements in existing clips.
Deeper Insight:
OmniHuman takes deepfake realism to another level, making AI-generated video more convincing than ever. This has massive implications for entertainment and gaming but also raises concerns about misinformation and identity fraud. Ten U.S. states have already passed anti-AI impersonation laws. Regulators may need to act quickly before deepfake technology becomes a major societal risk.
Anthropic’s Unhackable AI? $15,000 Bounty Goes Unclaimed
Anthropic’s Claude AI models have withstood 10,000 adversarial jailbreak attempts without a single successful break-in. The company initially offered $15,000 to any developer who could jailbreak Claude, but no one succeeded. Now, Anthropic is opening the challenge to the public.
Deeper Insight:
Anthropic’s approach, which trains AI within constitutional guardrails rather than relying on post-training reinforcement learning, appears to be working. If Claude remains resistant to manipulation, it could set a new standard for AI safety. However, some worry that overly restrictive AI models may struggle to handle complex real-world ethical decisions.
MIT AI Model Maps How Genes Influence Disease
MIT researchers developed an AI model that analyzes 3D genomic structures to understand how DNA sequences affect gene expression. This breakthrough could help researchers identify mutations linked to diseases more efficiently.
Deeper Insight:
AI is playing a bigger role in personalized medicine. By modeling genetic interactions more precisely, this research could lead to faster drug development and better early disease detection. In the future, AI may help doctors create treatments tailored to individual genetic profiles.
California State University Rolls Out ChatGPT EDU for 500,000 Students
California State University (CSU) is now the first major university system to fully integrate ChatGPT Edu, OpenAI’s specialized education model. More than 460,000 students and 63,000 faculty members across 23 campuses will have access.
Deeper Insight:
This marks a turning point in AI adoption in education. Instead of banning AI, CSU is incorporating it at scale. If successful, this could reshape how universities teach, moving toward AI-assisted research, coursework, and problem-solving rather than traditional memorization-based learning.
Figure AI Drops OpenAI as Its Robotics Partner
Figure AI, a leading robotics startup, has ended its partnership with OpenAI. The company will develop its own in-house AI models instead. Its CEO promised to unveil something "never seen before" in humanoid robotics within 30 days.
Deeper Insight:
This decision reflects a trend where AI companies are shifting away from external providers in favor of proprietary systems. If Figure AI succeeds, it could mark a turning point for robotics companies that want to control their own AI infrastructure rather than relying on firms like OpenAI and NVIDIA.
AI-Powered Breast Cancer Screening Shows Real-World Impact
A study in Germany involving 460,000 women found that AI-assisted mammogram screenings improved breast cancer detection rates without increasing false positives. The AI acted as a second reviewer, working alongside human doctors.
Deeper Insight:
This study reinforces the value of AI as an augmentation tool rather than a replacement for doctors. AI is proving useful in diagnostics, but its biggest impact may be in enhancing human decision-making rather than making independent calls.
Humanity’s Last Exam: AI Now Scores 26% on Expert-Level Benchmark
A new AI evaluation known as "Humanity’s Last Exam" is making waves. The test draws on expert knowledge across many fields, making it nearly impossible for any single person to pass. Just weeks ago, leading AI models scored between 3% and 8%, but recent advances in reasoning models have pushed scores to 26%.
Deeper Insight:
This milestone suggests AI is moving toward a level of reasoning and problem-solving that was previously out of reach. If AI continues to improve on a test designed to challenge human intelligence at its highest levels, we could soon see AI systems that surpass human experts in research, planning, and decision-making.
Did You Miss A Show Last Week?
Enjoy the replays on YouTube or take us with you in podcast form on Apple Podcasts or Spotify.