The Daily AI Show: Issue #63
Mission Accomplished! - Sam and OpenAI

Welcome to Issue #63
Coming Up:
Two Years of Daily AI Conversations
AI Personalized Media: Connection or Isolation?
How AI Could Redefine Aging at Home
Plus, we discuss the preemptive celebration of GPT-5, how AI is being used to preserve history, the pros and cons of using AI for historical justice, and all the news we found interesting this week.
It’s Sunday morning.
Sit back and relax and help us celebrate 2 years of The Daily AI Show.
This one is for you!
The DAS Crew - Andy, Beth, Brian, Eran, Jyunmi, and Karl
Why It Matters
Our Deeper Look Into This Week’s Topics
Two Years of Daily AI Conversations
The Daily AI Show Turns 2
Two years ago, the AI landscape looked very different. GPT-3.5 had just arrived, DALL·E 2 was newly available to the public, Midjourney was racing through version updates, and the phrase “prompt engineering” was still finding its way into common vocabulary. It was a moment of rapid acceleration, with new tools, new ideas, and a sense that the world was tilting toward something big.
Fast forward to today, and the pace has only quickened. We’ve moved from chatbots as a novelty to integrated AI systems, from tinkering with image models to building complex multi-modal applications, and from isolated experimentation to a thriving global conversation. Along the way, hundreds of thousands of listeners and participants have joined in to share insights, challenge ideas, and push the conversation forward.
This anniversary is not just about looking back at the milestones, but also about looking ahead. The next two years will bring breakthroughs that feel impossible now, shifts in how we work and create, and opportunities to shape how AI is built and used. The community that has formed around these conversations is proof that curiosity and collaboration can keep pace with even the fastest-moving technology.
Here’s to the next chapter, with more exploration, more learning, and more shared discoveries.
AI Personalized Media: Connection or Isolation?
Social media is built around feeds that surface what people want to watch, read, and share. AI video generation is about to take that a step further. New tools will soon make it possible to create high-quality videos on demand, tailored to a person’s interests, mood, or goals. Instead of just curating content from existing creators, platforms could generate it instantly for each viewer.
This shift to just-in-time media personalization opens new possibilities. Education platforms could deliver lessons in the style, pace, and language that work best for a learner. Shopping experiences could feature interactive product videos that match a person’s preferences and needs. News and sports updates could be presented in real time with visuals and narration built for each audience member. For businesses, it means the ability to scale highly personalized marketing content production without a traditional agency or studio.
It also raises new questions. Hyper-personalized feeds could reduce shared cultural moments and make it harder to see perspectives outside our own. Creator economies may need to adapt if platforms rely more on generated content than human-made media. The challenge will be balancing the efficiency and creativity gains of AI generation against the losses in social value of human connection and shared experience.
WHY IT MATTERS
Content Can Become Truly Personal: AI video can adapt in style, format, and language to meet each user’s needs and preferences.
Opens New Creative Models: Individuals and small teams can produce high-quality content without large budgets or crews.
Education and Training Gain New Tools: Lessons can be tailored to how a person learns best, improving engagement and retention.
Cultural Dynamics Will Shift: Hyper-personalized feeds may change how trends spread and how communities form.
Businesses Can Scale Faster: AI reduces production costs and timetables, making it easier to reach audiences with targeted, timely content.
How AI Could Redefine Aging at Home
The idea of aging at home has always carried emotional and practical appeal. New advances in AI and robotics are bringing this vision closer to reality, offering tools that can help older adults maintain independence, dignity, and quality of life without leaving familiar surroundings.
AI-powered homes could combine wearables, voice assistants, smart appliances, and even embodied robots to assist with daily routines. These systems might remind residents to take medication, monitor health metrics, help prevent accidents, and connect them with family or caregivers. A robot could handle physical tasks like carrying groceries, retrieving items from high shelves, or providing stability while moving around.
Privacy and human connection in the connected smart home remain central concerns, since omnipresent AI assistance can shade into 24/7 surveillance. Some residents may prefer systems that anonymize video feeds, use non-intrusive sensors, or focus on voice interaction designed to preserve comfort and dignity. Others may welcome always-on conversational AI that can facilitate social engagement, coordinate visits, or mediate important healthcare discussions.
The technology also has potential beyond elder care. Similar systems could support people with disabilities, those recovering from injuries, or anyone who benefits from extra assistance in daily life. The goal is not just automation but a richer, safer, and more connected living experience.
WHY IT MATTERS
Aging in Place Becomes Easier: AI can make it possible for more people to live safely in their own homes for longer.
Enhances Quality of Life: From reminders to mobility assistance, AI can reduce stress for both residents and caregivers.
Preserves Privacy and Dignity: Thoughtful design can ensure monitoring supports safety without unnecessary intrusion.
Encourages Social Connection: AI can help maintain human interaction by facilitating communication with family and friends.
Broad Applications: These systems can serve not only the elderly but also individuals with disabilities or short-term health challenges.
Just Jokes

Did you know?
The Illinois Holocaust Museum is using AI to expand its interactive survivor testimonies beyond the Holocaust. The museum has added a digital interface that allows visitors to ‘ask’ questions of Kizito Kalima, a survivor of the Rwandan genocide. This builds on the museum’s earlier use of holographic interviews to preserve survivor voices. With this addition, it is the first time the museum has included a non-Holocaust narrative in its AI-powered archive.
This development marks an important shift in how institutions use AI to preserve history. It brings broader, newer narratives into the conversation while maintaining the emotional impact and memory of survivors. It also shows how AI can be a tool for education, inclusion, and empathy, especially as direct witness testimony becomes rarer.
This Week’s Conundrum
A difficult problem or question that doesn't have a clear or easy solution.
The Justice Mirror Conundrum
AI now gives ordinary people access to powerful investigative tools. Public records, property transfers, court filings, genealogies, and financial histories can all be analyzed at scale. This opens the door to surfacing long-buried injustices: land theft, exclusion, exploitation, erased contributions. Patterns that were once too complex or buried too deep can now be uncovered with a prompt.
For many, this feels like long-overdue progress. The ability to expose harm no longer rests solely with governments or academics. But turning on that spotlight comes with a price. AI does not draw moral lines between perpetrators, bystanders, or beneficiaries. The same data that uncovers stolen land or suppressed voices might also reveal how your own family, workplace, or neighborhood quietly profited. The lines blur fast.
What happens when the tools you use to seek justice for others bring uncomfortable truths about your own story?
The conundrum
If you want AI to surface hidden injustices and hold others accountable, are you also willing to let it judge you by the same standard, or does justice lose meaning when we only aim it outward?
Want to go deeper on this conundrum?
Listen/watch our AI hosted episode
News That Caught Our Eye
OpenAI Releases Open Source Models, GPT-OSS 120B and 20B
OpenAI quietly launched two open source models: GPT-OSS 120B and GPT-OSS 20B. The smaller model is already integrated into Windows by Microsoft, and the larger one runs on a single H100 GPU or via Hugging Face.
Deeper Insight:
After years of holding back on open source releases, OpenAI now enters the ring with highly performant local models. This could give developers greater flexibility while also serving as a counter to growing open source momentum from Mistral and Meta.
OpenAI Usage and Valuation Soar
ChatGPT now reports 700 million weekly users, up from 500 million earlier this year. OpenAI’s valuation is approaching $500 billion.
Deeper Insight:
Despite competition from Anthropic and Google, OpenAI's continued user growth means it remains the most widely adopted general AI platform. A potential IPO would reshape the AI investment landscape.
Google Launches Gemini DeepThink for Ultra Tier
Google rolled out its highest-tier AI service, DeepThink, for subscribers paying $250/month. This marks a shift from Google’s usual freemium approach to a more exclusive paywall.
Deeper Insight:
DeepThink carves out a premium tier in the AI model arms race. The $250/month price tag signals that elite access is becoming the new norm, pushing AI toward luxury-tier differentiation.
EU AI Act Now Fully in Effect
As of August 2, the EU AI Act is active. It classifies AI systems into four risk levels and requires transparency, documentation, and risk mitigation for high-risk systems. Mistral remains one of the few EU companies claiming full compliance.
Deeper Insight:
Global developers now face stricter guardrails in the EU. The Act is already influencing how companies deploy models, especially in sensitive sectors like hiring and finance.
Google and UC Riverside Unveil Real-Time Deepfake Detector
A joint project between Google and UC Riverside produced a real-time video deepfake detector with 98% accuracy, identifying face swaps, synthetic videos, and background manipulation.
Deeper Insight:
This is a major step toward building reliable trust signals into AI-generated media. Expect to see this tech integrated into social platforms and newsrooms as election cycles ramp up.
Perplexity Accused of Bypassing Anti-Scraping Controls
Cloudflare called out Perplexity for circumventing robots.txt files and other scraping blocks. Perplexity reportedly rotates IPs and mimics human behavior to gather data from restricted sites.
Deeper Insight:
This exposes a deeper tension: AI platforms need fresh data, but publishers want control. Until regulation or compensation frameworks are standardized, expect more conflict between AI startups and web gatekeepers.
Perplexity Expands with OpenTable and Multi-Agent Platform
Amid controversy, Perplexity added OpenTable integration for reservations and acquired Invisible to power multi-agent orchestration in its Comet browser.
Deeper Insight:
Even as it faces criticism, Perplexity is moving fast. Its aggressive roadmap could make it a serious agent platform contender alongside OpenAI and Google.
Eleven Labs Enters AI Music Generation with Licensed Datasets
Eleven Labs launched a music generator trained with licensed data from major indie catalogs (Merlin, Kobalt), aiming to create royalty-safe AI music.
Deeper Insight:
Unlike other tools that face legal risk, Eleven Labs’ approach opens the door to fully licensed AI music. This could legitimize AI-generated soundtracks in commercial use.
Genie 3 Brings Real-Time, Persistent World Generation
Google DeepMind's Genie 3 creates interactive worlds from text prompts that persist across actions and remember user input, showcasing a leap in user-navigable generative 3D environments.
Deeper Insight:
Genie 3 hints at the future of games, training, and simulation. It could drive personalized gameplay, immersive training for robots, and new AR/VR experiences.
Anthropic Releases Opus 4.1, Optimized for Coding
Anthropic quietly pushed out Opus 4.1 with better code editing precision and reduced over-editing. It's especially tuned for developers using Claude Code.
Deeper Insight:
Opus 4.1 sharpens Claude’s already strong coding capabilities. For power users on Claude Code’s $200 tier, this is a meaningful upgrade, and a warning shot at GitHub Copilot while GPT-5 lays a new claim to being the SOTA AI coder.
AI and Science: Meteorite Materials and Next-Gen Batteries
Researchers used AI to understand a hybrid material from meteorites with unusual heat resistance, potentially unlocking advances in neuromorphic computing and wearable electronics. Separately, NJIT used AI to model new multivalent battery materials with much higher capacity than lithium-ion.
Deeper Insight:
AI continues to accelerate materials science. These studies could lead to better energy storage, quantum computing components, and even Matrix-style thermoelectric devices powered by body heat.
Did You Miss A Show Last Week?
Enjoy the replays on YouTube or take us with you in podcast form on Apple Podcasts or Spotify.