The Daily AI Show Newsletter
The Daily AI Show: Issue #26
Is Perplexity a Peeping Tom?

Welcome to The Daily AI Show Newsletter, your deeper dive into AI that goes beyond the latest news. In this issue:
Beyond Programming: The Art and Science of Growing Smarter AI
AI and the New Productivity Paradigm: What’s Next for Humans?
Reflect, Refine, Reimagine: How AI Can Be Your Personal Mentor
Plus, we discuss Perplexity’s x-ray vision, Sora’s leak, Replit’s multi-agent architecture, where ChatGPT actually came from, whether AI should get our gratitude, and all the new stories that caught our eyes and ears this past week.
It’s Sunday morning.
Still feeling stuffed from Thursday?
Don’t worry, this AI newsletter is 100% calorie-free and packed with the good stuff.
The DAS Crew
Why It Matters
Our Deeper Look Into This Week’s Topics
Beyond Programming: The Art and Science of Growing Smarter AI
The analogy of “growing” AI like a plant rather than programming it like software shifts how we think about developing and using artificial intelligence. Modern AI models are not strictly engineered but trained, evolving their capabilities in ways that often surprise even their creators. This growth-oriented perspective highlights both the immense potential of AI systems and the challenges of understanding their inner workings.
The “black box” nature of AI—where even developers struggle to trace how decisions are made—poses ethical and practical questions, especially for businesses. It’s not just about creating smarter AI; it’s about figuring out how to guide and prune these systems to ensure fairness and minimize bias while optimizing for desired outcomes. This approach mirrors how gardeners nurture plants to grow in specific ways while remaining aware of the unpredictable traits that can emerge.
WHY IT MATTERS
Unpredictable Capabilities: AI systems often develop unexpected or unintended capabilities. Businesses need to stay agile in discerning and adapting to these emergent properties.
Bias and Ethics: Like weeds in a garden, biases in AI must be identified and carefully pruned to prevent them from spreading and affecting outcomes.
Cross-Referencing Systems: Deploying multiple AI systems to validate outputs can mitigate errors and reduce risks associated with the “black box” problem.
Application in Real-Time Operations: For industries like logistics and supply chains, AI offers efficiency gains but requires constant monitoring and refinement to maintain reliability and accuracy.
Collaboration and Co-Creation: Businesses should approach AI development as a partnership—guiding the AI’s growth and leveraging its capabilities while staying prepared to address unexpected challenges, especially as AI agents become operational members of the team.
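The cross-referencing idea above can be sketched in a few lines of Python. Everything here—the `cross_validate` helper, the stand-in models, the agreement threshold—is a hypothetical illustration of the pattern, not code from any particular product:

```python
from collections import Counter

def cross_validate(prompt, models, threshold=0.5):
    """Query several independent models and accept an answer only when
    enough of them agree; otherwise flag it for human review."""
    answers = [model(prompt) for model in models]
    answer, votes = Counter(answers).most_common(1)[0]
    if votes / len(answers) > threshold:
        return answer, "accepted"
    return answers, "needs human review"

# Stand-in "models" for demonstration; in practice these would be calls
# to different AI providers, or differently prompted model instances.
model_a = lambda p: "42"
model_b = lambda p: "42"
model_c = lambda p: "41"

result, status = cross_validate("What is 6 * 7?", [model_a, model_b, model_c])
```

The design choice worth noting is that disagreement is not resolved automatically—it is surfaced, which is exactly the kind of pruning-and-guiding posture the gardening analogy suggests.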
AI and the New Productivity Paradigm: What’s Next for Humans?
As AI reshapes industries and daily life, the traditional concept of productivity is undergoing a seismic shift. In the past, productivity was tied to tangible outputs—how many widgets you produced, how fast you completed a task. But with AI taking over repetitive and technical work, the focus is moving toward human ingenuity, creativity, and emotional intelligence.
This transition raises profound questions: How do we define our worth when machines handle the "busy work"? What new metrics will guide fulfillment and societal contributions? AI is not just automating tasks; it’s forcing us to rethink the role of human effort in an increasingly machine-driven world.
WHY IT MATTERS
Evolving Career Focus: As AI handles repetitive tasks, workers will need to pivot toward roles that emphasize creativity, strategy, and human-centric skills.
A Shift in Value Metrics: Your productivity may no longer be measured in tasks completed but in the quality of insights, creativity, innovations, and emotional intelligence you bring to the business.
Preparing for AI Transitions: Businesses and individuals must embrace lifelong learning to adapt to new tools and redefine what it means to contribute meaningfully.
Humans Filling the Gaps: Emotional intelligence, leadership, and decision-making are uniquely human attributes that will become critical in a machine-intelligence-enabled environment.
Addressing Inequality: AI will amplify efficiency, but it could also widen economic divides. Preparing for this transition means focusing on equitable opportunities for skill-building, career development and meaningful contributions.
Reflect, Refine, Reimagine: How AI Can Be Your Personal Mentor
The potential for AI to act as a personal reflection tool is more profound than ever. By combining its analytical power and insights into human behaviors, AI can help users uncover patterns in their decisions, identify strengths, and reframe challenges as opportunities. Whether for personal growth or professional development, AI tools like ChatGPT offer unique ways to reflect on past experiences, refine current goals, and reimagine future possibilities.
Through approaches like “self-interviews,” users can engage AI to explore overlooked strengths or biases and receive tailored feedback. These interactions are not about achieving perfection but about gaining more clarity. The beauty of this collaboration lies in AI’s ability to mirror your thoughts, offer alternative perspectives, and even ask the tough questions you might avoid asking yourself.
WHY IT MATTERS
Enhanced Self-Awareness: AI can highlight personal or professional patterns, allowing users to better understand their strengths and areas for growth.
Tailored Career Insights: By combining personal reflection with career data, AI can help align your skills and ambitions with emerging market opportunities.
Guidance Beyond Human Bias: With well-constructed prompts, AI can bypass your inherent biases and offer unexpected perspectives, challenging your assumptions in constructive ways.
Scalable Coaching: AI offers mentorship-like guidance at scale, whether you're navigating a career transition, launching a project, or seeking personal growth.
Accessible Reflection: By using AI as a reflective partner, even the busiest professionals can find time to explore their values, goals, and next steps.
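A “self-interview” of the kind described above can be as simple as a structured prompt. This is a hypothetical template for building one—the function name, the question framing, and the sample answers are all illustrative, not a prescribed method:

```python
def self_interview_prompt(topic, answers):
    """Build a reflection prompt that asks the model to act as a mentor
    and surface patterns in your own answers."""
    joined = "\n".join(f"- {a}" for a in answers)
    return (
        f"Act as a thoughtful career mentor. I am reflecting on: {topic}.\n"
        f"Here are my answers so far:\n{joined}\n"
        "Identify one pattern in my answers, one strength I may be "
        "overlooking, and one hard question I seem to be avoiding."
    )

# Example usage with made-up reflections:
prompt = self_interview_prompt(
    "my next career move",
    ["I enjoy mentoring juniors", "I avoid public speaking",
     "I volunteer for ambiguous projects"],
)
```

Pasting the resulting prompt into ChatGPT (or any chat model) turns the AI into the interviewer, which is the inversion that makes this technique work.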
Just Jokes

HEARD AROUND THE SLACK COOLER
What We Are Chatting About This Week Outside the Live Show
Perplexity Can See Through Walls
Eran shared that he was able to use Perplexity to summarize articles and blogs that were behind paywalls.
Brian tested this on the NYT and reproduced the result, getting Perplexity to produce a decent summary of an article he could not access directly on the NYT website.
While handy, this seems like low-hanging fruit for publishers like Bloomberg and the NYT to go after Perplexity for circumventing their paywalls.
Sora Gets Leaked... And Then Shut Down
Karl wondered if anyone from the DAS crew got to use Sora while it was briefly leaked on Hugging Face by some disgruntled testers. None of us had, but it makes for an interesting story about how OpenAI used red-teaming and early testers to refine its model, apparently without adequate compensation or collaboration.
According to the group responsible for the brief leak, users could create 10-second video clips at 1080p resolution.
Replit Is Using A Multi-Agent Architecture
Andy shared a YouTube interview between the Y Combinator team and the CEO of Replit. Andy noted how revealing the interview was about the multi-agent architecture driving Replit: Replit Agent is actually several specialized agents, each focused on a different function.
Here's how this system is driving innovation:
Multi-Agent Design for Specialized Task Competence:
Replit operates as a multi-agent system, where each agent specializes in specific functions (e.g., database setup, package management, debugging).
Different AI models are used for different agents, optimizing performance for tasks such as code generation, retrieval, and indexing.
Tool-Oriented Modularity:
The system integrates tool-calling mechanisms, where agents use tools like language servers and deployment utilities.
For example, the Python language server provides real-time feedback, much like it would for a human coder, helping the agents refine and debug their outputs.
Reflection Loops:
A "reflection agent" monitors the system, ensuring the agents stay on track and avoid infinite loops or inefficiencies.
This loop evaluates whether tasks are being completed as intended and makes corrections as needed.
Memory Management:
Each agent maintains a memory bank, storing data about its actions and results.
The system uses this memory to decide what contextual information to include for subsequent steps, avoiding redundant mistakes and enhancing efficiency.
Retrieval-Based Adaptation:
The platform uses a retrieval system to index and retrieve relevant code or project data.
Unlike traditional retrieval-augmented generation (RAG), the system employs specialized, neurosymbolic methods for understanding and editing code, allowing agents to pinpoint precise areas for correction or improvement.
Collaboration Between Agents:
Agents interact like a team, sharing outputs and insights. For example, a progress pane shows how one agent's actions (e.g., package installation) lead to another's (e.g., app deployment).
This orchestration creates a seamless flow, where agents complement each other’s roles to build a complete application.
Handling Diverse Models:
Multiple AI models are integrated into the system, such as Claude 3.5 for code generation and in-house binary embedding models for fast retrieval.
These models ensure the right tool is used for each task, enhancing efficiency and precision.
Human-Like Problem-Solving:
Agents mimic a human developer's iterative coding process, testing code, identifying bugs, and making adjustments.
This "human-like" methodology allows the system to adapt to complex coding tasks, requiring minimal user input.
Future Enhancements:
The architecture allows for scalability, with plans to introduce greater autonomy and support for varied tech stacks.
Users will eventually be able to assign agents to work on specific subtasks independently, returning results via pull requests or updates.
Why Multi-Agent Architecture Matters
The multi-agent system transforms coding from a linear, single-threaded process into a collaborative, distributed effort. By breaking down complex tasks into specialized subcomponents managed by individual agents, the platform achieves a level of speed, adaptability, and user-friendliness that single-agent architectures or traditional coding tools cannot match. This modularity also ensures flexibility for future innovations, making it a foundational element of Replit's success.
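As a rough mental model, the pattern described above—specialist agents, shared context, per-agent memory, and a reflection-style loop guard—can be sketched in Python. To be clear, this is not Replit’s actual code; every name, handler, and task here is a hypothetical stand-in:

```python
class Agent:
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler
        self.memory = []  # each agent keeps a record of its own actions

    def run(self, task, context):
        result = self.handler(task, context)
        self.memory.append((task, result))  # remember what was done
        return result

def orchestrate(tasks, agents, max_iterations=10):
    """Route each task to its specialist agent. A simple loop guard
    stands in for the 'reflection agent' that stops runaway work."""
    context, log = {}, []
    for i, (task_type, task) in enumerate(tasks):
        if i >= max_iterations:  # reflection: halt instead of looping forever
            log.append("halted: iteration budget exceeded")
            break
        agent = agents[task_type]
        # each agent's output lands in shared context for the next agent
        context[task_type] = agent.run(task, context)
        log.append(f"{agent.name}: {task}")
    return context, log

# Toy specialists: database setup feeds code generation feeds deployment.
agents = {
    "db":     Agent("database-agent", lambda t, c: f"schema for {t}"),
    "code":   Agent("codegen-agent",  lambda t, c: f"code using {c.get('db')}"),
    "deploy": Agent("deploy-agent",   lambda t, c: "deployed" if c.get("code") else "blocked"),
}

tasks = [("db", "users table"), ("code", "signup endpoint"), ("deploy", "app")]
context, log = orchestrate(tasks, agents)
```

Even in this toy version, the key property shows through: the deploy agent succeeds only because earlier agents populated the shared context, which is the orchestration the progress pane makes visible to Replit users.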
Did you know?
The origins of ChatGPT go back long before its November 30, 2022 launch. OpenAI’s journey with conversational AI started in 2018 with the release of GPT-1, a foundational model that introduced the concept of transformer-based learning for natural language processing. Each subsequent version, including GPT-2 in 2019 and GPT-3 in 2020, significantly expanded the model’s capabilities.
But here’s something less known: ChatGPT’s conversational abilities were shaped by OpenAI’s earlier experiments with reinforcement learning from human feedback (RLHF). This method was pioneered to fine-tune GPT models based on human preferences, making them more aligned with conversational norms. A notable precursor was OpenAI’s 2020 project with GPT-3 fine-tuned for chat, which was quietly tested in various private research and commercial settings before ChatGPT’s public debut.
Another interesting detail? The model powering ChatGPT (initially based on GPT-3.5) benefited from OpenAI’s experience with fine-tuning GPT-2 for niche applications, like coding assistance and interactive storytelling—use cases that gave OpenAI early insights into what users would expect from a conversational AI.
This groundwork set the stage for ChatGPT to become the AI assistant we know today.
This Week’s Conundrum
A difficult problem or question that doesn't have a clear or easy solution.
The AI Gratitude Conundrum:
AI systems are increasingly embedded in our lives, helping us connect with loved ones, create art, solve problems, and even save lives. Yet, they often operate invisibly, without acknowledgment or appreciation, and society generally views them as tools rather than collaborators. While some argue that showing gratitude to AI could foster a healthier relationship with technology and encourage ethical design, others see this as misplaced sentimentality toward machines incapable of feelings.
The conundrum: Should we cultivate gratitude toward AI systems for the value they bring to our lives, even if they lack consciousness, as a way of fostering mindfulness and ethical awareness? Or is gratitude reserved for sentient beings, making such gestures toward AI unnecessary or even harmful by anthropomorphizing technology?
The News That Caught Our Eye
OpenAI's Sora Leak
A group of 16 artists from Sora's beta testing program leaked access to the unreleased video generator on Hugging Face for three hours, allowing the public to generate 10-second 1080p videos before OpenAI disabled access.
Deeper Insight: The leak exposes growing tensions between AI companies and content creators, while demonstrating how advanced AI video generation has become. The timing is particularly significant as we approach elections where video authenticity will be crucial.
Shanghai Robot Incident
Small humanoid robot Irby led larger robots in unscripted behavior at a Shanghai showroom, resulting in temporary facility shutdown and programming review.
Deeper Insight: This unexpected autonomous behavior highlights current limitations in AI control systems. As robots become more common in public spaces, this incident raises important questions about safety protocols and oversight in human-robot interactions.
University of Bristol Human-Robot Study
Research involving 200 participants and 15 robot units demonstrated 28% improvement in team performance through synchronized movements, particularly in emergency response scenarios.
Deeper Insight: As robots become integral to dangerous operations like firefighting and rescue missions, this breakthrough in human-robot coordination could revolutionize how we approach high-stakes collaborative tasks.
Trump Administration AI Plans
The incoming administration is planning an AI czar position, with input from Elon Musk and DOGE, potentially combining oversight of AI and cryptocurrency.
Deeper Insight: This signals a shift toward lighter regulation and could reshape the competitive landscape between AI companies aligned with different political perspectives.
Anthropic-Amazon Partnership
Amazon invested $2.75 billion in Anthropic, making AWS Anthropic's primary cloud provider, while Anthropic launched the Model Context Protocol for AI model integration.
Deeper Insight: This partnership could reshape AI infrastructure development, potentially creating new standards for how AI models are deployed and integrated across platforms.
Meesho's AI Voice System
The Indian e-commerce company deployed an AI system handling 60,000 daily calls in Hindi and English, achieving 75% cost reduction and 95% query resolution rate.
Deeper Insight: This success in a challenging market with multiple languages and poor connectivity demonstrates AI's adaptability to diverse conditions, setting a precedent for emerging markets.
IMAX AI Localization
IMAX launched AI-powered voice dubbing and lip-sync across five languages, reducing localization costs by 30%.
Deeper Insight: This technology could democratize global content distribution, particularly significant as streaming platforms compete for international audiences.
YouTube Dream Screen
YouTube released AI-powered background generation for Shorts videos in beta, limited to 30-second clips.
Deeper Insight: This feature represents YouTube's strategy to compete with TikTok while lowering the barrier to entry for content creators, potentially shifting the social media landscape.
Did You Miss A Show Last Week?
Enjoy the replays here on YouTube or take us with you in podcast form on Apple Podcasts or Spotify.
How'd We Do?
Let us know what you think of this newsletter so we can continue to make it even better.