The Daily AI Show: Issue #97
Where did my job go?

Welcome to Issue #97
Coming Up:
The Vanishing First Job
AI Has Spread Faster Than Public Trust
AI Just Gave Climate Science a Better View of the Ocean
Plus, we discuss AI’s version of decluttering, AI cancer predictions using a child’s leukemia cells, what happens when AI agents negotiate for you, and all the news we found interesting this week.
It’s Sunday!
Rise, and more importantly, don’t forget to shine.
The DAS Crew
Our Top AI Topics This Week
The Vanishing First Job
The first rung of the white-collar ladder is giving way, and young workers can feel the collapse before economists settle the full argument about artificial intelligence. Employers spent two decades talking about talent pipelines, early potential and on-the-job development. Many now want something closer to a finished product that leverages AI. That shift is leaving new graduates stuck behind a gate in a market that calls some jobs entry level while screening for experience, software fluency and immediate high-volume output.
The data already show the strain. The national unemployment rate stood at 4.3 percent in March, according to the Bureau of Labor Statistics. But for recent college graduates, the New York Fed put unemployment at 5.7 percent in the fourth quarter of 2025, with underemployment at 42.5 percent, the highest level since 2020. In plain terms, plenty of graduates are working, but far too many are landing in jobs that pay poorly and do not require the degree they just spent years and large sums of money to earn.
Tech offers the clearest picture of the shift, even if the pattern reaches well beyond Silicon Valley. SignalFire’s 2025 talent report found that new graduates account for only 7 percent of hires at Big Tech firms, down 25 percent from 2023 and more than 50 percent from 2019. At startups, new graduates make up less than 6 percent of hires, with hiring down 11 percent from 2023 and more than 30 percent from 2019. Revelio Labs found entry-level postings down more than 35 percent from January 2023, with highly AI-exposed entry-level roles down more than 40 percent. That points to a market where companies are trimming junior roles first, especially jobs built around routine digital tasks, the very tasks that have long served as the training ground where newcomers learn the cadence of corporate workflows.
AI is part of that story because it changes the economics of training. A manager who once hired an analyst to clean data, draft research notes or write first-pass code can now hand part of that work to software and the rest to a smaller number of experienced employees. LinkedIn described this year’s labor market as undergoing a rotation in where opportunities form, with AI influencing the shape of jobs even when it is not the main source of weak hiring. The World Economic Forum has found that 40 percent of employers expect to reduce headcount where AI can automate tasks. Companies are building leaner teams, and beginners are the first group to lose their footing when every hire is expected to deliver from day one.
That creates a deeper problem than one rough season for graduates. Entry-level jobs have long served as the training ground where workers learn judgment, context and the unwritten rules of an industry through experience. When firms cut that layer, they save money in the present and shrink their future bench at the same time. Colleges will have to respond with stronger work-based learning, internships and portfolio-driven programs. Employers will have to decide whether they still want a next generation of talent or only a market full of workers trained by somebody else. The old bargain between education and employment is fraying, and the damage is showing up first in the inboxes of people applying for their first real job.
AI Has Spread Faster Than Public Trust
America’s AI debate has entered an awkward phase. The tools are spreading fast, the companies building them are talking like inevitability has settled the argument, and the public remains unconvinced.
The new Stanford AI Index captures that disconnect in unusually sharp terms. Nearly two-thirds of Americans expect AI to reduce the number of jobs over the next 20 years. AI experts are less pessimistic, and they expect the technology to move through the workplace much faster than the public does. Stanford’s broader takeaway is even more telling: AI capability, investment and deployment keep moving ahead, while the systems meant to explain, govern and evaluate the technology are falling behind.
That gap matters because AI is no longer a niche product used by engineers and early adopters. Stanford says generative AI reached 53 percent population adoption in three years, faster than the personal computer or the internet did on the same measure. And yet the United States, for all its swagger about leading the AI race, ranks only 24th in adoption at 28.3 percent, far behind places such as Singapore and the United Arab Emirates. The country that produces many of the headline models is still struggling to build a broad social consensus around using them.
The numbers from Pew help explain why. In a 2025 survey highlighted this year, 50 percent of Americans said they were more concerned than excited about AI in daily life, while only 10 percent said they were more excited than concerned. At the same time, awareness and exposure keep rising. Nearly half of Americans say they have heard a lot about AI, up sharply from 2022. More than 60 percent report interacting with AI at least several times a week, though Pew notes that people probably undercount how often they encounter it through embedded systems such as recommendations, rankings and navigation. Americans are using AI while remaining uneasy about what it is doing to work, judgment and control.
That unease is now showing up inside companies, too. Gallup reported this week that half of employed Americans use AI in their role at least a few times a year, with frequent use continuing to rise. But the same polling found a durable bloc of holdouts who resist AI because they do not trust the outputs, do not see the value, or worry about ethics and privacy. Adoption is expanding, yet confidence is not expanding at the same rate. That is a management problem as much as a technical one. Companies can push tools into workflows. They cannot assume employees will accept the logic behind them.
There is a geopolitical layer here as well. Stanford says Chinese and American frontier models are now separated by only narrow margins on major benchmarks, and the number of AI researchers moving to the United States has dropped 89 percent since 2017. The old American assumption was that better models would naturally translate into durable leadership. The harder task now is building legitimacy at home while competition tightens abroad.
The next chapter in AI will not be decided by benchmark charts alone. It will be decided by whether institutions can make these systems legible, useful and trustworthy to the people expected to live with them. The technology already has distribution. It still needs public permission.
AI Just Gave Climate Science a Better View of the Ocean
The ocean has long been one of climate science’s blind spots. It covers about 70 percent of the planet, absorbs roughly 90 percent of the excess heat trapped by greenhouse gases, and moves that heat through currents that can shift by the hour. Yet many of the tools used to watch those movements have worked on slower schedules, returning snapshots of a system that behaves more like a live feed. A new AI method called GOFLOW offers a sharper view. It uses deep learning and thermal imagery from existing geostationary weather satellites to map ocean surface currents at kilometer scale and hourly intervals, giving researchers a way to observe fast-moving features that have been difficult to capture in real time.
That matters because ocean currents do more than animate weather maps. They redistribute heat, carbon and nutrients. They shape marine ecosystems and influence the exchange between the sea and the atmosphere. NOAA uses current data for shipping, navigation, search and rescue, and oil spill response because small changes in water movement can carry people, cargo and pollution in very different directions. The trouble is that some of the most important currents are narrow, short-lived and easy to miss. The new paper in Nature Geoscience says those smaller features play an outsized role in vertical mixing, helping determine how heat and carbon move between the shallow ocean and deeper waters.
GOFLOW is notable partly because it does not depend on a brand-new satellite mission. The system draws on weather satellites that are already in orbit. NOAA’s GOES-East imagery updates every five to ten minutes depending on the view, and the researchers trained their model to infer surface velocity fields from changing temperature patterns in those images. Scripps described the advance in practical terms: weather satellites have been watching the ocean for years, but the breakthrough came from learning how to translate those thermal patterns into current maps. In a field where new capability often requires new hardware, this is a software story with scientific consequences.
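To give a feel for the underlying signal, here is a toy sketch of the classical idea GOFLOW's deep learning approach builds on: if a patch of sea surface temperature drifts between two satellite frames, you can estimate its displacement by finding the shift that maximizes cross-correlation, then divide by the time between frames to get a velocity. This is a simplified illustration on synthetic data, not GOFLOW's actual method, and the function name is our own.

```python
import numpy as np

def estimate_displacement(frame0, frame1, y, x, patch=16, search=8):
    """Find the (dy, dx) shift of the thermal patch centered at (y, x)
    between two frames by maximizing normalized cross-correlation."""
    h = patch // 2
    template = frame0[y - h:y + h, x - h:x + h]
    t = template - template.mean()
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame1[y + dy - h:y + dy + h, x + dx - h:x + dx + h]
            c = cand - cand.mean()
            denom = np.sqrt((t ** 2).sum() * (c ** 2).sum())
            score = (t * c).sum() / denom if denom > 0 else -np.inf
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift

# Synthetic test: a temperature field shifted 1 pixel south, 3 pixels east
# between frames. The tracker should recover that displacement exactly.
rng = np.random.default_rng(0)
frame0 = rng.normal(size=(64, 64))
frame1 = np.roll(np.roll(frame0, 1, axis=0), 3, axis=1)
dy, dx = estimate_displacement(frame0, frame1, 32, 32)
print(dy, dx)
```

Dividing the recovered pixel displacement by the minutes between GOES frames (and multiplying by the kilometer-scale pixel size) yields a surface current estimate; the advantage of a learned model over this brute-force matcher is robustness to clouds, noise and features that deform rather than simply translate.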
The larger significance is easy to miss amid the novelty. AI is becoming part of the measurement layer of science. It is helping researchers infer physical processes that matter to forecasts, emergency response and climate models from data streams that already exist but have not been fully decoded. The authors say GOFLOW could support global Earth system forecasting, pollution mitigation and marine ecosystem monitoring, while the underlying data products and code are being made publicly available for other researchers. That combination of scientific utility and wider access gives the method a chance to travel quickly.
Earth Day rhetoric often drifts toward abstraction. This year’s theme, “Our Power, Our Planet,” lands better when power means the ability to see the planet clearly enough to act. Better climate policy still depends on politics, money and public will. Better observation depends on tools. GOFLOW will not settle the argument over what to do about a warming world. It does something equally valuable. It makes one of the planet’s most consequential systems easier to observe while it is still changing.
Just Jokes

AI For Good
Huntsman Cancer Institute announced an AI-powered “lab-on-a-chip” platform called μPharma that predicts how a child’s leukemia cells will respond to targeted therapies in under four hours, compared with conventional methods that can take days. The team says the platform could help doctors choose faster, more targeted treatments for children with T-cell acute lymphoblastic leukemia while reducing unnecessary treatments and side effects.
This Week’s Conundrum
A difficult problem or question that doesn't have a clear or easy solution.
The Invisible Discount Conundrum
For years, most markets have worked on a simple social fiction: the listed price is close enough to the real price. Some people negotiate better than others, but most of us still live in a world where the number on the page means roughly the same thing for everyone.
AI agents break that norm. Once personal agents can negotiate your rent renewal, challenge hospital bills, rewrite vendor contracts, squeeze lower insurance premiums, and scan for hidden fees in real time, the posted price starts to matter less than the quality of the software fighting on your behalf. The people with the best agents will quietly save money everywhere. The people without them will keep paying the default rate, often without knowing how much they are leaving on the table.
The conundrum:
On one side, this looks like progress. If AI can help ordinary people negotiate like elites, why should anyone defend a world where institutions profit from people who are too busy, too polite, or too uninformed to push back? But on the other side, once constant negotiation becomes normal, shared pricing starts to collapse. Fairness becomes private. Transparency gets weaker. And the people who cannot afford strong agents, or do not know how to use them, end up subsidizing everyone else.
So what should society protect once AI turns negotiation into an invisible layer beneath everyday life: the freedom to let agents fight for every possible advantage, or the expectation that the price on the page should still mean roughly the same thing for everyone?
Want to go deeper on this conundrum?
Listen to our AI hosted episode

Did You Miss A Show Last Week?
Catch the full live episodes on YouTube or take us with you in podcast form on Apple Podcasts or Spotify.
News That Caught Our Eye
Officials Press Tech CEOs on AI Security Ahead of Mythos Release
A report mentioned that Vice President J.D. Vance and Treasury Secretary Scott Bessent are pressing tech CEOs on AI security ahead of Anthropic’s Mythos release. The discussion framed Mythos as part of a new level of cybersecurity risk tied to more capable models. The segment treated this as evidence that security concerns around frontier models are being taken seriously at a high level.
Anthropic Expands Managed Agents
Anthropic has put its managed agents architecture into public beta, giving developers a way to deploy long-running agents without handling the backend infrastructure themselves. The system supports hours-long sessions with state retention in a sandboxed code execution environment.
College Graduates Face a Tougher Entry Level Job Market
A Guardian article highlighted how American college students and recent graduates are struggling to find entry-level work in a shrinking market shaped by AI and hiring automation. One graduate said she had applied to more than ninety jobs, been ghosted by many employers, and received automatic rejections from more than half of them. The discussion emphasized that students may now need internships, portfolios, and public proof of skills alongside a degree just to get past automated screening.
Alberta Says AI Helped Cut Government Software Costs
An Alberta official said the provincial government had originally been quoted $54 million to replace one government computer system. According to the post discussed on the show, public servants instead built replacement systems using AI, at a final cost of about $2.64 million. The example was presented as a case for using AI to find efficiency gains inside government rather than relying only on outside contractors.
Attack Reported at Sam Altman’s Home
Sam Altman said someone threw a Molotov cocktail at his home. In response, he shared a personal post emphasizing that his family is behind the public profile he has as the CEO of OpenAI. The discussion also referenced reports of a second attack involving gunfire directed toward the house. Participants said at least one suspect had been taken into custody and that the alleged grievance appeared to date back to earlier anti-AI views.
Stanford Releases Its 2026 AI Index
Stanford’s Human Centered AI group released its 2026 AI Index, a major annual report on the state of artificial intelligence. The discussion highlighted several findings, including a widening gap between how AI experts and the public view AI, a sharp drop in AI researchers relocating to the United States, and evidence that China has nearly closed the benchmark gap with leading US models. The report was also described as showing a jagged frontier, with AI improving rapidly in some tasks while still failing badly at others.
OpenAI Memo Takes Aim at Anthropic’s Strategy
An internal memo attributed to OpenAI’s chief revenue officer criticized Anthropic for being too narrowly focused and for making poor compute decisions. The discussion framed the leak as part of a broader pattern of public positioning ahead of expected AI company IPO activity. The memo was treated less as a private operational update and more as a message intended to shape perception of the competitive landscape.
Chinese AI Competition Keeps Intensifying
A Fortune article discussed major developments in Chinese AI, including strong momentum around token economics and the broader commercial race. The conversation tied that reporting to a larger theme that Chinese models are catching up quickly and often reach parity with leading US systems within months. The segment also noted that Chinese companies are now mixing open and closed model strategies rather than relying on open releases alone.
Claude Desktop Adds a Full Terminal in Code Mode
Anthropic updated Claude Desktop to include a full terminal inside the code tab. The discussion described the new experience as much closer to a full IDE, with support for multiple terminals and a more complete coding workflow inside the desktop app. The update was framed as a direct move toward keeping users inside Claude’s own interface rather than sending them back out to a separate terminal window.
Meta Extends Broadcom AI Chip Partnership Through 2029
Meta said it expanded its partnership with Broadcom to co-develop multiple generations of its MTIA AI chips through 2029. The plan starts with more than one gigawatt of capacity, with a broader multi-gigawatt rollout planned later. The discussion positioned the deal as part of Meta’s effort to build more of its own AI compute base across its products and services.
Google Releases Gemini Robotics ER to Developers
Google made Gemini Robotics ER available through the Gemini API and Google AI Studio. The model was described as a reasoning system for robots with stronger visual and spatial understanding, better task planning, and the ability to detect whether a task succeeded. The discussion said it can work with tools like Google Search and help robots interpret cluttered scenes, read gauges, and plan multi-step physical actions.
Gemini Desktop App Launches on Mac
Google launched a Gemini desktop app for Mac. The first version was described as a lightweight desktop experience focused mainly on chat rather than a full workspace like Claude or ChatGPT. Its distinguishing feature is that it can quickly see what is on the user’s screen and work alongside Google apps and browser activity.
Higgsfield Introduces a Marketing Studio for Product Videos
Higgsfield released a new Marketing Studio feature for creating product marketing videos. The tool was shown generating stylized ads from product images or product page links without requiring much prompting. The discussion framed it as a low-cost way to produce polished promotional video content for physical products.
Anthropic Releases Claude Opus 4.7
Anthropic released Claude Opus 4.7, and the discussion described it as a stronger public frontier model with better visual reasoning and top performance on a vibe coding benchmark. The hosts said the pricing stays the same as 4.6, but the model can use more tokens and may burn through context faster, especially in higher thinking modes. They also connected the release to broader signs that capability improvements based on learnings from the Mythos Model are finding their way into Opus and Sonnet releases.
White House Prepares Mythos Access for Federal Agencies
The White House is preparing to give federal agencies access to Anthropic’s Mythos system. The segment said Dario Amodei was headed to the White House to discuss the matter with Chief of Staff Susie Wiles. The story was framed as especially notable because Anthropic had previously been declared a national security risk and blackballed by the government.
Anthropic Executive Leaves Figma Board
Anthropic’s chief product officer left Figma’s board. The discussion suggested the move may be tied to Anthropic developing a design product that could compete with Figma. The hosts treated the departure as a sign that AI generated design and workflow graphics are becoming more central to the company’s product direction.
OpenAI Expands Codex Into a Broader Desktop Product
OpenAI expanded Codex into a broader desktop experience, though it is still primarily focused on software developers. The discussion said the updated product includes an in app browser, markup tools, parallel multi agent workflows, persistent memory across sessions, and a large set of plugins. It was described as OpenAI’s answer to Anthropic’s agentic coding and cowork tools in the Claude Desktop App.
Google Adds New AI Mode Capabilities to Search
Google is adding new capabilities to AI Mode in Search. The discussion described a new side window that opens alongside search results and supports richer, more interactive AI responses. The hosts framed it as another step in reducing friction between traditional Google search and Gemini style assistant workflows.
Luma and Wonder Project Launch Innovative Dreams
Luma partnered with Wonder Project to launch Innovative Dreams, a filmmaking workflow that combines live performance with generated environments and digital character layers. The segment emphasized that actors can perform inside immersive virtual settings instead of reacting only to placeholders or green screens. It was presented as a production model that could speed up creative iteration while keeping human performers and editors central to the process.
Perplexity Launches Personal Computer for Mac
Perplexity launched Personal Computer for Mac subscribers. The product was described as an always-on agent running on a dedicated Mac mini that can work across local files, native apps, and multiple frontier models through the Perplexity platform. The hosts focused on the promise of a persistent autonomous system, while also questioning what useful day-to-day tasks people would actually trust it to handle.
Salesforce Unveils a Headless AI Access Model
Salesforce introduced a headless approach that exposes its platform through APIs, MCPs, and command line interfaces instead of requiring the normal browser based interface. The discussion highlighted the idea that an agent can now interact with Salesforce data, workflows, and tasks directly. The hosts treated it as a major sign that enterprise platforms are adapting for a future where software is increasingly used by agents as well as by people.
