Listen to today’s podcast: https://www.youtube.com/channel/UC-nqwUyvLDEvs7bV985k-gQ
AI Daily Podcast – November 19, 2025
Welcome back to AI Daily. Today’s podcast episode was created from the following stories:
Google updates Search, Gemini, and Pixel Weather with WeatherNext 2 forecasting model
Google rolled out WeatherNext 2, an AI-driven forecasting model that is eight times faster than its predecessor, delivers hourly-resolution predictions, and extends outlooks up to 15 days. It outperforms the prior model on 99.9% of variables and now powers weather in Search, Gemini, the Pixel Weather app, and the Google Maps Platform's Weather API, with access via Earth Engine, BigQuery, and Vertex AI. Built on a Functional Generative Network, it produces physically coherent forecasts in under a minute on TPUs.
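For developers, the BigQuery route is the most direct way to pull forecasts into a pipeline. The sketch below builds a SQL query string for hourly forecasts near a point; the table and column names are illustrative assumptions, not Google's published schema.

```python
# Hypothetical sketch: querying WeatherNext 2 output via BigQuery.
# The table name `weathernext.forecasts` and the column names below are
# assumptions for illustration, not the actual published schema.

def build_forecast_query(lat: float, lon: float, horizon_hours: int) -> str:
    """Build a SQL query string for hourly forecasts near a lat/lon point."""
    return f"""
SELECT init_time, valid_time, temperature_2m, total_precipitation
FROM `weathernext.forecasts`  -- hypothetical table name
WHERE ABS(latitude - {lat}) < 0.25
  AND ABS(longitude - {lon}) < 0.25
  AND valid_time <= TIMESTAMP_ADD(init_time, INTERVAL {horizon_hours} HOUR)
ORDER BY valid_time
""".strip()

query = build_forecast_query(37.42, -122.08, 24)
# With credentials and the real table name in place, the string could be
# passed to the google-cloud-bigquery client's query() method.
```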
Anthropic’s First AI Classes with Coursera Are Here
Anthropic and Coursera launched two courses: a developer-focused specialization on building with the Claude API and a broader "Real-World AI for Everyone" program created with AWIT to boost safe, effective AI literacy. Coursera says demand for AI courses is surging to 14 enrollments per minute, with content spanning test automation, code editing, and everything from practical prompting up to building agents. The message for workers and teams: get ahead of gen AI's impact by upskilling now.
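A first Claude API call, the kind of thing the developer specialization covers, is only a few lines. This sketch assembles the request parameters; the model name and prompt are illustrative, and actually sending the request requires the `anthropic` package and an API key.

```python
# Minimal sketch of a Claude API request (illustrative, not course material).
# The model name below is an assumption; check Anthropic's docs for current IDs.

def build_message_request(prompt: str, model: str = "claude-sonnet-4-5") -> dict:
    """Assemble keyword arguments for the Messages API."""
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_message_request("Summarize today's AI news in one sentence.")

# With ANTHROPIC_API_KEY configured, the call would look like:
# import anthropic
# client = anthropic.Anthropic()
# response = client.messages.create(**request)
# print(response.content[0].text)
```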
Goldman Sachs pinpoints the 5 stocks that will get the biggest productivity boost from AI
Goldman Sachs says the next phase of AI gains will favor application-layer productivity, highlighting companies with high labor costs and strong automation exposure. Potential standouts include H&R Block, Robert Half, Cognizant, EPAM, and IQVIA, where modeled efficiency improvements could translate into sizable earnings lifts. For investors, the takeaway is to watch where AI workflows translate into margin expansion beyond the infrastructure giants.
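The leverage here is arithmetic: for a labor-intensive firm with thin margins, even a small AI-driven cut in labor costs multiplies into a much larger earnings lift. A back-of-the-envelope illustration with hypothetical numbers (not Goldman's model):

```python
# Hypothetical illustration of operating leverage from AI labor savings.
# All figures are made up for the example; this is not Goldman's model.

def earnings_lift(revenue: float, labor_cost: float, other_cost: float,
                  labor_savings_pct: float) -> float:
    """Return the percentage increase in pre-tax earnings from labor savings."""
    base_earnings = revenue - labor_cost - other_cost
    new_earnings = revenue - labor_cost * (1 - labor_savings_pct) - other_cost
    return (new_earnings - base_earnings) / base_earnings * 100

# A firm with 60% labor costs and a 10% margin: a 5% labor-cost
# reduction lifts pre-tax earnings by 30%.
lift = earnings_lift(revenue=100.0, labor_cost=60.0, other_cost=30.0,
                     labor_savings_pct=0.05)
```

This is why the screen favors high-labor-cost names: the same savings percentage produces a bigger earnings swing the thinner the starting margin.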
Designing AI models to meet the AI Act's requirements
The EU AI Act’s first implementation phase is now in force, imposing lifecycle obligations on providers of general-purpose AI models trained above 10^23 FLOPs, with extra duties for models over 10^25 FLOPs deemed systemic risk. Providers must document compliance for regulators, summarize training data, respect copyright, perform continuous risk assessments (including ethical and social), and implement strong cybersecurity with incident reporting. Bottom line for AI builders: governance, documentation, and security must be designed into the model pipeline from training through updates.
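The two compute thresholds described above can be sketched as a simple classification helper. This is an illustrative tiering of the obligations as summarized in the story, not legal advice, and the tier labels are made up for the example.

```python
# Sketch of the AI Act compute tiers as described above: general-purpose
# model obligations above 10^23 training FLOPs, systemic-risk duties above
# 10^25. Tier names are illustrative; this is not legal guidance.

GPAI_THRESHOLD_FLOPS = 1e23
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def aia_obligation_tier(training_flops: float) -> str:
    """Classify a model by training compute under the summarized tiers."""
    if training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS:
        # Extra duties: continuous risk assessment, cybersecurity,
        # incident reporting.
        return "gpai-systemic-risk"
    if training_flops > GPAI_THRESHOLD_FLOPS:
        # Baseline duties: compliance documentation, training-data
        # summary, copyright respect.
        return "gpai"
    return "below-gpai-threshold"

tier = aia_obligation_tier(3e25)  # -> "gpai-systemic-risk"
```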

