Music/Audio Generation

Google Puts AI Music Generation in Front of 750 Million Gemini Users

DeepMind's Lyria 3 turns Gemini into a music studio. The copyright questions haven't gone away.

Andrés Martínez, AI Content Writer
February 18, 2026 · 6 min read
[Image: Abstract visualization of sound waves transforming from a text prompt into colorful musical notation against a dark background]

Google DeepMind launched Lyria 3 today, embedding its latest music generation model directly into the Gemini app for all users 18 and older. Type a prompt, upload a photo, and you get a 30-second track with vocals, lyrics, and auto-generated cover art. It is rolling out on desktop now, with mobile following over the next several days.

The distribution play here is the real story. Gemini crossed 750 million monthly active users as of Alphabet's Q4 2025 earnings call, up from 650 million just one quarter earlier. Suno and Udio, the two startups that have defined AI music generation for the past two years, have built dedicated user bases, but neither has anything close to that kind of reach. Google didn't need to win on model quality. It just needed to ship something good enough inside a product people already use.

What Lyria 3 actually does

According to Google's blog post, Lyria 3 improves on earlier Lyria models in three ways: it generates lyrics automatically from your prompt (previously you had to supply your own), it gives users more control over style, vocals, and tempo, and it produces what Google calls "more realistic and musically complex tracks." Tracks max out at 30 seconds. A separate model called Nano Banana handles the cover art.

You can also upload a photo or video and ask Gemini to create a track that matches its mood, which is a cute party trick but probably not what anyone's evaluating this on. The more interesting feature is the text-to-song pipeline, where Gemini's language model writes lyrics and Lyria 3 generates the audio simultaneously. Google says the goal isn't musical masterpieces but rather "a fun, unique way to express yourself." Which is a convenient expectation to set when your clips are capped at half a minute.

The training data question

Google's language around copyright is careful, almost lawyerly. The blog post says the company has been "very mindful of copyright and partner agreements" while training Lyria 3. Music Ally reports that the policy in practice means Lyria's training uses music that YouTube and Google have rights to use under their terms of service, partner agreements, and "applicable law." That last phrase is doing a lot of work.

Back in early 2024, Billboard reported that Google had trained earlier Lyria models on copyrighted major-label recordings and then showed the results to rights holders rather than asking permission first. Music Business Worldwide ran a pointed analysis at the time, noting that Google had once positioned itself as the ethical player in AI music through artist collaborations and YouTube's Music AI Incubator. The question of whether Lyria 3's training data is materially different from its predecessors hasn't been answered in today's announcement. Google says the model is designed for "original expression, not for mimicking existing artists," and that prompts naming specific artists get interpreted as "broad creative inspiration." There are also filters checking outputs against existing content. The company admits the approach "might not be foolproof," which is at least honest.

Where this sits competitively

Suno and Udio have been the default AI music tools for creators, and both have been navigating the legal fallout from $500 million copyright lawsuits filed by Sony, Universal, and Warner in mid-2024. By late 2025, Billboard reported that both companies had begun settling: Warner reached a deal with Suno that included the surprise acquisition of live music platform Songkick, while Universal struck an agreement with Udio that pivoted its service toward remixing licensed tracks rather than generating songs from scratch.

Google enters this market from a position Suno and Udio can't replicate. YouTube is the world's dominant music streaming and discovery platform. Gemini is already on 750 million devices. And Google has existing licensing relationships with every major label through YouTube's content ecosystem. Whether those relationships extend to AI training in the way Google implies is a separate question, but the structural advantage is real.

The 30-second limit, though, is telling. Suno generates full songs up to eight minutes. Udio supports extensions up to 15 minutes. Google is clearly positioning Lyria 3 as a casual, social feature rather than a production tool. That could change. But right now, anyone using AI music generation for actual creative work isn't switching to Gemini for half-minute clips.

SynthID and the watermarking bet

Every track generated through Gemini gets tagged with SynthID, Google's imperceptible audio watermark. The company has also expanded Gemini's verification capabilities so users can upload any audio file and ask whether it was generated by Google AI. That's a useful feature, if limited: it only detects Google's own watermark, so it won't catch AI-generated music from Suno, Udio, or anyone else.

The watermarking question matters more than it might seem. Deezer has published reports throughout 2025 documenting the proliferation of AI-generated music on streaming platforms. As these tools get better and more accessible, the ability to distinguish human-made from machine-made audio becomes a real infrastructure problem. Google tagging its own output is a start, but an industry-wide solution is what's actually needed.

What Google isn't saying

A few things are conspicuously absent from today's announcement. There's no technical paper for Lyria 3 (the DeepMind model page still credits the Lyria 2 team). No detail on model architecture, training data composition, or how it compares to Lyria 2 on any measurable axis. No mention of whether Lyria 3 will be available through the Vertex AI API alongside the existing lyria-002 endpoint. And no word on usage limits beyond "Google AI Plus, Pro and Ultra subscribers will enjoy higher limits."
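For readers who want to try the existing model while Lyria 3's API status stays unclear, the `lyria-002` endpoint mentioned above is served through Vertex AI's generic `:predict` route. The sketch below only builds the request URL and payload; the field names (`prompt`, `negative_prompt`, `sample_count`) and the URL shape follow Vertex AI's standard predict convention but are assumptions here, and should be checked against Google's current documentation before use.

```python
# Sketch of a Vertex AI request for the lyria-002 endpoint (no network call).
# Field names and URL structure are assumptions based on Vertex AI's generic
# :predict convention; verify against the current Vertex AI docs.

def build_lyria_request(project_id: str, prompt: str,
                        negative_prompt: str = "", sample_count: int = 1):
    """Return a (url, payload) pair for a hypothetical lyria-002 predict call."""
    location = "us-central1"
    url = (
        f"https://{location}-aiplatform.googleapis.com/v1/"
        f"projects/{project_id}/locations/{location}/"
        f"publishers/google/models/lyria-002:predict"
    )
    payload = {
        "instances": [{"prompt": prompt, "negative_prompt": negative_prompt}],
        "parameters": {"sample_count": sample_count},
    }
    return url, payload

url, payload = build_lyria_request("my-project", "lo-fi piano over rain sounds")
```

An actual call would POST this payload with an OAuth bearer token and decode the base64 audio in the response; whether Lyria 3 eventually gets a sibling endpoint is exactly the open question the announcement leaves unanswered.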

The feature is available in eight languages: English, German, Spanish, French, Hindi, Japanese, Korean, and Portuguese, with more planned. YouTube's Dream Track, which uses Lyria for Shorts soundtracks, is expanding internationally alongside this launch.

For Google, the bet seems straightforward. Music was the last major generative media modality missing from Gemini's consumer toolkit. Text, images, and video were already there. Now the app can do everything, even if music is capped at 30 seconds and positioned as a novelty. Whether it stays a novelty depends on how seriously DeepMind treats the model going forward, and whether the copyright questions that have dogged this entire space get any clearer answers than "we've been very mindful."

Tags: Google DeepMind, Lyria 3, Gemini, AI music generation, SynthID, Suno, Udio, copyright, generative AI
Andrés Martínez

AI Content Writer

Andrés reports on the AI stories that matter right now. No hype, just clear, daily coverage of the tools, trends, and developments changing industries in real time. He makes the complex feel routine.



Google Lyria 3 Puts AI Music in 750M Gemini Users' Hands | aiHola