Google launched Personal Intelligence in the Gemini app, letting the AI assistant tap into Gmail, Google Photos, YouTube history, and Search activity to deliver personalized responses. The feature, announced on Google's official blog, goes beyond simple data retrieval: Gemini can now reason across multiple sources without being told where to look.
Josh Woodward, VP of Google Labs and the Gemini app, framed it around a tire-shop anecdote. He asked Gemini for recommendations, and the model pulled his car's specs, suggested all-weather tires based on family road trip photos, then located his license plate from a picture in Google Photos. "Gemini went further," Woodward wrote, though the example is anecdotal, not independently verified.
The technical paper Google released alongside the launch describes a "context packing" approach that lets Gemini process personal data exceeding its 1-million-token window. Known limitations include "tunnel vision" (over-relying on inferred interests), confusing family members' preferences with your own, and missing major life changes like divorces. Google says training happens on prompts and responses, not directly on your inbox or photo library. The feature ships off by default, and Workspace accounts are excluded.
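Google has not published implementation details of its "context packing" approach, but the general idea of fitting a large pool of personal data into a fixed token budget can be sketched as a greedy relevance-ranked selection. Everything below is illustrative: the function name, scoring, and data are assumptions, not Google's method.

```python
# Hypothetical sketch of a "context packing" step: rank candidate snippets
# from personal sources by relevance to the query, then pack the best ones
# until a fixed token budget is exhausted. Names and scores are invented.

def pack_context(snippets, relevance, token_budget):
    """Greedily pack the most relevant snippets into a token budget.

    snippets: list of (text, token_count) tuples from personal sources
    relevance: dict mapping text -> relevance score for the current query
    """
    ranked = sorted(snippets, key=lambda s: relevance.get(s[0], 0), reverse=True)
    packed, used = [], 0
    for text, tokens in ranked:
        if used + tokens <= token_budget:
            packed.append(text)
            used += tokens
    return packed

# Example: pick what fits in a 10-token budget
snippets = [("car specs", 4), ("trip photos", 5), ("old receipts", 8)]
scores = {"car specs": 0.9, "trip photos": 0.7, "old receipts": 0.1}
print(pack_context(snippets, scores, 10))  # ['car specs', 'trip photos']
```

The point of a scheme like this is that the model never sees the whole inbox or photo library at once; only a budget-constrained selection reaches the context window, which is consistent with Google's claim of processing data beyond the 1-million-token limit.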
The feature is rolling out this week to Google AI Pro and Ultra subscribers in the US, with the free tier and international expansion to follow. AI Mode in Search gets it "soon."
The Bottom Line: Google's betting that AI utility scales with personal data access, and it's shipping the infrastructure ahead of competitors such as Apple, whose Siri overhaul remains delayed.
QUICK FACTS
- Available to: Google AI Pro and AI Ultra subscribers in the US (personal accounts only)
- Connected apps: Gmail, Google Photos, YouTube, Search
- Default state: Off (opt-in required)
- Platforms: Web, Android, iOS
- Workspace support: Not available for business, enterprise, or education accounts