Monday, April 27


Google has rolled out a new update to the Gemini app, called ‘Personal Intelligence’, for its Nano Banana 2 image generation model. With this update, Gemini can pull context directly from services like Google Photos to create images that actually reflect your real life. You no longer need to upload reference images or explain everything in detail: a simple prompt like “design my dream house” can now generate results based on your taste and lifestyle.

Gemini can now pull from Google Photos to create personalised AI images instantly. (Google)



This is powered by Nano Banana 2, Google’s latest image model, which works with Personal Intelligence to fill in gaps using data across your Google account. The goal is to cut down the effort needed for prompts while making the final output feel more relevant and useful.

Another standout feature is the ability to include real people in generated images. By linking Google Photos, Gemini can recognise tagged photos of friends and family and use them as references. You can also tweak results, swap images, or try different styles like watercolour or clay animation.

Under the hood, the system also draws on metadata such as photo labels and activity context to identify people and preferences. This helps the model generate more accurate and consistent visuals while keeping output quality high and generation speeds fast.

Google says privacy is still a priority here. Your personal photos are not used to train AI models, and the feature is completely opt-in.

The rollout has already started for Gemini AI Plus, Pro, and Ultra users in select regions, with a wider release expected soon. This move shows Google pushing towards more context-aware AI tools that create outputs which feel less generic and far more personal.

This update could also make everyday creative tasks faster, especially for casual users who want quick results without learning prompt tricks. It lowers the barrier to entry, making advanced image generation feel more accessible and practical.


