Bengaluru: When Indian developers previously queried Google’s AI models, their requests travelled thousands of kilometres to servers in other countries before returning with responses. That status quo began to change on Wednesday, when Google made four announcements at its developer-focused I/O event in Bengaluru, one of them the localisation of AI processing within India’s borders.

“Indian developers can now use the powerful AI capabilities of Gemini 2.5 Flash here in India,” Bikram Singh Bedi, vice president of Google Cloud Asia Pacific, told TOI. “Processing will now be available in India, and this is going to be critical from a perspective of data residency as well as low latency.”

The announcement addresses two critical concerns for Indian businesses: regulatory requirements around data residency, and latency. Previously, queries to Google’s AI models were routed through servers in the US or other global regions; that is no longer necessary. “Certain applications need low latency, especially the ones where you’re looking for real-time responses,” Bedi explained. (A brief illustrative sketch at the end of this article shows what opting into in-country processing can look like for developers.)

The importance of low latency becomes clear in real-world applications. For video streaming services, even milliseconds of delay can mean the difference between smooth playback and frustrating buffering. Financial trading platforms require split-second responses, whilst customer service chatbots must reply immediately to keep conversations feeling natural. Manufacturing systems that monitor equipment for breakdowns cannot afford delays that might result in costly production stoppages.

The second major announcement centred on Firebase, Google’s popular development platform. “We have deeply integrated the Gemini 2.5 Pro into our development platforms – both AI Studio and Firebase Studio,” Bedi revealed. “Developers can now use multimodal prompts – video, image, speech, text – and they can build full-stack AI applications with AI-generated templates and powerful agentic features.” The integration, Bedi said, allows developers to give simple prompts directly within the code editor to generate complete applications.

Hardware constraints, a perennial concern for Indian developers targeting budget smartphones, formed the backdrop to the third announcement. Google unveiled Gemma 3, the newest member of its open model family, and highlighted the Gemma 3n variant optimised for devices with as little as 2GB of RAM. “Gemma 3 is significantly ahead of anything else out there and they’re supporting 140 languages, including six Indian languages,” Bedi said.

Skills, rather than silicon, framed the final announcement. Last year’s Gen AI Exchange programme, an online academy and hackathon series launched by Google with support from the central govt, registered 270,000 learners and reached five million developers through satellite events. “Courses completed have topped thirty thousand, but that is only the warm-up,” Bedi said, announcing a second-edition hackathon that opens for entries next month. Winners will receive Google Cloud credits, mentoring, and a fast track to showcase their projects at next year’s I/O. The exchange, launched first in India and now expanding worldwide, is designed to close what Google and industry analysts peg as a severe skills gap across IT and security roles.

According to Bedi, enterprises in India are rapidly adopting AI across a range of verticals.
“Look at Federal Bank of India – they are leveraging our AI to improve customer service. They have this friendly AI personal assistant called Fedi,” Bedi explained. “They are seeing a 25% rise in customer satisfaction and 50% saving in customer care cost.”

Mahindra & Mahindra is another example of a large Indian conglomerate leveraging AI in diverse ways, said Bedi. “They are using our Google Cloud Vertex AI platform for cutting-edge work in R&D, engineering, simulations, and manufacturing plants. They’re looking at use cases like zero breakdown and energy consumption optimization, among others,” Bedi said.

Uttar Pradesh, Bedi said, is building an open agricultural network on Gemini “to put microclimate data and market prices in every farmer’s pocket”. Such examples, he argued, show that generative AI has moved from a curiosity to a basic requirement for organisations and state govts that want to stay competitive.
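For developers, the practical change is largely a matter of where requests are sent. The sketch below is a minimal illustration, not an official recipe: it assumes the google-genai Python SDK used against Vertex AI, a placeholder project ID, and that asia-south1 (Google Cloud’s Mumbai region) is among the regions now serving Gemini 2.5 Flash; current region availability should be checked in Google’s documentation.

```python
# Minimal sketch: pin Gemini 2.5 Flash requests to an Indian Google Cloud region.
# Assumptions: the google-genai SDK (pip install google-genai), a GCP project with
# Vertex AI enabled, and Gemini 2.5 Flash being served from asia-south1 (Mumbai).
from google import genai

client = genai.Client(
    vertexai=True,
    project="your-gcp-project-id",  # placeholder project ID
    location="asia-south1",         # Mumbai region, so processing stays in India
)

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Summarise the benefits of in-country AI processing in two sentences.",
)
print(response.text)
```

Latency-sensitive workloads of the kind Bedi described would simply keep the chosen region close to their users; the rest of the application code does not change.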