Artificial Intelligence (AI) is transforming the way we live, work and communicate. But the same technology that powers smarter services and faster decisions is also giving cybercriminals powerful new tools. From deepfake videos to AI-written scam messages, fraudsters are exploiting the speed, scale and sophistication of AI to target ordinary users, businesses and even governments. In this new landscape, digital literacy is no longer a luxury—it is a frontline defence.
One of the most alarming trends is the rise of AI-driven phishing. Traditionally, scam emails and messages were easy to spot because of poor grammar, awkward language and generic content. Today, cybercriminals are using AI language models to generate flawless, personalised messages that mimic the tone of banks, government departments or even close friends. These messages can refer to recent events, local issues or personal details scraped from social media, making them far more convincing than the crude scams of the past.
Deepfakes—realistic but fabricated audio and video created with AI—have added another dangerous dimension. Criminals can now clone a person’s voice or face with just a few seconds of online footage. There are already documented cases where employees received a phone call from what sounded exactly like their CEO, instructing them to urgently transfer large sums of money. In several instances abroad, companies have lost hundreds of thousands of dollars before realising they had been deceived by an AI-generated voice.
Social media has become fertile ground for AI-powered fraud. Bots can create fake profiles at scale, generate believable posts and even hold basic conversations. Fraudsters use these accounts to spread disinformation, lure people into investment scams or trick them into sharing sensitive information. AI tools can also instantly translate scam content into multiple languages, allowing the same fraudulent scheme to be run simultaneously across regions and communities.
Financial fraud, too, is evolving. AI can analyse large volumes of stolen data—such as leaked email addresses, passwords and bank details—to identify the most profitable targets. It can test thousands of password combinations automatically, breaking into poorly secured accounts in minutes. Criminals are even using AI to design more convincing fake websites and mobile apps that look identical to genuine banking or shopping platforms.
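The speed of automated password guessing comes down to simple arithmetic: the number of possible passwords grows exponentially with length and character variety. A minimal Python sketch illustrates the gap, assuming a hypothetical attacker testing ten billion guesses per second against leaked password hashes (that guess rate is an illustrative assumption; real-world figures vary widely with hardware and hashing scheme):

```python
# Illustrative arithmetic only: the guess rate below is an assumed figure
# for an attacker cracking leaked password hashes offline, not a measured one.
GUESSES_PER_SECOND = 10_000_000_000  # assumed: 10 billion guesses/second

def seconds_to_exhaust(charset_size: int, length: int) -> float:
    """Worst-case time to try every password of the given length,
    drawn from a character set of the given size."""
    return charset_size ** length / GUESSES_PER_SECOND

# Six lowercase letters (26^6 combinations): exhausted in a fraction of a second.
print(f"{seconds_to_exhaust(26, 6):.4f} seconds")

# Twelve characters from ~94 printable symbols (94^12 combinations):
# on the order of a million years at the same guess rate.
years = seconds_to_exhaust(94, 12) / (365 * 24 * 3600)
print(f"{years:,.0f} years")
```

The same exponential gap is why advice to lengthen passwords, rather than merely adding a digit or symbol to a short one, matters most.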
Amid this rapid technological shift, digital literacy has emerged as a critical shield for citizens. Digital literacy is not only about knowing how to use a smartphone or a computer. It includes understanding how online platforms work, recognising warning signs of fraud, questioning too-good-to-be-true offers and knowing how personal information can be misused. A digitally literate user is more likely to pause before clicking a suspicious link, to verify a caller’s identity and to double-check a website’s authenticity.
For young people, who are often labelled “digital natives”, there is a dangerous assumption that familiarity with gadgets automatically translates into safety online. In reality, many students freely share personal information on social media, click on random links and download unverified apps. Schools and colleges need to incorporate basic cybersecurity and AI awareness into their curricula, teaching students about password hygiene, privacy settings, fake news and the dangers of oversharing.
Older citizens are equally, if not more, vulnerable. Many first-time internet users depend on smartphones for banking, payments and communication but may not fully understand the risks. Fraudsters frequently target them with fake lottery messages, fraudulent loan offers, or impersonation calls from supposed bank officials and government officers. Community-level awareness programmes—through local newspapers, radio, television, panchayat meetings and civil society groups—can play a vital role in reaching this segment.
Governments and institutions are beginning to respond. Several countries have issued advisories on deepfakes and AI-driven scams, urging people to treat unsolicited calls and messages with caution. Banks and telecom operators are sending alerts about common fraud patterns and urging customers to report suspicious activity. However, enforcement and regulation alone cannot plug every gap in a fast-moving digital ecosystem. Ultimately, informed users are the strongest line of defence.
Media organisations have a special responsibility in this context. As trusted sources of information, newspapers can demystify AI for the public—explaining both its benefits and its risks in clear, accessible language. Regular columns on cyber safety, fact-checking sections that expose viral hoaxes, and detailed reports on new fraud tactics can help build public resilience. Highlighting real-life case studies—without sensationalism but with clear lessons—can make the threat tangible and encourage safer behaviour.
On an individual level, a few simple habits can significantly reduce risk: using strong, unique passwords and enabling two-factor authentication; updating software regularly; avoiding public Wi-Fi for sensitive transactions; and never sharing one-time passwords or banking details over the phone or chat. Before responding to any urgent request for money or personal data, it is wise to verify through an independent channel—by calling back on a known number, visiting the official website, or speaking to a trusted family member.
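The one-time codes behind two-factor authentication are not magic: a standard authenticator app derives each six-digit code from a shared secret and the current time, following the TOTP standard (RFC 6238). A minimal sketch using only Python's standard library shows the idea (for illustration only; in practice, use an established authenticator app rather than rolling your own):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, digits=6, step=30):
    """Generate an RFC 6238 time-based one-time password.

    secret_b32: the shared secret, base32-encoded (as shown in the
    QR-code setup of most authenticator apps).
    """
    key = base64.b32decode(secret_b32)
    now = int(time.time()) if timestamp is None else int(timestamp)
    counter = now // step                      # 30-second time window
    msg = struct.pack(">Q", counter)           # counter as 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code changes every 30 seconds and is computed locally on the user's device, a fraudster who steals a password alone still cannot log in, which is precisely why one-time passwords should never be read out over the phone.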
AI will continue to advance, and so will the frauds built on it. The answer is not to fear technology, but to engage with it thoughtfully. In the battle against AI-enabled cybercrime, awareness is not just power; it is protection.
(The author is a Microsoft Certified Systems Engineer and tech enthusiast)

