The Securities and Exchange Board of India (SEBI) had escalated 1,33,000 misleading or manipulative social media posts related to the securities market to the social media platform providers (SMPPs) concerned as of February 28, 2026, the Minister of State for Finance told Parliament.
On the use of artificial intelligence to track misleading securities-related content across digital platforms, he said, “SEBI is currently not using any AI tools to track misleading securities-related content across digital platforms.”
The disclosure comes amid a broader shift in which "finfluencers" have changed how financial information is consumed. "The rise of finfluencers has fundamentally altered how retail investors access and act on financial information, shifting influence from regulated intermediaries to largely unregulated digital voices," noted Sanjay Israni, Partner, Desai & Diwanji.
Sanjay explains that SEBI's regulatory response—especially the move to restrict associations between intermediaries and unregistered finfluencers—signals a clear intent to address not just misconduct, but the underlying incentive structures driving such content. "By targeting referral-based earnings, affiliate models, and undisclosed promotions, the framework attempts to curb the monetisation of misleading advice," he added.
The Ministry has further said that all complaints received by SEBI pertaining to the securities market are dealt with through SCORES, an online platform that enables investors to lodge complaints, follow up on them, and track the status of redressal.
The development highlights a two-pronged picture at SEBI: the regulator proactively alerts social media platforms to misleading or manipulative content linked to the securities market, but it has not yet deployed artificial intelligence tools to monitor such content across digital platforms.
Shyam Arora, CEO and Founder of Meon Technologies, said, “The current approach relies more on identifying issues and escalating them to platforms for action. Investor complaints, on the other hand, are handled through SCORES, SEBI’s online grievance system.”
He added, “It allows investors to register and track complaints efficiently, but it works after the issue has already impacted users. In this setup, action typically happens after the issue has already occurred, rather than stopping it early.”
Highlighting the infrastructure and cost challenges involved, Arora said, “Dealing with a high volume of data to monitor all platforms in real-time, a huge amount of unstructured data has to be processed. For this, high computing power is required, which is a costly affair.”
Sanjay argues that AI is becoming an indispensable tool for enforcement. "It can enable large-scale monitoring, detect patterns of manipulation, and flag high-risk content across platforms in real time," he added.
However, he flags that AI cannot substitute for a regulatory framework. "Financial content often operates in grey areas where context, intent, and disclosure determine its legality—elements that automated systems may not fully capture," Sanjay notes.
Arora suggested that a more advanced infrastructure would be required to move beyond a complaint-based mechanism like SCORES and towards a real-time monitoring framework.
Beyond infrastructure, synergy between AI systems and regulators would need to be built before artificial intelligence can be deployed to track misleading securities-related content across digital platforms.
“A balanced approach is therefore essential. AI should serve as a force multiplier for surveillance and prioritisation, while enforcement remains human-led, supported by legal standards and platform-level accountability,” argues Sanjay.
Ultimately, effective regulatory frameworks must align law, technology, and incentives to protect investors in an evolving digital financial ecosystem.

