Last Updated: February 12, 2026, 18:36 IST
Imagine uploading a selfie to ChatGPT and asking, “Create a caricature of me and my job and everything you know about me”, only to receive a cartoon that highlights your profession, quirks and even personal aspirations. In early 2026, this viral trend has spread rapidly across social media, with users sharing exaggerated portraits that depict them as larger-than-life professionals surrounded by symbolic props, all created using artificial intelligence.
More than 150 million weekly active users now interact with ChatGPT, a number boosted by viral image-based trends, including the 2025 Ghibli-style transformation phenomenon. The growing closeness between humans and AI has sparked debate among experts about whether such tools are enhancing self-exploration or encouraging emotional dependence. The trend reflects anthropomorphism, the human tendency to assign human traits to non-human entities, now amplified by AI’s conversational ability and creative output.
“People are not asking AI to draw them, they are asking to be seen. The caricature trend shows a deeper hunger for validation and identity reflection,” behavioural science expert Dr Ravineet Singh Marwah told News18.
A 2025 Pew Research survey revealed that 34% of adults in the United States had used ChatGPT, a figure that doubled from 2023. Among younger audiences, adoption rates rose significantly, with 58% of users under 30 reporting interaction with the platform. While these technologies offer accessible feedback and creative expression, mental health experts are beginning to examine their psychological consequences.
What Is the ChatGPT Caricature Trend?
The caricature trend, which gained momentum in February 2026, typically involves users prompting AI systems to create caricatures of themselves and their professions based on previous interactions and uploaded images. The results often depict highly stylised professional identities, such as doctors portrayed as superheroes or lawyers surrounded by symbolic courtroom imagery. Social media platforms, including X and Instagram, have amplified the trend, generating millions of shares and discussions.
Experts describe the phenomenon as a form of digital projection. Stanislav Kazanov has characterised it as a digital Rorschach test, in which AI mirrors user data back to the user in a personalised yet non-judgemental manner. A 2025 Kantar survey of 10,000 AI users worldwide found that over half had used artificial intelligence tools to support emotional well-being, highlighting the broader psychological appeal of such trends. Interest in India reached an index score of 85 on Google Trends, underscoring how widely the trend has spread beyond its origins.
However, clinicians are observing emerging behavioural patterns linked to excessive engagement. Dr Astik Joshi, Child, Adolescent and Forensic Psychiatrist at Fortis Hospital Shalimar Bagh, New Delhi, notes that heavy use of conversational AI tools can influence emotional stability. He states, “Yes, I have seen and heard of cases wherein heavy engagement with conversational AI tools, including image-based trends has led to an exacerbation of mood instability, further lack of stability in attachment patterns and disturbance in relational dynamics. These are often preceded by an increased obsession with the tools and social withdrawal.”
Why Do We Treat AI Like a Human Confidant?
Human beings are wired for connection, and AI tools often replicate elements of supportive interaction. Therapist Krista Walker has suggested that individuals may feel more comfortable asking AI personal questions because it eliminates the fear of judgement or rejection. A 2025 Sentio survey found that 48.7% of large language model users experiencing mental health concerns turned to AI platforms for emotional support.
Dr Gorav Gupta, Senior Psychiatrist and CEO of Tulasi Healthcare, New Delhi, explains that increased engagement with AI can alter social behaviour. He states, “As technology develops quickly, people are noticing the positive social and emotional impact of using chatbots and AI-generated images more than ever. With some individuals relying on avant-garde technologies for emotional support and companionship, their engagement with these technologies often results in fewer real-world social interactions.”
He further highlights that such reliance can lead to emotional withdrawal and unrealistic expectations from human relationships, particularly among individuals already experiencing loneliness, anxiety or depression. Dr Gupta adds that when individuals begin to perceive AI-generated avatars or caricatures as deeply understanding representations of themselves, they may develop distorted self-beliefs and blurred emotional boundaries.
Psychologist and Luxury Brand Evangelist Dr Ravineet Singh Marwah describes the deeper psychological motivation behind the trend. He says, “People are not asking AI to draw them. They are asking to be seen. The caricature trend shows a deeper hunger for validation and identity reflection. As I say, ‘Visibility without self-awareness is empty.’ The real growth begins when we learn to see ourselves clearly without depending on digital mirrors.”
Is Seeking Therapy from AI a Good Idea?
The growing mental health crisis among young people has increased interest in AI as a source of advice and emotional support. In 2025, approximately 18% of adolescents in the United States experienced major depression, with nearly 40% of them receiving no treatment. A JAMA study from 2025 found that 13.1% of young individuals, representing around 5.4 million people, used generative AI tools for guidance, with usage rates rising to 22.2% among individuals aged 18 to 21.
While many users report positive experiences, concerns remain regarding clinical limitations. A 2025 Stanford study indicated that AI chatbots may lack empathy, demonstrate stigma and fail to meet established therapeutic standards. Approximately 10% of users reported receiving inappropriate responses, and currently, no AI system holds regulatory approval for mental health treatment.
Dr Astik Joshi warns that vulnerable individuals may struggle to distinguish between creative AI representations and their actual identity. He explains, “Individuals who are vulnerable to or have been previously diagnosed with mental health conditions may experience a distortion in reality, leading them to form a high level of attachment and identification with AI tools or custom caricatures. This may also lead to a perceptual disturbance amongst them leading them to believe that the customised AI version of themselves is their real self without realising that the AI tools are meant to be used for creativity purposes.”
He adds that emotional reliance on AI-generated representations can become clinically significant when it begins to interfere with daily functioning or social interaction.
How Does the Ghibli Trend Fit Into This?
The 2025 Ghibli-style AI transformation trend paved the way for the current caricature trend. Inspired by the visual style of Studio Ghibli, the Japanese animation studio co-founded by Hayao Miyazaki, users transformed personal photographs into animated fantasy artwork. The movement contributed significantly to ChatGPT reaching more than 150 million weekly users. Hashtags such as #Ghiblified attracted millions of views, and India ranked among the highest-engagement regions globally.
The trend also sparked ethical debates surrounding copyright and artistic ownership, as AI systems were trained on existing artistic styles without direct consent. Despite these concerns, accessibility expanded creative opportunities, enabling individuals without formal artistic training to produce personalised visual content.
From a psychological standpoint, such trends reinforce anthropomorphic perceptions of AI as an artistic collaborator rather than a computational tool.
What Are the Risks of ‘Anthropomorphising’ AI?
Experts highlight both privacy and emotional risks associated with AI caricature trends. Creating personalised AI images often requires uploading photographs, which may expose users to data misuse or deepfake risks. A 2026 cybersecurity analysis by Bitdefender raised concerns about potential biometric data vulnerabilities.
Emotional dependency is another emerging concern. A collaborative OpenAI and MIT study linked heavy AI usage to increased loneliness in certain users. Dr Gorav Gupta emphasises that clinical evaluation should include assessing dependence levels, social isolation and reality-testing abilities. He notes that understanding the unmet emotional needs fulfilled by AI can help clinicians encourage healthier coping strategies and real-world relationships.
Dr Anil Kumar, Senior Consultant Psychiatrist, differentiates between harmless entertainment and concerning psychological patterns. He explains, “In a clinical context, the difference lies in how the behaviour functions. For most people, AI caricatures are simply light-hearted fun, a passing digital trend. It becomes clinically relevant when the interaction is no longer just playful curiosity, but something a person feels they need to manage their emotions.”
He further warns that dependency may develop when individuals rely on AI-generated portrayals for reassurance or self-worth, potentially leading to a fragile sense of identity. According to Dr Kumar, excessive comparison between real and AI-generated self-images may contribute to social withdrawal or emotional distress.
He also highlights the impact of repeated validation loops. “For individuals with existing psychological vulnerabilities, these feedback loops can act as a catalyst for deeper instability. An exaggerated image can feel good for a moment, but it often widens the gap between how things appear and how they truly are, leaving behind quiet anxiety or a sense that something isn’t quite enough.”
Dr Marwah provides additional cognitive insight, explaining why AI-generated outputs often feel deeply personal. He states, “The human brain is designed to find patterns and meaning. When an AI output aligns even slightly with our self-image, the mind fills in the gaps. It highlights what feels right and ignores what does not. This creates the illusion of deep accuracy. I tell my clients, ‘The mind completes stories faster than facts do.’”
He also emphasises the importance of maintaining emotional balance while using AI tools, stating, “Convenience should never replace connection.”
Balancing Innovation and Emotional Well-Being
Artificial intelligence continues to transform creativity, productivity and personal reflection. However, the increasing tendency to anthropomorphise AI tools highlights the need for balanced engagement. While AI offers valuable creative and supportive functions, experts consistently emphasise that it should complement, not replace, human relationships and professional mental health care.
The caricature trend illustrates how technology reflects personal identity and emotional needs, but it also reveals the psychological risks of mistaking algorithmic outputs for genuine understanding. As AI continues to evolve, maintaining self-awareness, emotional resilience and authentic human connections remains essential.