
When it comes to mental health, ChatGPT has experts stumped and Gen Z hooked:


More and more young people are turning to ChatGPT as a therapist or a friend. Could it be hurting more than it’s helping?


By Simi Situ



On May 7th, New York Magazine published “Everyone Is Cheating Their Way Through College: ChatGPT has unraveled the entire academic project,” a lengthy feature detailing the role of ChatGPT in the lives of students and their professors. The author, James D. Walsh, paints a bleak picture: students relying on ChatGPT for every assignment with little regard for their own education; professors unable to penalize students who use AI to generate their essays, or to reward those who don’t; institutions putting the onus on individual staff rather than implementing policy to curb or encourage the use of generative AI for schoolwork.


While it seems ChatGPT is now just another norm in academia, there is still a conversation to be had about why students are so drawn to the technology in the first place. During Mental Health Awareness Month this past May, more researchers weighed in on AI’s place as a therapy alternative for combating loneliness, anxiety, and depression, especially in young people. However, even if ChatGPT helps students in the short term, it may have long-term negative consequences for learning and mental health.


Beyond better grades, faster assignments, or sheer laziness, one testimonial in the article, from Sarah, a college freshman, paints another picture of how ChatGPT has contributed to her struggle with phone addiction:


“Toward the end of the semester, [Sarah] began to think she might be dependent on the website. She already considered herself addicted to TikTok, Instagram, Snapchat, and Reddit, where she writes under the username maybeimnotsmart. ‘I spend so much time on TikTok,’ she said. ‘Hours and hours, until my eyes start hurting, which makes it hard to plan and do my schoolwork. With ChatGPT, I can write an essay in two hours that normally takes 12.’”

Sarah is part of a generation that is no stranger to digital dependency. Over 80% of Gen Z reports having an “unhealthy relationship” with their phone. After years of experts warning about the negative effects of apps such as Instagram, Snapchat, and TikTok, ChatGPT has entered the fold as an unpredictable factor in how young people engage in both social and antisocial experiences.


Since its launch as a free chatbot in 2022, OpenAI’s ChatGPT has evolved from a work assistant into a social crutch for the users who seek one. Young people are using AI chatbots as a form of talk therapy, as support for difficult social situations, or as another digital friend.


Research suggests that without the proper guardrails, generative AI models actually make it harder for students to learn. One study found that students who used ChatGPT as a study assistant performed worse on tests. Another found that while ChatGPT could potentially improve scores, the tool showed limitations on complex thinking tasks.


Sarah’s testimonial that she can’t seem to get off her phone demonstrates the feedback loop that keeps Gen Z in a uniquely toxic cycle: 1) young people turn to social media apps for entertainment and socialization; 2) they become addicted to those apps and deprioritize work and school as a result; and 3) ChatGPT enters as a shortcut that lets them maintain these unsustainable habits while driving them toward outcomes that are ultimately unhealthy.


So, how do we solve this problem? If young people are facing rising rates of mental health issues, and social media in combination with ChatGPT seems to make those effects worse, what can a student, parent, or educator do to turn things around?


For one, more AI probably isn’t the answer. While ChatGPT is more accessible in terms of immediacy and cost to the user, the infrastructure needed to keep users safe is sorely lacking. Aside from the model’s tendency to generate misleading or harmful information, there is little to no regulation on user privacy and few guidelines for when human intervention is needed in more drastic situations. To avoid these pitfalls, young users may need to look to chatbots beyond OpenAI’s ChatGPT: some companies and institutions that specialize in teletherapy offer AI chatbots that are safer, more reliable, and able to redirect to a human counterpart in more serious scenarios.


On the other hand, a complete technology detox may not be the solution either. Social media still stands as a way for young people to connect, make friends, and feel less alone, especially if their homes or schools are unsafe environments.


To address the growing reliance on chatbots as assistants, therapists, or innovative cheating tools, the solution probably lies somewhere in the middle: Allowing young people to explore and experiment with new technology, while still equipping them with the knowledge to be discerning about misinformation and empowering them to keep themselves and their peers safe online.


 
 
 