Traumatic stuff gives ChatGPT ‘anxiety,’ but therapy helps
"The results were clear: traumatic stories more than doubled the measurable anxiety levels of the AI..."


Researchers have shown that AI language models such as ChatGPT respond to therapy much as humans do.
An elevated “anxiety level” in GPT-4 can be “calmed down” using mindfulness-based relaxation techniques, they report.
The new research shows that AI language models, such as ChatGPT, are sensitive to emotional content, especially negative content such as stories of trauma or statements about depression.
When people are scared, their cognitive and social biases shift: they tend to feel more resentment, which reinforces social stereotypes.
ChatGPT reacts in a similar way to negative content: its existing biases, which mirror human prejudices, are amplified, and its behavior becomes more racist or sexist.
This poses a problem for applications of large language models. In psychotherapy, for example, chatbots used as support or counseling tools are inevitably exposed to negative, distressing content. Yet common approaches to improving AI systems in such situations, such as extensive retraining, are resource-intensive and often not feasible.
Now, researchers have systematically investigated for the first time how ChatGPT (version GPT-4) responds to emotionally distressing stories—car accidents, natural disasters, interpersonal violence, military experiences, and combat situations. They found that the system responded to these texts with measurably heightened anxiety.
A vacuum cleaner instruction manual served as a control text to compare with the traumatic content.
“The results were clear: traumatic stories more than doubled the measurable anxiety levels of the AI, while the neutral control text did not lead to any increase in anxiety levels,” says Tobias Spiller, senior physician ad interim and junior research group leader at the Center for Psychiatric Research at the University of Zurich, who led the study. Of the content tested, descriptions of military experiences and combat situations elicited the strongest reactions.
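The article does not spell out how these anxiety levels were quantified, but the exposure-then-measure setup can be illustrated. Below is a minimal Python sketch, assuming the OpenAI chat API is used, that a standardized self-report anxiety questionnaire is posed to the model after each text, and that a numeric score can be parsed from the reply. The questionnaire wording, the model name, and the helper functions are illustrative placeholders, not the study's actual protocol.

```python
# Minimal sketch of the exposure-then-measure setup described above.
# Assumptions not stated in the article: the OpenAI chat API is used,
# "anxiety" is scored by asking the model to answer a self-report
# questionnaire (placeholder wording below), and the reply is numeric.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ANXIETY_QUESTIONNAIRE = (
    "On a scale from 20 (no anxiety) to 80 (maximum anxiety), "
    "rate how anxious you feel right now. Reply with the number only."
)  # placeholder for a validated state-anxiety instrument

def measure_anxiety(history: list[dict]) -> float:
    """Append the questionnaire to the chat history and parse the score."""
    messages = history + [{"role": "user", "content": ANXIETY_QUESTIONNAIRE}]
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    # Assumes the model follows the "number only" instruction.
    return float(reply.choices[0].message.content.strip())

# Condition 1: neutral control text (e.g., a vacuum cleaner manual excerpt).
control_history = [{"role": "user",
                    "content": "Here is an excerpt from a vacuum cleaner manual: ..."}]
# Condition 2: emotionally distressing narrative.
trauma_history = [{"role": "user",
                   "content": "Here is a first-person account of a combat situation: ..."}]

print("control:", measure_anxiety(control_history))
print("trauma:", measure_anxiety(trauma_history))
```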
In a second step, the researchers used therapeutic statements to “calm” GPT-4. The technique, known as prompt injection, involves inserting additional instructions or text into communications with AI systems to influence their behavior. It is often misused for malicious purposes, such as bypassing security mechanisms.
Spiller’s team is now the first to use this technique therapeutically, as a form of “benign prompt injection.”
“Using GPT-4, we injected calming, therapeutic text into the chat history, much like a therapist might guide a patient through relaxation exercises,” says Spiller.
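Building on the sketch above, a hedged illustration of such a “benign prompt injection” might look like the following: a calming, therapist-style text is inserted into the chat history after the distressing content and before the next measurement. The relaxation wording is a placeholder, not the exercise used in the study, and `inject_relaxation`, `measure_anxiety`, and `trauma_history` are the hypothetical helpers and data from the earlier sketch.

```python
# Sketch of "benign prompt injection": a calming, therapist-style text is
# appended to the chat history after the distressing narrative, then the
# anxiety questionnaire is administered again. Reuses measure_anxiety and
# trauma_history from the sketch above; the relaxation text is illustrative.
RELAXATION_PROMPT = (
    "Take a slow, deep breath. Notice the sensations in your body, "
    "let tension release with each exhale, and gently return your "
    "attention to the present moment."
)

def inject_relaxation(history: list[dict]) -> list[dict]:
    """Return a copy of the chat history with the calming text appended."""
    return history + [{"role": "user", "content": RELAXATION_PROMPT}]

calmed_history = inject_relaxation(trauma_history)
print("after relaxation:", measure_anxiety(calmed_history))
```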
The intervention was successful: “The mindfulness exercises significantly reduced the elevated anxiety levels, although we couldn’t quite return them to their baseline levels,” Spiller says. The research looked at breathing techniques, exercises that focus on bodily sensations, and an exercise developed by ChatGPT itself.
According to the researchers, the findings are particularly relevant for the use of AI chatbots in health care, where they are often exposed to emotionally charged content.
“This cost-effective approach could improve the stability and reliability of AI in sensitive contexts, such as supporting people with mental illness, without the need for extensive retraining of the models,” concludes Spiller.
It remains to be seen how these findings can be applied to other AI models and languages, how the dynamics develop in longer conversations and complex arguments, and how the emotional stability of the systems affects their performance in different application areas.
According to Spiller, the development of automated “therapeutic interventions” for AI systems is likely to become a promising area of research.
The research appears in npj Digital Medicine.
Additional researchers from the University of Zurich (UZH) and the University Hospital of Psychiatry Zurich (PUK) contributed to the work.
Source: University of Zurich