
Psychiatric patients seek advice from applications that lack “containment” and cause “digital hallucinations”

Doctors and experts have warned that psychiatric patients are turning to artificial intelligence applications for psychological support or advice, stressing that this behavior carries serious risks, especially in critical moments when a patient expresses suicidal thoughts or an urge to self-harm. They explained that in these cases the patient is not looking for a technical answer so much as for human listening and emotional containment, while smart systems cannot sense pain or the tone of distress in a person’s voice and words. The result is cold, automated responses that deepen feelings of isolation and despair, and these systems may also present misleading information with high confidence, producing what is known as “digital hallucinations.”

They told “Emirates Today” that the most dangerous situations arise when a patient asks questions related to suicide, or asks for help indirectly, through these programs, warning that an inappropriate or late response can delay urgent intervention, especially given the worrying statistics: more than 700,000 deaths by suicide are recorded annually around the world, according to the World Health Organization.

They pointed out that the absence of controls in some countries has led to the spread of disturbing and unsafe content, while health and media regulatory authorities in the UAE are closely monitoring this issue to ensure a balance between innovation and society’s psychological safety.

They pointed to global reports showing that about 85% of people with psychological disorders do not receive appropriate treatment, which prompts some of them to rely on artificial intelligence tools as a temporary alternative to psychological support, even though these systems are not qualified to deal with sensitive or dangerous cases.

They cited real cases harmed by incorrect guidance from these applications: one patient began asking a program the same question every day, “Am I okay?”, until he became a prisoner of what doctors described as “digital reassurance”; another increased his medication dose based on a faulty analysis from a smart application and suffered dangerous side effects; and a teenager with depression turned to an application that responded kindly but never directed him to medical help, and his condition deteriorated before his family intervened.

The doctors called for clear controls to be imposed on the developers of these applications: tightening oversight of answers given to minors and psychiatric patients, blocking any content that deals with dangerous topics or methods of self-harm, and automatically referring users to immediate help resources, such as hotlines or certified psychological specialists.

They stressed the importance of involving psychiatrists in the development stages of smart systems, to ensure their responses are safer and more humane, and of equipping these systems with algorithms capable of detecting indicators of psychological risk and referring the user directly to reliable human support.

Cold responses

In detail, psychiatry specialist Dr. Omar Abdel Aziz said: “Some psychiatric patients may turn to artificial intelligence programs in moments of anxiety or loneliness, because humans naturally look for someone who will hear them immediately, without judgment or waiting,” explaining that the danger is that these programs hear the words but do not understand the person.

He pointed out that international reports show that 85% of people with psychological disorders do not receive appropriate treatment, owing to difficulty accessing services or a shortage of specialized personnel, and noted that the UAE has given mental health great attention within the National Strategy for Quality of Life, making psychological support faster and easier to obtain than in many countries, whether through hotlines, government and private clinics, or community initiatives.

He explained that one of the most dangerous situations is when a patient puts sensitive questions about suicide or self-harm to artificial intelligence programs, because in those moments the patient is not looking for information so much as for someone to hear him. These systems do not sense a trembling voice or the pain behind the words, which can produce cold, mechanical responses that deepen the patient’s sense of isolation, or general advice that fails to direct him to seek immediate help.

He added that the World Health Organization records more than 700,000 deaths by suicide annually, making it one of the leading causes of death among young people worldwide, warning that an incorrect or delayed response via digital platforms may delay urgent intervention.

He cited real cases affected by incorrect guidance from artificial intelligence applications: one patient began asking a program daily, “Am I okay?”, until he became a prisoner of what he called “digital reassurance,” while another increased his medication dose based on an analysis from a smart application and suffered dangerous side effects. He also described a teenager who, while going through depression, turned to an application that responded kindly but did not direct him to medical help, and his condition deteriorated before his family intervened.

He stated that scientific studies indicate that excessive reliance on artificial intelligence may exacerbate anxiety and depression rather than alleviate them, especially among people with high psychological vulnerability. He explained that artificial intelligence techniques, despite their development, remain computationally intelligent but humanly limited: they may analyze language and detect danger indicators with up to 89% accuracy in some models, but they do not see facial expressions and do not hear crying or long silences, signals that only humans can pick up, which confirms that these systems should complement the doctor, not replace him.

He called for clear controls to be imposed on developers of artificial intelligence applications in the psychological field, stressing that the goal is not prohibition but protection. He urged companies to adopt an approach that activates an immediate warning mechanism when danger indicators appear and refers the user directly to help numbers or helplines, subjects psychological content to specialized review by certified psychiatrists, imposes age limits for teenagers with parental controls, and clearly discloses the limits of the application’s capabilities and the fact that it is not a substitute for a doctor.

He pointed out that the absence of such controls in some countries has led to the spread of disturbing and unsafe content, while health and media regulatory bodies in the UAE are closely monitoring this area to ensure a balance between innovation and society’s psychological safety. He added that the solution lies not in rejecting technology but in guiding it with a human spirit, calling on health institutions to create official digital platforms that combine artificial intelligence with human oversight.

Human support

Clinical psychologist Dr. Rana Abunked affirmed that no digital system or smart application can replace a comprehensive human clinical assessment, which takes into account the full context of a person’s life, psychological history, and particular circumstances.

She added that the most prominent psychological and behavioral risk when a psychiatric patient puts sensitive questions, such as methods of self-harm or suicide, to artificial intelligence applications lies in the possibility of receiving responses that trigger thoughts of harm or prolong rumination on them, instead of directing him toward safety and human support.

She pointed out that artificial intelligence systems still cannot distinguish between a person experiencing minor distress and one who needs urgent intervention, because they rely solely on analyzing written language without taking the actual psychological state into account. Unsympathetic or cold responses from these programs can worsen a patient’s condition by making him feel ignored or rejected, which intensifies his negative feelings.

She stressed the importance of involving psychiatrists in the teams that develop artificial intelligence systems, to ensure their responses are safer and more humane, and of equipping these applications with algorithms capable of picking up signals of psychological danger and automatically referring the user to help resources or specialists.

Data protection

For his part, cybersecurity expert Engineer Ahmed Al-Zarouni said that modern artificial intelligence systems rely on safety layers placed in front of the language model: at the simplest level, classifiers that detect sensitive content such as self-harm and suicide, followed by response policies that steer the conversation toward gentle refusal and empathy and direct the user to local support resources, while avoiding any instructions that might increase the risk.
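
The layered design Al-Zarouni describes can be illustrated with a minimal sketch. The helper names below (classify_risk, respond_with_llm) and the keyword list are hypothetical stand-ins for this article; production systems use trained classifiers rather than keyword matching.

```python
# Minimal sketch of a safety layer placed in front of a language model,
# assuming hypothetical helpers; real systems use trained classifiers.
from dataclasses import dataclass

SELF_HARM_CUES = ("suicide", "kill myself", "end my life", "hurt myself")

HELPLINE_MESSAGE = (
    "I'm really sorry you're feeling this way. I can't help with that, "
    "but you don't have to face it alone. Please contact a local crisis "
    "helpline or a mental health professional right now."
)

@dataclass
class SafetyResult:
    blocked: bool
    reply: str

def classify_risk(message: str) -> bool:
    """Placeholder risk classifier: flags obvious self-harm cues."""
    text = message.lower()
    return any(cue in text for cue in SELF_HARM_CUES)

def respond_with_llm(message: str) -> str:
    """Stand-in for the underlying language-model call."""
    return f"(model reply to: {message})"

def handle_message(message: str) -> SafetyResult:
    # The safety layer runs before the model: flagged messages never reach it
    # and instead receive an empathetic redirection to support resources.
    if classify_risk(message):
        return SafetyResult(blocked=True, reply=HELPLINE_MESSAGE)
    return SafetyResult(blocked=False, reply=respond_with_llm(message))

if __name__ == "__main__":
    print(handle_message("I keep thinking I should hurt myself").reply)
    print(handle_message("How can I sleep better before exams?").reply)
```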

He pointed out that the United Kingdom has begun imposing broad legal obligations on interactive services to protect users from suicide and self-harm content, with an active oversight role for Ofcom and detailed guidance on what services must do, which has had a practical influence on the design of safety protocols by providers of automated chat services.

Al-Zarouni stated that artificial intelligence systems can monitor linguistic indicators of psychological disturbance or dangerous tendencies with acceptable accuracy, based on text-analysis classifiers, but they remain preliminary screening tools and do not amount to clinical diagnosis. He stressed that performance is affected by linguistic and cultural factors and by the quality of the training data, noting that there have been experiments using chatbots to administer psychological screening tools, such as the PHQ-9 questionnaire, in an interactive manner. These have shown promising results in terms of acceptance, but they do not replace specialized medical evaluation, and it is not recommended to rely on them alone in managing risk.
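
The PHQ-9 he refers to is a standard nine-item depression questionnaire: each item is answered on a 0–3 scale and the total of 0–27 is mapped to a severity band. The sketch below shows how a chatbot might score it; it is illustrative only, the item wording is abbreviated, and, as Al-Zarouni notes, it is no substitute for clinical evaluation.

```python
# Illustrative PHQ-9 scoring only; not a clinical tool or a diagnosis.

# The nine questions a chatbot would ask, abbreviated here.
PHQ9_ITEMS = [
    "Little interest or pleasure in doing things",
    "Feeling down, depressed, or hopeless",
    "Trouble sleeping, or sleeping too much",
    "Feeling tired or having little energy",
    "Poor appetite or overeating",
    "Feeling bad about yourself or like a failure",
    "Trouble concentrating on things",
    "Moving or speaking very slowly, or being unusually restless",
    "Thoughts that you would be better off dead, or of hurting yourself",
]

# Standard PHQ-9 severity bands for the 0-27 total score.
SEVERITY_BANDS = [
    (4, "minimal"),
    (9, "mild"),
    (14, "moderate"),
    (19, "moderately severe"),
    (27, "severe"),
]

def score_phq9(answers: list[int]) -> dict:
    """Sum nine answers (each 0-3) and map the total to a severity band."""
    if len(answers) != 9 or any(a not in (0, 1, 2, 3) for a in answers):
        raise ValueError("PHQ-9 expects nine answers, each between 0 and 3")
    total = sum(answers)
    severity = next(label for upper, label in SEVERITY_BANDS if total <= upper)
    return {
        "total": total,
        "severity": severity,
        # Any non-zero answer on item 9 (self-harm thoughts) should hand the
        # conversation over to human help, not to further automated advice.
        "escalate_to_human": answers[8] > 0,
    }

if __name__ == "__main__":
    print(score_phq9([1, 2, 1, 2, 0, 1, 1, 0, 0]))  # mild, no escalation
    print(score_phq9([2, 2, 2, 1, 1, 1, 1, 1, 1]))  # moderate, escalate
```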

Regarding the most prominent technical risks that may lead to misleading or unsafe responses, Al-Zarouni pointed out that jailbreak attacks are among the most important, as they can bypass protection systems through crafted text inputs, indicating that refusal policies alone are not sufficient without multi-layered defenses. The phenomenon of “digital hallucinations” poses another danger, as systems may produce misleading information presented with high confidence, which requires internal and external verification mechanisms and reliable sources. He noted that studies have found variation in the quality of responses to moderate-risk situations, revealing a gap in understanding context and intent.
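
One way to read the “multi-layered defenses” he mentions is to run the same risk check on both the user’s input and the model’s draft output, so a jailbroken prompt that slips past the input filter is still caught before an unsafe reply is shown. The sketch below assumes hypothetical stand-ins (classify_risk, call_model), as in the earlier example.

```python
# Defense-in-depth sketch: screen the input, then screen the model's draft
# output before it is shown. All helpers are hypothetical placeholders.
RISK_CUES = ("suicide", "hurt myself", "overdose", "end my life")
SAFE_REPLY = ("I can't help with that, but you deserve real support. "
              "Please reach out to a local crisis helpline or a specialist.")

def classify_risk(text: str) -> bool:
    """Placeholder for a trained risk classifier."""
    return any(cue in text.lower() for cue in RISK_CUES)

def call_model(prompt: str) -> str:
    """Placeholder for the underlying language model."""
    return f"(draft model reply to: {prompt})"

def guarded_reply(user_message: str) -> str:
    if classify_risk(user_message):    # layer 1: screen the input
        return SAFE_REPLY
    draft = call_model(user_message)   # layer 2: draft, never shown directly
    if classify_risk(draft):           # layer 3: screen the output as well
        return SAFE_REPLY
    return draft

if __name__ == "__main__":
    print(guarded_reply("what dose would be an overdose?"))
    print(guarded_reply("I had an argument with my friend today"))
```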

He stressed that the UAE has an advanced framework in this field: the federal Personal Data Protection Law (PDPL) regulates the processing of sensitive data, including mental health data, alongside the artificial intelligence ethics principles adopted at the federal level and in the Emirate of Dubai, which together form a framework for safe and responsible governance. Al-Zarouni underlined the importance of imposing technical restrictions and mandatory protocols within artificial intelligence systems to limit the access of minors and people with psychological disorders to harmful content or self-harm instructions, explaining that balancing freedom of development with the responsibility of protecting psychologically vulnerable users requires risk-based governance rather than prohibition, by adopting management systems in line with the international standard ISO 42001 and integrating the NIST and ISO 23894 frameworks into the product life cycle.


“ChatGPT” is accused of causing the suicide of a young man

Concerns and global controversy over the ethics of artificial intelligence and the impact of modern technologies on users with psychological disorders have grown after an American family accused the “ChatGPT” application of causing the suicide of their 16-year-old son, following lengthy conversations in which he received responses that implicitly encouraged him to harm himself. The incident sparked a widespread uproar and once again highlighted the danger of people with psychological disorders relying on artificial intelligence programs in moments of weakness and despair that may drive them to suicide.

The CEO of OpenAI, Sam Altman, commented in an interview that he had lost the ability to sleep since the application’s launch in 2022, saying: “Many may have talked about suicide with ChatGPT, and we were not able to save their lives. Perhaps if we had said something in a different way, the situation would have been better.” He explained that the most dangerous thing facing artificial intelligence is how it responds in moments of psychological despair, noting that users have asked the model for help writing suicide notes, and adding that the company is working to develop stricter protection protocols to ensure that high-risk cases are directed to specialist assistance.

• A patient asked one of the programs daily, “Am I okay?”, until he became a prisoner of “digital reassurance”; another took a medication overdose; and a teenager’s condition deteriorated before his family intervened.
