Could AI discriminate when it comes to mental health?

Algorithms used to screen for issues such as anxiety and depression could make assumptions based on patients' gender and ethnicity, experts caution.

New research raises crucial questions about the fairness and effectiveness of AI technology as it pertains to mental health diagnosis and treatment. (Envato Elements pic)

According to an American study, some artificial intelligence tools used in programmes for treating patients’ mental health may rely on biased information.

This discovery raises crucial questions about the fairness and effectiveness of such technologies when it comes to mental health diagnosis and treatment.

Published in the journal Frontiers in Digital Health, the study from the University of Colorado Boulder suggests that algorithms aimed at screening for issues such as anxiety and depression can make assumptions based on patients’ gender and ethnicity.

Lead author Theodora Chaspari noted that AI could be a promising technology in the healthcare world, as finely tuned algorithms can sift through recordings, searching for subtle changes in the way people talk that could indicate underlying psychological concerns.

But those tools have to perform consistently for patients from many demographic groups, the computer scientist said.

After subjecting people’s audio samples to a set of learning algorithms, the researchers discovered several potentially dangerous flaws. For instance, the machines were more likely to underdiagnose depression risk in women than in men.

AI might also misread patients’ speech: according to the researchers, people with anxiety tend to speak in a higher pitch and with more agitation, often showing signs of shortness of breath, whereas people with signs of depression are more likely to speak softly and in a monotone.

To test this hypothesis, the researchers analysed participants’ behaviour as they gave a short speech in front of a group of people they did not know, while another group of men and women talked to one another in a clinical-style setting.

In the first group, people of Latin American origin reported being more nervous than white or Black participants, but the AI did not detect this.

In the second group, the algorithms assigned the same level of depression risk to men and women alike, even though the women actually had more symptoms.

“If AI isn’t trained well, or doesn’t include enough representative data, it can propagate human or societal biases,” Chaspari cautioned.

“So, if we think an algorithm actually underestimates depression for a specific group, this is something clinicians need to be informed about.”
