Mind charity launches first global inquiry into AI and mental health
Mental health charity Mind is launching an inquiry after Google AI Overviews were found to be giving people misleading health advice. The year-long commission will examine the risks and safeguards of AI, gathering evidence on its intersection with mental health. The inquiry – the first of its kind globally – will bring together the world’s leading doctors and mental health professionals, as well as people with lived experience, health providers, policymakers and tech companies. Mind says it will aim to shape a safer digital mental health ecosystem, with strong regulation, standards and safeguards.
'Very dangerous' advice given
The launch of the inquiry comes after a Guardian investigation revealed how Google’s AI Overviews gave people “very dangerous” medical advice. The Guardian found some AI Overviews served up inaccurate health information and put people at risk of harm. The investigation uncovered false and misleading medical advice across a range of issues, including cancer, liver disease and women’s health, as well as mental health conditions.
People deserve safe, accurate information
The Guardian shared comments from Dr Sarah Hughes, chief executive officer of Mind. She said vulnerable people were being served “dangerously incorrect guidance on mental health”, including “advice that could prevent people from seeking treatment, reinforce stigma or discrimination and in the worst cases, put lives at risk”. Dr Hughes added: “People deserve information that is safe, accurate and grounded in evidence, not untested technology presented with a veneer of confidence.”
Harmful process for people in distress
In a separate article for The Guardian, Mind's information content manager Rosie Weatherley said: "I set myself and my team of mental health information experts at Mind a task: 20 minutes searching using queries we know people with mental health problems tend to use.
"None of us needed 20. Within two minutes, Google had served AI Overviews that assured me starvation was healthy. It told a colleague mental health problems are caused by chemical imbalances in the brain. Another was told that her imagined stalker was real, and a fourth that 60% of benefit claims for mental health conditions are malingering. It should go without saying that none of the above are true.
"In each of these examples we are seeing how AI Overviews are flattening information about highly sensitive and nuanced areas into neat answers. And when you take out important context and nuance and present it in the way AI Overviews do, almost anything can seem plausible. This process is especially harmful for people who are likely to be in some level of distress."
Read more about the Mind inquiry on The Guardian website.
The article from Mind's information content manager Rosie Weatherley is also available on The Guardian website.