Mind charity launches first global inquiry into AI and mental health
Mental health charity Mind is launching an inquiry after a national newspaper exposed Google AI Overviews giving people misleading health advice. The year-long commission, which follows an exclusive investigation by the Guardian, will examine the risks and safeguards of AI and gather evidence on the intersection of AI and mental health. The inquiry – the first of its kind globally – will bring together the world’s leading doctors and mental health professionals, as well as people with lived experience, health providers, policymakers and tech companies. Mind says the inquiry will aim to shape a safer digital mental health ecosystem, with strong regulation, standards and safeguards.
Ongoing coverage of members' concerns over AI Overviews
The launch comes after the Guardian revealed how people were being put at risk of harm by false and misleading health information in Google AI Overviews.
The Guardian first highlighted concerns in January over Google's AI summaries for people searching for health information. The investigation included examples from Pancreatic Cancer UK, The Eve Appeal, Mind and The British Liver Trust. Google later removed its AI search summaries for some health topics following the investigation, which was supported by PIF and its AI coalition of members, including Marie Curie, Macmillan Cancer Support, Cancer Research UK and the British Heart Foundation.
Another article in the Guardian investigation stated that the ‘confident authority’ of Google AI Overviews was putting public health at risk. PIF chair Sue Farrington expressed concern that some medical summaries served up inaccurate health information and put people at risk of harm. The British Liver Trust's Vanessa Hebditch and Athena Lamnisos from The Eve Appeal were also quoted.
PIF continues to work with the Guardian's health editor Andrew Gregory on a series of follow-up articles. Most recently, Anthony Nolan's head of patient information Tom Bishop was among those who shared their concerns about the disclaimers underneath Google's AI Overviews.
People deserve safe, accurate information
The Guardian shared comments from Dr Sarah Hughes, chief executive officer of Mind, as she launched the inquiry. She said vulnerable people were being served “dangerously incorrect guidance on mental health”, including “advice that could prevent people from seeking treatment, reinforce stigma or discrimination and in the worst cases, put lives at risk”. Dr Hughes added: “People deserve information that is safe, accurate and grounded in evidence, not untested technology presented with a veneer of confidence.”
Harmful process for people in distress
In a separate article for the Guardian, Mind's information content manager Rosie Weatherley outlined the organisation's concerns in more detail. She said: "I set myself and my team of mental health information experts at Mind a task: 20 minutes searching using queries we know people with mental health problems tend to use.
"None of us needed 20. Within two minutes, Google had served AI Overviews that assured me starvation was healthy. It told a colleague mental health problems are caused by chemical imbalances in the brain. Another was told that her imagined stalker was real, and a fourth that 60% of benefit claims for mental health conditions are malingering. It should go without saying that none of the above are true.
"In each of these examples we are seeing how AI Overviews are flattening information about highly sensitive and nuanced areas into neat answers. And when you take out important context and nuance and present it in the way AI Overviews do, almost anything can seem plausible. This process is especially harmful for people who are likely to be in some level of distress."
Read more about the Mind inquiry on the Guardian website here.
Find the article from Mind's information content manager Rosie Weatherley here.