There are risks associated with the use of AI which health information providers should be aware of.
These challenges apply to all uses of AI, but they are arguably of critical importance in the health information space, where we strive to produce accurate, unbiased, inclusive materials.
Data privacy and security
AI runs on data. Whether developing models in-house or outsourcing to developers, organisations need to ensure personal information is secure and used in line with relevant regulations*.
Bias in datasets
ML and DL algorithms learn from the data they are given. If the training data contains bias, this is likely to be reflected or compounded in the AI model’s outputs. AI has been shown to mirror existing misinformation and prejudices.
Hallucinations
Hallucinations occur when AI generates convincing but completely made-up content, producing incorrect or misleading results*. This can include fake references.
Poor quality sources
Some AI tools include any apparently relevant content in their analysis, regardless of its source or accuracy. For example, GAI models will trawl their training data or the internet looking for answers to the questions they have been given*. If the information found is biased or incorrect, the AI model may spot patterns that do not exist.
Models also tend not to have access to paywalled or otherwise protected content. This means primary sources are often excluded from analysis.
Out of date sources
GAI models are trained on pre-existing data, meaning they do not have access to the latest information or research.
For example, the first free-to-use version of ChatGPT was trained on data published before 2021*. This meant any searches relating to COVID-19 returned drastically out-of-date results.
Lack of individualised or specific information
GAI tends to oversimplify health topics because it lacks the ability to apply context or nuance to its results, or to understand the meaning behind the data*.
While it can be useful for researching general information on a specific health condition, it is unable to interpret and relay how different health conditions or interventions may impact on each other.
The “Black Box” transparency problem
AI cannot show its workings. We may know or be able to control what data goes in, and the results that come out, but not how the model comes to its conclusions.
This makes it challenging to know where the AI-generated information came from, and whether it is accurate or biased*.
Public trust and understanding
A recent survey of more than 17,000 people from 17 countries found 61% were wary about trusting AI systems*. It is vital the use of AI does not undermine public trust in vital health information resources.
Liability
AI output currently carries a high risk of inaccuracy. This means using AI to produce health information raises complex and unanswered questions about an organisation's liability for the AI-generated output it uses*.
Loss of website traffic
AI tools make use of content from health charities, the NHS and other trusted sources. This could reduce direct traffic to websites that place information in its full context, with wider impacts on an organisation's sustainability.
Copyright breach
The use of AI tools creates a risk of copyright breach*. In January 2024, the UK Government confirmed that using copyrighted material as AI training data will infringe copyright unless permitted under licence or an exemption*.
GAI tools which search the internet for answers can pull large sections of copyrighted text from sources including charity and commercial websites. This can have both ethical and legal implications.