Framework: Balancing the risks and benefits of AI in the production of health information

This framework will help members build an artificial intelligence use policy to balance the risks and benefits of AI applications. It was developed by PIF's AI working group.

As artificial intelligence (AI) continues to evolve, a clear and robust use policy is essential. It helps ensure your team uses AI technologies responsibly and ethically, and mitigates threats to evidence-based, unbiased health information that could undermine public trust and exacerbate health inequalities.

A clear, robust AI use policy is important for a number of reasons:

  • Staff guidance: A clear policy will make sure teams understand when the potential risks of using AI outweigh the potential benefits.
  • Ethical guidance: It ensures AI systems are developed and deployed ethically. This prevents misuse and promotes fairness, transparency, and accountability.
  • Risk mitigation: A policy will help teams identify and manage the potential risks, such as bias, discrimination and privacy breaches.
  • Regulatory compliance: A clear policy helps teams comply with all relevant laws and regulations. These include data protection law, such as the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018, as well as intellectual property laws.
  • Public trust: By making a commitment to responsible AI practices, a policy will foster trust in the organisation.

An AI use policy helps meet PIF TICK criterion 1: All information is created using a consistent and documented process.

PIF members can log in below to access the full framework. Non-members can access our position statement.