AI medical devices, like all interventions, carry both benefits and risks, and their potential for harm often goes unrecognised because of their novelty. Our wider community specialises in adapting analytical and evaluation techniques to ensure AI medical devices are safe for patients. One of our community members, the AI & Digital Health Research and Policy Group in Birmingham, has developed a medical algorithmic audit framework and reporting guidelines for medical devices.
To further strengthen safety practices, the group created the Medical Algorithmic Audit Framework, a tool designed to identify the weaknesses of artificial intelligence systems and to establish mechanisms that mitigate their impact. You can explore more through:
Webinar: Clinical AI Monitoring: Medical algorithmic audit case studies
Paper: The medical algorithmic audit - The Lancet Digital Health
Additionally, in 2020 the AI & Digital Health Research and Policy Group in Birmingham led a collaboration to create AI-specific extensions to the established CONSORT and SPIRIT reporting guidelines. The CONSORT-AI and SPIRIT-AI extensions ensure that studies involving AI are rigorously reported going forward. The effort brought together an international group of patient representatives, policymakers, clinical experts, industry partners, academic researchers and journal editors to ensure the guidelines meet the needs of all stakeholders.
CONSORT-AI and SPIRIT-AI were published on 9th September 2020 simultaneously in Nature Medicine, The Lancet Digital Health, and the British Medical Journal (BMJ).
Read the papers:
CONSORT-AI can be read at Nature Medicine, The Lancet Digital Health, or via the BMJ.
SPIRIT-AI can also be read at Nature Medicine, The Lancet Digital Health, or via the BMJ.
Regulation of Health Technologies: an art or a science?
Presented by Prof Alastair Denniston, Director of CERSI-AI
Professor of Regulatory Science and Innovation, University of Birmingham
Monday 14th April 2025
What is it?
We are used to the idea that regulators ask manufacturers for evidence relating to their product before it can be put on the market. But what about the other way round? Is it reasonable for manufacturers – or society more generally – to ask regulators to provide evidence that their approaches are appropriate for the technology being evaluated: effective and safe, but also proportionate and efficient?
The concept of ‘regulatory science’ underscores the need to use scientific methodology to innovate, evaluate, and iterate our regulatory frameworks, ensuring they are based on evidence rather than narrative or politics.
Why should I care?
If we want to unlock the potential benefits of AI health technologies safely and at speed, we need our regulatory systems to be evidence-based and underpinned by scientific methodology. The UK Government has recently awarded funding to CERSI-AI, a national centre of excellence to support regulatory science and innovation in AI and Digital Health Technologies.
Does it impact healthcare?
Yes. We need to ensure that our regulatory systems are smart enough to meet the challenges and opportunities of emerging technologies like AI, enabling beneficial innovation and early adoption that benefit patients whilst also protecting against harm and maintaining trust.
Useful links:
https://www.cersi-ai.org/
https://www.digitalregulations.innovation.nhs.uk/
https://www.digitalhealthincubator.ai/webinars/safe
Brave AI Deployment in the South West, England
Presented by Dr Matthew Dolman, Complex Care GP in Somerset
Thursday 13th February 2025
What is it?
Brave AI by Bering Ltd. is an AI tool that analyses primary care data to identify patients at risk of unplanned hospital admission, enabling proactive and personalised health and care interventions.
Why should I care?
It helps health and care professionals provide early, personalised care, reducing unplanned hospital admissions, emergency visits, and strain on NHS resources.
Does it impact healthcare?
Yes. In pilot programmes it has been shown to reduce falls, ambulance callouts, and emergency visits, improving patient outcomes and healthcare efficiency. It was piloted clinically in Somerset with considerable success, is now in clinical use at several sites across the region, and is technically ready for use at 21 sites across the South West of England. (A toy sketch of this style of risk prediction follows the links below.)
Useful links:
https://leap-hub.ac.uk/training-courses/
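To make this concrete, here is a minimal sketch of how a tool of this general kind might rank patients by risk of unplanned admission. It is illustrative only and assumes nothing about Brave AI's actual design: the features, the synthetic data, and the logistic regression model are all invented for the example.

```python
# Illustrative only: a generic admission-risk model on synthetic data.
# This is NOT Bering Ltd.'s method; the features and logic are invented
# purely to show the general shape of this kind of tool.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Hypothetical primary care features: age, number of long-term conditions,
# prior emergency attendances, and number of repeat medications.
X = np.column_stack([
    rng.integers(18, 95, n),   # age
    rng.poisson(1.5, n),       # long-term conditions
    rng.poisson(0.5, n),       # emergency attendances, past year
    rng.poisson(3.0, n),       # repeat medications
])

# Synthetic outcome: unplanned admission within 12 months.
logit = -6.0 + 0.03 * X[:, 0] + 0.5 * X[:, 1] + 0.8 * X[:, 2] + 0.1 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Rank patients by predicted risk so the highest-risk patients can be
# offered proactive review first.
risk = model.predict_proba(X_test)[:, 1]
print(f"AUC: {roc_auc_score(y_test, risk):.2f}")
print("Highest-risk patient indices:", np.argsort(risk)[::-1][:5])
```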
Perspectives on AI use in General Practice
15th January 2025
Presented by Tafsir Ahmed
What are the uses of AI in General Practice?
AI is used in general practice to improve administrative efficiency, to support diagnosis and treatment planning, and to process clinical information using generative AI and large language models (LLMs).
Does it impact healthcare?
AI is already widely used in general practice: one in five GPs report using ChatGPT in clinical practice (BMJ survey, 2022), and one in four doctors use AI regularly in clinical practice (The Alan Turing Institute and GMC, 2024). AI is identified as a potential solution to the NHS’s problems in Lord Darzi’s 2024 report and is likely to take a pivotal role in the upcoming NHS 10-year plan.
Why should I care?
The Care Quality Commission (CQC) is currently undertaking a scoping project on stakeholder perspectives on AI use in general practice to inform its regulatory approach. Please join the webinar to learn about the current landscape of AI in general practice and the provisional themes drawn from stakeholder conversations. Most importantly, you will have the chance to take part in the discussion as a stakeholder and help shape the future of AI deployment, use, and the accompanying regulatory and safety approaches in clinical practice.
Assuring Artificial Intelligence in Healthcare
21st November 2024
Presented by: Anusha Jose on behalf of Dr Adam Byfield
What is it? An overview of NHS England’s AI Quality Community of Practice (AIQ CoP) and case studies demonstrating their work.
Why should I care? Traditional assurance techniques often don’t work for AI, while industry-standard AI assurance techniques often don’t work for healthcare. As AI now appears throughout healthcare organisations and systems, it is more important than ever that this technology is sufficiently assured before use.
Does it impact healthcare? Absolutely! The AIQ CoP exists specifically to support and encourage the detailed, technical assurance of a wide range of healthcare AI, both administrative and clinical.
Using Artificial Intelligence (AI) Responsibly
8th October 2024
Presented by: Dr Lucia De Santis, Lucy Gregory, Professor Carl Macrae, Dr Joe Alderman, Professor Alastair Denniston, Rebecca Boffa, Martin Nwosu, Russel Pearson, Gemma Warren, and Moritz Flockenhaus
Highlights include:
Talks and discussions with the AI and Digital Regulations Service (a collaboration between NICE, MHRA, HRA & CQC)
Evaluating an AI model for your population
Insights from an NHSE commissioned review of AI
Locally tailored economic evaluation of clinical AI
Safety monitoring in automated systems
Recording, presentation slides and agenda are available here: Using Artificial Intelligence (AI) Responsibly 8th Oct 2024 – HDR UK Midlands
Why is the Intended Purpose Statement critical, and what should it contain?
10th September 2024
Presented by: Dr Russel Pearson
What is an Intended Purpose Statement? An intended purpose statement is a document which clearly sets out what a medical device is approved to do, for whom, and in what settings. This is just as important for medical devices that use AI as for any other medical device.
Why should I care? AI technologies are developed and tested for very specific tasks. Even if we use them for similar but different tasks, they may not perform as well as expected, bringing risks of error or harm.
Does it impact healthcare? Yes. It has a big impact on AI companies, as it is the foundation on which they build their products and have them validated. It is also important for those providing healthcare, so that they understand how AI can be used in the care they provide.
Recording unavailable.
For more information visit: Crafting an intended purpose in the context of software as a medical device (SaMD) - GOV.UK and Intended Use — Hardian Health
Clinical AI monitoring: Medical algorithmic audit case studies
24th June 2024
Presented by: Dr Aditya (Adi) Kale
What is it? Medical algorithmic audits are a particular way in which a healthcare provider can check that the clinical AI tools that they use continue to work well. Ideally, this process is done collaboratively with input from both the company that develops the AI tools, and the organisation delivering care (e.g. a hospital).
Why should I care? One of the differences between clinical artificial intelligence and other health technologies is its tendency not to perform consistently across different clinical settings. Even within a single setting, performance tends to change over time. Without robust monitoring in place, AI-enabled healthcare may, without us realising, stop helping patients in the way we expect.
Does it impact healthcare? Yes, many AI-enabled healthcare pathways are serving patients right now. The only way we can know they are doing a good job is by regularly monitoring their performance and addressing any problems we find, as sketched below.
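As an illustration of what routine monitoring can look like, here is a minimal sketch that checks a deployed model's sensitivity over successive monthly batches of labelled cases and raises an alert when it falls below a locally agreed threshold. The batches, threshold, and metric are assumptions made for the example, not a published audit protocol.

```python
# A minimal sketch of post-deployment performance monitoring: compute a
# model's sensitivity over successive monthly batches of labelled cases
# and flag any month that falls below a pre-agreed threshold. The data,
# threshold, and window are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class MonthlyBatch:
    month: str
    true_positives: int
    false_negatives: int

SENSITIVITY_FLOOR = 0.85  # hypothetical locally agreed acceptance threshold

def sensitivity(batch: MonthlyBatch) -> float:
    return batch.true_positives / (batch.true_positives + batch.false_negatives)

def audit(batches: list[MonthlyBatch]) -> None:
    for b in batches:
        s = sensitivity(b)
        status = "OK" if s >= SENSITIVITY_FLOOR else "ALERT: investigate drift"
        print(f"{b.month}: sensitivity={s:.2f} {status}")

# Example: performance drifting downwards over three months.
audit([
    MonthlyBatch("2024-01", 90, 10),
    MonthlyBatch("2024-02", 86, 14),
    MonthlyBatch("2024-03", 80, 20),
])
```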
Defining AI Safety for Healthcare and Beyond: Unique Risks and Considerations
14th May 2024
Presented by Professor Ibrahim Habli
As Artificial Intelligence (AI) promises to reshape healthcare, a clear understanding of AI safety is essential. This talk proposes a comprehensive definition of AI safety within the healthcare domain, examining how challenges like under-specificity and opacity can introduce new risks for patients. It advocates for proactive safety measures and transparent risk management to ensure AI's potential is realised without compromising patient safety or undermining trust in the healthcare system.
Recording available on request: ai.incubator@uhb.nhs.uk
How to get regulatory approval for an LLM-enabled medical device
18th March 2024
Presented by Dr Hugh Harvey
What is an LLM? Large language models, or LLMs, are a type of artificial intelligence trained on very large amounts of text, which allows them to generate new text by predicting sequences of words (see the toy sketch at the end of this entry). Examples include ChatGPT and Bard.
Why should I care? In a short time, LLMs have already become part of many people’s routine work. They have often improved the speed at which people can do certain tasks, and sometimes how well they can do them.
Does it impact healthcare? Not yet. There has been a great deal of interest and investment in using LLMs to improve healthcare, but it has not been done before, and those responsible for their use will have to deal with new challenges, including regulation.
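To illustrate the basic idea of "making new text by predicting sequences of words", here is a deliberately simplified toy: a bigram model that counts which word follows which, then generates text greedily. Real LLMs use large neural networks over tokens rather than word counts; everything here, including the tiny corpus, is invented for the example.

```python
# A toy illustration of the core idea behind LLMs: generating text by
# repeatedly predicting the next word. Real LLMs use neural networks over
# tokens; this bigram counter is a deliberately simplified stand-in.
from collections import Counter, defaultdict

corpus = (
    "the patient was seen in clinic . "
    "the patient was referred for imaging . "
    "the scan was reported as normal ."
).split()

# Count which word follows each word in the corpus.
following: defaultdict[str, Counter] = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    words = [start]
    for _ in range(length):
        candidates = following[words[-1]]
        if not candidates:
            break
        # Greedy prediction: pick the most frequent continuation.
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # prints: the patient was seen in clinic .
```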