
Ethical considerations for AI in Pharmacy

Dr. Yasmin Karsan explores the potential ethical implications of AI in pharmacy and how to address them

Over the last few years, pharmacy and the role of pharmacy teams have changed dramatically. The introduction of technology and the integration of artificial intelligence (AI) into the systems that deliver healthcare and support pharmacies hold great promise for improving access to health services, supporting patient outcomes, and optimising operational efficiency.

The potential use of AI spans the whole of the medicines value chain, from AI-driven drug discovery to personalised medicines and automated dispensing systems. However, the rapid advancement of AI technology raises several ethical concerns. This article explores those concerns and how they can be addressed.

In previous articles, I have discussed what underpins artificially intelligent machines and the importance of data. Datasets are the foundation on which AI algorithms learn and from which they draw conclusions. The first step towards understanding the potential ethical implications of AI across the pharmacy sector is to understand the data held within these foundational datasets.

Patient privacy and data security

AI systems that support patient care, both within and outside the pharmacy sector, rely heavily on vast amounts of medical data (patient medical records, PMR data, and so on). However, the collection, storage, and use of such sensitive data raise significant privacy concerns.

GDPR compliance is essential within the UK, and ethical questions arise around data ownership, patient consent, and the possibility of data and cybersecurity breaches. As frontline healthcare professionals, we need to be able to support our patients when questions are asked about their data. For example, how can patients be sure their data is used only for its intended purposes? Is anonymised data truly safe from re-identification techniques that could expose private information?
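To make the re-identification risk concrete, the sketch below is a minimal, illustrative example only: the records, field names, and the k-anonymity threshold are assumptions, not details from any real pharmacy system. It counts how many "anonymised" records remain in very small groups on quasi-identifiers such as postcode area, age band, and sex, the kind of combination that re-identification attacks exploit.

```python
from collections import Counter

# Hypothetical "anonymised" extract: direct identifiers removed, but
# quasi-identifiers (postcode area, age band, sex) retained.
records = [
    {"postcode_area": "B15", "age_band": "30-39", "sex": "F"},
    {"postcode_area": "B15", "age_band": "30-39", "sex": "F"},
    {"postcode_area": "LS1", "age_band": "70-79", "sex": "M"},
    {"postcode_area": "SW1", "age_band": "20-29", "sex": "F"},
]

QUASI_IDENTIFIERS = ("postcode_area", "age_band", "sex")
K_THRESHOLD = 2  # assumed minimum acceptable group size (k-anonymity)

def k_anonymity_report(rows, keys, k):
    """Count how many rows fall in quasi-identifier groups smaller than k."""
    groups = Counter(tuple(row[key] for key in keys) for row in rows)
    at_risk = sum(count for count in groups.values() if count < k)
    return at_risk, len(rows)

at_risk, total = k_anonymity_report(records, QUASI_IDENTIFIERS, K_THRESHOLD)
print(f"{at_risk} of {total} records sit in groups smaller than k={K_THRESHOLD}")
```

A record that is unique on these quasi-identifiers can often be linked back to an individual by joining the extract with other datasets, which is why anonymisation alone is not a complete safeguard.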

Cross-contamination of data across sectors is also a concern: for example, confidential healthcare data being reused for financial or insurance applications raises its own ethical questions. Addressing these issues requires robust cybersecurity measures, strict access controls, and transparency.
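As one illustration of what strict access controls might look like in practice, the sketch below is an assumed, minimal example of purpose-based access: the roles, purposes, and permitted fields are invented for illustration, and each role only ever receives the fields permitted for its stated purpose.

```python
# Hypothetical purpose-based access rules (roles, purposes, and fields are assumptions).
ALLOWED = {
    ("pharmacist", "dispensing"): {"name", "medication_history", "allergies"},
    ("ai_analytics", "service_improvement"): {"age_band", "medication_class"},
}

def fields_permitted(role, purpose, requested_fields):
    """Return only the fields this role may access for the stated purpose."""
    permitted = ALLOWED.get((role, purpose), set())
    return set(requested_fields) & permitted

# An analytics process asking for identifiable data only receives the permitted fields.
print(fields_permitted("ai_analytics", "service_improvement",
                       ["name", "age_band", "medication_class"]))
```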

Bias in AI algorithms

A phrase often used when discussing the generation of AI algorithms is 'rubbish in, rubbish out'. AI algorithms, especially those used in clinical decision-making and personalised treatment, are only as good as the data they are trained on. Many datasets are not representative of the diversity of the populations they serve, so biases can emerge that result in unequal treatment, where certain groups (for example, minorities) may not receive the same level of care or the same access to care and medications. This is also important when adopting technologies developed in less diverse countries: an AI tool designed in a country with a predominantly homogeneous population may not be as effective in a multicultural society. It is therefore ethically imperative that AI systems in pharmacy are trained on diverse data and continuously monitored for fairness and inclusivity.
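To illustrate what continuous monitoring for fairness can involve, the sketch below uses made-up figures (the group labels, counts, and the 80% rule-of-thumb threshold are all assumptions) to compare how often a hypothetical model makes a positive recommendation for different population subgroups, a simple demographic-parity style check.

```python
# Hypothetical counts of a model's positive recommendations per subgroup.
# All figures are invented for illustration.
decisions = {
    "group_a": {"recommended": 420, "total": 1000},
    "group_b": {"recommended": 260, "total": 1000},
}

def selection_rates(stats):
    """Positive-recommendation rate per group (demographic parity check)."""
    return {group: s["recommended"] / s["total"] for group, s in stats.items()}

rates = selection_rates(decisions)
baseline = max(rates.values())
for group, rate in rates.items():
    ratio = rate / baseline
    flag = "OK" if ratio >= 0.8 else "REVIEW"  # 80% rule of thumb, assumed threshold
    print(f"{group}: rate={rate:.2f}, ratio vs highest={ratio:.2f} -> {flag}")
```

In a real deployment, checks of this kind would be run routinely on live data and across clinically relevant subgroups, with divergent rates triggering human review of the model and its training data.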

Accountability and responsibility

As AI tools become more involved in decision-making processes, questions of accountability arise. Who is responsible if an AI-driven recommendation leads to incorrect therapy? Is it the pharmacist, the AI developer, or the healthcare institution?
