Are animal models still relevant to drug development?

A lab mouse sits on an experimenter’s hand. Rama, CC BY-SA 2.0 FR, via Wikimedia Commons


It is undeniable that animal models have long played a crucial role in biomedical research and development (R&D). Animals have been used as models for human conditions since at least Ancient Greece, and the pattern has endured: 94 of the 106 Nobel Prizes in Physiology or Medicine awarded in the 20th century relied on animal models.

Animal models are used as substitutes for humans to assess a drug’s safety and efficacy in the preclinical phases of drug development. Recent advances in gene editing technologies such as CRISPR-Cas9, a molecular tool capable of high-precision cutting and pasting of genes, have made genetically engineering animal models of specific diseases cheaper, faster, and significantly more accessible to researchers worldwide. Such progress could lower the cost of drug development while minimising the resources wasted on dead-end drug targets. However, following legislation signed by US President Joe Biden in December 2022, the Food and Drug Administration (FDA) no longer requires drugs to be tested on non-human animals before advancing to Phase I of clinical trials.

Previously, the FDA mandated that all drugs be evaluated in vivo before progressing to Phase I clinical trials, which involve tests on human volunteers. This stipulation was passed in 1938 following numerous catastrophes, such as the 1937 elixir sulphanilamide disaster, in which the distribution of a toxic drug that had not been tested on animals led to the deaths of 107 people. Policy-makers thought that animal testing could prevent such disasters from happening again. Later tragedies, like the thalidomide disaster, in which a drug meant to alleviate morning sickness in pregnant women caused severe birth defects such as limb malformations in their children, prompted similar regulatory responses elsewhere. Since then, animal testing has been a legal requirement in various countries, including the UK.

Yet in recent years, animal rights activists and members of the scientific community have expressed doubts about the usefulness, reliability, and ethics of animal models. Around 94% of drugs that progress to Phase I clinical trials fail in subsequent stages, often because they are deemed unsafe or no more effective than drugs already on the market. A review of 2,366 drugs concluded that animal tests predicted toxicity in humans only about half of the time: in other words, about as reliably as flipping a coin. Moreover, animal trials are between 1.5 and 30 times more expensive than in vitro testing. This apparent lack of predictive value, combined with the cost of animal tests, naturally calls into question the relevance of animal models in drug development.

Additionally, the use of animal models raises ethical concerns about the chronic psychophysical stress experienced by animals bred for research. The American Animal Welfare Act (AWA) and its 2002 amendments aim to limit animal suffering by integrating the 3Rs at all levels of research planning and execution: replace animal tests where a better method is available; reduce the number of animals used as much as possible; and refine experimental procedures to limit the pain caused to animals. Nevertheless, the AWA does not cover mice, rats, fish, and birds, which account for over 90% of animals used in research, and it is often considered insufficient, especially given the general distress experienced by laboratory animals. By comparison, the Animals (Scientific Procedures) Act (ASPA) in the United Kingdom covers all vertebrates as well as cephalopods. The British equivalent of the FDA, the Medicines and Healthcare products Regulatory Agency (MHRA), supports the use of animals in toxicology studies when no reliable alternatives are available, but also expects researchers to incorporate the 3Rs approach as much as possible.

If, following the change in FDA regulations, drug R&D in the US is to move away from animal testing, researchers must find and adopt non-animal models. One such technique involves using “organs on chips”. These are 3D microfluidic devices – very small volumes of fluid on a microchip – containing specific lab-grown human cells, which aim to recreate the biochemical, electrical, and mechanical properties of individual human organs in order to functionally replicate them. For example, liver-on-a-chip models have been used to study the pathophysiology of fatty liver disease, a serious condition that can progress to liver cancer if left untreated, as well as potential treatments for it. While organ-on-a-chip technology is promising, it still has shortcomings. Fully reproducing an entire functional organ is challenging, especially on such a small scale, and most current organs on chips can only model a limited subset of organ functions at once. Consequently, testing a drug’s safety and efficacy on such a device might not be representative of its true physiological effects. For instance, large-scale remodelling of an organ, such as extensive scarring, involves reorganisation of whole tissues and cannot be captured by organs on chips.

Organoids offer another possible alternative to animal models. These 3D “organs” are grown in vitro from stem cells (cells that can differentiate into any kind of specialised cell, such as cardiac muscle cells or white blood cells), which are cultured in specific media and then self-assemble into organoids. Organoids can be used in preclinical drug development to evaluate candidate drugs more reliably than 2D cell cultures alone, contributing to drug-target validation. Researchers have, for instance, successfully grown and studied norovirus, which causes gastroenteritis and can be fatal to the very young or the elderly, in intestinal organoids. However, just like organs on chips, organoids have notable limitations. Reproducing all the functions of an organ in such a model, and reflecting the heterogeneity of cell types in a naturally developed organ, are challenging endeavours. Organoids also cannot model interactions between multiple organ systems or with the immune system, nor the clearance of a drug from the body.

Alternatively, computational (in silico) methods have been developed to optimise the drug development process. These use simulation techniques, such as molecular modelling and virtual screening, to model the interaction between a candidate drug and its target. Virtual high-throughput screening (vHTS) combines the two, examining very large chemical libraries to assess whether compounds have the potential to interact with a drug target. Artificial intelligence has also contributed greatly to the development of in silico methods. For example, AlphaFold, an AI system, has predicted the structures of over 200 million proteins from their amino acid sequences. It is the first machine learning approach to achieve protein structure predictions that are accurate when compared with experimental data, and it has the potential to greatly widen rapid access to vHTS. However, despite being affordable and accessible, vHTS and AlphaFold ultimately rely on predictions and may not always be accurate in a physiological context. Moreover, the target structures used in such screens tend to be based on X-ray crystallography – a structural biology technique that uses X-ray diffraction to determine the molecular structure of a crystal. Crystal-based models do not accurately represent the state of biological targets, especially proteins, in live cells, because such targets usually exist in solution or bound within membranes in vivo.
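To make the virtual screening step more concrete, below is a minimal, hypothetical sketch of the kind of drug-likeness pre-filter often applied to a chemical library before the far more expensive docking or simulation stages of a vHTS pipeline. It assumes the open-source RDKit cheminformatics toolkit is installed; the three-compound "library" and the Lipinski-style cut-offs are illustrative toy values, not taken from any specific screening campaign.

```python
# Minimal sketch of a property-based pre-filter, the kind of step that often
# precedes docking in a virtual high-throughput screening (vHTS) pipeline.
# Assumptions: RDKit is installed, the SMILES strings below are a toy stand-in
# for a real chemical library, and the thresholds are illustrative defaults.
from rdkit import Chem
from rdkit.Chem import Descriptors

toy_library = {
    "aspirin": "CC(=O)OC1=CC=CC=C1C(=O)O",
    "caffeine": "CN1C=NC2=C1C(=O)N(C)C(=O)N2C",
    "cholesterol": "CC(C)CCCC(C)C1CCC2C1(CCC3C2CC=C4C3(CCC(C4)O)C)C",
}

def passes_lipinski(mol) -> bool:
    """Apply Lipinski rule-of-five style cut-offs for oral drug-likeness."""
    return (
        Descriptors.MolWt(mol) <= 500
        and Descriptors.MolLogP(mol) <= 5
        and Descriptors.NumHDonors(mol) <= 5
        and Descriptors.NumHAcceptors(mol) <= 10
    )

# Keep only compounds worth passing on to the costly docking stage.
candidates = []
for name, smiles in toy_library.items():
    mol = Chem.MolFromSmiles(smiles)
    if mol is not None and passes_lipinski(mol):
        candidates.append(name)

print(candidates)  # e.g. ['aspirin', 'caffeine']
```

In a real pipeline a filter of this sort would run over millions of compounds, and the survivors would then be docked against a target structure, for example one determined by crystallography or predicted by a tool such as AlphaFold.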

Although innovative in silico and in vitro methods seem promising, they are limited by our current knowledge. Specifically, chemical libraries and molecular modelling are derived from our understanding of the structures of known drug targets, and using organoids or organs on chips would not enable researchers to identify new ones. This is especially relevant to neurodegenerative diseases: we have yet to uncover many key details about how the brain functions, so current knowledge alone could hardly be used to develop drugs against such neurological disorders.

This is where animal models can help, allowing researchers to link cellular defects to disease phenotypes. For example, if a certain protein mutation were found to be associated with a particular disease in humans, inducing the mutation in an animal model could provide evidence for a causal link between mutation and disease. This then gives researchers a possible trail to pursue to develop a drug, based on the identification of the mutated protein as a target.

Despite the FDA’s new regulations, animal models will likely continue to be used for drug-target identification and proof-of-concept experiments, especially as the FDA retains the right to demand animal testing on a case-by-case basis. Nonetheless, the agency has taken a pivotal first step towards reducing the number of animals used in research, addressing both the US’s reliance on animal models and the ethical implications of animal research. We can hope that such progress will inspire other countries to reduce their dependence on animal models. However, the alternatives still need improvement before the drug discovery process can rely on them exclusively.
