Product Development and Drug Testing

To predict toxicity, corrosivity, and other safety variables as well as the effectiveness of new products, traditional testing of chemicals, consumer products, medical devices, and new drugs has involved the use of animals. The idea that all new drugs and products should be tested for safety in animal studies before being approved for human testing is based on the assumption that animals will respond to drug tests like “little humans.”

However, the past decades have shown that the opposite is true—animals respond differently and unpredictably. Animal-based (in vivo) toxicity testing, which causes severe suffering, distress, and death for the animals used, is typically performed without anesthesia or analgesics and is of questionable scientific value at best. Dr. Gerhard Zbinden, one of the world's leading toxicologists, once described a standard in vivo test as little more than “a ritual mass execution of animals.” The testing of just one substance alone, be it a potential drug or toxic chemical, can involve using up to 800 animals and cost over $6 million.[1]

Although seldom mentioned, essentially all of the in vivo animal safety and toxicity tests in use today were never validated and would be unlikely to meet current validation requirements. And while dependence on outmoded and imprecise animal toxicity tests has actually hindered enforcement of both consumer and environmental protection laws, federal agencies continue to accept data from animal tests—all the while recognizing the dangers of applying these results to humans. Even with its inherent limitations and dangers, animal testing has become the “gold standard,” despite how tarnished and broken that standard is.

Common product safety tests

Toxicity—LD50 test:

The traditional LD50 (lethal dose 50 percent) test forced animals, often rats and mice, to ingest chemicals to determine the dose that resulted in the death of 50 percent of the animals. The animals were, for example, force-fed by a tube inserted down the esophagus into the stomach, causing severe discomfort and extreme, unrelenting pain. A standard test would use 60-200 animals, generally without anesthesia or pain relief, out of concern that these would alter test results. The LD50 was also used to measure the toxicity of gases and powders (the inhalation LD50), irritancy and internal poisoning due to skin exposure (the dermal LD50), and toxicity of substances injected directly into animal tissue or body cavities (the injectable LD50).
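
For readers unfamiliar with the statistic itself, the sketch below shows, using entirely hypothetical dose and mortality figures, how an LD50 is estimated by fitting a sigmoid dose-response curve. The data, the logistic model, and the use of SciPy here are illustrative assumptions, not any regulatory protocol.

```python
# Illustrative sketch: estimating an LD50 by fitting a logistic
# dose-response curve to hypothetical mortality data.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical dose groups (mg/kg) and observed fraction of deaths.
doses = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])
mortality = np.array([0.0, 0.1, 0.4, 0.8, 1.0])

def logistic(log_dose, log_ld50, slope):
    """Two-parameter logistic curve on a log-dose scale."""
    return 1.0 / (1.0 + np.exp(-slope * (log_dose - log_ld50)))

# Fit on log10(dose); the LD50 is the dose producing 50% mortality.
params, _ = curve_fit(logistic, np.log10(doses), mortality, p0=[2.0, 1.0])
ld50 = 10 ** params[0]
print(f"Estimated LD50: {ld50:.0f} mg/kg")
```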

In 1985, the Pharmaceutical Manufacturers’ Association came out publicly against the traditional LD50 test.[2] Statements from the U.S. Environmental Protection Agency (EPA), the Food and Drug Administration (FDA), and the Consumer Product Safety Commission followed, indicating that an alternative such as the Limit Test—a modified LD50 that uses only 6-10 animals—would be acceptable for satisfying regulatory requirements. After decades of criticism and documentation of the failures of the traditional LD50 test, its use as a worldwide standard has finally ended. This long-overdue response was, however, delayed for several years by the refusal of one U.S. regulatory agency—the EPA—to end its use, in spite of the myriad data indicating its lack of necessity.

Although traditional LD50 tests for acute toxicity may have been replaced with alternative methods, many of those methods still involve the lethal use of animals, even if in reduced numbers. For example, in acute toxicity testing the “Up and Down Procedure” may be used, in which a small number of animals are dosed one at a time and the dose given to each successive animal is increased or decreased based on whether the previous animal survived. While this test uses fewer animals than the traditional LD50 test, the animals still experience immense pain (along with convulsions, seizures, and loss of motor skills) and eventual death. For chronic toxicity testing, also called repeated dose toxicity, animals are still heavily used to evaluate the long-term effects of toxins, particularly on various organ systems, through oral, dermal, and inhalation repeated dose studies. These studies last between 28 and 90 days and usually involve rodents, although dogs may be used as well. At the end of the study, the animals are killed so researchers can look for signs of organ or body system damage.
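
The “up and down” logic itself is a simple staircase rule, shown below as an in silico simulation that doses no animals at all. The dose ladder, the number of simulated subjects, and the toy survival model are assumptions made purely for illustration.

```python
# Illustrative in-silico sketch of the up-and-down staircase logic:
# each simulated "subject" is dosed one at a time, and the next dose
# moves one step down after a death or one step up after survival.
# The dose ladder and the toy toxicity model are assumed for illustration.
import random

dose_ladder = [17.5, 55, 175, 550, 2000]   # mg/kg steps (assumed)
true_ld50 = 300.0                          # hypothetical "true" value

def dies(dose):
    """Toy stochastic outcome: higher dose -> higher chance of death."""
    p_death = dose / (dose + true_ld50)
    return random.random() < p_death

level = 1                 # start low on the ladder
outcomes = []
for _ in range(8):        # a small, fixed number of simulated subjects
    dose = dose_ladder[level]
    died = dies(dose)
    outcomes.append((dose, died))
    # Staircase rule: step down after a death, up after survival.
    level = max(0, level - 1) if died else min(len(dose_ladder) - 1, level + 1)

for dose, died in outcomes:
    print(f"dose={dose:>6} mg/kg -> {'death' if died else 'survived'}")
```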

Eye irritancy—Draize test:

The Draize test measures the eye irritancy of chemicals and other products by dropping concentrated amounts of a test substance into an animal’s eye (often albino rabbits, who are docile and inexpensive) and then assessing the eye’s reactions using a subjective numerical score to indicate the level of eye damage and injury—i.e., degree of swelling, redness, ulceration, etc. In addition to redness and ulcers, rabbits also experience bleeding and blindness in these experiments. In most instances, the conscious animals are immobilized in full-body restraint stocks and remain unanesthetized for up to 14 days of evaluation. Interpretation of the Draize test is based on the experimenter’s subjective appraisal of eye damage, and results can vary significantly between testing laboratories.

The use of Draize test data to predict human ophthalmic risk is imprecise because a rabbit’s eye differs from a human eye in both structure and pH. For instance, rabbits produce fewer tears than humans do, so their eyes cannot easily flush out test chemicals, and the rabbit cornea is substantially thinner, and therefore more easily damaged, than the human cornea. The Draize test is not consistently reproducible and thus cannot reliably predict human risks. At the same time, the poor quality of Draize data ironically contributes to the difficulty of replacing it: because the existing in vivo data are so seriously compromised, potentially valid alternatives have technically failed validation efforts when compared against bad Draize data.

Today a number of in vitro replacements are widely used in-house by industry, eliminating nearly all need for the classical Draize test. Alternatives have also been accepted on a case-by-case basis by several regulatory agencies outside the U.S. The Consumer Product Safety Commission drew up new and more humane guidelines for conducting the Draize, which included the use of local anesthesia, a reduction in the number of animals used, and the elimination of testing of known corrosives and irritants. With adequate commitment and funding, the development of non-animal alternatives in this area promises less expensive and more reliable risk assessment procedures. Alternatives to the Draize eye test include the Bovine Corneal Opacity and Permeability (BCOP) test and the Isolated Chicken Eye (ICE) test method; both can be used to determine severe/corrosive categories and to test for eye safety, identifying products that may cause severe or permanent eye damage. However, these tests still rely on animals (albeit eyes obtained from slaughterhouses), and live animals are still required to confirm negative results.

Skin irritation, corrosion, sensitization, and absorption tests:

Tests for skin irritation (the level of damage a substance causes to the skin) and corrosivity (the potential of a substance to cause irreversible skin damage) are typically conducted on rabbits using the classic Draize skin test, the lesser-known cousin of its ocular counterpart. The test is done by placing a chemical or chemical mixture on a shaved area of the animal’s skin; the skin may also be prepared by removing layers to create abrasions. These tests cause severe pain to the animal and can result in ulcers, bleeding, bloody scabs, and discoloration of the skin.

Skin corrosivity and irritation can be easily measured using in vitro systems based on human cell and tissue cultures, such as EPISKIN and EpiDerm, both of which measure cell viability as an endpoint. EPISKIN and EpiDerm have both been approved as complete replacements for animal tests by the European Centre for the Validation of Alternative Methods (ECVAM); however, their U.S. counterpart, the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), still requires that they be used with animal tests. ICCVAM is an interagency committee of the U.S. government that coordinates new and revised safety testing methods, including “alternative test methods that may reduce, refine, or replace the use of animals.”[3]
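
Because these reconstructed-skin models report a single quantitative endpoint, relative cell viability, the classification logic is simple. The sketch below shows how such a readout might be turned into a corrosive versus non-corrosive call; the exposure times and cutoff percentages are illustrative assumptions, not the validated protocol.

```python
# Minimal sketch: classifying a substance from reconstructed-skin
# viability readouts (relative to untreated controls). The exposure
# times and cutoff values below are illustrative assumptions only.
def classify_corrosivity(viability_3min: float, viability_60min: float) -> str:
    """Viability values are percentages relative to the negative control."""
    if viability_3min < 50.0:
        return "corrosive"
    if viability_60min < 15.0:
        return "corrosive"
    return "non-corrosive"

# Hypothetical readouts for two test substances.
print(classify_corrosivity(viability_3min=22.0, viability_60min=5.0))   # corrosive
print(classify_corrosivity(viability_3min=88.0, viability_60min=71.0))  # non-corrosive
```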

Skin sensitization tests are used to determine whether a substance causes an allergic reaction; they were typically performed on guinea pigs. Most skin sensitization testing now uses the Murine Local Lymph Node Assay (LLNA). The LLNA still requires the use of animals, although the number of animals used and the amount of pain involved are reduced compared with earlier sensitization testing. The current procedure involves applying test chemicals to the surface of the ears of mice. While the LLNA can be completed more quickly and yields more accurate dosage information, the mice are still killed when the testing is completed. Dermal penetration, or skin absorption, testing analyzes a chemical’s ability to be absorbed through the skin and into the bloodstream. In these tests, rats are typically used and then killed afterward to determine the results.

Mutagenicity and carcinogenicity:

Mutagenicity and carcinogenicity tests examine the potential genetic effects of pharmaceuticals, industrial chemicals, and consumer products, classifying chemicals as causes of cell mutations or as carcinogens. Rats and mice are commonly used in these studies and killed afterward for examination.

It is now widely accepted by regulatory officials and toxicologists that screening for mutagenic potential, along with cell and DNA damage, can be done via in vitro methods such as the Ames test, the In Vitro Cell Line Mutation Test, or the In Vitro Chromosomal Aberration Test.
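
As an illustration of how simple such an in vitro readout can be, the sketch below applies a commonly cited screening heuristic to hypothetical Ames-style data: a chemical is flagged when revertant colony counts rise roughly two-fold above the solvent control. The counts and the cutoff are assumptions for illustration only.

```python
# Illustrative sketch of an Ames-style mutagenicity call: compare
# revertant colony counts at each dose against the solvent control.
# The counts and the two-fold cutoff are assumed for illustration.
import statistics

control_plates = [18, 22, 20]         # revertant colonies, solvent control
treated_plates = {                    # dose (ug/plate) -> colony counts
    10:   [25, 23, 27],
    100:  [48, 52, 45],
    1000: [110, 98, 120],
}

control_mean = statistics.mean(control_plates)
for dose, counts in treated_plates.items():
    fold = statistics.mean(counts) / control_mean
    flag = "positive" if fold >= 2.0 else "negative"
    print(f"{dose:>5} ug/plate: {fold:.1f}-fold over control -> {flag}")
```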

Toxicokinetics and ADME:

In order to determine how a toxic substance will affect the body, a series of tests on the substance’s absorption, distribution, metabolism, and excretion (ADME) is carried out. These studies often involve rats and mice, who are given the substance through intravenous injection, inhalation, topical application to the skin, or forced feeding. Actual systemic toxicity, however, depends on several variables: the external dose, the rate of exposure, ADME, and the intrinsic characteristics of the test material. All of these can be identified and modeled using computer and in vitro approaches. In vitro methods are especially useful for studies on the biological activity and mechanisms of toxic response of chemicals. Due to interspecies differences in metabolic enzymes, human-based models are vital for accurate predictions.
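
As a hint of what such computer approaches can look like, the sketch below implements a basic one-compartment toxicokinetic model with first-order absorption and elimination. All parameter values are hypothetical; in practice, human-derived constants obtained from in vitro studies would be substituted.

```python
# Minimal sketch: a one-compartment toxicokinetic model with first-order
# absorption and elimination. All parameter values are hypothetical.
import math

dose_mg = 100.0        # single oral dose
bioavailability = 0.6  # fraction absorbed (F)
vd_l = 40.0            # volume of distribution (L)
ka = 1.2               # absorption rate constant (1/h)
ke = 0.2               # elimination rate constant (1/h)

def concentration(t_hours: float) -> float:
    """Plasma concentration (mg/L) at time t after a single oral dose."""
    coeff = (bioavailability * dose_mg * ka) / (vd_l * (ka - ke))
    return coeff * (math.exp(-ke * t_hours) - math.exp(-ka * t_hours))

for t in (0.5, 1, 2, 4, 8, 12, 24):
    print(f"t = {t:>4} h : C = {concentration(t):.2f} mg/L")
```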

Metabolic toxicity:

Some chemicals and drugs are essentially nontoxic but become hazardous once ingested and metabolized by the body. For this reason, in vitro systems utilizing human cell lines, genetically engineered human cells, and subcellular components, as well as several computer-based systems (METEOR, Hazard Expert, Metabol Expert, COMPACT), are being used to detect metabolism-mediated toxicity. Because of the enormous species differences in metabolic parameters (especially between humans and rodents—the animals most frequently used for such tests), it is critical that such studies utilize human-based in vitro techniques and human data for computer simulations. This is one area of toxicology in which animal models are widely acknowledged by toxicologists to be inappropriate.

Pyrogen testing:

This test is designed to identify potential bacterial contamination of injectable products, implants, medical devices, dialysis machines, cellular therapies, recombinant proteins, and IV products. For over seventy years, rabbits have been used in pyrogen testing, injected with test materials to check for reactions to contamination, a practice that has killed millions of rabbits.

There are alternatives to animal-based pyrogen testing, such as the Limulus amoebocyte lysate (LAL) test, which uses amoebocytes obtained from horseshoe crabs to demonstrate an immune response to pyrogens. Several other methods are currently under evaluation, some of which have already been validated for use in the European Union.

Phototoxicity:

Although it took seven years to complete the validation/approval/adoption process, there is now an in vitro replacement alternative for identifying phototoxic potential (i.e., drugs and chemicals that become toxic when the people exposed to them are also exposed to sunlight). The 3T3 Neutral Red Uptake Phototoxicity Test (3T3 NRU PT) uses a mouse-derived cell line to measure the degree of cellular damage (cytotoxicity) a test substance causes in the presence and absence of non-cytotoxic exposure to UVA light. EpiDerm-based protocols have also been shown to provide additional phototoxicity alternatives.
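
The decision logic of the test is a comparison of cytotoxic potency with and without UVA exposure, often summarized as a photo-irritation factor. The sketch below illustrates that comparison with hypothetical IC50 values; the cutoffs used to interpret the ratio are included only as a common convention, not a definitive criterion.

```python
# Sketch of the comparison at the heart of the 3T3 NRU PT: the same
# substance is tested with and without UVA, and the shift in the
# concentration that kills half the cells (IC50) is compared.
# The IC50 values and cutoff interpretation below are illustrative.
def photo_irritation_factor(ic50_dark: float, ic50_uva: float) -> float:
    """Ratio of cytotoxic potency without vs. with UVA exposure."""
    return ic50_dark / ic50_uva

pif = photo_irritation_factor(ic50_dark=120.0, ic50_uva=8.0)  # ug/mL, hypothetical
if pif >= 5:
    verdict = "phototoxic"
elif pif >= 2:
    verdict = "probably phototoxic"
else:
    verdict = "not phototoxic"
print(f"PIF = {pif:.1f} -> {verdict}")
```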

Embryotoxicity:

Embryotoxicity refers to the toxic effects of a substance on the development of an embryo. In these studies, pregnant animals (rats, mice, rabbits, and sometimes amphibians) are killed just prior to delivery, and the fetuses are examined for any sign of toxic effects from the test substance.

There are currently more than a dozen in vitro methods representing various aspects of the reproductive process. Mammalian cell lines, especially stem cells, are used to create assays that are directly predictive of human toxic risks. Using rodents for such studies is especially inappropriate due to the major physiological, biochemical, and structural differences between human and rodent placentas. The Embryonic Stem Cell Test (EST) has been validated by ECVAM and accepted in the European Union for the identification of embryotoxicants. Of the currently available alternatives, it is the only one suitable for high-throughput screening, and it avoids killing large numbers of pregnant animals. It also identifies three unique endpoints representing the principal reproductive toxicological mechanisms. The Frog Embryo Teratogenesis Assay: Xenopus (FETAX), a potential alternative for testing possible embryotoxicity, is still under development. Finally, ECVAM has validated several in vitro methods (the embryonic stem cell test, the micromass test, and the whole rat embryo culture assay) that cannot fully replace animal tests but can help reduce them.

Endocrine disruptors:

Endocrine disruptors are a class of chemicals believed to have potentially toxic effects on human and wildlife reproductive systems. In response to these concerns, the EPA introduced the Endocrine Disruptor Screening Program to screen pesticides and environmental contaminants for their potential to affect the endocrine systems of humans and wildlife.[4] In an effort to improve efficiency and “further reduce the expected requirements for animal use in the screening of potential endocrine disruptors,” ICCVAM recently recommended an in vitro test method, LUMI-CELL®, to be used as an initial screen to identify substances that may act as endocrine disruptors.[5]

Ecotoxicity:

Used to determine the negative effects of chemicals entering the environment, these tests measure acute toxicity to fish in a 96-hour LC50 (lethal concentration 50 percent) test, and chronic toxicity over periods of up to 200 days, monitoring growth, spawning, hatching, and mortality. Alternatives are available to predict toxic effects on aquatic organisms, including fish and mammalian cell assays, computational programs, and tiered testing schemes.

Toxicogenomics:

If toxicology is to evolve beyond its primitive beginnings in quantifying the mass poisoning of various species of animals, the final high-tech destination may be the field of toxicogenomics and its sister disciplines of proteomics and metabonomics, which examine how toxic substances interact with human genes, proteins, and metabolic activities, respectively. The ultimate goal of toxicogenomics is a single DNA chip, or a series of them, that would provide almost immediate toxicity profiles of all test substances. Such chips can provide vast amounts of data on gene expression in response to specific conditions; a single chip can replace the information derived from 20,000 individual experiments.
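
At its simplest, the readout of such an experiment is a per-gene change in expression between exposed and control samples, as in the sketch below; the gene names are real human genes, but the expression values are made up purely to show the shape of the output.

```python
# Illustrative sketch: the basic readout of a toxicogenomic experiment
# is a per-gene change in expression between exposed and control samples.
# Expression values here are invented for illustration.
import numpy as np

genes = ["CYP1A1", "HSPA1A", "GADD45A", "ACTB"]
control = np.array([ 50.0, 200.0, 120.0, 1000.0])   # mean expression, control chips
exposed = np.array([400.0, 650.0, 130.0, 1010.0])   # mean expression, exposed chips

log2_fold_change = np.log2(exposed / control)
for gene, lfc in zip(genes, log2_fold_change):
    marker = "*" if abs(lfc) >= 1.0 else " "        # flag changes of 2-fold or more
    print(f"{gene:<8} log2FC = {lfc:+.2f} {marker}")
```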

It is likely that, once fully developed, toxicogenomics will provide the scientific proof that animal-based toxicity testing has little or no relevance to human risk assessment. Use of such chips will become a standard part of any future validation process for in vitro or in vivo safety tests and animal models intended for use in basic biomedical research.

Conclusion

Overall, animal testing is expensive, time-consuming, unpredictable, and not easily reproducible from one lab to another (i.e., results lack reliability). Because of their expense, cumbersomeness, and scientific limitations, animal tests have not adequately addressed the vast number of chemicals already in commercial use, nor the estimated 700 new ones introduced every year.[6] According to Dr. Thomas Hartung, director of the Johns Hopkins University Center for Alternatives to Animal Testing, out of “some 100,000 chemicals in consumer products,…only about 5,000 have had significant testing so far because no one has the capacity for experiments using standard methods involving animals.”[7] While all new products must be tested for safety, using animals to assess human health risks is inefficient and unreliable, and has limited—if any—predictive value for what will happen in humans.

Thankfully, private industry and a growing number of federal agencies are now acknowledging the superiority of alternative methods for safety testing. While alternative methods have not received the full scientific, industry, and government support that they deserve, progress is being made, as the development of alternative techniques becomes more widely recognized as a legitimate and important area of basic and applied scientific investigation.

For example, one traditional criticism of in vitro replacement alternatives was their inability to mimic or reproduce the consequences of long-term, chronic human exposure to toxic substances. This is no longer the case. As cell culture technology has evolved, it is now possible to maintain in vitro systems for much longer periods of time—weeks or months. It is not necessary to maintain such cultures for years, as is done with some typical chronic animal tests. Long-term cell and tissue culture techniques now allow in vitro studies of the effects of chronic, repeated exposure to toxic substances, as well as of recovery from such exposure, in a shorter period of time.

Non-animal methods involving in vitro research, computer modeling, virtual drug trials, microdosing technologies, and human cell and tissue methods including human skin models and “human-on-a-chip” technology are superior on all fronts: they are more efficient, accurate, and cost-effective than the cruel animal experiments they replace.


[1] The Baltimore Sun. (2010, August 27). Alternatives to Animal Testing Gaining Ground. The Baltimore Sun (MD) Via Acquire Media NewsEdge.

[2] Jennings, J. (1985, July 5). The LD50. JAMA, 254(1), 56.

[3] National Toxicology Program. (n.d.). NICEATM-ICCVAM.

[4] National Toxicology Program. (2011). Validation of the BG1Luc Estrogen Receptor Transcriptional Activation Test Method, Peer Review Panel.

[5] National Toxicology Program. (2011). Validation of the BG1Luc Estrogen Receptor Transcriptional Activation Test Method, Overview.

[6] Lance, J. (2009, February 27). 700 New Chemicals Introduced Each Year Not Tested for Toxicity.

[7] The Baltimore Sun. (2010, August 27). Alternatives to Animal Testing Gaining Ground. The Baltimore Sun (MD) Via Acquire Media NewsEdge.
