Principles of Virology, Volume 2. S. Jane Flint

is crucial to mitigating the impact of an outbreak. One could argue that the development of worldwide surveillance programs and information sharing have had as profound an impact on limiting viral infections as antiviral medications and vaccines. The U.S. Centers for Disease Control and Prevention (CDC) was established in 1946 after World War II, with a primary mission to prevent malaria from spreading across the country. The scope of the CDC quickly expanded, and this institution is now a central repository for information and biospecimens available to epidemiologists; it also offers educational tools to foster awareness and ensure public safety. The World Health Organization (WHO), founded in 1948 as an international agency of the United Nations, is charged with establishing priorities and guidelines for the worldwide eradication of viral agents. The WHO provides support to countries that may not have the resources to combat infectious diseases, and coordinates results from a global network of participating laboratories. While the WHO provides coordination, the experimental work is performed in hundreds of laboratories throughout the world, often in remote locations; these laboratories process samples and relay information back to the WHO. These WHO-certified laboratories adhere to stringent standards to ensure consistency of methods and interpretations. The laboratories conduct field surveillance using wild and sentinel animals, and perform periodic blood screening for signs of infection or immunity (Box 1.8). The chief successes of such global-surveillance efforts to date include the eradications of smallpox virus and rinderpest virus, the latter of which causes disease in agricultural animals, such as cattle and sheep.

      METHODS

       The use of statistics in virology

      When studying viral infections in hosts, scientists do not always obtain results that are so clear and obvious that everyone agrees with the conclusions. Often the effects are subtle, or the data are highly variable from sample to sample or from study to study (such variability is sometimes referred to as “noise”). Such ambiguity is especially common in epidemiological studies, given the large number of parameters and potential outcomes. How do you know whether the data that you generated, or are reading about in a paper, are significant?

      Statistical methods, properly employed, provide the common language of critical analysis to determine whether differences observed between or among groups are significant. Unfortunately, surveys of articles published in scientific journals indicate that statistical errors are common, making it even more difficult for the reader to interpret results. In fact, the term “significant difference” may be one of the most misused phrases in scientific papers, because the actual statistical support for the statement is often absent or incorrectly obtained. While a detailed presentation of basic statistical considerations is beyond the scope of this text, some guiding principles are offered.

      It is essential to consider experimental design carefully before going to the bench or to the field. A fundamental challenge in study design is to predict correctly the number of observations required to detect a meaningful difference reliably. The significance level is defined as the probability of mistakenly reporting that a difference is meaningful; by convention, this threshold is set at 0.05 (5%; see the table below for hypothetical data). Scientists do not usually refer to experimental outcomes as “true” or “false” but rather use quantitative approaches to provide a sense of the significance of the difference between two data sets (e.g., experimental versus control). An important concept is statistical power, the probability of detecting a difference that truly exists. In the simplest case, power can be increased by using a larger sample size (see the table). Even when results seem black and white, too few animals (or replicates) may be insufficient for drawing a statistically meaningful conclusion.
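      To make the relationship between sample size and power concrete, the following sketch estimates power by simulation: many hypothetical experiments are generated, and the fraction in which the difference reaches significance (P < 0.05 by Fisher’s exact test, described in the table footnote below) is recorded. The sketch is illustrative only and is not from the text; the assumed “true” infection rates (100% of control animals, 10% of experimental animals) and the use of Python with NumPy and SciPy are choices made here for the example.

# Hypothetical power-by-simulation sketch (the infection rates and tooling are
# assumptions, not from the text): estimate how often a real difference in
# infection rates would be declared significant at each group size.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)
p_control, p_experimental = 1.0, 0.1   # assumed "true" infection rates
n_simulations = 2000

for n in (3, 4, 5, 6, 7, 8):
    significant = 0
    for _ in range(n_simulations):
        infected_control = rng.binomial(n, p_control)
        infected_experimental = rng.binomial(n, p_experimental)
        # Rows: control, experimental; columns: infected, uninfected.
        table = [[infected_control, n - infected_control],
                 [infected_experimental, n - infected_experimental]]
        _, p = fisher_exact(table)          # two-sided by default
        if p < 0.05:
            significant += 1
    # Power = fraction of simulated experiments reaching significance.
    print(f"n = {n} per group: estimated power = {significant / n_simulations:.2f}")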

       P values for the differences in infection rates between experimental and control groups (a)

P value for indicated group (b), for three hypothetical outcomes:
  (1) All control animals infected and no experimental animals infected
  (2) All control animals and one experimental animal infected, or one control animal uninfected and no experimental animal infected
  (3) One control animal uninfected and one experimental animal infected

No. of animals per group    Outcome (1)    Outcome (2)    Outcome (3)
3                           0.1            0.4            1.0
4                           0.03           0.1            0.5
5                           0.008          0.05           0.2
6                           0.002          0.02           0.08
7                           <0.001         0.005          0.03
8                           <0.001         0.001          0.01

      (a) Data from Richardson BA, Overbaugh J. 2005. J Virol 79:669–676.

      (b) Determined by Fisher’s exact test, using a two-sided hypothesis test with the significance level fixed at 0.05.

      Fisher’s exact test is used because it is appropriate for experiments with small numbers of observations.
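      As a concrete illustration, values like those in the first outcome column of the table (a clean split in which every control animal is infected and no experimental animal is) can be reproduced with any implementation of Fisher’s exact test. The short sketch below uses Python with SciPy, an assumption made here rather than part of the original text; note also that software packages differ slightly in how they define the two-sided P value, so entries in the other columns may not match exactly.

# Hypothetical sketch: reproduce the table's first outcome column with
# Fisher's exact test (SciPy is assumed; it is not cited in the text).
from scipy.stats import fisher_exact

for n in (3, 4, 5, 6, 7, 8):
    # Rows: control, experimental; columns: infected, uninfected.
    # All n control animals infected, no experimental animals infected.
    table = [[n, 0],
             [0, n]]
    _, p = fisher_exact(table, alternative="two-sided")
    print(f"{n} animals per group: two-sided P = {p:.3g}")
    # Prints 0.1, 0.0286, 0.00794, 0.00216, 0.000583, 0.000155, matching the
    # rounded values 0.1, 0.03, 0.008, 0.002, <0.001, <0.001 in the table.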

      It is essential to include a detailed description of how statistical analyses were performed in all communications linked with the data. Benjamin Disraeli, a 19th-century British prime minister, once said, “There are three kinds of lies: lies, damned lies, and statistics.” Indeed, a gullible reader may be persuaded that a certain set of data is significant, but this conclusion depends on the stringency and appropriateness of the tests that were applied, as well as the data points included in the analysis.

      While this text cannot define which tests are applicable for which assays, we can make a few strong suggestions. First, statistics should not be considered an afterthought or a painful process that one carries out after putting data together for a publication. Reliable studies that stand the test of time have considered statistics throughout the scientific process, and good statistics are essential for good study design. Second, be wary of reports in which an investigator inappropriately influences the statistical analyses to produce a “statistically significant” result. This process, sometimes referred to as “p-hacking,” can result in false positives, or in differences that reach statistical significance yet have no biological importance. Finally, while it is true that the field can be complex, most of the tests used by virologists are reasonably straightforward. Computer programs such as Excel and GraphPad have made the calculations easy, but you need to know which tests to apply. Fortunately, there are excellent books available that make statistics logical and accessible (e.g., Intuitive Biostatistics, by Harvey Motulsky). For more complex data, study design issues, and analyses, consultation with a statistician may be required.

       Motulsky H. 2013. Intuitive Biostatistics: A Nonmathematical Guide to Statistical Thinking, 3rd ed. Oxford University Press, Oxford, United Kingdom.
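      To see why the “p-hacking” caution above matters, consider the following minimal simulation sketch (illustrative only and not from the text; the choice of Python with NumPy and SciPy, the t test, and the group sizes are assumptions). When there is no real difference between groups but an investigator examines ten endpoints and reports only the smallest P value, a “significant” result appears far more often than the nominal 5% of the time.

# Hypothetical sketch: cherry-picking among multiple analyses inflates the
# false-positive rate even when no real difference exists.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_experiments = 2000
n_looks = 10            # e.g., ten endpoints or subgroups examined per study
false_positives = 0

for _ in range(n_experiments):
    # Ten independent comparisons, both groups drawn from the same distribution.
    p_values = [
        ttest_ind(rng.normal(size=8), rng.normal(size=8)).pvalue
        for _ in range(n_looks)
    ]
    if min(p_values) < 0.05:    # report only the "best" result
        false_positives += 1

# Expected rate is roughly 1 - 0.95**10, or about 0.40, not the nominal 0.05.
print(f"False-positive rate when cherry-picking: {false_positives / n_experiments:.2f}")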

      BACKGROUND

       Descriptive epidemiology and the discovery of human immunodeficiency virus