Fluctuation Test
The fluctuation test was invented by Luria and Delbrück in 1943 to determine whether bacterial mutants arise before any selection for their presence or only after the bacteria are subjected to selection (1). It was enormously influential: by showing that mutations arise randomly before selection, it provided strong evidence that bacteria and viruses are normal organisms possessing heritable genetic determinants. It thus opened the way for the genetic analysis of bacteria and their viruses and led eventually to modern molecular biology.

Luria and Delbrück measured the number of mutants resistant to bacteriophage T1 in a large number of replicate cultures of Escherichia coli. They reasoned as follows. If mutants occur only after the culture is exposed to the phage (i.e., in response to selection), then little variation should occur among cultures in the number of mutants; in fact, the distribution would be Poisson, with the variance equal to the mean. However, if mutants arise at random during nonselective growth, the probability of a mutation is constant per cell per generation, but the consequence of that mutation depends on when during the growth of the population it occurs. In a binary dividing population, a cell that sustains a mutation gives rise to a clone of identical descendants, each of which is a mutant. Thus a mutation during the early generations gives rise to a large clone of mutant cells, whereas a late mutation gives rise to only a few. Among a large set of identical cultures of dividing cells, the few cultures in which the mutation happened early (the jackpots) have a large number of mutants, whereas the majority of cultures have none or only a few. The predicted distribution, called the Luria–Delbrück distribution, therefore has a variance much larger than that of the Poisson distribution. This is what Luria and Delbrück observed.
Their test is known as “the fluctuation test” because it measures the degree of fluctuation in the number of mutants found in replicate cultures.
Interest in the fluctuation test has been revived recently by the controversy surrounding the phenomenon known as “directed” or “adaptive” mutation (2). By obtaining the highly variant distribution, Luria and Delbrück proved that mutations occur before selection. Because the selection they used was lethal, however, they did not prove the converse, that mutations cannot also arise after selection. Indeed, when the selection is not lethal, mutations have been found to arise in response to selection. Part, but not all, of the evidence for postselection mutation is that in a fluctuation test the distribution of the number of mutants per culture deviates from the Luria–Delbrück toward the Poisson distribution. This has inspired new models describing the distribution of mutant numbers when one or another of the underlying assumptions of the Luria–Delbrück analysis is not met (2, 3).
The fluctuation test is also useful for determining mutation rates during nonselective growth. The computation of mutation rates, first carried out by Luria and Delbrück themselves, has since been refined, but all commonly applied techniques still involve assumptions and compromises. Nonetheless, the power of the fluctuation test is that it yields a true mutation rate, i.e., the number of mutational events per cell division, rather than a mutant frequency, which can be dramatically skewed by jackpots.
The fluctuation test consists of a large number of identical cultures, each inoculated with few enough cells to ensure that no preexisting mutants are present. These cultures are allowed to grow, achieving at least a thousandfold increase in cell number. Then mutants are selected, usually by plating each entire culture onto a selective agar medium. After a suitable time to allow the mutant clones to grow, the resulting colonies are counted. The parameter used for calculating the mutation rate is m, the mean number of mutations (not mutants) per culture. This is, perhaps, a difficult concept to grasp because m is a function both of the mutation rate and of the number of cells at risk for mutation (thus one requirement for a proper fluctuation test is that every culture contain the same number of cells). To date, however, the only workable methods for determining the mutation rate from fluctuation-test results are based on first estimating m. The value of m is then usually divided by 2N, twice the final number of cells in the culture, to obtain the mutation rate as mutations per cell per generation (because a culture of N cells has contained a total of 2N cells during its entire history). Some researchers instead attempt to correct for asynchronous divisions by dividing m by N/ln 2. These calculations assume that the initial number of cells is trivial compared with the final number, that the proportion of mutants is always small, that mutants grow at the same rate as the wild type, and that reverse mutations are negligible.
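As a purely numerical illustration of these conversions (the values of m and N below are hypothetical, not from any experiment described here):

```python
import math

m = 2.3            # hypothetical mean number of mutations per culture
N = 2e8            # hypothetical final number of cells per culture

# Mutation rate as mutations per cell per generation, using the 2N convention
mu = m / (2 * N)

# Alternative correction for asynchronous divisions: divide m by N/ln 2
mu_async = m / (N / math.log(2))

print(mu)          # ~5.75e-09
print(mu_async)    # ~7.97e-09
```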
The data obtained from a fluctuation test are the number of mutants xi in each culture. From the number of mutants, there are several commonly used methods to obtain estimates of m (often called estimators), of which five are described here.
1. The P0 method (1). If the probability of mutation is constant per cell per generation, then the number of mutations per culture has a Poisson distribution (although the number of mutants has a Luria–Delbrück distribution). The probability of no mutations (and thus no mutants), P0, is e^(−m), the first term of the Poisson distribution. Thus the P0 estimate of m is obtained from the proportion of cultures that have no mutants as m = −ln P0.
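A minimal sketch of this calculation in Python (the mutant counts below are invented for illustration):

```python
import math

def p0_estimate(mutant_counts):
    """Estimate m from the fraction of cultures with no mutants: m = -ln(P0)."""
    p0 = sum(1 for x in mutant_counts if x == 0) / len(mutant_counts)
    if p0 == 0:
        raise ValueError("no mutant-free cultures; the P0 method cannot be applied")
    return -math.log(p0)

# 30 replicate cultures, 12 of which yielded no mutant colonies: P0 = 0.4
counts = [0] * 12 + [1, 1, 2, 3, 1, 5, 2, 1, 4, 1, 2, 7, 1, 3, 2, 1, 9, 2]
print(round(p0_estimate(counts), 3))  # 0.916 (= -ln 0.4)
```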
2. The method of the mean (1). Although the extreme variance of the Luria–Delbrück distribution is caused by mutations early in the growth of cultures, these are, in fact, rare events. After the critical point where the total population size (all of the cells in all of the cultures) is large enough so that the probability of a mutation approaches unity, every succeeding generation makes an equal contribution to the number of mutants (from new mutations plus the growth of preexisting mutants). Assuming that no mutations occur before the critical point, the value of m is derived from xmean = m ln(mC), where xmean is the mean number of mutants per culture and C is the number of cultures.
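Because m appears on both sides of xmean = m ln(mC), the equation must be solved iteratively; a bisection sketch in Python (the culture numbers are hypothetical):

```python
import math

def mean_estimate(xmean, C):
    """Solve xmean = m * ln(m*C) for m by bisection (method of the mean)."""
    lo, hi = 1e-9, xmean  # m * ln(m*C) crosses xmean once within this bracket
    for _ in range(100):
        mid = (lo + hi) / 2
        if mid * math.log(mid * C) > xmean:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# 20 cultures with a mean of 10 mutants each
m_hat = mean_estimate(10, 20)
print(round(m_hat, 2))
```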
3. The graphical method (4, 5). Because each generation produces an equal number of mutants after the critical point, the accumulated distribution of the number of mutants per culture, Yx, is estimated as m/x, where Yx is the proportion of cultures with x or more mutants. Thus a plot of log Yx vs. log x gives a straight line with a slope of −1, and the intercept at log x = 0 gives log m. The advantage of this method is that the straight line can be fit by eye to the intermediate values, which are not caused by jackpots.
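The same relation can be exploited numerically: on the ideal line, each intermediate x yields an estimate m ≈ x·Yx, and these estimates should roughly agree. A sketch (the mutant counts are invented):

```python
def graphical_estimates(mutant_counts, xs):
    """For each x, Yx is the proportion of cultures with >= x mutants;
    on the ideal m/x line, each x gives the estimate m = x * Yx."""
    C = len(mutant_counts)
    return {x: x * sum(1 for c in mutant_counts if c >= x) / C for x in xs}

# 20 cultures, including one jackpot of 110 mutants
counts = [0, 0, 0, 1, 1, 1, 2, 2, 3, 4, 5, 7, 12, 30, 110, 1, 0, 2, 1, 3]
ests = graphical_estimates(counts, [2, 4, 8])
print(ests)  # intermediate x values give estimates clustering near m ~ 1.1-1.2
```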
4. The method of the median. Although the mean number of mutants fluctuates widely depending on the number of jackpots present, the median is not so influenced and is thus a more stable statistic. Lea and Coulson (6) found empirically that (xmedian/m) − ln(m) = 1.24. This equation is easily solved for m by iteration.
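A sketch of that iteration in Python (the median value is hypothetical):

```python
import math

def median_estimate(xmedian):
    """Solve xmedian/m - ln(m) = 1.24 for m (Lea-Coulson median method).
    The left-hand side decreases with m, so bisection converges."""
    lo, hi = 1e-9, xmedian + 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if xmedian / mid - math.log(mid) > 1.24:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A median of 4 mutants per culture gives m of about 2
m_hat = median_estimate(4)
print(round(m_hat, 2))
```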
5. The maximum likelihood method. This method relies on solving, at least approximately, the Luria–Delbrück distribution and then finding the most likely value of m for the experimentally determined xi values. Although this is the most accurate method of estimating m, it was shunned in the past because of the difficulties of calculation. A reasonably adequate way of obtaining m is to solve Eq. (50) of Lea and Coulson (6) by iteration. In addition, Koch (7) provided exact values of m obtainable from the median and the upper and lower quartiles of the distribution, which were extended by Cairns et al. (2). Simpler algorithms and sophisticated computer programs, however, now make the maximum likelihood method computationally feasible (8).
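One such algorithm is the recursion of Sarkar, Ma, and Sandri (8), which generates the Luria–Delbrück probabilities exactly and makes a likelihood search straightforward. A sketch in Python (the mutant counts are invented, and the coarse grid search stands in for a proper optimizer):

```python
import math

def ld_probs(m, nmax):
    """Luria-Delbruck probabilities p_0..p_nmax via the recursion of
    Sarkar, Ma, and Sandri: p_n = (m/n) * sum_{j<n} p_j / (n - j + 1)."""
    p = [math.exp(-m)]
    for n in range(1, nmax + 1):
        p.append((m / n) * sum(p[j] / (n - j + 1) for j in range(n)))
    return p

def ml_estimate(mutant_counts, grid):
    """Return the grid value of m that maximizes the log-likelihood."""
    nmax = max(mutant_counts)
    def loglik(m):
        p = ld_probs(m, nmax)
        return sum(math.log(max(p[x], 1e-300)) for x in mutant_counts)
    return max(grid, key=loglik)

counts = [0] * 12 + [1, 1, 2, 3, 1, 5, 2, 1, 4, 1, 2, 7, 1, 3, 2, 1, 9, 2]
grid = [round(0.05 * i, 2) for i in range(1, 80)]  # m from 0.05 to 3.95
print(ml_estimate(counts, grid))
```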
Of these methods, the method of the mean (method 2) is the least accurate because it is skewed by jackpots. The utility of the other methods depends on the value of m. If m ≤ 1, the P0 method (method 1) is reliable, so long as P0 is determined with sufficient accuracy. When m = 1 to 4, the median method (method 4) is convenient and adequate. At values of m larger than 4, the median can still be used, but the maximum likelihood method (method 5) is preferable. Another advantage of the maximum likelihood method is that the standard deviation of m, and thus of the mutation rate, can be approximated (9). Further discussions and refinements can be found in Refs. 9–11.
The phenomenon of phenotypic lag, in which the expression of a new mutant phenotype is delayed, complicates determining mutation rates from fluctuation tests (3, 7, 12).
References
1. S. E. Luria and M. Delbrück (1943) Genetics 28, 491–511.
2. J. Cairns, J. Overbaugh, and S. Miller (1988) Nature (London) 335, 142–145.
3. F. M. Stewart, D. M. Gordon, and B. R. Levin (1990) Genetics 124, 175–185.
4. S. E. Luria (1951) Cold Spring Harbor Symp. Quant. Biol. 16, 463–470.
5. J. Cairns (1980) Nature (London) 286, 176–178.
6. D. E. Lea and C. A. Coulson (1949) J. Genet. 49, 264–285.
7. A. L. Koch (1982) Mutat. Res. 95, 129–143.
8. S. Sarkar, W. T. Ma, and G. v. H. Sandri (1992) Genetica 85, 173–179.
9. F. M. Stewart (1994) Genetics 137, 1139–1146.
10. M. E. Jones, S. M. Thomas, and A. Rogers (1994) Genetics 136, 1209–1216.
11. G. Asteris and S. Sarkar (1996) Genetics 142, 313–326.
12. P. Armitage (1952) J. R. Stat. Soc. B 14, 1–40.