Experiments in ecology : their logical design and interpretation using analysis of variance / A. J. Underwood.

By: Underwood, A. J.
Publisher: Cambridge : Cambridge University Press, c1997
Description: xvii, 504 p. : ill. ; 23 cm
ISBN: 0521556961 (pbk); 0521553296 (hbk)
Other classification: 62J10 (62P10 92D40)
Contents:
1 Introduction [1]
2 A framework for investigating biological patterns and processes [7]
2.1 Introduction [7]
2.2 Observations [8]
2.3 Models, theories, explanations [10]
2.3.1 Models of physiological stress [10]
2.3.2 Models based on competition [10]
2.3.3 Grazing models [10]
2.3.4 Models to do with hazards [11]
2.3.5 Models of failure of recruitment [11]
2.4 Numerous competing models [12]
2.5 Hypotheses, predictions [13]
2.6 Null hypotheses [15]
2.7 Experiments and their interpretation [16]
2.8 What to do next? [17]
2.9 Measurements, gathering data and a logical structure [19]
2.10 A consideration: why are you measuring things? [21]
2.11 Conclusion: a plea for more thought [22]
3 Populations, frequency distributions and samples [24]
3.1 Introduction [24]
3.2 Variability in measurements [24]
3.3 Observations and measurements as frequency distributions [25]
3.4 Defining the population to be observed [27]
3.5 The need for samples [30]
3.6 The location parameter [30]
3.7 Sample estimate of the location parameter [33]
3.8 The dispersion parameter [34]
3.9 Sample estimate of the dispersion parameter [36]
3.10 Degrees of freedom [37]
3.11 Representative sampling and accuracy of samples [38]
3.12 Other useful parameters [44]
3.12.1 Skewness [44]
3.12.2 Kurtosis [47]
4 Statistical tests of null hypotheses [50]
4.1 Why a statistical test? [50]
4.2 An example using coins [51]
4.3 The components of a statistical test [55]
4.3.1 Null hypothesis [55]
4.3.2 Test statistic [56]
4.3.3 Region of rejection and critical value [56]
4.4 Type I error or rejection of a true null hypothesis [57]
4.5 Statistical test of a theoretical biological example [58]
4.5.1 Transformation of a normal distribution to the standard normal distribution [59]
4.6 One- and two-tailed null hypotheses [62]
5 Statistical tests on samples [65]
5.1 Repeated sampling [65]
5.2 The standard error from the normal distribution of sample means [70]
5.3 Confidence intervals for a sampled mean [70]
5.4 Precision of a sample estimate of the mean [73]
5.5 A contrived example of use of the confidence interval of sampled means [74]
5.6 Student’s t-distribution [76]
5.7 Increasing precision of sampling [77]
5.7.1 The chosen probability used to construct the confidence interval [78]
5.7.2 The sample size (n) [78]
5.7.3 The variance of the population (σ²) [80]
5.8 Description of sampling [81]
5.9 Student’s t-test for a mensurative hypothesis [82]
5.10 Goodness-of-fit, mensurative experiments and logic [84]
5.11 Type I and Type II errors in relation to a null hypothesis [87]
5.12 Determining the power of a simple statistical test [91]
5.12.1 Probability of Type I error [92]
5.12.2 Size of experiment (n) [93]
5.12.3 Variance of the population [95]
5.12.4 ‘Effect size’ [97]
5.13 Power and alternative hypotheses [97]
6 Simple experiments comparing the means of two populations [100]
6.1 Paired comparisons [100]
6.2 Confounding and lack of controls [104]
6.3 Unpaired experiments [106]
6.4 Standard error of the difference between two means [107]
6.4.1 Independence of samples [108]
6.4.2 Homogeneity of variances [109]
6.5 Allocation of sample units to treatments [114]
6.6 Interpretation of a simple ecological experiment [118]
6.7 Power of an experimental comparison of two populations [124]
6.8 Alternative procedures [128]
6.8.1 Binomial (sign) test for paired data [128]
6.8.2 Other alternative procedures [130]
6.9 Are experimental comparisons of only two populations useful? [132]
6.9.1 The wrong population is being sampled [132]
6.9.2 Modifications to the t-test to compare more than two populations [137]
6.9.3 Conclusion [139]
7 Analysis of variance [140]
7.1 Introduction [140]
7.2 Data collected to test a single-factor null hypothesis [141]
7.3 Partitioning of the data: the analysis of variation [143]
7.4 A linear model [145]
7.5 What do the sums of squares measure? [149]
7.6 Degrees of freedom [152]
7.7 Mean squares and test statistic [153]
7.8 Solution to some problems raised earlier [154]
7.9 So what happens with real data? [155]
7.10 Unbalanced data [156]
7.11 Machine formulae [157]
7.12 Interpretation of the result [157]
7.13 Assumptions of analysis of variance [158]
7.14 Independence of data [159]
7.14.1 Positive correlation within samples [160]
7.14.2 Negative correlation within samples [166]
7.14.3 Negative correlation among samples [168]
7.14.4 Positive correlation among samples [172]
7.15 Dealing with non-independence [179]
7.16 Heterogeneity of variances [181]
7.16.1 Tests for heterogeneity of variances [183]
7.17 Quality control [184]
7.18 Transformations of data [187]
7.18.1 Square-root transformation of counts (or Poisson data) [188]
7.18.2 Log transformation for rates, ratios, concentrations and other data [189]
7.18.3 Arc-sin transformation of percentages and proportions [192]
7.18.4 No transformation is possible [192]
7.19 Normality of data [194]
7.20 The summation assumption [195]
8 More analysis of variance [198]
8.1 Fixed or random factors [198]
8.2 Interpretation of fixed or random factors [204]
8.3 Power of an analysis of a fixed factor [209]
8.3.1 Non-central F-ratio and power [209]
8.3.2 Influences of α, n, σ²e and Ai values [211]
8.3.3 Construction of an alternative hypothesis [214]
8.4 Power of an analysis of a random factor [216]
8.4.1 Central F-ratios and power [216]
8.4.2 Influences of α, n, σ²e, σ²A and a [218]
8.4.3 Construction of an alternative hypothesis [220]
8.5 Alternative analysis of ranked data [223]
8.6 Multiple comparisons to identify the alternative hypothesis [224]
8.6.1 Introduction [224]
8.6.2 Problems of excessive Type I error [225]
8.6.3 A priori versus a posteriori comparisons [226]
8.6.4 A priori procedures [227]
8.6.5 A posteriori comparisons [234]
9 Nested analyses of variance [243]
9.1 Introduction and need [243]
9.2 Hurlbert’s ‘pseudoreplication’ [245]
9.3 Partitioning of the data [245]
9.4 The linear model [250]
9.5 Degrees of freedom and mean squares [254]
9.6 Tests and interpretation: what do the nested bits mean? [259]
9.6.1 F-ratio of appropriate mean squares [259]
9.6.2 Solution to confounding [260]
9.6.3 Multiple comparisons [261]
9.6.4 Variability among replicated units [261]
9.7 Pooling of nested components [268]
9.7.1 Rationale and procedure [268]
9.7.2 Pooling, Type II and Type I errors [269]
9.8 Balanced sampling [273]
9.9 Nested analyses and spatial pattern [275]
9.10 Nested analysis and temporal pattern [279]
9.11 Cost-benefit optimization [283]
9.12 Calculation of power [289]
9.13 Residual variance and an ‘error’ term [291]
10 Factorial experiments [296]
10.1 Introduction [296]
10.2 Partitioning of variation when there are two experimental factors [300]
10.3 Appropriate null hypotheses for a two-factor experiment [305]
10.4 A linear model and estimation of components by mean squares [306]
10.5 Why do a factorial experiment? [312]
10.5.1 Information about interactions [313]
10.5.2 Efficiency and cost-effectiveness of factorial designs [316]
10.6 Meaning and interpretation of interactions [318]
10.7 Interactions of fixed and random factors [323]
10.8 Multiple comparisons for two factors [331]
10.8.1 When there is a significant interaction [331]
10.8.2 When there is no significant interaction [331]
10.8.3 Control of experiment-wise probability of Type I error [333]
10.9 Three or more factors [335]
10.10 Interpretation of interactions among three factors [335]
10.11 Power and detection of interactions [340]
10.12 Spatial replication of ecological experiments [342]
10.13 What to do with a mixed model [344]
10.14 Problems with power in a mixed analysis [346]
10.15 Magnitudes of effects of treatments [347]
10.15.1 Magnitudes of effects of fixed treatments [348]
10.15.2 Some problems with such measures [348]
10.15.3 Magnitudes of components of variance of random treatments [351]
10.16 Problems with estimates of effects [355]
10.16.1 Summation and interactions [355]
10.16.2 Comparisons among experiments or areas [356]
10.16.3 Conclusions on magnitudes of effects [357]
11 Construction of any analysis from general principles [358]
11.1 General procedures [358]
11.2 Constructing the linear model [361]
11.3 Calculating the degrees of freedom [362]
11.4 Mean square estimates and F-ratios [364]
11.5 Designs seen before [370]
11.5.1 Designs with two factors [370]
11.5.2 Designs with three factors [370]
11.6 Construction of sums of squares using orthogonal designs [375]
11.7 Post hoc pooling [375]
11.8 Quasi F-ratios [377]
11.9 Multiple comparisons [378]
11.10 Missing data and other practicalities [380]
11.10.1 Loss of individual replicates [382]
11.10.2 Missing sets of replicates [383]
12 Some common and some particular experimental designs [385]
12.1 Unreplicated randomized blocks design [385]
12.2 Tukey’s test for non-additivity [389]
12.3 Split-plot designs [391]
12.4 Latin squares [401]
12.5 Unreplicated repeated measures [403]
12.6 Asymmetrical controls: one factor [408]
12.7 Asymmetrical controls: fixed factorial designs [409]
12.8 Problems with experiments on ecological competition [414]
12.9 Asymmetrical analyses of random factors in environmental studies [415]
13 Analyses involving relationships among variables [419]
13.1 Introduction to linear regression [419]
13.2 Tests of null hypotheses about regressions [422]
13.3 Assumptions underlying regression [424]
13.3.1 Independence of data at each X [425]
13.3.2 Homogeneity of variances at each X [427]
13.3.3 X values are not fixed [428]
13.3.4 Normality of errors in Y [429]
13.4 Analysis of variance and regression [431]
13.5 How good is the regression? [431]
13.6 Multiple regressions [434]
13.7 Polynomial regressions [439]
13.8 Other, non-linear regressions [444]
13.9 Introduction to analysis of covariance [444]
13.10 The underlying models for covariance [447]
13.10.1 Model 1: Regression in each treatment [448]
13.10.2 Model 2: A common regression in each treatment [449]
13.10.3 Model 3: The total regression, all data combined [454]
13.11 The procedures: making adjustments [457]
13.12 Interpretation of the analysis [462]
13.13 The assumptions needed for an analysis of covariance [464]
13.13.1 Assumptions in regressions [464]
13.13.2 Assumptions in analysis of variance [465]
13.13.3 Assumptions specific to an analysis of covariance [466]
13.14 Alternatives when regressions differ [471]
13.14.1 A two-factor scenario [471]
13.14.2 The Johnson-Neyman technique [473]
13.14.3 Comparisons of regressions [474]
13.15 Extensions of analysis of covariance to other designs [474]
13.15.1 More than one covariate [475]
13.15.2 Non-linear relationships [476]
13.15.3 More than one experimental factor [476]
14 Conclusions: where to from here? [478]
14.1 Be logical, be eco-logical [478]
14.2 Alternative models and hypotheses [480]
14.3 Pilot experiments: all experiments are preliminary [481]
14.4 Repeated experimentation [481]
14.5 Criticisms and the growth of knowledge [484]
References [486]
Author index [496]
Subject index [499]
Item type: Books
Home library: Instituto de Matemática, CONICET-UNS
Shelving location: Books ordered by subject
Call number: 62 Un56
Status: Available
Barcode: A-8088
Course reserves: BIOTESTADÍSTICA AVANZADA


Includes bibliographical references (p. 486-495) and indexes.

