Statistical methods in agriculture and experimental biology / R. Mead, R.N. Curnow, and A.M. Hasted.

By: Mead, R. (Roger)
Contributor(s): Curnow, R. N. | Hasted, A. M.
Publisher: London ; New York : Chapman & Hall, 1993
Edition: 2nd ed.
Description: xi, 415 p. : ill. ; 24 cm
ISBN: 0412354705 (HB); 0412354802 (PB)
Subject(s): Agriculture -- Research -- Statistical methods | Agriculture -- Statistical methods | Biology, Experimental -- Statistical methods
Other classification: 62P10
Contents:
Preface ix
1 Introduction [1]
1.1 The need for statistics [1]
1.2 The use of computers in statistics [5]
2 Probability and distributions [9]
2.1 Probability [9]
2.2 Populations and samples [12]
2.3 Means and variances [15]
2.4 The Normal distribution [18]
2.5 Sampling distributions [21]
3 Estimation and hypothesis testing [27]
3.1 Estimation of the population mean [27]
3.2 Testing hypotheses about the population mean [28]
3.3 Population variance unknown [32]
3.4 Comparison of samples [35]
3.5 A pooled estimate of variance [37]
4 A simple experiment [41]
4.1 Randomization and replication [41]
4.2 Analysis of a completely randomized design with two treatments [43]
4.3 A completely randomized design with several treatments [46]
4.4 Testing overall variation between the treatments [49]
4.5 Analysis using a statistical package [55]
5 Control of random variation by blocking [59]
5.1 Local control of variation [59]
5.2 Analysis of a randomized block design [61]
5.3 Meaning of the error mean square [66]
5.4 Latin square designs [70]
5.5 Analysis of structured experimental data using a computer package [74]
5.6 Multiple Latin squares designs [76]
5.7 The benefit of blocking and the use of natural blocks [79]
6 Particular questions about treatments [89]
6.1 Treatment structure [89]
6.2 Treatment contrasts [94]
6.3 Factorial treatment structure [97]
6.4 Main effects and interactions [99]
6.5 Analysis of variance for a two-factor experiment [102]
6.6 Computer analysis of factorials [109]
7 More on factorial treatment structure [111]
7.1 More than two factors [111]
7.2 Factors with two levels [112]
7.3 The double benefit of factorial structure [118]
7.4 Many factors and small blocks [120]
7.5 The analysis of confounded experiments [125]
7.6 Split plot experiments [128]
7.7 Analysis of a split plot experiment [130]
8 The assumptions behind the analysis [137]
8.1 Our assumptions [137]
8.2 Normality [138]
8.3 Variance homogeneity [142]
8.4 Additivity [145]
8.5 Transformations of data for theoretical reasons [147]
8.6 A more general form of analysis [151]
8.7 Empirical detection of the failure of assumptions and selection of appropriate transformations [152]
8.8 Practice and presentation [158]
9 Studying linear relationships [161]
9.1 Linear regression [161]
9.2 Assessing the regression line [165]
9.3 Inferences about the slope of a line [167]
9.4 Prediction using a regression line [168]
9.5 Correlation
9.6 Testing whether the regression is linear [177]
9.7 Regression analysis using computer packages [178]
10 More complex relationships [183]
10.1 Making the crooked straight [183]
10.2 Two independent variables [186]
10.3 Testing the components of a multiple relationship [193]
10.4 Multiple regression [204]
10.5 Possible problems in computer multiple regression [211]
11 Linear models [213]
11.1 The use of models [213]
11.2 Models for factors and variables [214]
11.3 Comparison of regressions [220]
11.4 Fitting parallel lines [227]
11.5 Covariance analysis [233]
11.6 Regression in the analysis of treatment variation [240]
12 Non-linear models [247]
12.1 Advantages of linear and non-linear models [247]
12.2 Fitting non-linear models to data [252]
12.3 Inferences about non-linear parameters [258]
12.4 Exponential models [262]
12.5 Inverse polynomial models [267]
12.6 Logistic models for growth curves [274]
13 The analysis of proportions [277]
13.1 Data in the form of frequencies [277]
13.2 The 2x2 contingency table [278]
13.3 More than two situations or more than two outcomes [280]
13.4 General contingency tables [284]
13.5 Estimation of proportions [289]
13.6 Sample sizes for estimating proportions [294]
14 Models and distributions for frequency data [299]
14.1 Models for frequency data [299]
14.2 Testing the agreement of frequency data with simple models [300]
14.3 Investigating more complex models [302]
14.4 The binomial distribution [309]
14.5 The Poisson distribution [316]
14.6 Generalized models for analysing experimental data [323]
14.7 Log-linear models [329]
14.8 Probit analysis [336]
15 Making and analysing many experimental measurements [341]
15.1 Different measurements on the same units [341]
15.2 Interdependence of different variables [342]
15.3 Repeated measurements [346]
15.4 Joint (bivariate) analysis [356]
15.5 Investigating relationships with experimental data [367]
16 Choosing the most appropriate experimental design [373]
16.1 The components of design; units and treatments [373]
16.2 Replication and precision [374]
16.3 Different levels of variation and within-unit replication [377]
16.4 Variance components and split plot designs [381]
16.5 Randomization [384]
16.6 Managing with limited resources [386]
16.7 Factors with quantitative levels [387]
16.8 Screening and selection [389]
17 Sampling finite populations [393]
17.1 Experiments and sample surveys [393]
17.2 Simple random sampling [394]
17.3 Stratified random sampling [397]
17.4 Cluster sampling, multistage sampling and sampling proportional to size [399]
17.5 Ratio and regression estimates [400]
References [403]
Appendix [405]
Index [413]
Item type: Books (Libros)
Home library: Instituto de Matemática, CONICET-UNS
Shelving location: Books arranged by subject (Libros ordenados por tema)
Call number: 62 M479
Status: Available
Barcode: A-7449

EXPERIMENTAL DESIGN


Includes bibliographical references (p. [403]-404) and index.


