
## Multivariate observations / G. A. F. Seber.

Series: Wiley Series in Probability and Mathematical Statistics
Publisher: New York : Wiley, c1984
Description: xx, 686 p. : ill. ; 24 cm
ISBN: 047188104X
Subject(s): Multivariate analysis
Other classification: 62Hxx
Online resources: Publisher description
Contents:
```
1 Preliminaries 
1.1 Notation 
1.2 What Is Multivariate Analysis? 
1.3 Expectation and Covariance Operators 
1.4 Sample Data 
1.5 Mahalanobis Distances and Angles 
1.6 Simultaneous Inference 
1.6.1 Simultaneous tests, 
1.6.2 Union-intersection principle, 
1.7 Likelihood Ratio Tests 
Exercises 1, 
2 Multivariate Distributions 
2.1 Introduction 
2.2 Multivariate Normal Distribution 
2.3 Wishart Distribution 
2.3.1 Definition and properties, 
2.3.3 Noncentral Wishart distribution, 
2.3.4 Eigenvalues of a Wishart matrix, 
2.3.5 Determinant of a Wishart matrix, 
2.4 Hotelling’s T2 Distribution 
2.4.1 Central distribution, 
2.4.2 Noncentral distribution, 
2.5 Multivariate Beta Distributions 
2.5.1 Derivation, 
2.5.2 Multivariate beta eigenvalues, 
2.5.3 Two trace statistics, 
a. Lawley-Hotelling statistic, 
b. Pillai’s trace statistic, 
2.5.4 U-Distribution, 
2.5.5 Summary of special distributions, 
a. Hotelling’s T2, 
b. U-statistic, 
c. Maximum root statistic, 
d. Trace statistics, 
e. Equivalence of statistics when mH = 1, 
2.5.6 Factorizations of U, 
a. Product of beta variables, 
b. Product of two U-statistics, 
2.6 Rao’s Distribution 
2.7 Multivariate Skewness and Kurtosis 
Exercises 2, 
3 Inference for the Multivariate Normal 
3.1 Introduction 
3.2 Estimation 
3.2.1 Maximum likelihood estimation, 
3.2.2 Distribution theory, 
3.3 Testing for the Mean 
3.3.1 Hotelling’s T2 test, 
3.3.2 Power of the test, 
3.3.3 Robustness of the test, 
3.3.4 Step-down test procedure, 
3.4 Linear Constraints on the Mean 
3.4.1 Generalization of the paired comparison test, 
3.4.2 Some examples, 
a. Repeated-measurement designs, 
b. Testing specified contrasts, 
c. Test for symmetry, 
d. Testing for a polynomial growth trend, 
e. Independent estimate of Σ, 
3.4.3 Minimization technique for the test statistic, 
3.4.4 Confidence intervals, 
3.4.5 Functional relationships (errors in variables regression), 
3.5 Inference for the Dispersion Matrix 
3.5.1 Introduction, 
3.5.2 Blockwise independence: Two blocks, 
a. Likelihood ratio test, 
b. Maximum root test, 
3.5.3 Blockwise independence: b Blocks, 
3.5.4 Diagonal dispersion matrix, 
3.5.5 Equal diagonal blocks, 
3.5.6 Equal correlations and equal variances, 
3.5.7 Simultaneous confidence intervals for correlations, 
3.5.8 Large-sample inferences, 
3.5.9 More general covariance structures, 
3.6 Comparing Two Normal Populations 
3.6.1 Tests for equal dispersion matrices, 
a. Likelihood ratio test, 
b. Union-intersection test, 
c. Robustness of tests, 
d. Robust large-sample tests, 
3.6.2 Test for equal means assuming equal dispersion matrices, 
a. Hotelling’s T2 test, 
b. Effect of unequal dispersion matrices, 
c. Effect of nonnormality, 
3.6.3 Test for equal means assuming unequal dispersion matrices, 
3.6.4 Profile analysis: Two populations, 
Exercises 3, 
4 Graphical and Data-Oriented Techniques 
4.1 Multivariate Graphical Displays 
4.2 Transforming to Normality 
4.2.1 Univariate transformations, 
4.2.2 Multivariate transformations, 
4.3 Distributional Tests and Plots 
4.3.1 Investigating marginal distributions, 
4.3.2 Tests and plots for multivariate normality, 
4.4 Robust Estimation 
4.4.1 Why robust estimates?, 
4.4.2 Estimation of location, 
a. Univariate methods, 
b. Multivariate methods, 
4.4.3 Estimation of dispersion and covariance, 
a. Univariate methods, 
b. Multivariate methods, 
4.5 Outlying Observations 
Exercises 4, 
5 Dimension Reduction and Ordination 
5.1 Introduction 
5.2 Principal Components 
5.2.1 Definition, 
5.2.2 Dimension reduction properties, 
5.2.3 Further properties, 
5.2.4 Sample principal components, 
5.2.5 Inference for sample components, 
5.2.6 Preliminary selection of variables, 
5.2.7 General applications, 
5.2.8 Generalized principal components analysis, 
5.3 Biplots and A-Plots 
5.4 Factor Analysis 
5.4.1 Underlying model, 
5.4.2 Estimation procedures, 
a. Method of maximum likelihood, 
b. Principal factor analysis, 
c. Estimating factor scores, 
5.4.3 Some difficulties, 
a. Principal factor analysis, 
b. Maximum likelihood factor analysis, 
c. Conclusions, 
5.5 Multidimensional Scaling 
5.5.1 Classical (metric) solution, 
5.5.2 Nonmetric scaling, 
5.6 Procrustes Analysis (Matching Configurations) 
5.7 Canonical Correlations and Variates 
5.7.1 Population correlations, 
5.7.2 Sample canonical correlations, 
5.7.3 Inference, 
5.7.4 More than two sets of variables, 
5.8 Discriminant Coordinates 
5.9 Assessing Two-Dimensional Representations 
5.10 A Brief Comparison of Methods 
Exercises 5, 
6 Discriminant Analysis 
6.1 Introduction 
6.2 Two Groups: Known Distributions 
6.2.1 Misclassification errors, 
6.2.2 Some allocation principles, 
a. Minimize total probability of misclassification, 
b. Likelihood ratio method, 
c. Minimize total cost of misclassification, 
d. Maximize the posterior probability, 
e. Minimax allocation, 
f. Summary, 
6.3 Two Groups: Known Distributions With Unknown Parameters 
6.3.1 General methods, 
6.3.2 Normal populations, 
a. Linear discriminant function, 
b. Quadratic discriminant function, 
c. Robustness of LDF and QDF, 
d. Missing values, 
e. Predictive discriminant, 
6.3.3 Multivariate discrete distributions, 
a. Independent binary variables, 
b. Correlated binary variables, 
c. Correlated discrete variables, 
6.3.4 Multivariate discrete-continuous distributions, 
6.4 Two Groups: Logistic Discriminant 
6.4.1 General model, 
6.4.2 Sampling designs, 
a. Conditional sampling, 
b. Mixture sampling, 
c. Separate sampling, 
6.4.3 Computations, 
6.4.4 LGD versus LDF, 
6.4.5 Predictive logistic model, 
6.5 Two Groups: Unknown Distributions 
6.5.1 Kernel method, 
a. Continuous data, 
b. Binary data, 
c. Continuous and discrete data, 
6.5.2 Other nonparametric methods, 
a. Nearest neighbor techniques, 
b. Partitioning methods, 
c. Distance methods, 
d. Rank procedures, 
e. Sequential discrimination on the variables, 
6.6 Utilizing Unclassified Observations 
6.7 All Observations Unclassified (Method of Mixtures) 
6.8 Cases of Doubt 
6.9 More Than Two Groups 
6.10 Selection of Variables 
6.10.1 Two groups, 
6.10.2 More than two groups, 
6.11 Some Conclusions 
Exercises 6, 
7 Cluster Analysis 
7.1 Introduction 
7.2 Proximity data 
7.2.1 Dissimilarities, 
7.2.2 Similarities, 
7.3 Hierarchical Clustering: Agglomerative Techniques 
7.3.1 Some commonly used methods, 
a. Single linkage (nearest neighbor) method, 
b. Complete linkage (farthest neighbor) method, 
c. Centroid method, 
d. Incremental sum of squares method, 
e. Median method, 
f. Group average method, 
g. Lance and Williams flexible method, 
h. Information measures, 
i. Other methods, 
7.3.2 Comparison of methods, 
a. Monotonicity, 
b. Spatial properties, 
c. Computational effort, 
7.4 Hierarchical Clustering: Divisive Techniques 
7.4.1 Monothetic methods, 
7.4.2 Polythetic methods, 
7.5 Partitioning Methods 
7.5.1 Number of partitions, 
7.5.2 Starting the process, 
7.5.3 Reassigning the objects, 
a. Minimize trace W, 
b. Minimize |W|, 
c. Maximize trace BW⁻¹, 
7.5.4 Other techniques, 
7.6 Overlapping Clusters (Clumping) 
7.7 Choosing the Number of Clusters 
Exercises 7, 
8 Multivariate Linear Models 
8.1 Least Squares Estimation 
8.2 Properties of Least Squares Estimates 
8.3 Least Squares With Linear Constraints 
8.4 Distribution Theory 
8.5 Analysis of Residuals 
8.6 Hypothesis Testing 
8.6.1 Extension of univariate theory, 
8.6.2 Test procedures, 
a. Union-intersection test, 
b. Likelihood ratio test, 
c. Other test statistics, 
d. Comparison of four test statistics, 
e. Mardia’s permutation test, 
8.6.3 Simultaneous confidence intervals, 
8.6.4 Comparing two populations, 
8.6.5 Canonical form, 
8.6.6 Missing observations, 
8.6.7 Other topics, 
8.7 A Generalized Linear Hypothesis 
8.7.1 Theory, 
8.7.2 Profile analysis for K populations, 
8.7.3 Tests for mean of multivariate normal, 
8.8 Step-Down Procedures 
8.9 Multiple Design Models 
Exercises 8, 
9 Multivariate Analysis of Variance and Covariance 
9.1 Introduction 
9.2 One-Way Classification 
9.2.1 Hypothesis testing, 
9.2.2 Multiple comparisons, 
9.2.3 Comparison of test statistics, 
9.2.4 Robust tests for equal means, 
a. Permutation test, 
b. James’ test, 
9.2.5 Reparameterization, 
9.2.6 Comparing dispersion matrices, 
a. Test for equal dispersion matrices, 
b. Graphical comparisons, 
9.2.7 Exact procedures for means assuming unequal dispersion matrices, 
a. Hypothesis testing, 
b. Multiple confidence intervals, 
9.3 Randomized Block Design 
9.3.1 Hypothesis testing and confidence intervals, 
9.3.2 Underlying assumptions, 
9.4 Two-Way Classification With Equal Observations per Mean 
9.5 Analysis of Covariance 
9.5.1 Univariate theory, 
9.5.2 Multivariate theory, 
9.5.3 Test for additional information, 
9.6 Multivariate Cochran’s Theorem on Quadratics 
9.7 Growth Curve Analysis 
9.7.1 Examples, 
a. Single growth curve, 
b. Two growth curves, 
c. Single growth curve for a randomized block design, 
d. Two-dimensional growth curve, 
9.7.2 General theory, 
a. Potthoff and Roy’s method, 
b. Rao-Khatri analysis of covariance method, 
c. Choice of method, 
9.7.3 Single growth curve, 
9.7.4 Test for internal adequacy, 
9.7.5 A case study, 
9.7.6 Further topics, 
Exercises 9, 
10 Special Topics 
10.1 Computational Techniques 
10.1.1 Solving the normal equations, 
a. Cholesky decomposition, 
b. QR-algorithm, 
10.1.2 Hypothesis matrix, 
10.1.3 Calculating Hotelling's T2, 
10.1.4 Generalized symmetric eigenproblem, 
a. Multivariate linear hypothesis, 
b. One-way classification, 
c. Discriminant coordinates, 
10.1.5 Singular value decomposition, 
a. Definition, 
b. Solution of normal equations, 
c. Principal components, 
d. Canonical correlations, 
10.1.6 Selecting the best subset, 
a. Response variables, 
b. Regressor variables, 
10.2 Log-Linear Models for Binary Data 
10.3 Incomplete Data 
Appendix 
A Some Matrix Algebra 
A1 Trace and eigenvalues, 
A2 Rank, 
A3 Patterned matrices, 
A4 Positive semidefinite matrices, 
A5 Positive definite matrices, 
A6 Idempotent matrices, 
A7 Optimization and inequalities, 
A8 Vector and matrix differentiation, 
A9 Jacobians and transformations, 
A10 Asymptotic normality, 
B Orthogonal Projections 
B1 Orthogonal decomposition of vectors, 
B2 Orthogonal complements, 
B3 Projections on subspaces, 
C Order Statistics and Probability Plotting 
C1 Sample distribution functions, 
C2 Gamma distribution, 
C3 Beta distribution, 
C4 Probability plotting, 
D Statistical Tables 
D1 Bonferroni t-percentage points, 
D2 Maximum likelihood estimates for the gamma distribution, 
D3 Upper tail percentage points for √b1, 
D4 Coefficients in a normalizing transformation of √b1, 
D5 Simulation percentiles for b2, 
D6 Charts for the percentiles of b2, 
D7 Coefficients for the Wilk-Shapiro (W) test, 
D8 Percentiles for the Wilk-Shapiro (W) test, 
D9 D'Agostino's test for normality, 
D10 Anderson-Darling (A2n) test for normality, 
D11 Discordancy test for single gamma outlier, 
D12 Discordancy test for single multivariate normal outlier, 
D13 Wilks' likelihood ratio test, 
D14 Roy’s maximum root statistic, 
D15 Lawley-Hotelling trace statistic, 
D16 Pillai’s trace statistic, 
D17 Test for mutual independence, 
D18 Test for equal dispersion matrices with equal sample sizes, 
Outline Solutions to Exercises 
References 
Index 
```
Holdings: Item type: Libros. Shelving location: libros ordenados por tema (books shelved by subject). Call number: 62 Se443m. Status: Available. Barcode: A-5943.

Includes bibliographical references (p. 626-670) and index.


MR 86f:62080
