
## Prediction and regulation by linear least-square methods / P. Whittle ; foreword by Thomas J. Sargent.

Publisher: Minneapolis, Minn. : University of Minnesota Press, c1983
Edition: 2nd ed., rev.
Description: xv, 187 p. ; 23 cm
ISBN: 0816611475; 0816611483 (pbk.)
Other classification: *CODIGO*

Contents:
```
CHAPTER 1 --
INTRODUCTION [1] --
1.1 Scientific prediction. 1.2 Deterministic processes in discrete time. 1.3 Deterministic processes in continuous time. 1.4 Stochastic processes in discrete time. 1.5 Stochastic processes in continuous time. 1.6 Linear least-square prediction and estimation. --
CHAPTER 2 [12] --
STATIONARY AND RELATED PROCESSES --
2.1 Notation. 2.2 Linear operations; response and transfer functions. 2.3 The spectral representation of a stationary process (scalar case). 2.4 Linear operations on stationary processes. 2.5 The elementary stationary processes. 2.6 Linear determinism; Wold’s decomposition. 2.7 Moving average and autoregressive representations of a process. 2.8 The canonical factorisation of the spectral density function. 2.9 Multivariate processes. --
CHAPTER 3 [31] --
A FIRST SOLUTION OF THE PREDICTION PROBLEM --
3.1 Deterministic components. 3.2 Pure prediction. 3.3 Some examples. 3.4 The factorisation of the spectral density function in practice. 3.5 Pure prediction in continuous time. 3.6 Continuous time processes with rational spectral density function. 3.7 The prediction of one series from another. 3.8 Processes with real zeros in the spectral density function. 3.9 General comments. --
CHAPTER 4 [46] --
LEAST-SQUARE APPROXIMATION --
4.1 Derivation of the linear least-square estimate. 4.2 Methods of inverting a covariance matrix. 4.3 The inclusion of deterministic terms in the least-square approximation. 4.4 Minimisation of more general quadratic forms. --
CHAPTER 5 --
PROJECTION ON THE INFINITE SAMPLE [56] --
5.1 Construction of the estimate. 5.2 Some examples of signal extraction. 5.3 Interpolation between evenly-spaced observations. 5.4 Smoothing in several dimensions. --
CHAPTER 6 [66] --
PROJECTION ON THE SEMI-INFINITE SAMPLE --
6.1 The prediction formula. 6.2 The mean square error. 6.3 An example. 6.4 Continuous time processes. --
CHAPTER 7 --
PROJECTION ON THE FINITE SAMPLE [72] --
7.1 Autoregressive processes. 7.2 Moving-average processes. 7.3 Processes with rational s.d.f. 7.4 Interpolation. 7.5 Continuous time processes. 7.6 Processes generated by stochastic differential equations. 7.7 Continuous time processes with rational s.d.f. --
CHAPTER 8 --
DEVIATIONS FROM STATIONARITY: TRENDS, DETERMINISTIC COMPONENTS, AND ACCUMULATED PROCESSES [83] --
8.1 Linear models. 8.2 Extrapolation of trends. 8.3 The fitting of deterministic components. 8.4 Deterministic components; special cases. 8.5 Accumulated processes. 8.6 Exponentially weighted moving averages. --
CHAPTER 9 --
MULTIVARIATE PROCESSES [98] --
9.1 Projection on the infinite sample. 9.2 Projection on the semi-infinite sample. 9.3 Canonical factorisation of a rational spectral density matrix. 9.4 Projection on the finite sample. 9.5 Continuous time. --
CHAPTER 10 [106] --
REGULATION --
10.1 Notation for operators. 10.2 Some examples. 10.3 Optimal regulation when the input series have known future. 10.4 Statistical averages, time averages and non-stationary inputs. 10.5 Optimal regulation for systems with inputs of uncertain future. 10.6 Inputs with deterministic and non-stationary components. 10.7 Specific examples. 10.8 Certainty equivalence. 10.9 General comments on least-square regulation. --
CHAPTER 11 [142] --
LLS PREDICTION, 1982: A PERSONAL VIEW --
11.1 Notation et cetera. 11.2 Projection on the semi-infinite sample. 11.3 State-structured models and the Kalman filter. 11.4 Recursive models of higher order. 11.5 Dual methods of estimation/prediction. 11.6 Hamiltonian methods of estimation. 11.7 Relation of the Hamiltonian and recursive approaches. --
CHAPTER 12 [159] --
LQG REGULATION AND CONTROL, 1982: A PERSONAL VIEW --
12.1 Introduction. 12.2 Structure: the dynamic programming principle. 12.3 Certainty equivalence. 12.4 Optimal control with full-information state-structure. 12.5 Tracking and buffering. 12.6 Higher-order dynamics: the easy case. 12.7 Incomplete information: the stochastic case. 12.8 The Hamiltonian treatment of optimal control. 12.9 The Markov case. 12.10 Risk-sensitive criteria. 12.11 Other relaxations of the LQG hypothesis. --
REFERENCES [180] --
SUPPLEMENTARY REFERENCES FOR THE SECOND EDITION [182] --
INDEX [185] --
```
List(s) this item appears in: Últimas adquisiciones (Recent acquisitions)
| Item type | Shelving location | Call number | Status | Barcode |
| --- | --- | --- | --- | --- |
| Libros (Books) | Últimas adquisiciones | 62 W627-2 | Available | A-9379 |

Includes bibliographies and index.
