# Principal component extraction using recursive least squares learning

@article{Bannour1995PrincipalCE, title={Principal component extraction using recursive least squares learning}, author={Sami Bannour and Mahmood R. Azimi-Sadjadi}, journal={IEEE Transactions on Neural Networks}, year={1995}, volume={6}, number={2}, pages={457--469} }

A new neural network-based approach is introduced for recursive computation of the principal components of a stationary vector stochastic process. The neurons of a single-layer network are sequentially trained using a recursive least squares (RLS) type algorithm to extract the principal components of the input process. The optimality criterion is based on retaining the maximum information contained in the input sequence so as to be able to reconstruct the network inputs from the…
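The abstract describes neurons trained sequentially with an RLS-type rule to extract principal components one at a time. The sketch below is not the paper's exact algorithm; it is a related sequential RLS-style update (in the spirit of deflation-based recursive PCA) with hypothetical names and parameters, shown only to illustrate the idea: each neuron recursively refines a weight vector from its own input-output product, and its converged component is deflated from the data before the next neuron is trained.

```python
import numpy as np

def rls_pca(X, n_components, n_passes=3, delta=1.0, eps=1e-12):
    """Illustrative sequential RLS-style PCA (deflation across neurons).

    Each 'neuron' j keeps a weight vector w updated recursively from the
    neuron output y = w @ x; the scalar d accumulates output energy and
    plays the role of the RLS gain. After a component is extracted, its
    contribution is removed (deflated) from the data so the next neuron
    converges to the next principal direction.
    """
    n_samples, dim = X.shape
    W = np.zeros((n_components, dim))
    residual = X.copy()
    for j in range(n_components):
        rng = np.random.default_rng(j)
        w = rng.standard_normal(dim)
        w /= np.linalg.norm(w)
        d = delta  # recursive output-energy estimate (RLS-like gain)
        for _ in range(n_passes):
            for x in residual:
                y = w @ x                    # neuron output
                d += y * y                   # update energy estimate
                w += (y / d) * (x - y * w)   # RLS-type correction
        w /= np.linalg.norm(w) + eps
        W[j] = w
        # deflation: project out the extracted component
        residual = residual - np.outer(residual @ w, w)
    return W
```

On strongly anisotropic data the first row of `W` aligns with the dominant eigenvector of the sample covariance, and deflation keeps successive components nearly orthogonal; the growing gain `d` shrinks the step size over time, which is what makes the update recursive rather than a fixed-step Hebbian rule.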


#### 128 Citations

Fast recursive least squares learning algorithm for principal component analysis

- Computer Science
- 2000

It is shown that all the information needed for PCA can be completely represented by the unnormalized weight vector, which is updated based only on the corresponding neuron input-output product.

Robust recursive least squares learning algorithm for principal component analysis

- Computer Science, Mathematics
- IEEE Trans. Neural Networks Learn. Syst.
- 2000

It is shown that all information needed for PCA can be completely represented by the unnormalized weight vector, which is updated based only on the corresponding neuron input-output product.

Fast principal component extraction by a homogeneous neural network

- Computer Science
- 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No.01CH37221)
- 2001

Two adaptive algorithms based on the WINC for extracting multiple principal components in parallel are developed; they provide an adaptive step size which leads to a significant improvement in the learning performance.

Adaptive learning algorithm for principal component analysis with partial data

- Mathematics
- 1996

In this paper a fast and efficient adaptive learning algorithm for estimation of the principal components is developed. It seems to be especially useful in applications with a changing environment, where…

Combining PCA and MCA by using recursive least square learning method

- Computer Science, Mathematics
- 1998 IEEE International Conference on Electronics, Circuits and Systems. Surfing the Waves of Science and Technology (Cat. No.98EX196)
- 1998

Simulation results show that both the convergent speed and the compression ratio are improved and indicate that the method combines the extraction of principal components and the pruning of minor components effectively.

Fast principal component extraction by a weighted information criterion

- Computer Science, Mathematics
- IEEE Trans. Signal Process.
- 2002

A weighted information criterion (WINC) is proposed for searching for the optimal solution of a linear neural network, and it is analytically shown that the optimum weights globally asymptotically converge to the principal eigenvectors of a stationary vector stochastic process.

Using recursive least square learning method for principal and minor components analysis

- Mathematics, Computer Science
- Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP '98 (Cat. No.98CH36181)
- 1998

Simulation results show that both the convergent speed and the compression ratio are improved, which indicate that the parallel extraction method effectively combines the extraction of the principal components and the pruning of the minor components.

Principal component analysis of multispectral images using neural network

- Computer Science
- Proceedings ACS/IEEE International Conference on Computer Systems and Applications
- 2001

A neural network model is proposed that performs the PCA directly from the original spectral images without any additional non-neuronal computations or preliminary matrix estimation, and results show that the model performs well.

Recursive algorithms for principal component extraction

- Mathematics
- 1997

Two new on-line recursive algorithms, namely, the Jacobi recursive principal component algorithm (JRPCA) and the Gauss–Seidel recursive principal component algorithm (GRPCA), are introduced for the…

Image compression using principal component neural networks

- Computer Science
- Image Vis. Comput.
- 2001

The conclusion of the wide comparison among eight principal component networks is that the cascade recursive least-squares algorithm by Cichocki, Kasprzak and Skarbek exhibits the best numerical and structural properties.

#### References

Showing 1-10 of 17 references

Principal component extraction using recursive least squares learning method

- Computer Science
- [Proceedings] 1991 IEEE International Joint Conference on Neural Networks
- 1991

A new approach is introduced for the recursive computation of the principal components of a vector stochastic process. The neurons of a single layer perceptron are sequentially trained using a…

An adaptive approach for optimal data reduction using recursive least squares learning method

- Mathematics, Computer Science
- [Proceedings] ICASSP-92: 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing
- 1992

An approach is introduced for the recursive computation of the principal components of a vector stochastic process. The neurons of a single-layer perceptron are sequentially trained using a recursive…

A neural network learning algorithm for adaptive principal component extraction (APEX)

- Computer Science
- International Conference on Acoustics, Speech, and Signal Processing
- 1990

An algorithm called APEX which can recursively compute the principal components of a vector stochastic process using a linear neural network is proposed, and its computational advantages over previously proposed methods are demonstrated.

Neural networks and principal component analysis: Learning from examples without local minima

- Mathematics, Computer Science
- Neural Networks
- 1989

The main result is a complete description of the landscape attached to E in terms of principal component analysis, showing that E has a unique minimum corresponding to the projection onto the subspace generated by the first principal vectors of a covariance matrix associated with the training patterns.

Optimal unsupervised learning in a single-layer linear feedforward neural network

- Computer Science
- Neural Networks
- 1989

An optimality principle is proposed which is based upon preserving maximal information in the output units, and an algorithm for unsupervised learning based upon a Hebbian learning rule, which achieves the desired optimality, is presented.

Adaptive network for optimal linear feature extraction

- Computer Science
- International 1989 Joint Conference on Neural Networks
- 1989

A network of highly interconnected linear neuron-like processing units and a simple, local, unsupervised rule for the modification of connection strengths between these units are proposed, making the implementation of the network easier, faster, and biologically more plausible than rules depending on error propagation.

Two-dimensional adaptive block Kalman filtering of SAR imagery

- Mathematics, Computer Science
- IEEE Trans. Geosci. Remote. Sens.
- 1991

Simulation results on several images are provided to indicate the effectiveness of the proposed 2-D adaptive block Kalman filtering method when used to remove the effects of speckle noise as well as those of the additive noise.

A comparison of two eigen-networks

- Mathematics
- IJCNN-91-Seattle International Joint Conference on Neural Networks
- 1991

The authors compare two linear networks which project adaptively the input data points on their principal components. They rederive Sanger's algorithm as the result of a constrained optimization…

Adaptive Filter Theory

- Mathematics
- 1986

Background and Overview. 1. Stochastic Processes and Models. 2. Wiener Filters. 3. Linear Prediction. 4. Method of Steepest Descent. 5. Least-Mean-Square Adaptive Filters. 6. Normalized…

Simplified neuron model as a principal component analyzer

- Mathematics, Medicine
- Journal of mathematical biology
- 1982

A simple linear neuron model with constrained Hebbian-type synaptic modification is analyzed and a new class of unconstrained learning rules is derived. It is shown that the model neuron tends to…