Nonlinear Shrinkage

Many statistical applications require an estimate of a covariance matrix and/or of its inverse when the matrix dimension \(p\) is large compared to the sample size \(n\).

A cursory glance at the Marčenko and Pastur (1967) equation shows that linear shrinkage is the first-order approximation to a fundamentally nonlinear problem. How good is this approximation? Ledoit and Wolf (2004) are very clear about this: depending on the situation at hand, the improvement over the sample covariance matrix can be either gigantic or minuscule. When \(\frac{p}{n}\) is large and the population eigenvalues are close to one another, linear shrinkage captures most of the potential improvement over the sample covariance matrix. In the opposite case, that is, when \(\frac{p}{n}\) is small and the population eigenvalues are dispersed, linear shrinkage hardly improves at all over the sample covariance matrix.
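As a point of reference, the sketch below computes the Ledoit and Wolf (2004) linear shrinkage estimator with scikit-learn, which pulls the sample covariance matrix toward a scaled identity target with a single, data-driven intensity; the data and dimensions here are purely illustrative.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
n, p = 200, 100                      # illustrative sample size and dimension
X = rng.standard_normal((n, p))

# Linear shrinkage: a convex combination of the sample covariance matrix
# and a scaled identity target, with a single shrinkage intensity chosen
# to minimize expected Frobenius loss (Ledoit and Wolf, 2004).
lw = LedoitWolf().fit(X)
print(lw.shrinkage_)                 # estimated shrinkage intensity in [0, 1]
print(lw.covariance_.shape)          # (p, p) shrunk covariance estimate
```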

What can nonlinear shrinkage do?

The goal of covariance matrix estimation in this setting is to find estimators that outperform the sample covariance matrix, both in finite samples and asymptotically. For the purposes of asymptotic analysis, to reflect the fact that \(p\) is large compared to \(n\), one has to employ large-dimensional asymptotics, where \(p\) is allowed to go to infinity together with \(n\). Under this framework, nonlinear shrinkage retains the sample eigenvectors but replaces each sample eigenvalue with an individually shrunk value, so the amount of shrinkage applied is a nonlinear function of the eigenvalue rather than a single common intensity.
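As a concrete illustration of this framework, the sketch below shows the rotation-equivariant form that nonlinear shrinkage estimators take: keep the sample eigenvectors and shrink each sample eigenvalue individually. For transparency it computes the optimal shrunk eigenvalues from the true covariance matrix, making it an infeasible "oracle" benchmark rather than the Ledoit-Wolf estimator itself; the function name and simulation parameters are illustrative.

```python
import numpy as np

def oracle_nonlinear_shrinkage(X, sigma):
    """Finite-sample optimal rotation-equivariant estimator (a benchmark).

    Keeps the eigenvectors u_k of the sample covariance matrix and
    replaces each sample eigenvalue with d_k = u_k' sigma u_k, which
    minimizes Frobenius loss within the rotation-equivariant class.
    It requires the true covariance matrix `sigma`, so it is an oracle,
    not a feasible estimator; Ledoit and Wolf (2012) instead recover
    the d_k from the data via the Marchenko-Pastur equation.
    """
    n, _ = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                           # sample covariance matrix
    _, U = np.linalg.eigh(S)                    # sample eigenvectors (columns)
    d = np.einsum('ik,ij,jk->k', U, sigma, U)   # d_k = u_k' sigma u_k
    return U @ np.diag(d) @ U.T

# Illustrative usage with a dispersed population spectrum.
rng = np.random.default_rng(1)
n, p = 100, 50
sigma = np.diag(np.linspace(1.0, 10.0, p))
X = rng.multivariate_normal(np.zeros(p), sigma, size=n)
sigma_hat = oracle_nonlinear_shrinkage(X, sigma)
```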

Further Discussion

The estimation of the covariance matrix can be divided into the cases \(p < n\) and \(p > n\).
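One reason the two cases differ sharply is that when \(p \geq n\) the sample covariance matrix is singular, so its inverse does not even exist. A quick check (dimensions illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 100                       # more variables than observations
X = rng.standard_normal((n, p))
S = np.cov(X, rowvar=False)          # p x p sample covariance matrix

# After centering, the sample covariance matrix has rank at most n - 1,
# so for p >= n it is singular and cannot be inverted.
print(np.linalg.matrix_rank(S))      # prints 49, far below p = 100
```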

More details can be found in Spectrum estimation: A Unified Framework for Covariance Matrix Estimation and PCA in Large Dimensions.

References

V. A. Marčenko, L. A. Pastur, Distribution of eigenvalues for some sets of random matrices, Math. USSR-Sb. 1 (4) (1967) 457–483.

O. Ledoit, M. Wolf, A well-conditioned estimator for large-dimensional covariance matrices, J. Multivariate Anal. 88 (2) (2004) 365–411.

O. Ledoit, M. Wolf, Nonlinear shrinkage estimation of large-dimensional covariance matrices, Ann. Statist. 40 (2) (2012) 1024–1060.
