Sufficient dimension reduction

In statistics, sufficient dimension reduction (SDR) is a paradigm for analyzing data that combines the ideas of dimension reduction with the concept of sufficiency. Dimension reduction has long been a primary goal of regression analysis. Given a response variable y and a p-dimensional predictor vector x, regression analysis aims to study the distribution of y | x, the conditional distribution of y given x. A dimension reduction is a function R(x) that maps x to a subset of ℝ^k, k < p, thereby reducing the dimension of x.[1] For example, R(x) may be one or more linear combinations of x. A dimension reduction R(x) is said to be sufficient if the distribution of y | R(x) is the same as that of y | x. In other words, no information about the regression is lost in reducing the dimension of x if the reduction is sufficient.[1]

Graphical motivation

In a regression setting, it is often useful to summarize the distribution of y | x graphically. For instance, one may consider a scatterplot of y versus one or more of the predictors or a linear combination of the predictors. A scatterplot that contains all available regression information is called a sufficient summary plot. When x is high-dimensional, particularly when p ≥ 3, it becomes increasingly challenging to construct and visually interpret sufficient summary plots without reducing the data. Even three-dimensional scatterplots must be viewed via a computer program, and the third dimension can only be visualized by rotating the coordinate axes. However, if there exists a sufficient dimension reduction R(x) with small enough dimension, a sufficient summary plot of y versus R(x) may be constructed and visually interpreted with relative ease. Hence sufficient dimension reduction allows for graphical intuition about the distribution of y | x, which might not have otherwise been available for high-dimensional data. Most graphical methodology focuses primarily on dimension reduction involving linear combinations of x. The rest of this article deals only with such reductions.

Dimension reduction subspace

Suppose R(x) = Aᵀx is a sufficient dimension reduction, where A is a p × k matrix with rank k ≤ p. Then the regression information for y | x can be inferred by studying the distribution of y | Aᵀx, and the plot of y versus Aᵀx is a sufficient summary plot. Without loss of generality, only the space spanned by the columns of A need be considered. Let the columns of η form a basis for the column space of A, and let the space spanned by those columns be denoted by 𝒮(η). It follows from the definition of a sufficient dimension reduction that

F_{y|x} = F_{y|ηᵀx},

where F denotes the appropriate distribution function. Another way to express this property is

y ⫫ x | ηᵀx,

or y is conditionally independent of x, given ηᵀx. Then the subspace 𝒮(η) is defined to be a dimension reduction subspace (DRS).[2]
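
The defining property can be checked numerically on synthetic data. The sketch below (all names, the link function, and the nearest-neighbour predictor are illustrative choices, not from the cited literature) generates y depending on x only through ηᵀx, so that 𝒮(η) is a DRS, and confirms that predicting y from the one-dimensional reduction ηᵀx is at least as accurate as predicting from the full five-dimensional x:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 1000, 5
x = rng.normal(size=(n, p))
eta = np.array([1.0, 1.0, 0.0, 0.0, 0.0]) / np.sqrt(2)
# y depends on x only through eta^T x, so span(eta) is a DRS
y = np.sin(x @ eta) + 0.1 * rng.normal(size=n)

def knn_mse(features, y, k=10):
    """Leave-one-out k-nearest-neighbour regression MSE."""
    f = np.atleast_2d(features.T).T          # ensure shape (n, q)
    d = np.linalg.norm(f[:, None, :] - f[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)              # exclude each point from its own neighbours
    nbrs = np.argsort(d, axis=1)[:, :k]
    pred = y[nbrs].mean(axis=1)
    return float(np.mean((pred - y) ** 2))

mse_full = knn_mse(x, y)            # predict from all five coordinates of x
mse_reduced = knn_mse(x @ eta, y)   # predict from the sufficient reduction only
```

Because no regression information is lost, the reduced predictor suffers no penalty; in fact it typically predicts better, since one-dimensional smoothing avoids the curse of dimensionality.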

Structural dimensionality

For a regression y | x, the structural dimension, d, is the smallest number of distinct linear combinations of x necessary to preserve the conditional distribution of y | x. In other words, the smallest dimension reduction that is still sufficient maps x to a subset of ℝ^d. The corresponding DRS will be d-dimensional.[2]

Minimum dimension reduction subspace

A subspace 𝒮 is said to be a minimum DRS for y | x if it is a DRS and its dimension is less than or equal to that of all other DRSs for y | x. A minimum DRS 𝒮 is not necessarily unique, but its dimension is equal to the structural dimension d of y | x, by definition.[2] If 𝒮 has basis η and is a minimum DRS, then a plot of y versus ηᵀx is a minimal sufficient summary plot, and it is (d + 1)-dimensional.

Central subspace

If a subspace 𝒮 is a DRS for y | x, and if 𝒮 ⊆ 𝒮_drs for all other DRSs 𝒮_drs, then it is a central dimension reduction subspace, or simply a central subspace, and it is denoted by 𝒮_{y|x}. In other words, a central subspace for y | x exists if and only if the intersection ∩ 𝒮_drs of all dimension reduction subspaces is itself a dimension reduction subspace, in which case that intersection is the central subspace 𝒮_{y|x}.[2] The central subspace 𝒮_{y|x} does not necessarily exist, because the intersection ∩ 𝒮_drs is not necessarily a DRS. However, if 𝒮_{y|x} does exist, then it is also the unique minimum dimension reduction subspace.[2]

Existence of the central subspace

While the existence of the central subspace 𝒮_{y|x} is not guaranteed in every regression situation, there are some rather broad conditions under which its existence follows directly. For example, consider the following proposition from Cook (1998):

Let 𝒮₁ and 𝒮₂ be dimension reduction subspaces for y | x. If x has density f(a) > 0 for all a ∈ Ω_x and f(a) = 0 everywhere else, where Ω_x is convex, then the intersection 𝒮₁ ∩ 𝒮₂ is also a dimension reduction subspace.

It follows from this proposition that the central subspace 𝒮_{y|x} exists for such x.[2]

Methods for dimension reduction

There are many existing methods for dimension reduction, both graphical and numeric. For example, sliced inverse regression (SIR) and sliced average variance estimation (SAVE) were introduced in the 1990s and continue to be widely used.[3] Although SIR was originally designed to estimate an effective dimension-reducing subspace, it is now understood to estimate only the central subspace, which is generally different. More recent methods for dimension reduction include likelihood-based sufficient dimension reduction,[4] estimation of the central subspace based on the inverse third moment (or kth moment),[5] estimation of the central solution space,[6] graphical regression,[2] the envelope model, and the principal support vector machine.[7] For more details on these and other methods, consult the statistical literature. Principal component analysis (PCA) and similar methods for dimension reduction are not based on the sufficiency principle.
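
To give a sense of how such a numeric method works, here is a minimal sketch of the SIR procedure (the function name, slicing scheme, and parameter defaults are illustrative choices, not a reference implementation of Li's estimator): standardize the predictors, slice the data on the response, average the standardized predictors within each slice, and take leading eigenvectors of the covariance of those slice means.

```python
import numpy as np

def sir(x, y, n_slices=10, d=1):
    """Sliced inverse regression: estimate a d-dimensional basis for the
    central subspace from an (n, p) predictor matrix x and response y."""
    n, p = x.shape
    # Standardize the predictors: z = Sigma^{-1/2} (x - mean)
    xc = x - x.mean(axis=0)
    cov = xc.T @ xc / n
    evals, evecs = np.linalg.eigh(cov)
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    z = xc @ inv_sqrt
    # Slice on the ordered response and average z within each slice
    order = np.argsort(y)
    m = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        mean_z = z[idx].mean(axis=0)
        m += (len(idx) / n) * np.outer(mean_z, mean_z)
    # Leading eigenvectors of the slice-mean covariance, mapped back to x scale
    _, vecs = np.linalg.eigh(m)
    return inv_sqrt @ vecs[:, -d:]
```

On data where y depends on x through a single linear combination, the span of the returned column is a consistent estimate of the central subspace (under the usual linearity condition on the predictor distribution).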

Example: linear regression

Consider the regression model

y = α + βᵀx + ε, where ε ⫫ x.

Note that the distribution of y | x is the same as the distribution of y | βᵀx. Hence, the span of β is a dimension reduction subspace. Also, βᵀx is 1-dimensional (unless β = 0), so the structural dimension of this regression is d = 1. The OLS estimate β̂ of β is consistent, and so the span of β̂ is a consistent estimator of 𝒮_{y|x}. The plot of y versus β̂ᵀx is a sufficient summary plot for this regression.
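
This example can be reproduced numerically. The sketch below (coefficients and sample size are made up for illustration) estimates (α, β) by ordinary least squares and forms the one-dimensional reduction β̂ᵀx:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=(n, 4))
beta = np.array([2.0, -1.0, 0.5, 0.0])          # true coefficients (illustrative)
y = 1.0 + x @ beta + 0.2 * rng.normal(size=n)   # y = alpha + beta^T x + eps

# OLS estimate of (alpha, beta) via least squares
design = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
beta_hat = coef[1:]

# The sufficient reduction: a single linear combination of the predictors.
# A scatterplot of y versus r would be a sufficient summary plot.
r = x @ beta_hat
```

Since β̂ is consistent, the estimated direction converges to span(β), and plotting y against r recovers all the regression information in this model.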

Notes

  1. Cook, R.D. and Adragni, K.P. (2009) "Sufficient Dimension Reduction and Prediction in Regression", Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 367(1906): 4385–4405
  2. Cook, R.D. (1998) Regression Graphics: Ideas for Studying Regressions Through Graphics, Wiley. ISBN 0471193658
  3. Li, K-C. (1991) "Sliced Inverse Regression for Dimension Reduction", Journal of the American Statistical Association, 86(414): 316–327
  4. Cook, R.D. and Forzani, L. (2009) "Likelihood-Based Sufficient Dimension Reduction", Journal of the American Statistical Association, 104(485): 197–208
  5. Yin, X. and Cook, R.D. (2003) "Estimating Central Subspaces via Inverse Third Moments", Biometrika, 90(1): 113–125
  6. Li, B. and Dong, Y. (2009) "Dimension Reduction for Nonelliptically Distributed Predictors", Annals of Statistics, 37(3): 1272–1298
  7. Li, B., Artemiou, A. and Li, L. (2011) "Principal Support Vector Machines for Linear and Nonlinear Sufficient Dimension Reduction", Annals of Statistics, 39(6): 3182–3210
