Representative Work Series 1: New Methods for High-Dimensional Covariate Screening and Data Dimensionality Reduction

Chang, Tang & Wu (2013, AoS; 2016, AoS) first introduced a method for screening ultra-high-dimensional covariates via marginal hypothesis testing, which substantially weakens the data and model assumptions imposed by existing methods. They proposed using the empirical likelihood ratio evaluated at 0 as a test statistic for whether the marginal contribution of each covariate is zero, thereby avoiding the identification problems that can arise when marginal contributions are estimated directly; moreover, owing to the self-normalization property of empirical likelihood, the test statistic is robust to heteroscedasticity. Chang, Guo & Yao (2015, JoE) proposed a fast factor-based dimensionality reduction method built on the spectral decomposition of a positive-definite matrix, which avoids solving ultra-high-dimensional optimization problems directly and removes the computational bottleneck faced by traditional methods; even when the dimension of the observed data reaches several thousand, the reduction can be completed in a few seconds on a personal laptop. Chang, Guo & Yao (2018, AoS) proposed a dimensionality reduction method based on linear transformations: the observed data are transformed into a new series whose components can be partitioned into groups that are uncorrelated with one another, which effectively avoids the two main problems of “overparameterization” and “model non-identifiability” that arise in direct modeling. In practice, even when such a linear transformation does not exist, forcing the dimensionality reduction and then modeling the resulting groups still significantly improves predictive accuracy. Chang, He, Yang & Yao (2023, JRSSB) proposed a dimensionality reduction method for complex matrix-valued time series via tensor CP decomposition, together with a fast algorithm that requires no iteration, improving on the iterative algorithms that the literature typically relies on for CP decomposition. The authors have implemented these methods in the R package PCA4TS, which is freely available for others to use.
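
As an illustration of the screening statistic, written in our own notation for a simplified setting with a response Y and covariates X_1, …, X_p (the papers treat more general marginal estimating functions, so this is a sketch of the idea rather than the exact formulation), the statistic for the j-th covariate is the empirical likelihood ratio evaluated at 0:

\[
\ell_j(0) \;=\; -2\,\log\left\{\,\max\left\{\prod_{i=1}^{n} n\,w_i \;:\; w_i \ge 0,\;\; \sum_{i=1}^{n} w_i = 1,\;\; \sum_{i=1}^{n} w_i\, X_{ij}\, Y_i = 0\right\}\right\}.
\]

Large values of ℓ_j(0) indicate that the marginal estimating function X_{ij}Y_i cannot plausibly have mean zero, so covariates are ranked by ℓ_j(0) and those with the largest values are retained. Because empirical likelihood is self-normalizing, no coordinate-wise variance estimate is needed, which is why the statistic is insensitive to heteroscedasticity.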

Representative Work Series 2: A Unified Methodological System for Estimation and Inference in Ultra-High-Dimensional Models

Chang, Chen & Chen (2015, JoE) provided methods for estimation and inference with estimating equations in which both the number of equations 𝑟 and the number of parameters 𝑝 diverge, based on empirical likelihood. They showed that, like other estimation methods such as the generalized method of moments (GMM), empirical likelihood works only when 𝑟 and 𝑝 diverge at a very slow rate relative to the sample size 𝑛. Chang, Tang & Wu (2018, AoS) solved the parameter estimation problem for estimating equations with 𝑟 and 𝑝 much larger than 𝑛 by incorporating penalties on both the parameters and the Lagrange multipliers in the empirical likelihood objective, thereby establishing a unified methodology for ultra-high-dimensional estimating equations. Chang, Chen, Tang & Wu (2021, BKA) developed a unified, rotation-based statistical inference method that does not rely on bias correction; it systematically addresses inference for ultra-high-dimensional estimating equations and provides, for the first time, an over-identification test in this setting. Chang, Shi & Zhang (2022, JBES) systematically studied parameter estimation and inference in high-dimensional moment-restriction models when some moment conditions may be misspecified, proposing a penalized empirical likelihood method and establishing corresponding criteria for identifying the valid moment conditions.
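
The doubly penalized construction can be sketched schematically as follows (notation ours; the specific penalty functions and tuning parameters are those of the paper and are not reproduced here): the estimator solves a min–max problem in which both the Lagrange multiplier λ and the parameter θ are penalized,

\[
\widehat{\theta} \;=\; \arg\min_{\theta}\,\max_{\lambda}\left[\frac{1}{n}\sum_{i=1}^{n}\log\!\left\{1+\lambda^{\top} g(Z_i;\theta)\right\} \;-\; \sum_{j=1}^{r} P_{\tau}\!\left(|\lambda_j|\right) \;+\; \sum_{k=1}^{p} P_{\nu}\!\left(|\theta_k|\right)\right],
\]

where g(Z_i; θ) collects the 𝑟 estimating functions and P_τ, P_ν are sparsity-inducing penalties. Penalizing λ effectively selects the informative estimating equations, while penalizing θ yields a sparse parameter estimate; together, this is what allows 𝑟 and 𝑝 to greatly exceed 𝑛.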

Representative Work Series 3: A New Theory of Statistical Inference Based on Gaussian Approximation

Chang & Hall (2015, BKA) provided the first systematic study of this class of methods and established their theoretical properties for bias correction and confidence interval construction. Chang, Yao & Zhou (2017, BKA) solved, for the first time, the white noise testing problem for ultra-high-dimensional time series. Chang, Jiang & Shao (2023, JoE) extended the method of Chang, Yao & Zhou (2017, BKA) and solved, for the first time, the more general problem of ultra-high-dimensional martingale difference testing. Chang, Zheng, Zhou & Zhou (2017, Biometrics) and Chang, Zhou, Zhou & Wang (2017, Biometrics) provided ultra-high-dimensional mean and covariance testing methods that remain valid under arbitrary correlation structures among the components of the data. Chang, Qiu, Yao & Zou (2018, JoE) developed a method for constructing confidence regions for ultra-high-dimensional precision matrices and used it to study how the connectivity between different sectors of the U.S. stock market changed before and after the 2008 financial crisis. Chang, He, Kang & Wu (2024, JASA) proposed a fast, parametric-bootstrap-based inference method for the dependence structure in multimodal brain imaging data that requires no assumptions on the correlations between different brain regions. Applying this method to the multi-task fMRI data from the Human Connectome Project, they obtained new findings in brain science.
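
A common recipe behind several of these tests is a max-type statistic whose null critical value is obtained via Gaussian approximation combined with a multiplier or parametric bootstrap. The Python sketch below illustrates that general recipe in the simplest case, testing whether a high-dimensional mean vector is zero; it is a simplified illustration of the technique, with all function names and tuning choices ours, not the procedure of any single paper cited above.

```python
import numpy as np

def max_type_mean_test(X, n_boot=2000, seed=0):
    """Max-type test of H0: E[X_i] = 0 for an n x p data matrix X.

    The critical value comes from a Gaussian multiplier bootstrap, which
    approximates the null distribution of the maximum statistic without
    estimating the full p x p covariance structure.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    mean = X.mean(axis=0)
    sd = X.std(axis=0, ddof=1)
    # Observed statistic: largest standardized coordinate-wise mean.
    T_obs = np.max(np.abs(np.sqrt(n) * mean / sd))
    # Multiplier bootstrap: perturb centered, standardized rows
    # by i.i.d. N(0, 1) weights to mimic the null distribution.
    Xc = (X - mean) / sd
    T_boot = np.empty(n_boot)
    for b in range(n_boot):
        e = rng.standard_normal(n)
        T_boot[b] = np.max(np.abs(Xc.T @ e) / np.sqrt(n))
    p_value = np.mean(T_boot >= T_obs)
    return T_obs, p_value

if __name__ == "__main__":
    # Example: 50 observations of a 500-dimensional vector under the null.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((50, 500))
    stat, pval = max_type_mean_test(X)
    print(f"statistic = {stat:.3f}, bootstrap p-value = {pval:.3f}")
```

In broad terms, the white-noise, martingale-difference, covariance, and precision-matrix procedures above follow the same template, with the coordinate-wise means replaced by the relevant sample quantities and with bootstrap schemes tailored to the dependence structure of the data.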