Research Findings

Zhong, W., Zhou, W., Fan, Q., & Gao, Y. (2022). Dummy endogenous treatment effect estimation using high‐dimensional instrumental variables. Canadian Journal of Statistics, 50, 795-819.

We develop a two-stage approach to estimate the treatment effects of dummy endogenous variables using high-dimensional instrumental variables (IVs). In the first stage, instead of using a conventional linear reduced-form regression to approximate the optimal instrument, we propose a penalized logistic reduced-form model to accommodate both the binary nature of the endogenous treatment variable and the high dimensionality of the IVs. In the second stage, we replace the original treatment variable with its estimated propensity score and run a least-squares regression to obtain a penalized logistic regression instrumental variables estimator (LIVE). We show theoretically that the proposed LIVE is root-n consistent for the true treatment effect and asymptotically normal. Monte Carlo simulations demonstrate that LIVE is more efficient than existing IV estimators for endogenous treatment effects. In applications, we use LIVE to investigate whether the Olympic Games facilitate the host nation’s economic growth and whether home visits from teachers enhance students’ academic performance. The R functions for the proposed algorithms are provided in the R package naivereg.
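
A minimal Python sketch of the two-stage idea on simulated data may help fix ideas. It is an illustration only, not the authors' implementation (which is the R package naivereg), and every variable name is invented.

```python
# Stage 1: penalized logistic reduced form; Stage 2: least squares on the
# fitted propensity score. Simulated data; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV, LinearRegression

rng = np.random.default_rng(0)
n, p = 500, 200
Z = rng.normal(size=(n, p))                       # high-dimensional instruments
u = rng.normal(size=n)                            # unobserved confounder
d = (Z[:, 0] + 0.5 * Z[:, 1] + u + rng.normal(size=n) > 0).astype(float)
y = 1.0 * d + u + rng.normal(size=n)              # true treatment effect = 1

# Stage 1: L1-penalized logistic regression of the binary treatment on the IVs.
stage1 = LogisticRegressionCV(penalty="l1", solver="saga", Cs=10,
                              max_iter=5000).fit(Z, d)
phat = stage1.predict_proba(Z)[:, 1]              # estimated propensity score

# Stage 2: replace the endogenous treatment by its fitted propensity score
# and run least squares.
live = LinearRegression().fit(phat.reshape(-1, 1), y)
print("LIVE estimate of the treatment effect:", live.coef_[0])
```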


Ai, Q., He, L., Liu, S., & Xu, Z. (2022). ByPE-VAE: Bayesian pseudocoresets exemplar VAE. Neural Information Processing Systems (NeurIPS).

Recent studies show that advanced priors play a major role in deep generative models. Exemplar VAE, as a variant of VAE with an exemplar-based prior, has achieved impressive results. However, due to the nature of the model design, an exemplar-based model usually requires vast amounts of data to participate in training, which leads to huge computational complexity. To address this issue, we propose Bayesian Pseudocoresets Exemplar VAE (ByPE-VAE), a new variant of VAE with a prior based on a Bayesian pseudocoreset. The proposed prior is conditioned on a small-scale pseudocoreset rather than the whole dataset, reducing the computational cost and avoiding overfitting. Simultaneously, we obtain the optimal pseudocoreset via a stochastic optimization algorithm during VAE training, aiming to minimize the Kullback-Leibler divergence between the prior based on the pseudocoreset and that based on the whole dataset. Experimental results show that ByPE-VAE achieves competitive improvements over state-of-the-art VAEs in density estimation, representation learning, and generative data augmentation. In particular, on a basic VAE architecture, ByPE-VAE is up to 3 times faster than Exemplar VAE while largely retaining its performance. Code is available at https://github.com/Aiqz/ByPE-VAE.
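
The PyTorch fragment below is a heavily simplified sketch of the pseudocoreset objective described above, assuming, purely for illustration, that the exemplar prior is an equally weighted Gaussian mixture over encoded points; every name here is invented and this is not the authors' code.

```python
import math
import torch

def mixture_log_prob(z, centers, sigma):
    """Log density of an equally weighted isotropic Gaussian mixture."""
    diff = z.unsqueeze(1) - centers.unsqueeze(0)       # (B, M, dim)
    log_comp = (-0.5 * (diff ** 2).sum(-1) / sigma ** 2
                - 0.5 * z.size(-1) * math.log(2 * math.pi * sigma ** 2))
    return torch.logsumexp(log_comp, dim=1) - math.log(centers.size(0))

def kl_pseudo_vs_full(encode, U, X, sigma=1.0, n_samples=256):
    """Monte Carlo KL(prior from pseudocoreset U || prior from full data X),
    estimated with samples drawn from the pseudocoreset prior; optimizing U
    against this quantity is the spirit of the ByPE objective."""
    mu_U, mu_X = encode(U), encode(X)                  # encoded mixture centers
    idx = torch.randint(mu_U.size(0), (n_samples,))
    z = mu_U[idx] + sigma * torch.randn(n_samples, mu_U.size(1))
    return (mixture_log_prob(z, mu_U, sigma)
            - mixture_log_prob(z, mu_X, sigma)).mean()
```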


Chang, J., Chen, S. X., Tang, C. Y., & Wu, T. T. (2021). High-dimensional empirical likelihood inference. Biometrika, 108, 127-147.

High-dimensional statistical inference with general estimating equations is challenging and remains little explored. We study two problems in the area: confidence set estimation for multiple components of the model parameters, and model specification tests. First, we propose to construct a new set of estimating equations such that the impact from estimating the high-dimensional nuisance parameters becomes asymptotically negligible. The new construction enables us to estimate a valid confidence region by empirical likelihood ratio. Second, we propose a test statistic as the maximum of the marginal empirical likelihood ratios to quantify data evidence against the model specification. Our theory establishes the validity of the proposed empirical likelihood approaches, accommodating over-identification and exponentially growing data dimensionality. Numerical studies demonstrate promising performance and potential practical benefits of the new methods.
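
For context, the classical empirical likelihood construction that the paper extends (Owen, 1988; Qin & Lawless, 1994) is, for a parameter defined through estimating equations:

```latex
% Empirical likelihood ratio for a parameter theta defined by the estimating
% equations E{ g(X; theta) } = 0, with data X_1, ..., X_n:
\[
  R(\theta) \;=\; \max\Bigl\{ \prod_{i=1}^{n} n p_i \;:\;
      p_i \ge 0,\ \sum_{i=1}^{n} p_i = 1,\
      \sum_{i=1}^{n} p_i \, g(X_i; \theta) = 0 \Bigr\},
  \qquad \ell(\theta) \;=\; -2 \log R(\theta).
\]
```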


Zheng, X., Guo, B., He, J., & Chen, S. X. (2021). Effects of corona virus disease‐19 control measures on air quality in North China. Environmetrics, 32, e2673.

Corona virus disease-19 (COVID-19) has substantially reduced human activities and the associated anthropogenic emissions. This study quantifies the effects of COVID-19 control measures on six major air pollutants over 68 cities in North China by a Difference in Relative-Difference method that allows estimation of the COVID-19 effects while taking account of the general annual air quality trends, temporal and meteorological variations, and the Spring Festival effects. Significant COVID-19 effects on all six major air pollutants are found, with NO2 having the largest decline (−39.6%), followed by PM2.5 (−30.9%), O3 (−16.3%), PM10 (−14.3%), and CO (−13.9%), and the smallest decline in SO2 (−10.0%), which shows that air quality can be substantially improved by a large reduction in anthropogenic emissions. The heterogeneity of the effects among the six pollutants and across regions can be partly explained by coal consumption and industrial output data.
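
The abstract names the Difference in Relative-Difference estimator without stating it; purely as a stylized sketch of the idea, and not the paper's exact specification, one can write:

```latex
% Relative difference of pollutant concentration C for city c at time t,
% comparing 2020 against the average of the matching periods in earlier years:
\[
  \mathrm{RD}_{ct} \;=\;
  \frac{C^{2020}_{ct} - \bar{C}^{\mathrm{hist}}_{ct}}{\bar{C}^{\mathrm{hist}}_{ct}},
\]
% with the COVID-19 effect read off the change in this relative difference
% between the control-measure period and the pre-control period:
\[
  \widehat{\Delta}_{c} \;=\; \overline{\mathrm{RD}}_{c,\mathrm{post}}
                      \;-\; \overline{\mathrm{RD}}_{c,\mathrm{pre}}.
\]
```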


Zhong, W., Gao, Y., Zhou, W., & Fan, Q. (2021). Endogenous treatment effect estimation using high-dimensional instruments and double selection. Statistics & Probability Letters, 169, 108967.

We propose a double selection instrumental variable estimator for the endogenous treatment effects using both high-dimensional control variables and instrumental variables. It deals with the endogeneity of the treatment variable and reduces omitted variable bias due to imperfect model selection.
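
One plausible reading of the double selection idea, sketched in Python for illustration only; variable names are invented and this is not the paper's exact procedure:

```python
# Double selection sketch: lasso-select controls from both the outcome and
# treatment equations, take the union, then estimate the effect with a
# two-stage least-squares-style fit using the instruments.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

def double_selection_iv(y, d, X, Z):
    keep_y = LassoCV(cv=5).fit(X, y).coef_ != 0   # controls predicting the outcome
    keep_d = LassoCV(cv=5).fit(X, d).coef_ != 0   # controls predicting the treatment
    Xs = X[:, keep_y | keep_d]                    # union guards against omitted variables

    # First stage: fitted treatment from instruments plus selected controls.
    first = LassoCV(cv=5).fit(np.hstack([Z, Xs]), d)
    dhat = first.predict(np.hstack([Z, Xs]))

    # Second stage: least squares of the outcome on the fitted treatment and controls.
    fit = LinearRegression().fit(np.column_stack([dhat, Xs]), y)
    return fit.coef_[0]                           # estimated treatment effect
```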


Chen, X., Zhang, J., & Zhou, W. (2022). High-dimensional elliptical sliced inverse regression in non-Gaussian distributions. Journal of Business & Economic Statistics, 40, 1204-1215.

Sliced inverse regression (SIR) is the most widely used sufficient dimension reduction method due to its simplicity, generality and computational efficiency. However, when the distribution of the covariates deviates from the multivariate normal distribution, the estimation efficiency of SIR becomes rather low, and the SIR estimator may be inconsistent and misleading, especially in the high-dimensional setting. In this article, we propose a robust alternative to SIR, called elliptical sliced inverse regression (ESIR), to analyze high-dimensional, elliptically distributed data. Elliptically distributed data have wide applications, especially in finance and economics where the data are often heavy-tailed. To tackle heavy-tailed elliptically distributed covariates, we make novel use of the multivariate Kendall’s tau matrix in a generalized eigenvalue problem for sufficient dimension reduction. Methodologically, we present a practical algorithm for our method. Theoretically, we investigate the asymptotic behavior of the ESIR estimator in the high-dimensional setting. Extensive simulation results show that ESIR significantly improves the estimation efficiency in heavy-tailed scenarios compared with other robust SIR methods. Analysis of the Istanbul stock exchange dataset also demonstrates the effectiveness of the proposed method. Moreover, ESIR can easily be extended to other sufficient dimension reduction methods and applied to nonelliptical heavy-tailed distributions.
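
A rough numpy sketch of the two main ingredients, illustrative only and not the paper's exact estimator: the multivariate Kendall's tau matrix as a robust scatter estimate, combined with the usual SIR between-slice matrix through a generalized eigenvalue problem.

```python
import numpy as np
from scipy.linalg import eigh

def kendall_tau_matrix(X):
    """Multivariate Kendall's tau: average outer product of normalized
    pairwise differences (a robust scatter estimate)."""
    n, p = X.shape
    K = np.zeros((p, p))
    for i in range(n - 1):
        D = X[i] - X[i + 1:]
        U = D / np.clip(np.linalg.norm(D, axis=1, keepdims=True), 1e-12, None)
        K += U.T @ U
    return 2.0 * K / (n * (n - 1))

def esir_directions(X, y, n_slices=10, n_dirs=2, ridge=1e-8):
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    xbar = X.mean(axis=0)
    # Between-slice matrix, as in classical SIR.
    M = sum(len(s) / len(y) * np.outer(X[s].mean(0) - xbar, X[s].mean(0) - xbar)
            for s in slices)
    K = kendall_tau_matrix(X) + ridge * np.eye(X.shape[1])  # ridge for stability
    _, V = eigh(M, K)                   # generalized eigenproblem M v = lambda K v
    return V[:, ::-1][:, :n_dirs]       # leading directions
```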


Chang, J., Kolaczyk, E. D., & Yao, Q. (2020). Discussion of ‘Network cross-validation by edge sampling’. Biometrika, 107, 277-280.

We thank the authors for their new contribution to network modelling. Data reuse, encompassing methods such as bootstrapping and cross-validation, is an area that to date has largely resisted obvious and rapid development in the network context. One of the major reasons is that mimicking the original sampling mechanisms is challenging if not impossible. To avoid deleting edges and destroying some of the network structure, the resampling strategy proposed in Li et al. (2020), based on splitting node pairs rather than nodes, is therefore insightful and effective. Matrix completion is the key technique involved, with its use here providing a new perspective for network analysis.
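
A toy numpy illustration of the node-pair splitting idea being discussed (not from the paper; the rank and iteration scheme are arbitrary choices): hold out a random symmetric set of adjacency entries and fill them in with a low-rank SVD approximation.

```python
import numpy as np

def heldout_node_pairs(A, frac=0.1, rank=3, n_iter=30, seed=0):
    """Hold out a fraction of node pairs and complete them by iterated
    rank-k SVD (hard-impute style); compare A_hat[mask] with A[mask] for CV."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    mask = np.triu(rng.random((n, n)) < frac, 1)
    mask = mask | mask.T                            # symmetric hold-out set
    A_obs = A.astype(float)
    A_obs[mask] = A[~mask].mean()                   # initialize held-out entries
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(A_obs, full_matrices=False)
        A_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        A_obs[mask] = A_hat[mask]                   # re-impute and iterate
    return A_hat, mask
```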


Li, Q., Yu, G., & Liu, Y. (2020). A deep multimodal generative and fusion framework for class-imbalanced multimodal data. Multimedia Tools and Applications, 79, 25023-25050.

The purpose of multimodal classification is to integrate features from diverse information sources to make decisions. The interactions between different modalities are crucial to this task. However, common strategies in previous studies have been to either concatenate features from various sources into a single compound vector or input them separately into several different classifiers that are then assembled into a single robust classifier to generate the final prediction. Both of these approaches weaken or even ignore the interactions among different feature modalities. In addition, in the case of class-imbalanced data, multimodal classification becomes troublesome. In this study, we propose a deep multimodal generative and fusion framework for multimodal classification with class-imbalanced data. This framework consists of two modules: a deep multimodal generative adversarial network (DMGAN) and a deep multimodal hybrid fusion network (DMHFN). The DMGAN is used to handle the class imbalance problem. The DMHFN identifies fine-grained interactions and integrates different information sources for multimodal classification. Experiments on a faculty homepage dataset show the superiority of our framework compared to several state-of-the-art methods.
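
As a bare-bones sketch of the generative half of such a framework (all names invented; the discriminator and adversarial training loop are omitted), a generator can emit one pseudo feature vector per modality to augment the minority class:

```python
import torch
import torch.nn as nn

class FeatureGenerator(nn.Module):
    """GAN-style generator head: one pseudo feature vector per modality
    (e.g. text, image, layout) from shared noise; the outputs would be
    combined to describe a fake minority-class sample."""
    def __init__(self, noise_dim, feat_dims):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(noise_dim, 128), nn.ReLU(), nn.Linear(128, d))
            for d in feat_dims)

    def forward(self, z):
        return [head(z) for head in self.heads]

gen = FeatureGenerator(noise_dim=64, feat_dims=[300, 512, 32])
pseudo = gen(torch.randn(16, 64))   # 16 pseudo samples, one tensor per modality
```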


Yu, G., Li, Q., Wang, J., Zhang, D., & Liu, Y. (2020). A multimodal generative and fusion framework for recognizing faculty homepages. Information Sciences, 525, 205-220.

Multimodal data consist of several data modes, where each mode is a group of similar data sharing the same attributes. Recognizing faculty homepages is essentially a multimodal classification problem in which a target faculty homepage is determined from three different information sources, including text, images, and layout. Conventional strategies in previous studies have been either to concatenate features from various information sources into a compound vector or to input them separately into several different classifiers that are then assembled into a stronger classifier for the final prediction. However, both approaches ignore the connections among different feature sets. We argue that such relations are essential to enhance multimodal classification. In addition, recognizing faculty homepages is a class imbalance problem in which the total number of samples of the minority class is far smaller than the sample numbers of the other classes. In this study, we propose a multimodal generative and fusion framework for multimodal learning with the problems of imbalanced data and mutually dependent feature modes. Specifically, a multimodal generative adversarial network is first introduced to rebalance the dataset by generating pseudo features based on each mode and combining them to describe a fake sample. Then, a gated fusion network with gate and fusion mechanisms is presented to reduce the noise, improve the generalization ability, and capture the links among the different feature modes. Experiments on a faculty homepage dataset show the superiority of the proposed framework.
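
An illustrative PyTorch version of a gated fusion unit in this spirit (not the authors' code; dimensions are placeholders): a gate computed from the concatenated modality features weights each modality's transformed representation in the fused vector.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, dims, hidden):
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(d, hidden) for d in dims)
        self.gate = nn.Linear(sum(dims), len(dims))

    def forward(self, feats):                        # feats: list of (B, d_i) tensors
        # Gate weights over modalities, learned from all features jointly.
        g = torch.softmax(self.gate(torch.cat(feats, dim=1)), dim=1)  # (B, n_modes)
        h = torch.stack([torch.tanh(p(f)) for p, f in zip(self.proj, feats)], dim=1)
        return (g.unsqueeze(-1) * h).sum(dim=1)      # (B, hidden) fused representation

fuse = GatedFusion(dims=[300, 512, 32], hidden=128)  # text, image, layout features
out = fuse([torch.randn(4, 300), torch.randn(4, 512), torch.randn(4, 32)])
```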


Zhang, J., & Chen, X. (2020). Principal envelope model. Journal of Statistical Planning and Inference, 206, 249-262.

Principal component analysis (PCA) is widely used in various fields to reduce high-dimensional data sets to lower dimensions. Traditionally, the first few principal components that capture most of the variance in the data are thought to be important. Tipping and Bishop (1999) introduced probabilistic principal component analysis (PPCA), in which they assumed an isotropic error in a latent variable model. Motivated by a general error structure and incorporating the novel idea of “envelope” proposed by Cook et al. (2010), we construct principal envelope models (PEM) which demonstrate the possibility that any subset of the principal components could retain most of the sample’s information. The useful principal components can be found through maximum likelihood approaches. We also embed the PEM in a factor model setting to illustrate its reasonableness and validity. Numerical results indicate the potential of the proposed method.
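
For reference, the closed-form PPCA maximum likelihood fit of Tipping and Bishop (1999), which the PEM generalizes, has a well-known eigendecomposition form; a minimal numpy sketch (the loadings are unique only up to rotation):

```python
import numpy as np

def ppca_mle(X, q):
    """Fit x = W z + mu + eps, eps ~ N(0, sigma^2 I), by maximum likelihood."""
    Xc = X - X.mean(axis=0)
    S = np.cov(Xc, rowvar=False)                    # sample covariance
    evals, evecs = np.linalg.eigh(S)
    evals, evecs = evals[::-1], evecs[:, ::-1]      # sort eigenvalues descending
    sigma2 = evals[q:].mean()                       # average discarded variance
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return W, sigma2
```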
