Please use this identifier to cite or link to this item: https://scholar.dlu.edu.vn/handle/123456789/2336
Title: Estimation and Feature Selection in High-Dimensional Mixtures-of-Experts Models
Authors: Huỳnh, Bảo Tuyên 
Keywords: Mixture models;Mixture of Experts;Regularized Estimation;Feature Selection;Lasso;L1-regularization;Sparsity;EM algorithm;MM Algorithm;Proximal-Newton;Coordinate Ascent;Clustering;Classification;Regression;Prediction
Issue Date: 2019
Place of publication: Caen, France
Abstract: 
The statistical analysis of heterogeneous, high-dimensional data is a challenging problem, from both the modeling and the inference points of view, especially in today's era of big data. This calls for new strategies in advanced analyses, ranging from density estimation to prediction, as well as the unsupervised classification, of many kinds of such data with complex distributions. Mixture models are known to be very successful in modeling heterogeneity in data across many statistical data science problems, including density estimation and clustering; their elegant Mixtures-of-Experts (MoE) variety strengthens the link with supervised learning and thus further addresses prediction from heterogeneous regression-type data, as well as classification. In a high-dimensional scenario, particularly for data arising from a heterogeneous population, using such MoE models requires addressing new modeling and estimation questions, since state-of-the-art estimation methodologies are limited.
This thesis deals with the modeling and estimation of high-dimensional MoE models, towards effective density estimation, prediction, and clustering of such heterogeneous, high-dimensional data. We propose new strategies based on regularized maximum-likelihood estimation (MLE) of MoE models to overcome the limitations of standard methods, including MLE with Expectation-Maximization (EM) algorithms, and to simultaneously perform feature selection, so that sparse models are encouraged in this high-dimensional setting. We first introduce a mixture-of-experts parameter estimation and variable selection methodology, based on L1 (lasso) regularization and the EM framework, for regression and clustering in high-dimensional contexts. We then extend the method to regularized MoE models for discrete data, including classification. We develop efficient algorithms to maximize the proposed L1-penalized observed-data log-likelihood function. The proposed strategies enjoy monotone maximization of the optimized criterion and, unlike previous approaches, do not rely on approximations of the penalty functions, avoid matrix inversion, and exploit the efficiency of the coordinate ascent algorithm, particularly within the proximal Newton-based approach.
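The coordinate ascent updates mentioned in the abstract rest on a classical building block: cyclic coordinate descent with soft-thresholding for an L1-penalized least-squares problem, of the kind solved within each M-step for the expert regression coefficients. The following is a minimal, self-contained sketch of that building block only, not of the thesis's full EM/proximal-Newton algorithm; function and variable names are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, gamma):
    # Soft-thresholding operator: the closed-form solution of the
    # one-dimensional lasso subproblem min_b 0.5*(b - z)^2 + gamma*|b|.
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_coordinate_descent(X, y, lam, n_iter=100):
    # Cyclic coordinate descent for
    #   min_beta (1/2n) * ||y - X beta||^2 + lam * ||beta||_1,
    # a simplified stand-in for the penalized M-step of one expert.
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n       # per-coordinate curvature
    resid = y - X @ beta                    # current residual
    for _ in range(n_iter):
        for j in range(p):
            # Correlation of feature j with the partial residual
            # (residual with coordinate j's contribution added back).
            rho = X[:, j] @ (resid + X[:, j] * beta[j]) / n
            new_bj = soft_threshold(rho, lam) / col_sq[j]
            resid += X[:, j] * (beta[j] - new_bj)  # keep residual in sync
            beta[j] = new_bj
    return beta
```

Each coordinate update is available in closed form, which is what lets the approach avoid both matrix inversion and smooth approximations of the L1 penalty: a sufficiently large penalty simply sets coefficients exactly to zero.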
URI: https://scholar.dlu.edu.vn/handle/123456789/2336
Starting Date: 2016
Completion Date: 2019
Field: Natural sciences
Type: Thesis/Dissertation
Appears in Collections: Research projects (Faculty of Mathematics and Informatics)

