Regularised maximum-likelihood inference of mixture of experts for regression and clustering
Variable selection is fundamental to high-dimensional statistical modeling and is particularly challenging in unsupervised settings, including mixture models. We propose a regularised maximum-likelihood inference of the Mixture of Experts model which can deal with potentially correlated features and encourages sparse models in potentially high-dimensional scenarios. We develop a hybrid Expectation-Majorization-Maximization (EM/MM) algorithm for model fitting. Unlike state-of-the-art regularised ML inference [1, 2], the proposed modeling does not require an approximation of the regularisation. The proposed algorithm yields sparse solutions automatically, without thresholding, and includes coordinate-descent updates that avoid matrix inversion. An experimental study shows the capability of the algorithm to retrieve sparse solutions and to fit models in model-based clustering of regression data.
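The abstract notes that coordinate-descent updates can produce exact zeros without a separate thresholding step and without matrix inversion. This is a general property of soft-thresholding-based coordinate descent; the sketch below illustrates it on a single lasso-penalized linear regression. It is an illustration only, not the paper's EM/MM algorithm for mixtures of experts: the function names, penalty value, and iteration count are all assumptions.

```python
import numpy as np

def soft_threshold(z, gamma):
    # Soft-thresholding operator: returns exact zeros when |z| <= gamma,
    # so sparsity emerges without any post-hoc cutoff.
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for min_b 0.5*||y - X b||^2 + lam*||b||_1.

    Each coordinate update is closed-form (a scalar soft-threshold),
    so no matrix inversion is ever needed.
    """
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)  # precomputed squared column norms
    r = y - X @ b                  # current residual
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]    # remove coordinate j's contribution
            rho = X[:, j] @ r      # correlation with the partial residual
            b[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= X[:, j] * b[j]    # restore with the updated coefficient
    return b
```

In a regularised mixture setting, updates of this form appear inside the M-step, weighted by the posterior cluster responsibilities; the key point carried over from the sketch is that zeros arise exactly from the soft-threshold, not from rounding small coefficients.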
Main authors: Huỳnh, Bảo Tuyên; Faicel, Chamroukhi
Format: Conference paper
Language: English
Published: Bruges (Belgium), 2023
Subjects: mixture of experts; regression; clustering
Online access: https://scholar.dlu.edu.vn/handle/123456789/2334
Holding library: Thư viện Trường Đại học Đà Lạt (Dalat University Library)
id: oai:scholar.dlu.edu.vn:123456789-2334
record_format: dspace
Record created: 2023-05-19; last updated: 2023-06-14
Date of publication: 2018
Note: Paper published in international conference proceedings (with ISBN)
Source: ESANN 2018 proceedings, European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. Bruges (Belgium).

References:
[1] A. Khalili. New estimation and feature selection methods in mixture-of-experts models. Canadian Journal of Statistics, 38(4):519–539, 2010.
[2] J. Fan and R. Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456):1348–1360, 2001.
[3] R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton. Adaptive mixtures of local experts. Neural Computation, 3(1):79–87, 1991.
[4] S. E. Yuksel, J. N. W., and P. D. Gader. Twenty years of mixture of experts. IEEE Transactions on Neural Networks and Learning Systems, 23(8):1177–1193, 2012.
[5] A. Khalili and J. Chen. Variable selection in finite mixture of regression models. Journal of the American Statistical Association, 102(479):1025–1038, 2007.
[6] N. Städler, P. Bühlmann, and S. van de Geer. ℓ1-penalization for mixture regression models. Test, 19(2):209–256, 2010.
[7] C. Meynet. An ℓ1-oracle inequality for the lasso in finite mixture Gaussian regression models. ESAIM: Probability and Statistics, 17:650–671, 2013.
[8] E. Devijver. An ℓ1-oracle inequality for the lasso in multivariate finite mixture of multivariate Gaussian regression models. ESAIM: Probability and Statistics, 19:649–670, 2015.
[9] F. K. Hui, D. I. Warton, S. D. Foster, et al. Multi-species distribution modeling using penalized mixture of regressions. The Annals of Applied Statistics, 9(2):866–882, 2015.
[10] K. Lange. Optimization (2nd edition). Springer, 2013.
[11] L. R. Lloyd-Jones, H. D. Nguyen, and G. J. McLachlan. A globally convergent algorithm for lasso-penalized mixture of linear regression models. arXiv:1603.08326, 2016.
[12] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, pages 1–38, 1977.
[13] T. Hastie, R. Tibshirani, and M. Wainwright. Statistical Learning with Sparsity: The Lasso and Generalizations. Taylor & Francis, 2015.
[14] D. R. Hunter and R. Li. Variable selection using MM algorithms. Annals of Statistics, 33(4):1617, 2005.
institution: Thư viện Trường Đại học Đà Lạt
collection: Thư viện số (Digital Library)
language: English
topic: mixture of experts; regression; clustering
author: Huỳnh, Bảo Tuyên; Faicel, Chamroukhi
author_sort: Huỳnh, Bảo Tuyên
title: Regularised maximum-likelihood inference of mixture of experts for regression and clustering
publisher: Bruges (Belgium)
publishDate: 2023
url: https://scholar.dlu.edu.vn/handle/123456789/2334
_version_: 1778233853344743424