Publications

(*) denotes equal contribution, (**) denotes equal advising

Preprints

[P.6] On Expert Estimation in Hierarchical Mixture of Experts: Beyond Softmax Gating Functions. Under review.
Huy Nguyen*, Xing Han*, Carl William Harris, Suchi Saria**, Nhat Ho**

[P.5] Revisiting Prefix-tuning: Statistical Benefits of Reparameterization among Prompts. Under review.
Minh Le*, Chau Nguyen*, Huy Nguyen*, Quyen Tran, Trung Le, Nhat Ho

[P.4] Statistical Advantages of Perturbing Cosine Router in Sparse Mixture of Experts. Under review.
Huy Nguyen, Pedram Akbarian*, Trang Pham*, Trang Nguyen*, Shujian Zhang, Nhat Ho

[P.3] Quadratic Gating Functions in Mixture of Experts: A Statistical Insight. Under review.
Pedram Akbarian*, Huy Nguyen*, Xing Han*, Nhat Ho

[P.2] Understanding Expert Structures on Minimax Parameter Estimation in Contaminated Mixture of Experts. Under review.
Fanqi Yan*, Huy Nguyen*, Dung Le*, Pedram Akbarian, Nhat Ho

[P.1] CompeteSMoE - Effective Training of Sparse Mixture of Experts via Competition. Under review.
Quang Pham, Giang Do, Huy Nguyen, TrungTin Nguyen, Chenghao Liu, Mina Sartipi, Binh T. Nguyen, Savitha Ramasamy, Xiaoli Li, Steven Hoi, Nhat Ho

Conference Publications

[C.16] Sigmoid Gating is More Sample Efficient than Softmax Gating in Mixture of Experts. Advances in NeurIPS, 2024.
Huy Nguyen, Nhat Ho**, Alessandro Rinaldo**

[C.15] FuseMoE: Mixture-of-Experts Transformers for Fleximodal Fusion. Advances in NeurIPS, 2024.
Xing Han, Huy Nguyen*, Carl Harris*, Nhat Ho**, Suchi Saria**

[C.14] Mixture of Experts Meets Prompt-Based Continual Learning. Advances in NeurIPS, 2024.
Minh Le, An Nguyen*, Huy Nguyen*, Trang Nguyen*, Trang Pham*, Linh Van Ngo, Nhat Ho

[C.13] On Least Square Estimation in Softmax Gating Mixture of Experts. Proceedings of the ICML, 2024.
Huy Nguyen, Nhat Ho**, Alessandro Rinaldo**

[C.12] Is Temperature Sample Efficient for Softmax Gaussian Mixture of Experts? Proceedings of the ICML, 2024.
Huy Nguyen, Pedram Akbarian, Nhat Ho

[C.11] A General Theory for Softmax Gating Multinomial Logistic Mixture of Experts. Proceedings of the ICML, 2024.
Huy Nguyen, Pedram Akbarian, TrungTin Nguyen, Nhat Ho

[C.10] Statistical Perspective of Top-K Sparse Softmax Gating Mixture of Experts. Proceedings of the ICLR, 2024.
Huy Nguyen, Pedram Akbarian, Fanqi Yan, Nhat Ho

[C.9] Towards Convergence Rates for Parameter Estimation in Gaussian-gated Mixture of Experts. Proceedings of the AISTATS, 2024.
Huy Nguyen*, TrungTin Nguyen*, Khai Nguyen, Nhat Ho

[C.8] On Parameter Estimation in Deviated Gaussian Mixture of Experts. Proceedings of the AISTATS, 2024.
Huy Nguyen, Khai Nguyen, Nhat Ho

[C.7] Demystifying Softmax Gating Function in Gaussian Mixture of Experts. Advances in NeurIPS, 2023 (Spotlight).
Huy Nguyen, TrungTin Nguyen, Nhat Ho

[C.6] Minimax Optimal Rate for Parameter Estimation in Multivariate Deviated Models. Advances in NeurIPS, 2023.
Dat Do*, Huy Nguyen*, Khai Nguyen, Nhat Ho

[C.5] Fast Approximation of the Generalized Sliced-Wasserstein Distance. Proceedings of the ICASSP, 2024.
Dung Le*, Huy Nguyen*, Khai Nguyen*, Trang Nguyen*, Nhat Ho

[C.4] Hierarchical Sliced Wasserstein Distance. Proceedings of the ICLR, 2023.
Khai Nguyen, Tongzheng Ren, Huy Nguyen, Litu Rout, Tan Nguyen, Nhat Ho

[C.3] Entropic Gromov-Wasserstein between Gaussian Distributions. Proceedings of the ICML, 2022.
Huy Nguyen*, Khang Le*, Dung Le*, Dat Do, Tung Pham, Nhat Ho

[C.2] On Multimarginal Partial Optimal Transport: Equivalent Forms and Computational Complexity. Proceedings of the AISTATS, 2022.
Huy Nguyen*, Khang Le*, Khai Nguyen, Tung Pham, Nhat Ho

[C.1] On Robust Optimal Transport: Computational Complexity and Barycenter Computation. Advances in NeurIPS, 2021.
Huy Nguyen*, Khang Le*, Quang Minh Nguyen, Tung Pham, Hung Bui, Nhat Ho

Journal Publications

[J.1] EPEM: Efficient Parameter Estimation for Multiple Class Monotone Missing Data. Information Sciences, Volume 567, August 2021, Pages 1-22.
Thu Nguyen, Duy H. M. Nguyen, Huy Nguyen, Binh T. Nguyen, Bruce A. Wade