Publications
(*) denotes equal contribution, (**) denotes equal advising
Journal Submissions
[JS.2] Convergence Rates for Softmax Gating Mixture of Experts. Under review.
Huy Nguyen, Nhat Ho**, Alessandro Rinaldo**
[JS.1] On Expert Estimation in Hierarchical Mixture of Experts: Beyond Softmax Gating Functions. Under review.
Huy Nguyen*, Xing Han*, Carl William Harris, Suchi Saria**, Nhat Ho**
Conference Submissions
[CS.6] Sigmoid Self-Attention is Better than Softmax Self-Attention: A Mixture-of-Experts Perspective. Under review.
Huy Nguyen*, Fanqi Yan*, Pedram Akbarian, Nhat Ho**, Alessandro Rinaldo**
[CS.5] RepLoRA: Reparameterizing Low-rank Adaptation via the Perspective of Mixture of Experts. Under review.
Tuan Truong*, Chau Nguyen*, Huy Nguyen*, Minh Le, Trung Le, Nhat Ho
[CS.4] Adaptive Prompt: Unlocking the Power of Visual Prompt Tuning. Under review.
Minh Le*, Anh Nguyen*, Huy Nguyen, Chau Nguyen, Nhat Ho
[CS.3] On Zero-Initialized Attention: Optimal Prompt and Gating Factor Estimation. Under review.
Nghiem T. Diep*, Huy Nguyen*, Chau Nguyen*, Minh Le, Duy M. H. Nguyen, Daniel Sonntag, Mathias Niepert, Nhat Ho
[CS.2] Quadratic Gating Functions in Mixture of Experts: A Statistical Insight. Under review.
Pedram Akbarian*, Huy Nguyen*, Xing Han*, Nhat Ho
[CS.1] CompeteSMoE - Effective Training of Sparse Mixture of Experts via Competition. Under review.
Quang Pham, Giang Do, Huy Nguyen, TrungTin Nguyen, Chenghao Liu, Mina Sartipi, Binh T. Nguyen, Savitha Ramasamy, Xiaoli Li, Steven Hoi, Nhat Ho
Conference Publications
[C.19] Statistical Advantages of Perturbing Cosine Router in Sparse Mixture of Experts. Proceedings of ICLR, 2025.
Huy Nguyen, Pedram Akbarian*, Trang Pham*, Trang Nguyen*, Shujian Zhang, Nhat Ho
[C.18] Revisiting Prefix-tuning: Statistical Benefits of Reparameterization among Prompts. Proceedings of ICLR, 2025.
Minh Le*, Chau Nguyen*, Huy Nguyen*, Quyen Tran, Trung Le, Nhat Ho
[C.17] Understanding Expert Structures on Minimax Parameter Estimation in Contaminated Mixture of Experts. In AISTATS, 2025.
Fanqi Yan*, Huy Nguyen*, Dung Le*, Pedram Akbarian, Nhat Ho
[C.16] Sigmoid Gating is More Sample Efficient than Softmax Gating in Mixture of Experts. Advances in NeurIPS, 2024.
Huy Nguyen, Nhat Ho**, Alessandro Rinaldo**
[C.15] FuseMoE: Mixture-of-Experts Transformers for Fleximodal Fusion. Advances in NeurIPS, 2024.
Xing Han, Huy Nguyen*, Carl Harris*, Nhat Ho**, Suchi Saria**
[C.14] Mixture of Experts Meets Prompt-Based Continual Learning. Advances in NeurIPS, 2024.
Minh Le, An Nguyen*, Huy Nguyen*, Trang Nguyen*, Trang Pham*, Linh Van Ngo, Nhat Ho
[C.13] On Least Square Estimation in Softmax Gating Mixture of Experts. Proceedings of ICML, 2024.
Huy Nguyen, Nhat Ho**, Alessandro Rinaldo**
[C.12] Is Temperature Sample Efficient for Softmax Gaussian Mixture of Experts? Proceedings of ICML, 2024.
Huy Nguyen, Pedram Akbarian, Nhat Ho
[C.11] A General Theory for Softmax Gating Multinomial Logistic Mixture of Experts. Proceedings of ICML, 2024.
Huy Nguyen, Pedram Akbarian, TrungTin Nguyen, Nhat Ho
[C.10] Statistical Perspective of Top-K Sparse Softmax Gating Mixture of Experts. Proceedings of ICLR, 2024.
Huy Nguyen, Pedram Akbarian, Fanqi Yan, Nhat Ho
[C.9] Towards Convergence Rates for Parameter Estimation in Gaussian-gated Mixture of Experts. In AISTATS, 2024.
Huy Nguyen*, TrungTin Nguyen*, Khai Nguyen, Nhat Ho
[C.8] On Parameter Estimation in Deviated Gaussian Mixture of Experts. In AISTATS, 2024.
Huy Nguyen, Khai Nguyen, Nhat Ho
[C.7] Demystifying Softmax Gating Function in Gaussian Mixture of Experts. Advances in NeurIPS, 2023 (Spotlight).
Huy Nguyen, TrungTin Nguyen, Nhat Ho
[C.6] Minimax Optimal Rate for Parameter Estimation in Multivariate Deviated Models. Advances in NeurIPS, 2023.
Dat Do*, Huy Nguyen*, Khai Nguyen, Nhat Ho
[C.5] Fast Approximation of the Generalized Sliced-Wasserstein Distance. In ICASSP, 2024.
Dung Le*, Huy Nguyen*, Khai Nguyen*, Trang Nguyen*, Nhat Ho
[C.4] Hierarchical Sliced Wasserstein Distance. Proceedings of ICLR, 2023.
Khai Nguyen, Tongzheng Ren, Huy Nguyen, Litu Rout, Tan Nguyen, Nhat Ho
[C.3] Entropic Gromov-Wasserstein between Gaussian Distributions. Proceedings of ICML, 2022.
Huy Nguyen*, Khang Le*, Dung Le*, Dat Do, Tung Pham, Nhat Ho
[C.2] On Multimarginal Partial Optimal Transport: Equivalent Forms and Computational Complexity. In AISTATS, 2022.
Huy Nguyen*, Khang Le*, Khai Nguyen, Tung Pham, Nhat Ho
[C.1] On Robust Optimal Transport: Computational Complexity and Barycenter Computation. Advances in NeurIPS, 2021.
Huy Nguyen*, Khang Le*, Quang Minh Nguyen, Tung Pham, Hung Bui, Nhat Ho
Journal Publications
[J.1] EPEM: Efficient Parameter Estimation for Multiple Class Monotone Missing Data. Information Sciences, Volume 567, August 2021, Pages 1-22.
Thu Nguyen, Duy H. M. Nguyen, Huy Nguyen, Binh T. Nguyen, Bruce A. Wade