Jiawei Zhao
I am a Research Scientist at Meta FAIR, where I work on optimization, reasoning, and efficiency. I received my Ph.D. from Caltech. My research seeks to uncover the statistical principles underlying Large Language Models (LLMs), with the aim of developing algorithms that are theoretically grounded, scalable to large models, and efficient in practice. This includes:
- modern optimization algorithms for model training (GaLore, GaLore 2, signSGD-MV)
- efficient LLM reasoning and large-scale reinforcement learning (DeepConf, GRESO, M2PO)
- foundations of deep learning, quantization, and efficient inference (check out my research for details)
Reach me at jwzhao at meta dot com (work) or jwzzhao at gmail dot com (personal).
news
| Sep 24, 2025 | I gave a guest lecture on our recent work on Reasoning and Efficiency at Princeton University. |
|---|---|
| Aug 23, 2025 | We released Deep Think with Confidence! Social media and press coverage: Link1, Link2, Link3. |
| Jul 21, 2025 | We are organizing the first workshop on Efficient Reasoning at NeurIPS 2025 and are calling for papers. See you in San Diego! |
| Jun 01, 2025 | I gave a talk at TL;DR’25 at Rice University about my recent work on Hardware-Efficient Learning Algorithms for Large Language Models. |
| May 13, 2024 | I gave a talk at MLSys’24 about my recent work on Memory-Efficient LLM Training. |