I am a tenure-track assistant professor in the Department of Biostatistics & Bioinformatics, Department of Computer Science, and Department of Electrical & Computer Engineering at Duke University. My research is centered on Machine Learning, with broad interests in Artificial Intelligence, Data Science, Optimization, Reinforcement Learning, High Dimensional Statistics, and their applications to real-world problems including Bioinformatics and Healthcare. My research goal is to develop computationally- and data-efficient machine learning algorithms with both strong empirical performance and theoretical guarantees.

Prior to joining Duke, I was a Postdoctoral Scholar Research Associate in the Department of Computing and Mathematical Sciences at the California Institute of Technology, working with Prof. Anima Anandkumar, Prof. Adam Wierman, and Prof. Eric Mazumdar. I received my Ph.D. from the Department of Computer Science at the University of California, Los Angeles, where I was advised by Prof. Quanquan Gu.

[Prospective students] I am looking for highly motivated and self-driven students with strong mathematical backgrounds and implementation skills in machine learning. Please apply to the Ph.D. programs in Biostatistics & Bioinformatics, Computer Science, and Electrical & Computer Engineering, and mention me in your applications. If you are interested in working with me as an intern, please take the time to read at least one of our lab’s papers, send me your CV, and include a brief note highlighting your specific interest in our research. Due to the high volume of emails I receive, I may not be able to respond to all inquiries. I will prioritize applications that demonstrate a clear and strong motivation to engage with our work.



Contact

Email: pan (DOT) xu (AT) duke (DOT) edu
Address: 2424 Erwin Road, Room 9032, Durham, NC 27705
ORCID: 0000-0002-2559-8622
Semantic Scholar
@iampanxu



Recent News

  • [2024/11] I am serving as an action editor for Transactions on Machine Learning Research (TMLR).
  • [2024/09] Our papers on pure exploration in batched bandits, randomized exploration in MARL, off-dynamics reinforcement learning, and distributionally robust RL get accepted to NeurIPS 2024.
    • Optimal Batched Best Arm Identification
    • Randomized Exploration in Cooperative Multi-Agent Reinforcement Learning
    • Off-Dynamics Reinforcement Learning via Domain Adaptation and Reward Augmented Imitation
    • Minimax Optimal and Computationally Efficient Algorithms for Distributionally Robust Offline Reinforcement Learning
  • [2024/09] My paper, which extends my New Faculty Highlights talk on efficient exploration and robust learning in decision making, gets published in AI Magazine.
    • Efficient and Robust Sequential Decision Making Algorithms
  • [2024/08] Our paper on self-supervised training of hypergraph neural networks gets accepted to TMLR.
    • PhyGCN: Pre-trained Hypergraph Convolutional Neural Networks with Self-supervised Learning
  • [2024/05] Our paper on exploration in deep reinforcement learning gets accepted to RLC 2024.
    • More Efficient Randomized Exploration for Reinforcement Learning via Approximate Sampling
  • [2024/05] Our paper on batched linear bandits gets accepted to ICML 2024.
    • Optimal Batched Linear Bandits
  • [2024/01] Our paper on distributionally robust reinforcement learning gets accepted to AISTATS 2024.
    • Distributionally Robust Off-Dynamics Reinforcement Learning: Provable Efficiency with Linear Function Approximation
  • [2024/01] Our paper on exploration in deep reinforcement learning gets accepted to ICLR 2024.
    • Provable and Practical: Efficient Exploration in Reinforcement Learning via Langevin Monte Carlo
  • [2023/12] Our paper on distributionally robust bandits gets accepted to TMLR with a Featured Certification.
    • Wasserstein Distributionally Robust Policy Evaluation and Learning for Contextual Bandits
  • [2023/12] I have been invited to give a presentation at the New Faculty Highlights program at AAAI 2024.
  • [2023/12] Our paper on multi-agent multi-armed bandits gets accepted to AAAI 2024 as an Oral Presentation.
    • Finite-Time Frequentist Regret Bounds of Multi-Agent Thompson Sampling on Sparse Hypergraphs
  • [2023/08] I received an NSF award on approximate sampling-based exploration for sequential decision making. [NSF award information]
  • [2023/08] I am serving as an area chair for AISTATS 2024.
  • [2023/08] I am serving as an area chair for ICLR 2024.
  • [2023/04] Our paper on randomized exploration in multi-armed bandits gets accepted to ICML 2023.
    • Thompson Sampling with Less Exploration is Fast and Optimal
  • [2023/04] Our paper on Participatory AI gets accepted to FAccT 2023 and wins the Best Paper Award.
    • Queer In AI: A Case Study in Community-Led Participatory AI
  • [2023/03] I am serving as an area chair for NeurIPS 2023.
  • [2023/01] I received the Whitehead Scholar award from the Duke University School of Medicine.
  • [2023/01] Our paper on offline contextual bandits gets accepted to AISTATS 2023.
    • Distributionally Robust Policy Gradient for Offline Contextual Bandits
  • [2023/01] I am serving as an area chair for ICML 2023.
  • [2022/12] Our paper on multi-agent reinforcement learning gets accepted to ACM SIGMETRICS 2023.
    • Global Convergence of Localized Policy Iteration in Networked Multi-Agent Reinforcement Learning
  • [2022/09] Two papers on Thompson sampling and preference learning get accepted to NeurIPS 2022.
    • Finite-Time Regret of Thompson Sampling Algorithms for Exponential Family Multi-Armed Bandits
    • Active Ranking without Strong Stochastic Transitivity
  • [2022/08] I started at Duke University as an assistant professor.
  • [2022/01] I received the PIMCO Postdoctoral Fellowship in Data Science.
  • [2021/06] I received the UCLA Outstanding Graduate Student Research Award.
  • [2021/01] I am selected for the inaugural cohort of Rising Stars in Data Science by the Center for Data and Computing at the University of Chicago.
  • [2020/09] I am selected as a top reviewer for the International Conference on Machine Learning (ICML) 2020.
  • [2020/05] Our forecast from Combating COVID-19 has been adopted in the COVID-19 Forecasts by the Centers for Disease Control and Prevention (CDC), the California COVID Assessment Tool (CalCat) by the California Department of Public Health (CDPH), and the COVID-19 Forecast Hub by the Reich Lab at the University of Massachusetts Amherst.
  • [2020/03] We launched the project Combating COVID-19 to model and predict the spread of the coronavirus using machine learning-empowered epidemic models. Details can be found at covid19.uclaml.org.
  • [2020/02] One paper (P1) on rank aggregation is selected as an Oral Presentation at AAAI 2020.
  • [2019/09] I am selected as a top reviewer for the Conference on Advances in Neural Information Processing Systems (NeurIPS) 2019.
  • [2019/06] One paper (P1) on reinforcement learning is selected as an Oral Presentation at UAI 2019!
  • [2019/05] I gave a tutorial on "Nonconvex Optimization for Knowledge Discovery and Data Mining" with Quanquan Gu and Zhaoran Wang at the SIAM International Conference on Data Mining (SDM'19). Slides can be found at [Slides].
  • [2018/09] Two papers (P1, P2) on nonconvex optimization are selected as Spotlights at NeurIPS 2018!
  • [2018/05] One paper (P1) on high-dimensional statistics is selected as a Long Oral Talk at ICML 2018.
  • [2016/06] I received the Presidential Fellowship in Data Science from the University of Virginia.