Divyansh Jhunjhunwala

About Me

Hi! I am Divyansh, a fourth-year PhD candidate in the Electrical and Computer Engineering department at Carnegie Mellon University, advised by Dr. Gauri Joshi. My research interests lie broadly in distributed optimization and machine learning, in particular federated learning. Given the multi-disciplinary nature of problems in federated learning, my research often leverages ideas from related areas such as optimization theory, transfer learning, model fusion, and the theory of over-parameterized neural networks.

In the summers of 2022 and 2023, I interned at IBM Research, working with Dr. Shiqiang Wang on problems in federated learning.

Prior to CMU, I completed my Bachelor of Technology (B.Tech.) in Electronics and Electrical Communication Engineering at IIT Kharagpur, where I received the Institute Silver Medal for graduating with the highest CGPA in my department.

Email  /  Google Scholar

Recent News

Jan 24: My work on one-shot federated learning using Fisher information was accepted to AISTATS 2024!

Sep 23: Attended the New Frontiers in Federated Learning Workshop at the Toyota Technological Institute at Chicago (TTIC). Thanks to all the organizers!

May 23: I am returning to IBM Research, Yorktown Heights, as a summer research intern.

Jan 23: My internship work on tuning the server step size in federated learning was accepted as a spotlight presentation at ICLR 2023!

Oct 22: Our work on incentivizing clients for federated learning was accepted as an oral presentation at the FL-NeurIPS'22 workshop (12% acceptance rate)!

Aug 22: Completed my internship at IBM T.J. Watson Research Center, New York.

Apr 22: Our team was selected as a finalist for the Qualcomm Innovation Fellowship for the research proposal "Incentivized Federated Learning for Data-Heterogeneous and Resource-Constrained Clients".

Research
FedFisher: Leveraging Fisher Information for One-Shot Federated Learning
Divyansh Jhunjhunwala, Shiqiang Wang, Gauri Joshi
International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
A preliminary version appeared at the Federated Learning and Analytics workshop at ICML 2023

Propose FedFisher, an algorithm for learning a global model in federated learning using just one round of communication, with novel theoretical guarantees for two-layer over-parameterized ReLU networks.
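
A rough Python sketch of the general idea of Fisher-weighted one-shot aggregation (illustrative only; the names below are placeholders, and this is not the exact FedFisher algorithm): each client ships its locally trained weights together with a diagonal Fisher estimate, and the server fuses them with a per-parameter weighted average.

import numpy as np

def fisher_weighted_average(client_weights, client_fishers, eps=1e-8):
    # Per-parameter average of client weights, weighted by each client's
    # diagonal Fisher information (illustrative sketch only).
    num = sum(f * w for w, f in zip(client_weights, client_fishers))
    den = sum(client_fishers) + eps
    return num / den

# Hypothetical example: 3 clients, flat 5-dimensional parameter vectors.
rng = np.random.default_rng(0)
weights = [rng.normal(size=5) for _ in range(3)]
fishers = [rng.uniform(0.1, 1.0, size=5) for _ in range(3)]
global_w = fisher_weighted_average(weights, fishers)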

FedExP: Speeding up Federated Averaging via Extrapolation
Divyansh Jhunjhunwala, Shiqiang Wang, Gauri Joshi
International Conference on Learning Representations (ICLR), 2023 (Spotlight, top 25% of accepted papers)

We present FedExP, a method to adaptively determine the server step size in federated learning based on the dynamically varying pseudo-gradients observed throughout training.
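
A hedged sketch of the server-side pattern (the adaptive step-size rule below is illustrative, not the paper's exact FedExP rule): the server averages client pseudo-gradients and scales the update by a factor that exceeds 1 when the averaged update is much smaller than a typical client update.

import numpy as np

def fedexp_style_update(global_w, client_deltas, eps=1e-6):
    # client_deltas[i] = global_w - local_w_i (pseudo-gradient of client i).
    avg_delta = np.mean(client_deltas, axis=0)
    # Illustrative extrapolated step size: at least 1, larger when the
    # average pseudo-gradient is small relative to individual ones.
    mean_sq = np.mean([np.linalg.norm(d) ** 2 for d in client_deltas])
    eta_g = max(1.0, mean_sq / (2 * np.linalg.norm(avg_delta) ** 2 + eps))
    return global_w - eta_g * avg_delta

# Hypothetical usage with 4 clients.
rng = np.random.default_rng(0)
w = rng.normal(size=10)
deltas = [0.1 * rng.normal(size=10) for _ in range(4)]
w = fedexp_style_update(w, deltas)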

Maximizing Global Model Appeal in Federated Learning
Yae Jee Cho, Divyansh Jhunjhunwala, Tian Li, Virginia Smith, Gauri Joshi
Transactions on Machine Learning Research (TMLR), 2024

Propose MaxFL, an algorithm that explicitly maximizes the fraction of clients incentivized to use the global model in federated learning.
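
As a rough illustration of the quantity being maximized (the incentive criterion and names below are assumptions, not MaxFL's actual formulation): a client counts as incentivized if the global model's loss on its local data is no worse than that of its own locally trained model.

import numpy as np

def global_model_appeal(global_losses, local_losses):
    # Fraction of clients whose loss under the global model is no worse
    # than under their own local model (assumed incentive criterion).
    return np.mean(np.asarray(global_losses) <= np.asarray(local_losses))

# Hypothetical losses for 4 clients: 2 of them prefer the global model.
print(global_model_appeal([0.8, 1.2, 0.5, 0.9], [1.0, 1.0, 0.4, 1.1]))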

FedVARP: Tackling the Variance Due to Partial Client Participation in Federated Learning
Divyansh Jhunjhunwala, Pranay Sharma, Aushim Nagarkatti, Gauri Joshi
Uncertainty in Artificial Intelligence (UAI), 2022

Propose FedVARP, an algorithm that tackles the variance caused by only a subset of clients participating in each round of federated training.
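
A minimal sketch of the variance-reduction idea under partial participation (names are placeholders; an illustration rather than the paper's exact algorithm): the server remembers the most recent update from every client and combines fresh updates from this round's participants with that memory.

import numpy as np

def fedvarp_style_step(global_w, memory, fresh_updates, lr=1.0):
    # memory: dict client_id -> last update received from that client.
    # fresh_updates: dict client_id -> update from this round's participants.
    y_bar = sum(memory.values()) / len(memory)
    corr = sum(fresh_updates[i] - memory[i] for i in fresh_updates) / len(fresh_updates)
    v = y_bar + corr                    # variance-reduced aggregate update
    for i, d in fresh_updates.items():  # refresh memory for participants
        memory[i] = d
    return global_w - lr * v

# Hypothetical usage: 5 clients total, 2 participate this round.
rng = np.random.default_rng(0)
w = rng.normal(size=4)
memory = {i: np.zeros(4) for i in range(5)}
fresh = {0: 0.1 * rng.normal(size=4), 3: 0.1 * rng.normal(size=4)}
w = fedvarp_style_step(w, memory, fresh)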

Leveraging Spatial and Temporal Correlations in Sparsified Mean Estimation
Divyansh Jhunjhunwala, Ankur Mallick, Advait Gadhikar, Swanand Kadhe, Gauri Joshi
Advances in Neural Information Processing Systems (NeurIPS), 2021

Introduce the notions of spatial and temporal correlation and show how they can be used to efficiently estimate the mean of a set of vectors in a communication-limited setting.
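
A toy illustration of how temporal correlation can help (an assumption-laden sketch, not the paper's estimator): when a vector drifts slowly across rounds, sending a sparsified correction to a running estimate is much cheaper than re-sending the full vector each round.

import numpy as np

def topk(v, k):
    # Keep the k largest-magnitude entries of v; zero out the rest.
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
est = np.zeros_like(x)
for t in range(20):
    x = x + 0.01 * rng.normal(size=1000)  # slow drift: temporal correlation
    est = est + topk(x - est, k=50)       # transmit only 50 coordinates
print(np.linalg.norm(x - est))            # estimate improves over rounds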

Adaptive Quantization of Model Updates for Communication-Efficient Federated Learning
Divyansh Jhunjhunwala, Advait Gadhikar, Gauri Joshi, Yonina C. Eldar
International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021

Propose an adaptive quantization strategy that achieves both communication efficiency and a low error floor by adjusting the number of quantization levels over the course of federated training.
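
A small sketch of the mechanism (the quantizer is a standard QSGD-style stochastic quantizer, and the level schedule is hypothetical, not the paper's adaptive rule): coarse quantization early in training saves communication, while more levels later lower the error floor.

import numpy as np

def stochastic_quantize(v, levels, rng):
    # Unbiased stochastic uniform quantization of v onto `levels` levels
    # per sign, scaled by the max magnitude.
    scale = np.max(np.abs(v)) + 1e-12
    y = np.abs(v) / scale * (levels - 1)
    low = np.floor(y)
    q = low + (rng.random(v.shape) < (y - low))  # round up with prob (y - low)
    return np.sign(v) * q / (levels - 1) * scale

rng = np.random.default_rng(0)
update = rng.normal(size=8)
for rnd in (0, 50, 100):        # hypothetical training rounds
    levels = 2 + rnd // 25      # assumed schedule: more levels later in training
    print(rnd, stochastic_quantize(update, levels, rng))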


Source code credit to Dr. Jon Barron.