FedExP: Speeding up Federated Averaging via Extrapolation
Divyansh Jhunjhunwala, Shiqiang Wang, Gauri Joshi
International Conference on Learning Representations (ICLR), 2023 (Spotlight, top 25% of accepted papers)
We present FedExP, a method that adaptively determines the server step size in federated learning (FL) based on the dynamically varying pseudo-gradients observed throughout the FL process.
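A minimal sketch of the extrapolated server update, assuming client pseudo-gradients flattened into 1-D numpy arrays; the step-size formula reflects my reading of the FedExP rule (never smaller than the plain FedAvg step of 1), and the function name, `eps`, and the usage line are illustrative rather than taken verbatim from the paper.

```python
import numpy as np

def fedexp_server_update(w_global, client_updates, eps=1e-3):
    """One FedExP-style server step (illustrative sketch).

    client_updates: list of pseudo-gradients Delta_i = w_global - w_i,
    where w_i is client i's model after local training.
    """
    M = len(client_updates)
    avg_update = np.mean(client_updates, axis=0)                 # average pseudo-gradient
    sum_sq_norms = sum(float(np.dot(d, d)) for d in client_updates)
    avg_sq_norm = float(np.dot(avg_update, avg_update))
    # Extrapolated server step size: at least 1, i.e. never less aggressive
    # than plain FedAvg; eps guards against division by ~0.
    eta_g = max(1.0, sum_sq_norms / (2 * M * (avg_sq_norm + eps)))
    return w_global - eta_g * avg_update, eta_g

# Hypothetical usage with three clients and a 10-dimensional model:
# w_new, eta = fedexp_server_update(np.zeros(10), [np.random.randn(10) for _ in range(3)])
```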
To Federate or Not To Federate: Incentivizing Client Participation in Federated Learning
Yae Jee Cho, Divyansh Jhunjhunwala, Tian Li, Virginia Smith, Gauri Joshi
Under review
Propose the IncFL algorithm, which explicitly maximizes the fraction of clients that are incentivized to use the global model in federated learning.
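A rough, hypothetical illustration of the objective described above: a client is counted as incentivized if the global model does at least as well on its data as the client's standalone local model, and the goal is to make this fraction as large as possible. The criterion and names here are my own simplification, not the paper's exact formulation.

```python
import numpy as np

def incentivized_fraction(global_losses, local_losses):
    """Fraction of clients for whom the global model is at least as good as
    their own locally trained model (hypothetical incentive criterion used
    only to illustrate the quantity being maximized).

    global_losses[i]: loss of the global model on client i's data.
    local_losses[i]:  loss of client i's standalone local model.
    """
    global_losses = np.asarray(global_losses)
    local_losses = np.asarray(local_losses)
    return float(np.mean(global_losses <= local_losses))
```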
FedVARP: Tackling the Variance Due to Partial Client Participation in Federated Learning
Divyansh Jhunjhunwala, Pranay Sharma, Aushim Nagarkatti, Gauri Joshi
Uncertainty in Artificial Intelligence (UAI), 2022
Propose the FedVARP algorithm to tackle the variance caused by only a fraction of clients participating in each round of federated training.
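A minimal sketch of a server-side variance-reduced aggregation in this spirit, assuming the server keeps the most recent update ever received from every client and combines that memory with the fresh updates of the currently sampled clients; the function and variable names are mine, and the details are a best-effort reading rather than a verbatim reproduction of the paper.

```python
import numpy as np

def fedvarp_server_step(w_global, sampled_updates, memory, eta_g=1.0):
    """One FedVARP-style server step (illustrative sketch).

    sampled_updates: dict {client_id: Delta_i} for clients active this round,
                     where Delta_i = w_global - w_i after local training.
    memory: dict {client_id: y_i} holding the most recent update received
            from each of the N clients (server-side state, updated in place).
    """
    N = len(memory)
    S = len(sampled_updates)
    # Correction: how much the active clients deviate from their stale updates.
    correction = sum(sampled_updates[i] - memory[i] for i in sampled_updates) / S
    # Full average of the stored (possibly stale) updates over all N clients.
    stale_avg = sum(memory.values()) / N
    v = correction + stale_avg
    # Refresh the memory of the clients that participated this round.
    for i, delta in sampled_updates.items():
        memory[i] = delta.copy()
    return w_global - eta_g * v
```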
Leveraging Spatial and Temporal Correlations in Sparsified Mean Estimation
Divyansh Jhunjhunwala, Ankur Mallick, Advait Gadhikar, Swanand Kadhe, Gauri Joshi
Advances in Neural Information Processing Systems (NeurIPS), 2021
Introduce the notions of spatial and temporal correlations and show how they can be exploited to efficiently estimate the mean of a set of vectors in a communication-limited setting.
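For context, a minimal sketch of the baseline unbiased rand-k estimator that this communication-limited mean-estimation setting starts from; the paper's contribution, exploiting spatial and temporal correlations to reduce the estimation error beyond this baseline, is not reproduced here, and the function names are placeholders.

```python
import numpy as np

def rand_k_sparsify(x, k, rng):
    """Keep k randomly chosen coordinates of x, rescaled by d/k so the
    resulting estimate of x stays unbiased; all other coordinates are zeroed
    before transmission."""
    d = x.shape[0]
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(x)
    out[idx] = x[idx] * (d / k)
    return out

def sparsified_mean(vectors, k, seed=0):
    """Baseline unbiased estimate of the mean of `vectors` when each vector
    is rand-k sparsified independently before being sent to the server."""
    rng = np.random.default_rng(seed)
    return np.mean([rand_k_sparsify(v, k, rng) for v in vectors], axis=0)
```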
Adaptive Quantization of Model Updates for Communication-Efficient Federated Learning
Divyansh Jhunjhunwala, Advait Gadhikar, Gauri Joshi, Yonina C. Eldar
International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021
Propose an adaptive quantization strategy that aims to achieve both communication efficiency and a low error floor by changing the number of quantization levels over the course of federated training.
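A minimal sketch of stochastic uniform quantization with a round-dependent number of levels, to illustrate the kind of quantizer whose level count is being adapted; the `levels_for_round` schedule below (and its `s0`/`growth` parameters) is a placeholder of my own, not the adaptive rule proposed in the paper.

```python
import numpy as np

def stochastic_quantize(x, num_levels, rng):
    """Unbiased stochastic uniform quantization of x onto `num_levels`
    levels per coordinate: each coordinate is rounded up or down with
    probability proportional to its distance from the neighbouring levels."""
    norm = np.linalg.norm(x)
    if norm == 0:
        return x
    scaled = np.abs(x) / norm * num_levels
    lower = np.floor(scaled)
    prob_up = scaled - lower
    levels = lower + (rng.random(x.shape) < prob_up)
    return np.sign(x) * norm * levels / num_levels

def levels_for_round(t, s0=2, growth=1.5):
    """Placeholder schedule: use more quantization levels as training
    progresses, trading extra communication late in training for a lower
    error floor."""
    return int(np.ceil(s0 * growth ** (t // 10)))

# Hypothetical usage for a client update at round t:
# rng = np.random.default_rng(0)
# q = stochastic_quantize(np.random.randn(100), levels_for_round(t=30), rng)
```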