In federated learning (FL), the FedProxGrad algorithm addresses the challenge of learning across heterogeneous datasets while emphasizing fairness through group-level disparity minimization. However, prior convergence analyses of FedProxGrad suffered from a significant limitation: the convergence rate depended on a noise floor term that could not be reduced to zero, preventing the algorithm from achieving exact stationarity. This paper presents a refined analysis that eliminates this noise floor dependence, establishing that DS FedProxGrad achieves asymptotic stationarity with an improved convergence rate. The new analysis provides stronger theoretical guarantees for fair federated learning scenarios.
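To make the setup concrete, here is a minimal NumPy sketch of the FedProx-style proximal local update that FedProxGrad builds on; the proximal weight mu, the toy quadratic client losses, and all variable names are illustrative assumptions, not details from the paper.

import numpy as np

def local_prox_step(w, w_global, grad_fn, mu=0.1, lr=0.01, steps=10):
    """One client's local update on a FedProx-style objective:
    minimize f_k(w) + (mu / 2) * ||w - w_global||^2.
    The proximal term keeps heterogeneous clients near the server model.
    """
    w = w.copy()
    for _ in range(steps):
        g = grad_fn(w) + mu * (w - w_global)  # gradient of the proximal objective
        w -= lr * g
    return w

# Toy usage: each client minimizes a quadratic with a different optimum,
# mimicking heterogeneous local data; the server averages the updates.
rng = np.random.default_rng(0)
w_global = np.zeros(5)
client_optima = [rng.normal(size=5) for _ in range(3)]
updates = [
    local_prox_step(w_global, w_global, lambda w, c=c: w - c)
    for c in client_optima
]
w_global = np.mean(updates, axis=0)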
@misc{arif2025dsfedproxgrad,
  title  = {DS FedProxGrad: Asymptotic Stationarity Without Noise Floor in Fair Federated Learning},
  author = {Arif, Huzaifa},
  year   = {2025},
  month  = dec,
  note   = {arXiv preprint},
}
Patching LLM Like Software: A Lightweight Method for Improving Safety Policy in Large Language Models
Huzaifa Arif, Keerthiram Murugesan, Ching-Yun Ko, and 3 more authors
We propose patching large language models (LLMs) the way software is patched between versions: a lightweight, modular approach to addressing safety vulnerabilities. This "patch" introduces only 0.003% additional parameters, yet reliably steers model behavior toward that of a safer reference model across toxicity mitigation, bias reduction, and harmfulness refusal.
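As a rough illustration of how such a parameter-light patch could be wired in, the sketch below adds a low-rank residual adapter to one hidden layer of a frozen model and trains it to match a safer reference model's activations; the adapter design, placement, and training signal are assumptions for illustration, not the paper's exact method.

import torch
import torch.nn as nn

class LowRankPatch(nn.Module):
    """Illustrative safety patch: a low-rank residual adapter applied to one
    hidden layer of a frozen model. Its trainable parameters (2 * d * r)
    are a negligible fraction of the base model's weights."""
    def __init__(self, hidden_dim: int, rank: int = 8):
        super().__init__()
        self.down = nn.Linear(hidden_dim, rank, bias=False)
        self.up = nn.Linear(rank, hidden_dim, bias=False)
        nn.init.zeros_(self.up.weight)  # patch starts as a no-op

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.up(self.down(h))  # base weights stay frozen elsewhere

# Hypothetical training signal: align patched hidden states with those of a
# safer reference model on the same inputs (tensors here are stand-ins).
patch = LowRankPatch(hidden_dim=4096)
h_base = torch.randn(2, 16, 4096)  # frozen base model activations
h_safe = torch.randn(2, 16, 4096)  # safer reference model activations
loss = nn.functional.mse_loss(patch(h_base), h_safe)
loss.backward()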
@misc{arif2025patching,
  title  = {Patching LLM Like Software: A Lightweight Method for Improving Safety Policy in Large Language Models},
  author = {Arif, Huzaifa and Murugesan, Keerthiram and Ko, Ching-Yun and Chen, Pin-Yu and Das, Payel and Gittens, Alex},
  year   = {2025},
  month  = nov,
  note   = {arXiv preprint},
}
PEEL the Layers and Find Yourself: Revisiting Inference-time Data Leakage for Residual Neural Networks
Huzaifa Arif, Keerthiram Murugesan, Payel Das, and 2 more authors
In IEEE Conference on Secure and Trustworthy Machine Learning, Apr 2025
We investigate data leakage vulnerabilities in residual neural networks at inference time and propose mitigation strategies.
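A toy illustration (not the paper's attack) of why residual architectures are a natural target: since a residual block computes y = x + f(x), an adversary who observes y and knows f can often recover x by fixed-point iteration when f is contractive.

import numpy as np

def residual_block(x, W):
    """Toy residual block y = x + f(x) with f(x) = 0.1 * tanh(W @ x)."""
    return x + 0.1 * np.tanh(W @ x)

def invert_residual(y, W, iters=50):
    """Recover x from y via the fixed-point iteration x <- y - f(x).
    Converges because f here is a contraction (small Lipschitz constant)."""
    x = y.copy()
    for _ in range(iters):
        x = y - 0.1 * np.tanh(W @ x)
    return x

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
x_true = rng.normal(size=8)
y = residual_block(x_true, W)
x_rec = invert_residual(y, W)
print(np.max(np.abs(x_rec - x_true)))  # ~0: the input leaks from the output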
@inproceedings{arif2025peel,
  title     = {PEEL the Layers and Find Yourself: Revisiting Inference-time Data Leakage for Residual Neural Networks},
  author    = {Arif, Huzaifa and Murugesan, Keerthiram and Das, Payel and Gittens, Alex and Chen, Pin-Yu},
  booktitle = {IEEE Conference on Secure and Trustworthy Machine Learning},
  year      = {2025},
  month     = apr,
}
Group Fair Federated Learning via Stochastic Kernel Regularization
Huzaifa Arif, Pin-Yu Chen, Keerthiram Murugesan, and 1 more author
Transactions on Machine Learning Research, Apr 2025
We develop a stochastic kernel regularization approach to achieve group fairness in federated learning settings.
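For intuition, here is a minimal sketch of one kernel-based group-fairness penalty: an MMD-style statistic between per-group prediction distributions, estimated stochastically on minibatches. This illustrates the general idea and is not the paper's exact regularizer.

import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """RBF kernel matrix between 1-D prediction vectors a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-gamma * d ** 2)

def mmd2(preds_g0, preds_g1, gamma=1.0):
    """Squared MMD between the two groups' prediction distributions.
    Driving this toward zero pushes the groups' predictions to match."""
    k00 = rbf_kernel(preds_g0, preds_g0, gamma).mean()
    k11 = rbf_kernel(preds_g1, preds_g1, gamma).mean()
    k01 = rbf_kernel(preds_g0, preds_g1, gamma).mean()
    return k00 + k11 - 2.0 * k01

# Each client could add lam * mmd2(...) to its local loss; the stochasticity
# comes from estimating the kernel terms on minibatches.
rng = np.random.default_rng(0)
penalty = mmd2(rng.normal(0.0, 1.0, 128), rng.normal(0.5, 1.0, 128))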
@article{arif2025group,
  title   = {Group Fair Federated Learning via Stochastic Kernel Regularization},
  author  = {Arif, Huzaifa and Chen, Pin-Yu and Murugesan, Keerthiram and Gittens, Alex},
  journal = {Transactions on Machine Learning Research},
  year    = {2025},
  month   = apr,
}
Forecasting Fails: Unveiling Evasion Attacks in Weather Prediction Models
Huzaifa Arif, Pin-Yu Chen, Alex Gittens, and 2 more authors
In AAAI Workshop on AI to Accelerate Science and Engineering, Apr 2025
We expose vulnerabilities in weather prediction models through novel evasion attacks and propose defensive strategies.
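Below is a generic sketch of a gradient-based evasion attack on a forecasting model, assuming white-box access and an L-infinity perturbation budget; the paper's attacks may differ in loss, threat model, and constraints.

import torch
import torch.nn as nn

def pgd_evasion(model, x, y_true, eps=0.05, alpha=0.01, steps=20):
    """Projected gradient ascent on forecast error within an L-inf ball:
    find a small input perturbation that maximally degrades the prediction."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.mse_loss(model(x_adv), y_true)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # ascend the error
            x_adv = x.clone() + (x_adv - x).clamp(-eps, eps)  # project to budget
    return x_adv.detach()

# Toy forecaster: predict the next value of a univariate series from a window.
model = nn.Linear(24, 1)
x = torch.randn(16, 24)  # stand-in for weather input windows
y = torch.randn(16, 1)
x_adv = pgd_evasion(model, x, y)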
@inproceedings{arif2025forecasting,
  title     = {Forecasting Fails: Unveiling Evasion Attacks in Weather Prediction Models},
  author    = {Arif, Huzaifa and Chen, Pin-Yu and Gittens, Alex and Diffenderfer, James and Kailkhura, Bhavya},
  booktitle = {AAAI Workshop on AI to Accelerate Science and Engineering},
  year      = {2025},
  month     = apr,
}
2023
Reprogrammable-FL: Improving Utility-Privacy Tradeoff in Federated Learning via Model Reprogramming
Huzaifa Arif, Alex Gittens, and Pin-Yu Chen
In IEEE Conference on Secure and Trustworthy Machine Learning, Feb 2023
We present a novel approach to improving the utility-privacy tradeoff in federated learning through model reprogramming.
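For intuition, here is a minimal sketch of input-side model reprogramming: a frozen pretrained classifier, a trainable input perturbation, and a fixed output-label mapping, so clients train and share only the small reprogramming parameters. The wrapper below is a hypothetical illustration, not the paper's implementation.

import torch
import torch.nn as nn

class Reprogrammer(nn.Module):
    """Wrap a frozen model: add a trainable universal perturbation to the
    input and remap source-class logits to target classes. Only `self.delta`
    is trained, so FL clients share a small, task-specific update."""
    def __init__(self, frozen_model: nn.Module, in_dim: int, label_map: torch.Tensor):
        super().__init__()
        self.model = frozen_model
        for p in self.model.parameters():
            p.requires_grad_(False)  # base weights never change
        self.delta = nn.Parameter(torch.zeros(in_dim))
        self.label_map = label_map   # source classes -> target classes

    def forward(self, x):
        logits = self.model(x + self.delta)
        return logits @ self.label_map  # aggregate source logits per target class

# Toy setup: a 10-class frozen model reprogrammed to a 2-class task.
frozen = nn.Linear(32, 10)
label_map = torch.zeros(10, 2)
label_map[:5, 0] = 1.0
label_map[5:, 1] = 1.0
rp = Reprogrammer(frozen, in_dim=32, label_map=label_map)
loss = nn.functional.cross_entropy(rp(torch.randn(8, 32)), torch.randint(0, 2, (8,)))
loss.backward()  # gradients flow only into rp.delta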
@inproceedings{arif2023reprogrammable,
  title     = {Reprogrammable-FL: Improving Utility-Privacy Tradeoff in Federated Learning via Model Reprogramming},
  author    = {Arif, Huzaifa and Gittens, Alex and Chen, Pin-Yu},
  booktitle = {IEEE Conference on Secure and Trustworthy Machine Learning},
  year      = {2023},
  month     = feb,
}
2022
DP-Compressed VFL is secure for Model Inversion Attacks
Huzaifa Arif and Stacy Patterson
We analyze the security of differentially private compressed vertical federated learning against model inversion attacks.
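A toy sketch of the setting analyzed: an embedding is norm-clipped, top-k compressed, and perturbed with Gaussian-mechanism noise before a VFL party shares it. All parameters here are illustrative assumptions.

import numpy as np

def top_k_compress(v, k):
    """Keep the k largest-magnitude coordinates, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def dp_compressed_embedding(v, k, clip=1.0, sigma=0.5, rng=None):
    """What a VFL party might share: clip the embedding's norm, top-k
    compress it, then add Gaussian noise (the Gaussian mechanism).
    A model-inversion adversary sees only this noisy, sparse vector."""
    rng = rng or np.random.default_rng()
    v = v * min(1.0, clip / (np.linalg.norm(v) + 1e-12))  # bound sensitivity
    v = top_k_compress(v, k)
    return v + rng.normal(0.0, sigma * clip, size=v.shape)

rng = np.random.default_rng(0)
shared = dp_compressed_embedding(rng.normal(size=64), k=8, rng=rng)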
@misc{arif2022dpcompressed,
  title  = {DP-Compressed VFL is secure for Model Inversion Attacks},
  author = {Arif, Huzaifa and Patterson, Stacy},
  year   = {2022},
  month  = apr,
  note   = {Preprint},
}