Mixed Nash for Robust Federated Learning
Xie, Wanyun; Pethick, Thomas; Ramezani-Kebrya, Ali; Cevher, Volkan
Transactions on Machine Learning Research (02/2024)
February 4, 2024

We study robust federated learning (FL) within a game-theoretic framework to alleviate the server's vulnerability even to an informed adversary who can tailor training-time attacks (Fang et al., 2020; Xie et al., 2020a; Ozfatura et al., 2022; Rodríguez-Barroso et al., 2023). Specifically, we introduce RobustTailor, a simulation-based framework that prevents the adversary from being omniscient, and we derive its convergence guarantees. RobustTailor significantly improves robustness to training-time attacks at the cost of only a minor privacy trade-off. Empirical results under challenging attacks show that RobustTailor performs close to an upper bound that has perfect knowledge of the honest clients.
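
As a rough illustration of the mixed-strategy idea in the title (not the paper's actual RobustTailor procedure, which is not specified in this record), the sketch below shows a server that samples an aggregation rule from a small pool of robust aggregators, so an informed adversary cannot tailor its poisoned updates to a single, known rule. The aggregator pool, mixing weights, and toy attack are all assumptions made for illustration.

```python
# Illustrative sketch only -- NOT the paper's RobustTailor algorithm.
# A server mixes over several aggregation rules so that an attack
# tailored to any single rule is less effective. The aggregators,
# mixing weights, and toy "attack" below are assumptions.
import numpy as np


def mean_agg(updates):
    # Plain averaging (FedAvg-style), vulnerable to tailored attacks.
    return updates.mean(axis=0)


def coordinate_median_agg(updates):
    # Coordinate-wise median, a standard robust aggregator.
    return np.median(updates, axis=0)


def trimmed_mean_agg(updates, trim_frac=0.2):
    # Coordinate-wise trimmed mean: drop the largest/smallest entries.
    k = int(len(updates) * trim_frac)
    srt = np.sort(updates, axis=0)
    return srt[k:len(updates) - k].mean(axis=0)


AGGREGATORS = [mean_agg, coordinate_median_agg, trimmed_mean_agg]


def mixed_aggregate(updates, probs, rng):
    """Sample one aggregation rule from the mixed strategy `probs` and apply it."""
    rule = AGGREGATORS[rng.choice(len(AGGREGATORS), p=probs)]
    return rule(np.asarray(updates))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 10
    honest = [rng.normal(0.0, 0.1, size=dim) for _ in range(8)]
    # Crude model-poisoning updates pushing the model in one direction.
    malicious = [np.full(dim, 50.0) for _ in range(2)]
    probs = [0.2, 0.4, 0.4]  # illustrative mixing weights over the three rules
    agg = mixed_aggregate(honest + malicious, probs, rng)
    print("aggregated update:", agg)
```

In this toy setup the poisoned updates dominate the plain mean but are largely filtered out whenever the median or trimmed mean is drawn, which is the intuition behind randomizing the aggregation rule rather than committing to one the adversary can exploit.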