Secure federated learning is a privacy-preserving framework for training
machine learning models over the large volumes of data collected by mobile
users. Training proceeds iteratively: at each iteration, users update a global
model using their local datasets. Each user then masks its local model with
random keys, and the masked models are aggregated at a central server to
compute the global model for the next iteration (a toy illustration of such
masking is sketched after the abstract). Because the local models are protected
by random masks, the server cannot observe their true values. This presents a
major challenge for the resilience
of the model against adversarial (Byzantine) users, who can manipulate the
global model by modifying their local models or datasets. Towards addressing
this challenge, this paper presents the first single-server Byzantine-resilient
secure aggregation framework (BREA) for secure federated learning. BREA
integrates stochastic quantization, verifiable outlier detection, and secure
model aggregation to guarantee Byzantine resilience, privacy, and convergence
simultaneously (simplified sketches of these building blocks follow the
abstract). We provide theoretical convergence and
privacy guarantees and characterize the fundamental trade-offs between network
size, user dropouts, and privacy protection. Our experiments demonstrate
convergence in the presence of Byzantine users and accuracy comparable to
conventional federated learning benchmarks.
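
To make the masking step concrete, here is a minimal sketch of pairwise
additive masking, a standard ingredient of secure aggregation. It is an
illustration only, not BREA's actual protocol: it ignores user dropouts,
finite-field arithmetic, and the key agreement needed to share the masks, and
all names and parameters are hypothetical.

```python
# Minimal sketch of pairwise additive masking (illustration only, not BREA's
# actual protocol): every pair of users shares a random mask; one adds it,
# the other subtracts it, so all masks cancel in the server's sum.
import numpy as np

rng = np.random.default_rng(0)
n_users, dim = 4, 5

# Hypothetical local model updates (stand-ins for trained weights).
local_models = [rng.standard_normal(dim) for _ in range(n_users)]

# One shared random mask per unordered pair of users (i, j) with i < j.
pair_masks = {(i, j): rng.standard_normal(dim)
              for i in range(n_users) for j in range(i + 1, n_users)}

def mask_model(i, model):
    """User i adds the mask shared with each j > i, subtracts it for j < i."""
    masked = model.copy()
    for j in range(n_users):
        if j > i:
            masked += pair_masks[(i, j)]
        elif j < i:
            masked -= pair_masks[(j, i)]
    return masked

masked_models = [mask_model(i, m) for i, m in enumerate(local_models)]

# The server observes only masked models, yet the masks cancel in the sum.
assert np.allclose(sum(masked_models), sum(local_models))
```

The point of the construction is the final assertion: every pairwise mask is
added by one user and subtracted by the other, so the server learns the
aggregate without seeing any individual model. Handling user dropouts, which
the abstract highlights as part of the trade-off, requires more machinery than
this toy version shows.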
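
The abstract does not spell out BREA's quantizer, so the following is an
assumed example of stochastic quantization in its common unbiased form: each
value is rounded to a neighboring grid point with probabilities chosen so the
expectation is preserved. The function name and parameters are illustrative.

```python
# Assumed example of unbiased stochastic quantization (the abstract does not
# specify BREA's quantizer; the grid and number of levels are hypothetical).
import numpy as np

def stochastic_quantize(x, levels=256, rng=None):
    """Round x onto a uniform grid over [x.min(), x.max()], rounding up with
    probability equal to the fractional part so that E[output] == x."""
    rng = rng if rng is not None else np.random.default_rng()
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / (levels - 1)
    t = (x - lo) / scale                    # grid coordinates in [0, levels-1]
    lower = np.floor(t)
    up = rng.random(x.shape) < (t - lower)  # stochastic round-up decisions
    return lo + (lower + up) * scale

rng = np.random.default_rng(1)
x = rng.standard_normal(1000)
q = stochastic_quantize(x, rng=rng)
print(np.abs(q - x).max())       # error is bounded by one grid step
print(abs(q.mean() - x.mean()))  # near zero: quantization is unbiased
```

Unbiasedness is the property that matters here: it lets quantized updates be
aggregated without systematically biasing the global model, which is what a
convergence guarantee needs.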
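
Similarly, as a stand-in for the verifiable outlier detection component, whose
details the abstract does not give, here is a simple distance-based filter in
the spirit of Krum: models whose summed distance to their nearest neighbors is
unusually large are discarded before aggregation. This is an assumed
illustration, not BREA's verified protocol, and it omits the "verifiable"
aspect entirely.

```python
# Assumed stand-in for distance-based outlier filtering, in the spirit of
# Krum (the abstract does not detail BREA's detection rule, and the
# "verifiable" aspect is omitted here).
import numpy as np

def filter_outliers(models, n_byzantine):
    """Score each model by the summed squared distance to its closest
    n - n_byzantine - 2 neighbors; keep the n - n_byzantine lowest scores."""
    n = len(models)
    dists = np.array([[np.sum((a - b) ** 2) for b in models] for a in models])
    k = n - n_byzantine - 2
    # Sort each row; column 0 is the zero self-distance, so skip it.
    scores = np.sort(dists, axis=1)[:, 1:k + 1].sum(axis=1)
    return np.argsort(scores)[: n - n_byzantine]

rng = np.random.default_rng(2)
honest = [rng.normal(0.0, 0.1, size=5) for _ in range(5)]
byzantine = [rng.normal(10.0, 0.1, size=5)]   # an obvious manipulated model
kept = filter_outliers(honest + byzantine, n_byzantine=1)
print(sorted(kept))  # indices 0-4: the Byzantine model (index 5) is dropped
```

The intuition is that honest models cluster together while a manipulated model
must stray far from the cluster to have an effect on the aggregate, so
distance scores expose it.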
