Federated Learning Without Compromise
The Privacy-Utility Tradeoff

Federated learning promises to train models on distributed data without centralizing sensitive information. In practice, existing approaches force uncomfortable tradeoffs:

- Differential privacy adds noise that degrades model quality
- Secure aggregation increases communication costs
- Data heterogeneity causes convergence problems
- Byzantine participants can poison the model

We present techniques that mitigate these tradeoffs.

Our Approach

Adaptive Clipping

Standard gradient clipping uses a fixed threshold $C$:

$$g_i^{\text{clipped}} = g_i \cdot \min\left(1, \frac{C}{\|g_i\|}\right)$$...
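As a point of reference, the fixed-threshold clipping rule above can be sketched as follows. This is a minimal illustration of the standard formula only (the adaptive variant is not shown here); the function name and NumPy usage are our own choices, not from the original text.

```python
import numpy as np

def clip_gradient(g, C):
    """Fixed-threshold clipping: rescale g so its L2 norm is at most C.

    Implements g * min(1, C / ||g||); gradients already within the
    threshold pass through unchanged.
    """
    norm = np.linalg.norm(g)
    if norm == 0.0:
        return g  # zero gradient: nothing to clip
    return g * min(1.0, C / norm)

# A gradient with L2 norm 5 is scaled down to norm C = 1.
g = np.array([3.0, 4.0])
clipped = clip_gradient(g, C=1.0)

# A gradient already inside the threshold is returned unchanged.
small = np.array([0.1, 0.2])
passed = clip_gradient(small, C=1.0)
```

Note that clipping preserves the gradient's direction and changes only its magnitude, which is what makes the per-example sensitivity bound usable for differential privacy.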