| Written by Adrian Henkel, advised by Reza Naserigerdeh and supervised by Dr. Josch Pauling and Prof. Dr. Jan Baumbach.
Hey, thank you for your interest in my thesis. 🎉
Please see the slides in the presentation folder for a quick summary of the work. The full thesis can be found here 🤓.
This project aims to simulate and analyse three different communication-efficient approaches for federated machine learning; a small illustrative sketch of each appears after the list below.
- Gradient Quantization: Each gradient value is represented with fewer bits before being sent from the client to the server, and vice versa.
- Gradient Sparsification: This approach omits gradient entries that have not changed beyond a certain threshold after the local updates, so only the remaining entries are transmitted.
- Multiple Local Updates: This approach runs the training algorithm (mini-batch SGD) for several local steps within a single communication round, reducing how often the model is exchanged.
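
To make the three ideas concrete, here is a minimal NumPy sketch of each technique. This is not the thesis code; the function names, parameters, and the linear least-squares loss used for the local-update example are illustrative assumptions only.

```python
import numpy as np


def quantize(grad, n_bits=8):
    """Gradient quantization: map each gradient value to one of 2**n_bits levels."""
    g_min, g_max = grad.min(), grad.max()
    levels = 2 ** n_bits - 1
    scale = (g_max - g_min) / levels if g_max > g_min else 1.0
    q = np.round((grad - g_min) / scale)  # integers in [0, levels], sent instead of floats
    # The receiver would get (q, g_min, scale) and dequantize:
    return q * scale + g_min


def sparsify(grad, threshold=1e-3):
    """Gradient sparsification: drop entries whose magnitude stays below a threshold."""
    mask = np.abs(grad) >= threshold
    # In practice only the surviving values and their indices would be transmitted.
    return grad * mask


def local_updates(w, X, y, lr=0.1, local_steps=5, batch_size=32, seed=0):
    """Multiple local updates: several mini-batch SGD steps (here on a linear
    least-squares loss) before the model is communicated once."""
    rng = np.random.default_rng(seed)
    for _ in range(local_steps):
        idx = rng.choice(len(X), size=min(batch_size, len(X)), replace=False)
        grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
        w = w - lr * grad
    return w
```

In a federated round, each client would apply one (or a combination) of these steps to its local update before communicating with the server, trading a small amount of accuracy per round for substantially less transmitted data.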
All code that was used can be found here.
The configuration files for the final simulations can be found here.
The result files can be found here.
Below, the package structure is displayed in a truncated UML diagram that focuses on the main functionalities.
This work was graded with a 1.0 (highest grade).