Federated Learning (FL) is an emerging ML training paradigm in which clients own their data and collaborate to train a global model without revealing any data to the server or to other participants.
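To make the paradigm concrete, the sketch below shows one round of Federated Averaging (FedAvg) on a toy linear model; it is an illustrative simulation, not the pfl-research API, and all function names are hypothetical. Each client computes a local update on its private data, and the server only ever sees and averages model weights, never raw data.

```python
import numpy as np

def client_update(weights, data, targets, lr=0.1):
    """One local gradient step on a client's private linear-regression data.

    Only the updated weights leave the client; `data` and `targets` do not.
    (Illustrative toy model, not part of any real FL framework.)
    """
    preds = data @ weights
    grad = data.T @ (preds - targets) / len(data)
    return weights - lr * grad

def server_round(weights, client_datasets):
    """Broadcast the global model, collect local updates, and average them."""
    updated = [client_update(weights, x, y) for x, y in client_datasets]
    return np.mean(updated, axis=0)

# Simulate 4 clients, each holding its own private dataset.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(4):
    x = rng.normal(size=(32, 3))
    clients.append((x, x @ true_w + 0.01 * rng.normal(size=32)))

weights = np.zeros(3)
for _ in range(50):  # 50 federated rounds
    weights = server_round(weights, clients)
```

Production FL simulators follow this same round structure but add client sampling, privacy mechanisms, and distributed execution, which is where framework efficiency matters.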
Researchers typically run experiments in a simulation environment to iterate quickly on ideas. However, existing open-source tools do not offer the efficiency required to simulate FL on larger and more realistic FL datasets. We introduce pfl-research, a fast, modular, and easy-to-use Python framework for simulating FL. It supports TensorFlow, PyTorch, and non-neural-network models, and is tightly integrated with state-of-the-art privacy algorithms.
We study the speed of open-source FL frameworks and show that pfl-research is 7-72× faster than alternative open-source frameworks on common cross-device setups. Such speedup will significantly boost the productivity of the FL research community and enable testing hypotheses on realistic FL datasets that were previously too resource-intensive. We release a suite of benchmarks that evaluates an algorithm's overall performance on a diverse set of realistic scenarios.