Federated Learning

Federated Learning Algorithms

Federated Averaging

In NVIDIA FLARE, FedAvg is implemented through the Scatter and Gather Workflow. In the federated averaging workflow, a set of initial weights is distributed to client workers, who perform local training. After local training, each client returns its updated local weights as a Shareable; these are aggregated (averaged) into a new set of global weights. The global weights are then redistributed to the clients, and the process repeats for the specified number of rounds.
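The aggregation step above can be sketched as a data-size-weighted average of the clients' returned weights. This is a minimal illustration, not the FLARE implementation; the function name and dict-of-arrays weight format are assumptions for the example.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """One FedAvg aggregation round: average client weights,
    weighted by each client's number of local training samples."""
    total = sum(client_sizes)
    return {
        name: sum((n / total) * w[name]
                  for w, n in zip(client_weights, client_sizes))
        for name in client_weights[0]
    }

# Two clients with a single layer; client 0 trained on twice as much data.
clients = [{"fc": np.array([1.0, 1.0])}, {"fc": np.array([4.0, 4.0])}]
new_global = fed_avg(clients, client_sizes=[2, 1])
# weighted mean per element: (2/3)*1.0 + (1/3)*4.0 = 2.0
```

The weighting by sample count is what distinguishes FedAvg from a plain mean and keeps clients with more data proportionally more influential.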

FedProx

FedProx implements a loss function that penalizes a client's local weights for deviating from the global model. An example configuration can be found in cifar10_fedprox of the CIFAR-10 example.
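The FedProx penalty is a proximal term added to the ordinary local training loss, (mu/2)·||w − w_global||². A minimal sketch (the function name and NumPy weight format are assumptions for illustration):

```python
import numpy as np

def fedprox_loss(local_loss, local_w, global_w, mu=0.01):
    """Local training loss plus the FedProx proximal term
    (mu/2) * ||w - w_global||^2, which discourages client drift."""
    prox = sum(np.sum((local_w[name] - global_w[name]) ** 2)
               for name in local_w)
    return local_loss + 0.5 * mu * prox

# With mu=1.0 and squared distance 2.0, the penalty adds 1.0 to the loss.
loss = fedprox_loss(1.0,
                    {"fc": np.array([1.0, 1.0])},
                    {"fc": np.zeros(2)},
                    mu=1.0)
# loss == 2.0
```

Larger values of mu pull local training more strongly toward the current global model, trading local fit for stability under non-IID data.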

FedOpt

FedOpt implements a ShareableGenerator that can use a specified optimizer and learning rate scheduler when updating the global model. An example configuration can be found in cifar10_fedopt of the CIFAR-10 example.
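The idea behind FedOpt is that the server treats the difference between the current global model and the aggregated client model as a pseudo-gradient and feeds it to a server-side optimizer instead of replacing the weights outright. A hedged sketch using SGD with momentum as that optimizer (the function name and state layout are assumptions, not FLARE's API):

```python
import numpy as np

def fedopt_update(global_w, avg_client_w, momentum_buf, lr=1.0, beta=0.9):
    """One server-side FedOpt step: apply momentum SGD to the
    pseudo-gradient delta = global_w - avg_client_w."""
    new_global, new_buf = {}, {}
    for name in global_w:
        delta = global_w[name] - avg_client_w[name]
        buf = beta * momentum_buf.get(name, 0.0) + delta
        new_buf[name] = buf
        new_global[name] = global_w[name] - lr * buf
    return new_global, new_buf

# With an empty momentum buffer and lr=1.0 this first step
# reduces to plain FedAvg: the global model moves all the way
# to the averaged client model.
g, buf = fedopt_update({"fc": np.array([1.0])},
                       {"fc": np.array([0.0])},
                       momentum_buf={})
```

With momentum carried across rounds, a consistent update direction is amplified, which is the main benefit of server-side optimizers over plain averaging.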

SCAFFOLD

SCAFFOLD uses a slightly modified version of the CIFAR-10 Learner implementation, namely the CIFAR10ScaffoldLearner, which adds a correction term during local training, following the implementation described in Li et al.
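SCAFFOLD's correction term adds the difference between the server and client control variates to each local gradient step, counteracting client drift on non-IID data. A minimal sketch of one corrected step (the function name and plain-array formats are assumptions for illustration):

```python
import numpy as np

def scaffold_step(w, grad, c_global, c_local, lr=0.1):
    """One SCAFFOLD-corrected local SGD step:
    w <- w - lr * (grad + c_global - c_local).
    The control-variate difference steers local updates back
    toward the direction of the global objective."""
    return {name: w[name] - lr * (grad[name] + c_global[name] - c_local[name])
            for name in w}

w = scaffold_step({"fc": np.array([1.0])},
                  grad={"fc": np.array([1.0])},
                  c_global={"fc": np.array([0.5])},
                  c_local={"fc": np.array([0.0])})
# effective gradient is 1.0 + 0.5 = 1.5, so w = 1.0 - 0.1 * 1.5 = 0.85
```

In the full algorithm the control variates themselves are also updated after local training; this sketch shows only the corrected gradient step.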

Ditto

Ditto uses a slightly modified version of the prostate Learner implementation, namely the ProstateDittoLearner, which decouples the local personalized model from the global model by training an additional personalized model with a controllable proximal term. See the prostate segmentation example for a setup that uses Ditto alongside FedProx, FedAvg, and centralized training.
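In Ditto, the personalized model is updated with its own gradient plus a proximal pull toward the current global model, so each site keeps a model adapted to its data while still benefiting from federation. A hedged one-step sketch (names and formats are assumptions for illustration):

```python
import numpy as np

def ditto_step(personal_w, grad, global_w, lr=0.1, lam=0.1):
    """One Ditto personalized-model step:
    v <- v - lr * (grad + lam * (v - w_global)).
    lam controls how strongly the personalized model is pulled
    toward the global model (lam=0 recovers purely local training)."""
    return {name: personal_w[name]
                  - lr * (grad[name] + lam * (personal_w[name] - global_w[name]))
            for name in personal_w}

v = ditto_step({"fc": np.array([2.0])},
               grad={"fc": np.array([0.0])},
               global_w={"fc": np.array([0.0])},
               lam=1.0)
# with zero gradient, only the prox pull acts: v = 2.0 - 0.1 * 2.0 = 1.8
```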

Federated Analytics

Federated analytics may be used to gather summary information about the data at participating sites without moving the raw data itself. An example can be found in the Federated Analysis example.
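A typical federated analytics pattern is that each site computes a local summary (for example, a histogram of a feature) and only these aggregates are merged centrally. A minimal sketch of the merge step (the function name is an assumption for illustration):

```python
from collections import Counter

def merge_histograms(local_histograms):
    """Combine per-site histograms into a global histogram.
    Only aggregate counts leave each site; raw records do not."""
    global_hist = Counter()
    for hist in local_histograms:
        global_hist.update(hist)
    return dict(global_hist)

site_a = {"benign": 10, "malignant": 3}
site_b = {"benign": 7, "malignant": 5}
combined = merge_histograms([site_a, site_b])
# combined == {"benign": 17, "malignant": 8}
```

Real deployments often add safeguards on top of this, such as minimum-count thresholds, so that small aggregates cannot reveal individual records.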