FL Algorithms

Federated Averaging

In NVIDIA FLARE, FedAvg is implemented through the Scatter and Gather Workflow. In the federated averaging workflow, a set of initial weights is distributed to client workers, which perform local training. After local training, clients return their local weights as Shareables, which are aggregated (averaged) into a new set of global weights. These global weights are redistributed to the clients, and the process repeats for the specified number of rounds.
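The aggregation step can be illustrated with a minimal numpy sketch (a conceptual illustration, not NVFlare's actual aggregator code; `fed_avg` and the client sizes are hypothetical names):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """One round of FedAvg: average each layer's weights across clients,
    weighted by the number of local training samples."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Two clients, each returning two weight arrays (one per layer)
w1 = [np.array([1.0, 2.0]), np.array([0.0])]
w2 = [np.array([3.0, 4.0]), np.array([2.0])]
new_global = fed_avg([w1, w2], client_sizes=[1, 1])
# With equal sizes this is a plain mean: [2.0, 3.0] and [1.0]
```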


FedProx implements a loss function that penalizes a client’s local weights based on their deviation from the global model. An example configuration can be found in cifar10_fedprox of the CIFAR-10 example.
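The penalty is the FedProx proximal term, (mu/2)·||w − w_global||², added to the client's task loss. A minimal numpy sketch (a conceptual illustration; the function name and `mu` default are assumptions, not NVFlare's API):

```python
import numpy as np

def fedprox_loss(local_loss, local_w, global_w, mu=0.01):
    """Task loss plus the FedProx proximal term (mu/2) * ||w - w_global||^2,
    summed over all weight arrays."""
    prox = 0.5 * mu * sum(np.sum((lw - gw) ** 2) for lw, gw in zip(local_w, global_w))
    return local_loss + prox

local_w = [np.array([1.0, 1.0])]
global_w = [np.array([0.0, 0.0])]
loss = fedprox_loss(local_loss=0.5, local_w=local_w, global_w=global_w, mu=0.1)
# proximal term = 0.5 * 0.1 * 2.0 = 0.1, so loss = 0.6
```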


FedOpt implements a ShareableGenerator that can use a specified optimizer and learning rate scheduler when updating the global model. An example configuration can be found in cifar10_fedopt of the CIFAR-10 example.
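The core idea in FedOpt is to treat the difference between the current global model and the aggregated client result as a pseudo-gradient and feed it to a server-side optimizer. A minimal SGD-with-momentum sketch (conceptual only; the function and its defaults are assumptions, not NVFlare's ShareableGenerator interface):

```python
import numpy as np

def fedopt_update(global_w, client_avg_w, momentum_buf, lr=1.0, momentum=0.9):
    """Server-side SGD-with-momentum step on the pseudo-gradient
    (global weights minus the averaged client weights)."""
    new_w, new_buf = [], []
    for gw, cw, m in zip(global_w, client_avg_w, momentum_buf):
        grad = gw - cw            # pseudo-gradient for this layer
        m = momentum * m + grad   # momentum accumulation
        new_w.append(gw - lr * m) # optimizer step on the global model
        new_buf.append(m)
    return new_w, new_buf

global_w = [np.array([1.0])]
client_avg = [np.array([0.0])]
buf = [np.zeros(1)]
new_w, buf = fedopt_update(global_w, client_avg, buf, lr=0.5)
# pseudo-gradient = 1.0, momentum buffer = 1.0, new global weight = 0.5
```

With lr=1.0 and momentum=0.0 this reduces to plain FedAvg, which is why FedOpt is often described as a generalization of it.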


SCAFFOLD uses a slightly modified version of the CIFAR-10 Learner implementation, namely the CIFAR10ScaffoldLearner, which adds a correction term during local training, following the implementation described in Li et al. An example configuration can be found in cifar10_scaffold of the CIFAR-10 example.
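SCAFFOLD's correction term adjusts each local gradient step using server and client control variates, w ← w − lr·(g − c_i + c), to counter client drift. A minimal numpy sketch of one corrected step (a conceptual illustration of the SCAFFOLD update rule, not the CIFAR10ScaffoldLearner code):

```python
import numpy as np

def scaffold_local_step(w, grad, c_global, c_local, lr=0.1):
    """One drift-corrected local SGD step: w <- w - lr * (grad - c_local + c_global)."""
    return [wi - lr * (g - ci + c)
            for wi, g, ci, c in zip(w, grad, c_local, c_global)]

w = [np.array([1.0])]
grad = [np.array([0.5])]
c_global = [np.array([0.2])]  # server control variate c
c_local = [np.array([0.1])]   # this client's control variate c_i
w_new = scaffold_local_step(w, grad, c_global, c_local, lr=1.0)
# corrected gradient = 0.5 - 0.1 + 0.2 = 0.6, so w_new = 0.4
```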


Ditto uses a slightly modified version of the prostate Learner implementation, namely the ProstateDittoLearner, which decouples the local personalized model from the global model via an additional model training step and a controllable prox term. See the prostate segmentation example for an example using Ditto alongside FedProx, FedAvg, and centralized training.
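In Ditto, the personalized model v is trained separately from the global model, with a prox term pulling it toward the global weights: v ← v − lr·(∇f(v) + λ·(v − w_global)). A minimal numpy sketch of one such step (conceptual only; the function name and defaults are assumptions, not the ProstateDittoLearner API):

```python
import numpy as np

def ditto_personal_step(v, grad_v, global_w, lr=0.1, lam=1.0):
    """One update of the personalized model v: its own gradient plus a
    prox term lam * (v - w_global) pulling it toward the global model."""
    return [vi - lr * (g + lam * (vi - gw))
            for vi, g, gw in zip(v, grad_v, global_w)]

v = [np.array([2.0])]
grad_v = [np.array([0.0])]
global_w = [np.array([0.0])]
v_new = ditto_personal_step(v, grad_v, global_w, lr=0.5, lam=1.0)
# prox pull = 1.0 * (2.0 - 0.0) = 2.0, so v_new = 2.0 - 0.5 * 2.0 = 1.0
```

Setting lam=0 recovers purely local training, while a large lam keeps the personalized model close to the global one; this is the "controllable" knob mentioned above.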

Federated XGBoost

NVFlare supports federated learning with the popular gradient boosting library XGBoost. It uses the XGBoost library with its federated plugin (xgboost version >= 1.7.0rc1) to perform the learning.

Using XGBoost with NVFlare has the following benefits compared with running federated XGBoost directly:

  • The XGBoost instance’s life-cycle is managed by NVFlare. Both the XGBoost client and server are started and stopped automatically by the NVFlare workflow.

  • For histogram-based XGBoost, the federated server can be configured automatically with an auto-assigned port number.

  • When mutual TLS is used, the certificates are managed by NVFlare using the existing provisioning process.

  • No need to manually configure each instance. Instance-specific parameters such as rank are assigned automatically by the NVFlare controller.

Two examples are available:

  • Federated Horizontal XGBoost (GitHub) - Includes examples of histogram-based and tree-based algorithms. The tree-based algorithms also include bagging and cyclic approaches.

  • Federated Vertical XGBoost (GitHub) - Example using Private Set Intersection and XGBoost on vertically split HIGGS data.
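In the histogram-based approach, clients never share raw data; each client sums its gradient statistics into per-bin histograms, which the server aggregates before evaluating splits. A minimal numpy sketch of this idea (a conceptual illustration only, not the XGBoost federated plugin API):

```python
import numpy as np

def local_histogram(grads, bin_ids, n_bins):
    """Each client sums its gradient statistics into per-bin buckets
    for one feature; only these histograms leave the client."""
    hist = np.zeros(n_bins)
    np.add.at(hist, bin_ids, grads)  # accumulate grads into their bins
    return hist

# Two clients with private data; only histograms are shared
h1 = local_histogram(np.array([0.5, -0.2]), np.array([0, 1]), n_bins=3)
h2 = local_histogram(np.array([0.3]), np.array([0]), n_bins=3)
global_hist = h1 + h2  # server aggregates, then scores candidate splits
# global_hist = [0.8, -0.2, 0.0]
```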

Federated Analytics