In NVIDIA FLARE, FedAvg is implemented through the Scatter and Gather workflow. In the federated averaging workflow, a set of initial weights is distributed to the client workers, who perform local training. After local training, each client returns its local weights as Shareables, which are aggregated (averaged) on the server. The new set of global average weights is redistributed to the clients, and the process repeats for the specified number of rounds.
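The aggregation step above can be sketched as a sample-weighted average of the client weights. This is an illustrative sketch, not the NVFlare implementation; the names `fed_avg` and `client_results` are hypothetical.

```python
# Minimal sketch of the FedAvg aggregation step, assuming each client
# returns its weights as a dict of numpy arrays plus its local sample count.
# fed_avg / client_results are illustrative names, not NVFlare APIs.
import numpy as np

def fed_avg(client_results):
    """Average client weights, weighted by the number of local samples."""
    total = sum(n for _, n in client_results)
    avg = {}
    for name in client_results[0][0]:
        avg[name] = sum(w[name] * (n / total) for w, n in client_results)
    return avg

# Two clients with different local sample counts
c1 = ({"w": np.array([1.0, 2.0])}, 100)
c2 = ({"w": np.array([3.0, 4.0])}, 300)
new_global = fed_avg([c1, c2])
# weighted average: 0.25 * [1, 2] + 0.75 * [3, 4] = [2.5, 3.5]
```

The weighting by sample count means clients with more data pull the global model further toward their local solution.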
FedOpt implements a server-side component that can use a specified Optimizer and Learning Rate Scheduler when updating the global model. An example configuration can be found in cifar10_fedopt of the CIFAR-10 example.
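The core idea of FedOpt is to treat the difference between the current global model and the averaged client model as a pseudo-gradient, then apply a server-side optimizer to it. A minimal sketch using plain SGD with momentum (the function name `fedopt_update` is illustrative, not an NVFlare API):

```python
# Sketch of a FedOpt-style server update: the averaged client model defines
# a pseudo-gradient, and a server optimizer (here, SGD with momentum)
# applies it to the global weights. Illustrative only, not NVFlare code.
import numpy as np

def fedopt_update(global_w, avg_client_w, momentum_buf, lr=1.0, beta=0.9):
    # Pseudo-gradient: direction from the averaged client model back to global
    grad = {k: global_w[k] - avg_client_w[k] for k in global_w}
    new_global, new_buf = {}, {}
    for k in global_w:
        buf = beta * momentum_buf.get(k, 0.0) + grad[k]
        new_buf[k] = buf
        new_global[k] = global_w[k] - lr * buf
    return new_global, new_buf

g = {"w": np.array([1.0, 1.0])}
avg = {"w": np.array([0.0, 2.0])}
g, buf = fedopt_update(g, avg, {}, lr=1.0, beta=0.9)
# with lr=1.0 and an empty momentum buffer, the first round reduces to
# plain FedAvg: the new global model equals the averaged client model
```

With a learning rate other than 1.0, or with momentum accumulated across rounds, the server moves toward the averaged client model more cautiously or more aggressively than plain FedAvg.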
SCAFFOLD uses a slightly modified version of the CIFAR-10 Learner implementation, namely the CIFAR10ScaffoldLearner, which adds a correction term during local training, following the implementation described in Li et al. An example configuration can be found in cifar10_scaffold of the CIFAR-10 example.
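The SCAFFOLD correction term adjusts each local gradient step using server and client control variates, which counteracts client drift under heterogeneous data. A sketch of a single corrected local step (the names here are illustrative, not the CIFAR10ScaffoldLearner API):

```python
# Sketch of one SCAFFOLD-corrected local SGD step, assuming c_global and
# c_local are the server and client control variates maintained across
# rounds. Illustrative only, not NVFlare code.
import numpy as np

def scaffold_local_step(w, grad, c_global, c_local, lr=0.1):
    """Corrected step: w <- w - lr * (grad - c_local + c_global)."""
    return w - lr * (grad - c_local + c_global)

w = np.array([1.0])
w_next = scaffold_local_step(
    w,
    grad=np.array([0.5]),      # local mini-batch gradient
    c_global=np.array([0.2]),  # server control variate
    c_local=np.array([0.1]),   # this client's control variate
    lr=0.1,
)
# corrected gradient: 0.5 - 0.1 + 0.2 = 0.6, so w_next = 1.0 - 0.06 = 0.94
```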
Ditto uses a slightly modified version of the prostate Learner implementation, namely the ProstateDittoLearner, which decouples the local personalized model from the global model via an additional model training step and a controllable prox term. See the prostate segmentation example for an example using Ditto in addition to FedProx, FedAvg, and centralized training.
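The prox term in Ditto's personalized training penalizes the distance between the personalized model and the global model, with a coefficient controlling how strongly the personalized model is pulled toward the global one. A sketch of a single personalized update step (names are illustrative, not the ProstateDittoLearner API):

```python
# Sketch of one Ditto personalized-model step, assuming lam is the
# controllable prox coefficient: the update follows the local gradient
# plus lam * (w_personal - w_global). Illustrative only, not NVFlare code.
import numpy as np

def ditto_personal_step(w_personal, grad_personal, w_global, lr=0.1, lam=0.5):
    """Personalized step with a prox pull toward the global model."""
    prox = lam * (w_personal - w_global)
    return w_personal - lr * (grad_personal + prox)

wp = np.array([2.0])
wg = np.array([0.0])
wp_next = ditto_personal_step(
    wp, grad_personal=np.array([0.0]), w_global=wg, lr=0.1, lam=0.5
)
# even with zero local gradient, the prox term pulls 2.0 toward 0.0:
# 2.0 - 0.1 * (0.5 * 2.0) = 1.9
```

Setting `lam = 0` recovers purely local training; larger values keep the personalized model close to the global one.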
NVFlare supports federated learning using the popular gradient boosting library XGBoost. It uses the XGBoost library with the federated plugin (XGBoost version >= 1.7.0rc1) to perform the learning.
Using XGBoost with NVFlare has the following benefits compared with running federated XGBoost directly:
The XGBoost instance's life-cycle is managed by NVFlare. Both the XGBoost client and server are started and stopped automatically by the NVFlare workflow.
For histogram-based XGBoost, the federated server can be configured automatically with an auto-assigned port number.
When mutual TLS is used, the certificates are managed by NVFlare using the existing provisioning process.
There is no need to manually configure each instance. Instance-specific parameters such as rank are assigned automatically by the NVFlare controller.
Federated Horizontal XGBoost (GitHub) - Includes examples of histogram-based and tree-based algorithms. The tree-based algorithms also include bagging and cyclic approaches.
Federated Vertical XGBoost (GitHub) - Example using Private Set Intersection and XGBoost on vertically split HIGGS data.
Federated Statistics for medical imaging (GitHub) - Example of gathering local image histograms to compute the global dataset histograms.
Federated Statistics for tabular data with DataFrame (GitHub) - Example of gathering local statistics summaries from Pandas DataFrames to compute the global dataset statistics.
Federated Statistics with MONAI Statistics integration for Spleen CT Image (GitHub) - Example demonstrating MONAI statistics integration and a few other features of federated statistics.
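The histogram-based statistics examples above rest on a simple property: when every client bins its data over the same bin edges, the global histogram is just the element-wise sum of the local counts, so no raw data needs to leave a site. A minimal sketch (site names and variables are illustrative):

```python
# Sketch of global histogram aggregation for federated statistics, assuming
# every client computes a local histogram over the SAME shared bin edges.
# The server then sums the local counts element-wise. Illustrative only.
import numpy as np

bin_edges = np.array([0.0, 0.5, 1.0])  # agreed on by all sites in advance

# Each site computes its histogram locally; only the counts are shared.
site1_counts, _ = np.histogram(np.array([0.1, 0.2, 0.7]), bins=bin_edges)
site2_counts, _ = np.histogram(np.array([0.6, 0.9]), bins=bin_edges)

# The server aggregates: global counts = sum of local counts per bin.
global_hist = site1_counts + site2_counts
# site1 = [2, 1], site2 = [0, 2], so global = [2, 3]
```

The same additive decomposition underlies other federated statistics such as counts, sums, and (via sums of squares) variances.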