Example Applications

NVIDIA FLARE provides several examples in the examples directory to help you get started with federated learning and explore specific features.

The following quickstart guides walk you through some of these examples:

  1. Hello World Examples

  1.1. Workflows

  1.2. Deep Learning

  • Hello PyTorch - Example image classifier using FedAvg and PyTorch as the deep learning training framework

  • Hello PyTorch with TensorBoard - Example building on Hello PyTorch with TensorBoard streaming from clients to server

  • Hello TensorFlow - Example image classifier using FedAvg and TensorFlow as the deep learning training framework

  2. FL algorithms

  • Federated Learning with CIFAR-10 (GitHub) - Includes examples of using FedAvg, FedProx, FedOpt, SCAFFOLD, homomorphic encryption, and streaming of TensorBoard metrics to the server during training

  • Federated XGBoost (GitHub) - Includes examples of histogram-based and tree-based algorithms. Tree-based algorithms also include bagging and cyclic approaches

  3. Medical Image Analysis

  4. Federated Statistics

  5. Federated Site Policies

For the complete collection of example applications, see https://github.com/NVIDIA/NVFlare/tree/main/examples.

Custom Code in Example Apps

There are several ways to make custom code available to clients when using NVIDIA FLARE. Most hello-* examples use a custom folder within the FL application. Note that using a custom folder in the app must be allowed when using secure provisioning; by default, this option is disabled in secure mode. POC mode, however, allows custom code by default.
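For illustration, a hello-* app with custom code is typically laid out as follows (the file names are representative; my_trainer.py is a hypothetical placeholder for the training code the clients load):

    app/
    ├── config/
    │   ├── config_fed_client.json
    │   └── config_fed_server.json
    └── custom/
        └── my_trainer.py   # custom training code picked up by the clients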

In contrast, the CIFAR-10, prostate segmentation, and BraTS18 segmentation examples assume that the learner code is already installed on the client’s system and available in the PYTHONPATH. Hence, the app folders do not include the custom code. The PYTHONPATH is set in the run_poc.sh or run_secure.sh scripts of each example; running these scripts as described in the README makes the learner code available to the clients.

Federated Learning Algorithms

Federated Averaging

In NVIDIA FLARE, FedAvg is implemented through the Scatter and Gather Workflow. In the federated averaging workflow, a set of initial weights is distributed to client workers that perform local training. After local training, each client returns its local weights as Shareables, which are aggregated (averaged). The new set of global average weights is redistributed to the clients, and the process repeats for the specified number of rounds.
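For reference, standard FedAvg (McMahan et al.) computes the new global weights as a data-size-weighted average of the returned client weights; the weighting used by a particular aggregator configuration may differ:

    w^{t+1} = \sum_{k=1}^{K} \frac{n_k}{n} \, w_k^{t+1}, \qquad n = \sum_{k=1}^{K} n_k

where w_k^{t+1} are the weights returned by client k after local training on n_k samples in round t.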

FedProx

FedProx implements a Loss function to penalize a client’s local weights based on deviation from the global model. An example configuration can be found in cifar10_fedprox of the CIFAR-10 example.
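For reference, the FedProx penalty augments each client's local objective with a proximal term; a sketch, where F_k is client k's local loss, w^t is the current global model, and \mu is the configurable FedProx coefficient:

    \min_{w} \; F_k(w) + \frac{\mu}{2} \lVert w - w^{t} \rVert^2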

FedOpt

FedOpt implements a ShareableGenerator that can use a specified Optimizer and Learning Rate Scheduler when updating the global model. An example configuration can be found in cifar10_fedopt of the CIFAR-10 example.
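As a minimal sketch of the idea (illustrative only, not the NVFlare ShareableGenerator API), FedOpt treats the difference between the current global weights and the aggregated client weights as a pseudo-gradient and lets a server-side optimizer apply it:

    import torch

    def fedopt_update(global_model: torch.nn.Module,
                      aggregated_weights: dict,
                      server_optimizer: torch.optim.Optimizer) -> None:
        # Pseudo-gradient: current global weights minus the aggregated
        # client weights. Stepping the optimizer moves the global model
        # toward the aggregate, modulated by momentum or adaptivity.
        server_optimizer.zero_grad()
        for name, param in global_model.named_parameters():
            param.grad = param.data - aggregated_weights[name]
        server_optimizer.step()

    # Usage sketch: plain SGD with lr=1.0 reproduces FedAvg; adding
    # momentum or switching to Adam gives the FedOpt variants.
    # opt = torch.optim.SGD(global_model.parameters(), lr=1.0, momentum=0.9)
    # fedopt_update(global_model, aggregated_weights, opt)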

SCAFFOLD

SCAFFOLD uses a slightly modified version of the CIFAR-10 Learner implementation, namely the CIFAR10ScaffoldLearner, which adds a correction term during local training, following the implementation described in Li et al.
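For reference, the SCAFFOLD correction modifies each local SGD step using server and client control variates c and c_i; a sketch of the local update with learning rate \eta:

    y_i \leftarrow y_i - \eta \left( g_i(y_i) - c_i + c \right)

where g_i is the local gradient; the control variates are updated after local training to estimate and correct client drift.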

Ditto

Ditto uses a slightly modified version of the prostate Learner implementation, namely the ProstateDittoLearner, which decouples the local personalized model from the global model via additional model training and a controllable prox term. See the prostate segmentation example for an example of Ditto alongside FedProx, FedAvg, and centralized training.
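For reference, Ditto personalizes by solving a per-client objective that keeps the personalized model v_k close to the global solution w^* via the prox term, with controllable weight \lambda; a sketch:

    \min_{v_k} \; F_k(v_k) + \frac{\lambda}{2} \lVert v_k - w^* \rVert^2

where F_k is client k's local loss.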

Federated XGBoost

  • Federated XGBoost (GitHub) - Includes examples of histogram-based and tree-based algorithms. Tree-based algorithms also include bagging and cyclic approaches

Federated Analytics