Example Applications

NVIDIA FLARE provides several tutorials and examples in the examples directory to help you get started with federated learning and explore specific features.

The following tutorials and quickstart guides walk you through some of these examples:

  1. Hello World introduction to NVFlare.

    1.1. Deep Learning to Federated Learning
    1.2. Step-by-Step Examples
  2. Hello World Examples, which can be run from the hello_world notebook.

    2.1. Workflows
    2.2. Deep Learning
    • Hello PyTorch - Example image classifier using FedAvg and PyTorch as the deep learning training framework

    • Hello TensorFlow - Example image classifier using FedAvg and TensorFlow as the deep learning training framework

  3. Tutorial notebooks

  4. FL algorithms

  • Federated Learning with CIFAR-10 (GitHub) - Includes examples of using FedAvg, FedProx, FedOpt, SCAFFOLD, homomorphic encryption, and streaming of TensorBoard metrics to the server during training

  • Federated XGBoost - Includes examples of histogram-based and tree-based algorithms. Tree-based algorithms also include bagging and cyclic approaches

  5. Traditional ML examples

  6. Medical Image Analysis

  7. Federated Statistics

  8. Federated Site Policies

  9. Experiment tracking

  10. NLP

For the complete collection of example applications, see https://github.com/NVIDIA/NVFlare/tree/main/examples.

Setting up a virtual environment for examples and notebooks

It is recommended to set up a virtual environment before installing the dependencies for the examples. Install the tools needed to create a virtual environment with:

python3 -m pip install --user --upgrade pip
python3 -m pip install --user virtualenv

Once the tools are installed, you can create a virtual environment with:

$ python3 -m venv nvflare_example

This will create the nvflare_example directory in the current working directory if it doesn’t exist, and also create directories inside it containing a copy of the Python interpreter, the standard library, and various supporting files.
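Equivalently, the same kind of environment can be created programmatically with the standard library's venv module. The sketch below uses a temporary directory and with_pip=False only to keep the demo fast; the command above installs pip by default:

```python
import pathlib
import tempfile
import venv

# Create a virtual environment like `python3 -m venv nvflare_example`,
# but under a temporary directory so this demo leaves no clutter behind.
target = pathlib.Path(tempfile.mkdtemp()) / "nvflare_example"
venv.create(target, with_pip=False)  # with_pip=False skips pip installation for speed

# The new directory holds the interpreter copy/symlink and its config files.
print(sorted(p.name for p in target.iterdir()))
```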

Activate the virtual environment by running the following command:

$ source nvflare_example/bin/activate

Installing required packages

In each example folder, install required packages for training:

pip install --upgrade pip
pip install -r requirements.txt

(Optional) Some examples contain scripts for plotting TensorBoard event files. If needed, also install the additional requirements in the example folder:

pip install -r plot-requirements.txt

JupyterLab with your virtual environment for Notebooks

To run examples including notebooks, we recommend using JupyterLab.

After activating your virtual environment, install JupyterLab:

pip install jupyterlab

If you need to register the virtual environment you created so it is usable in JupyterLab, you can register the kernel with:

python -m ipykernel install --user --name="nvflare_example"

Start JupyterLab:

jupyter lab .

When you open a notebook, select the kernel you registered, “nvflare_example”, using the dropdown menu at the top right.

Custom Code in Example Apps

There are several ways to make custom code available to clients when using NVIDIA FLARE. Most hello-* examples use a custom folder within the FL application. Note that using a custom folder in the app must be allowed when using secure provisioning; by default, this option is disabled in secure mode. POC mode, however, works with custom code by default.

In contrast, the CIFAR-10, prostate segmentation, and BraTS18 segmentation examples assume that the learner code is already installed on the client’s system and available in the PYTHONPATH. Hence, the app folders do not include the custom code. The PYTHONPATH is set in the run_poc.sh or run_secure.sh scripts of the example. Running these scripts as described in the README makes the learner code available to the clients.

Federated Learning Algorithms

Federated Averaging

In NVIDIA FLARE, FedAvg is implemented through the Scatter and Gather Workflow. In the federated averaging workflow, a set of initial weights is distributed to client workers, who perform local training. After local training, clients return their local weights as Shareables, which are aggregated (averaged). This new set of global average weights is redistributed to the clients, and the process repeats for the specified number of rounds.
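The averaging step can be sketched as follows. This is a minimal illustration of size-weighted federated averaging, not NVFlare's actual Scatter and Gather or Shareable API; fed_avg and its arguments are hypothetical names:

```python
def fed_avg(client_weights, client_sizes):
    """Size-weighted average of per-client weight dictionaries.

    Hypothetical helper for illustration only; NVFlare performs this
    aggregation inside its Scatter and Gather workflow.
    """
    total = sum(client_sizes)
    return {
        key: sum(w[key] * n for w, n in zip(client_weights, client_sizes)) / total
        for key in client_weights[0]
    }

# Two clients with a single scalar "layer"; the first holds twice as much data.
clients = [{"layer": 1.0}, {"layer": 4.0}]
avg = fed_avg(clients, client_sizes=[2, 1])  # (2*1.0 + 1*4.0) / 3 = 2.0
```

Each round, the server would redistribute the averaged weights to the clients as the new global model.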


FedProx

FedProx implements a loss function that penalizes a client’s local weights based on their deviation from the global model. An example configuration can be found in cifar10_fedprox of the CIFAR-10 example.
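The FedProx idea can be sketched via the proximal penalty it adds to the client's local objective. The function name and the mu value below are illustrative, not NVFlare API:

```python
def fedprox_penalty(local_w, global_w, mu=0.01):
    """FedProx proximal term: (mu / 2) * ||w - w_global||^2.

    Added to the client's usual task loss so that local training is
    pulled back toward the current global model. Illustrative sketch only.
    """
    return 0.5 * mu * sum((w - g) ** 2 for w, g in zip(local_w, global_w))

penalty = fedprox_penalty([1.0, 2.0], [0.0, 0.0], mu=0.1)  # 0.05 * (1 + 4) = 0.25
```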


FedOpt

FedOpt implements a ShareableGenerator that can use a specified Optimizer and Learning Rate Scheduler when updating the global model. An example configuration can be found in cifar10_fedopt of the CIFAR-10 example.
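Conceptually, the server treats the difference between the current global model and the aggregated client result as a pseudo-gradient and feeds it to a server-side optimizer. A sketch with plain SGD-plus-momentum (hypothetical names, not NVFlare's ShareableGenerator API):

```python
def fedopt_server_update(global_w, avg_client_w, momentum, lr=1.0, beta=0.9):
    """One server-side optimizer step on the pseudo-gradient.

    Sketch of the FedOpt idea with SGD + momentum; a learning rate
    scheduler could adjust `lr` between rounds.
    """
    new_w, new_m = [], []
    for g, c, m in zip(global_w, avg_client_w, momentum):
        grad = g - c            # pseudo-gradient: global minus client average
        m = beta * m + grad     # momentum accumulation
        new_m.append(m)
        new_w.append(g - lr * m)
    return new_w, new_m

# First round: no momentum yet, so the step lands on the client average.
w, m = fedopt_server_update([1.0], [0.0], [0.0])
```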


SCAFFOLD

SCAFFOLD uses a slightly modified version of the CIFAR-10 Learner implementation, namely the CIFAR10ScaffoldLearner, which adds a correction term during local training, following the implementation described in Li et al.
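The correction term can be sketched as adjusting each local gradient by the difference between the server and client control variates. The function name is hypothetical; in NVFlare the real logic lives inside CIFAR10ScaffoldLearner:

```python
def scaffold_corrected_grad(grad, c_server, c_client):
    """SCAFFOLD-style drift correction: g + (c_server - c_client).

    The control variates estimate the server's and the client's update
    directions; their difference counteracts client drift on non-IID data.
    Illustrative sketch only.
    """
    return [g + cs - cc for g, cs, cc in zip(grad, c_server, c_client)]

corrected = scaffold_corrected_grad([1.0], c_server=[0.5], c_client=[0.2])
```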


Ditto

Ditto uses a slightly modified version of the prostate Learner implementation, namely the ProstateDittoLearner, which decouples the local personalized model from the global model via an additional model training step and a controllable prox term. See the prostate segmentation example for an example that uses Ditto in addition to FedProx, FedAvg, and centralized training.
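The personalized update can be sketched as a gradient step plus a prox pull toward the latest global weights, run alongside the normal global-model training. Names and values below are hypothetical, not the ProstateDittoLearner API:

```python
def ditto_personalized_step(pers_w, global_w, grad, lam=0.1, lr=0.5):
    """One update of Ditto's personalized model.

    The task gradient is combined with lam * (v - w_global), the
    controllable prox term that keeps the personalized model near the
    global one. Illustrative sketch only.
    """
    return [v - lr * (g + lam * (v - w))
            for v, g, w in zip(pers_w, grad, global_w)]

# With zero task gradient, the prox term pulls the personalized weight
# slightly toward the global value.
pers = ditto_personalized_step([1.0], global_w=[0.0], grad=[0.0])
```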

Federated XGBoost

  • Federated XGBoost (GitHub) - Includes examples of histogram-based and tree-based algorithms. Tree-based algorithms also include bagging and cyclic approaches

Federated Analytics