Scatter and Gather Workflow¶
The federated scatter and gather workflow is an included reference implementation of the default workflow of previous versions of NVIDIA FLARE: a Server aggregates results from Clients that have produced Shareable results with their Trainer.
At its core, the control_flow of nvflare.app_common.workflows.scatter_and_gather.ScatterAndGather is a for loop.
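In outline, each round broadcasts the current Learnable to the clients as a training task, gathers their results through an Aggregator, then updates and persists the Learnable. The following is a simplified sketch of that loop, not the actual implementation of ScatterAndGather; the broadcast_and_wait, shareable_generator, and persistor names are illustrative stand-ins for the configured components.

def scatter_and_gather_loop(num_rounds, aggregator, persistor,
                            shareable_generator, broadcast_and_wait, fl_ctx):
    # Illustrative sketch of the scatter-and-gather control flow.
    learnable = persistor.load(fl_ctx)
    for current_round in range(num_rounds):
        # Scatter: send the current Learnable to the clients as task data.
        task_data = shareable_generator.learnable_to_shareable(learnable, fl_ctx)
        broadcast_and_wait(task_name="train", data=task_data, fl_ctx=fl_ctx)

        # Gather: client results have been fed to the Aggregator via accept();
        # combine them into a single Shareable.
        aggregated = aggregator.aggregate(fl_ctx)

        # Update the Learnable with the aggregated result and persist it.
        learnable = shareable_generator.shareable_to_learnable(aggregated, fl_ctx)
        persistor.save(learnable, fl_ctx)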
Trainer¶
A Trainer is a type of Executor in NVIDIA FLARE. The execute() method needs to get the required information from the Shareable, use that in its training process, then return the local training result as a Shareable. You will need to configure your own Trainer in config_fed_client.json. Example FL configurations can be found in NVIDIA FLARE Application.
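A minimal Trainer might look like the following sketch. The SimpleTrainer name, the "weights" key, and the training step are assumptions for illustration; the execute() signature follows the Executor base class.

from nvflare.apis.executor import Executor
from nvflare.apis.fl_context import FLContext
from nvflare.apis.shareable import Shareable
from nvflare.apis.signal import Signal


class SimpleTrainer(Executor):
    """Illustrative Trainer: read task data, train locally, return the result."""

    def execute(self, task_name: str, shareable: Shareable, fl_ctx: FLContext,
                abort_signal: Signal) -> Shareable:
        # Get the required information (e.g. global weights) from the Shareable.
        weights = shareable.get("weights")  # key name is an assumption

        # ... run local training with these weights (omitted) ...
        updated_weights = weights

        # Return the local training result as a Shareable.
        result = Shareable()
        result["weights"] = updated_weights
        return result

Such a Trainer could then be registered in config_fed_client.json along these lines; the module path and args are placeholders:

{
  "executors": [
    {
      "tasks": ["train"],
      "executor": {
        "path": "my_package.simple_trainer.SimpleTrainer",
        "args": {}
      }
    }
  ]
}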
Learnable¶
Learnable is the result of an FL application. For example, in the deep learning scenario, it can be the model weights; in the AutoML case, it can be the network architecture. A LearnablePersistor defines how to load and save a Learnable. A Learnable is the part of the model file that is to be learned, such as the model weights; the model file can also contain other data, like the learning-rate schedule, that is not part of the Learnable.
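For example, a file-based persistor might look like the following sketch, assuming the LearnablePersistor interface from nvflare.app_common.abstract.learnable_persistor; the pickle storage format and the path argument are illustrative choices.

import os
import pickle

from nvflare.apis.fl_context import FLContext
from nvflare.app_common.abstract.learnable import Learnable
from nvflare.app_common.abstract.learnable_persistor import LearnablePersistor


class PickleLearnablePersistor(LearnablePersistor):
    """Illustrative persistor that stores the Learnable with pickle."""

    def __init__(self, path: str = "learnable.pkl"):
        super().__init__()
        self.path = path

    def load(self, fl_ctx: FLContext) -> Learnable:
        # Start from an empty Learnable if nothing has been saved yet.
        if not os.path.exists(self.path):
            return Learnable()
        with open(self.path, "rb") as f:
            return pickle.load(f)

    def save(self, learnable: Learnable, fl_ctx: FLContext):
        with open(self.path, "wb") as f:
            pickle.dump(learnable, f)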
Aggregator¶
Aggregators define the aggregation algorithm used to aggregate the Shareable objects. For example, a simple aggregator would just average all the Shareable objects of the same round. Below is the signature for an aggregator.
from abc import ABC, abstractmethod

from nvflare.apis.fl_component import FLComponent
from nvflare.apis.fl_context import FLContext
from nvflare.apis.shareable import Shareable


class Aggregator(FLComponent, ABC):
    @abstractmethod
    def accept(self, shareable: Shareable, fl_ctx: FLContext) -> bool:
        """Accept the shareable submitted by the client.

        Args:
            shareable: submitted Shareable object
            fl_ctx: FLContext

        Returns:
            boolean to indicate if the contribution has been accepted.
        """
        pass

    @abstractmethod
    def aggregate(self, fl_ctx: FLContext) -> Shareable:
        """Perform the aggregation for all the received Shareable from the clients.

        Args:
            fl_ctx: FLContext

        Returns:
            aggregated Shareable
        """
        pass
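Following this signature, a simple unweighted averaging aggregator might look like the sketch below. The "weights" key and the plain mean are assumptions; production aggregators typically also handle contribution weighting, validation, and filtering.

import numpy as np

from nvflare.apis.fl_context import FLContext
from nvflare.apis.shareable import Shareable
from nvflare.app_common.abstract.aggregator import Aggregator


class MeanAggregator(Aggregator):
    """Illustrative aggregator that averages a "weights" array across clients."""

    def __init__(self):
        super().__init__()
        self.collected = []

    def accept(self, shareable: Shareable, fl_ctx: FLContext) -> bool:
        weights = shareable.get("weights")  # key name is an assumption
        if weights is None:
            return False  # reject a contribution without weights
        self.collected.append(np.asarray(weights))
        return True

    def aggregate(self, fl_ctx: FLContext) -> Shareable:
        # Average all accepted contributions for this round.
        result = Shareable()
        result["weights"] = np.mean(self.collected, axis=0)
        self.collected = []  # reset for the next round
        return result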