nvflare.client.in_process.api module

class InProcessClientAPI(task_metadata: dict, result_check_interval: float = 2.0)[source]

Bases: APISpec

Initializes the InProcessClientAPI.

Parameters:
  • task_metadata (dict) – task metadata, added to client_config.

  • result_check_interval (float) – how often to check whether the result is available.
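
A minimal construction sketch, assuming the task metadata dictionary has already been obtained from the surrounding executor; the keys shown are illustrative placeholders, not a required schema.

client_api = InProcessClientAPI(task_metadata={"site_name": "site-1"})  # placeholder metadata
client_api.init()  # initialize before using the other methods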

clear()[source]

Clears the cache.

Example

nvflare.client.clear()
get_config() → Dict[source]

Gets the ClientConfig dictionary.

Returns:

A dict of the configuration used in Client API.

Example

config = nvflare.client.get_config()
get_job_id() → str[source]

Gets job id.

Returns:

The current job id.

Example

job_id = nvflare.client.get_job_id()
get_site_name() → str[source]

Gets site name.

Returns:

The site name of this client.

Example

site_name = nvflare.client.get_site_name()
get_task_name() → str[source]

Gets task name.

Returns:

The task name.

Example

task_name = nvflare.client.get_task_name()
init(rank: str | None = None, config: Dict | None = None)[source]

Initializes NVFlare Client API environment.

Parameters:
  • config (Dict) – configuration dictionary.

  • rank (str) – local rank of the process. Only needed when the training script spawns multiple worker processes (for example, multi-GPU training).
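
A hedged sketch, assuming client_api is an InProcessClientAPI instance constructed as shown above; the rank value is illustrative and only matters when the script spawns multiple worker processes.

client_api.init(rank="0")  # "0" is a placeholder local rank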

is_evaluate() → bool[source]

Returns whether the current task is an evaluate task.

Returns:

True, if the current task is an evaluate task. False, otherwise.

Example

if nvflare.client.is_evaluate():
    # perform evaluate task on received model
    ...
is_running() → bool[source]

Returns whether the NVFlare system is up and running.

Returns:

True, if the system is up and running. False, otherwise.

Example

while nvflare.client.is_running():
    # receive model, perform task, send model, etc.
    ...
is_submit_model() → bool[source]

Returns whether the current task is a submit_model task.

Returns:

True, if the current task is a submit_model task. False, otherwise.

Example

if nvflare.client.is_submit_model():
    # perform submit_model task to obtain the best local model
    ...
is_train() → bool[source]

Returns whether the current task is a training task.

Returns:

True, if the current task is a training task. False, otherwise.

Example

if nvflare.client.is_train():
    # perform train task on received model
    ...
log(key: str, value: Any, data_type: AnalyticsDataType, **kwargs)[source]

Logs a key-value pair.

We suggest that users use the high-level APIs in nvflare/client/tracking.py instead.

Parameters:
  • key (str) – key string.

  • value (Any) – value to log.

  • data_type (AnalyticsDataType) – the data type of the “value”.

  • kwargs – additional arguments to be included.

Returns:

Whether the key-value pair is logged successfully.

Example

log(
    key=tag,
    value=scalar,
    data_type=AnalyticsDataType.SCALAR,
    global_step=global_step,
    writer=LogWriterName.TORCH_TB,
    **kwargs,
)
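
As suggested above, the higher-level tracking wrappers are usually preferable to calling log directly. A minimal sketch, assuming the SummaryWriter wrapper from nvflare/client/tracking.py; the metric name and values are placeholders.

from nvflare.client.tracking import SummaryWriter

writer = SummaryWriter()
writer.add_scalar("train_loss", 0.25, global_step=100)  # placeholder metric
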
prepare_client_config(config)[source]
receive(timeout: float | None = None) → FLModel | None[source]

Receives a model from the NVFlare side.

Returns:

The received FLModel.

Example

model = nvflare.client.receive()
send(model: FLModel, clear_cache: bool = True) → None[source]

Sends the model to the NVFlare side.

Parameters:
  • model (FLModel) – the FLModel object to send.

  • clear_cache (bool) – whether to clear the cache after sending.

Example

nvflare.client.send(model=FLModel(...))
set_meta(meta: dict)[source]
system_info() → Dict[source]

Gets NVFlare system information.

System information becomes available after a valid FLModel has been received; this call does not actively retrieve the information.

Note

System information includes the job id and site name.

Returns:

A dict of system information.

Example

sys_info = nvflare.client.system_info()
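
A hedged end-to-end sketch tying the calls above together, assuming client_api is a constructed and initialized InProcessClientAPI instance and local_train is a placeholder for user-defined training code that returns an FLModel.

while client_api.is_running():
    input_model = client_api.receive()
    if input_model is None:
        continue
    if client_api.is_train():
        # placeholder: user-defined training on the received model
        output_model = local_train(input_model)
        client_api.send(output_model)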