nvflare.app_opt.xgboost.histogram_based_v2.executor module

class FedXGBHistogramExecutor(early_stopping_rounds, xgb_params: dict, data_loader_id: str, verbose_eval=False, use_gpus=False, per_msg_timeout=10.0, tx_timeout=100.0, model_file_name='model.json', metrics_writer_id: str | None = None, in_process: bool = True)[source]

Bases: XGBExecutor

Parameters:
  • early_stopping_rounds – number of rounds without improvement in the evaluation metric before training stops (passed to xgboost.train)

  • xgb_params – This dict is passed to xgboost.train() as its first argument, params. It contains all the Booster parameters. Please refer to the XGBoost documentation for details: https://xgboost.readthedocs.io/en/stable/parameter.html

  • data_loader_id – the component ID that points to an XGBDataLoader.

  • verbose_eval – passed to xgboost.train() as verbose_eval; controls printing of evaluation results during training.

  • use_gpus (bool) – A convenience flag to enable GPU training. If a GPU device is already specified in xgb_params, this flag can be ignored.

  • metrics_writer_id – the component ID that points to a LogWriter. If provided, a MetricsCallback is added, and users can then use the receivers from nvflare.app_opt.tracking.

  • model_file_name (str) – the file name under which to save the trained model.

  • in_process (bool) – Specifies whether to start the XGBRunner in the same process as the executor.

  • per_msg_timeout – timeout (in seconds) for sending one message.

  • tx_timeout – overall transaction timeout (in seconds).
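An illustrative sketch of how this executor might be wired into an NVFlare client configuration (e.g. a config_fed_client.json). The task names, component IDs, and the data-loader path shown here are assumptions for illustration, not values mandated by the API; only the executor path and its argument names come from the signature above.

```json
{
  "executors": [
    {
      "tasks": ["config", "start"],
      "executor": {
        "path": "nvflare.app_opt.xgboost.histogram_based_v2.executor.FedXGBHistogramExecutor",
        "args": {
          "early_stopping_rounds": 2,
          "xgb_params": {
            "max_depth": 8,
            "eta": 0.1,
            "objective": "binary:logistic",
            "eval_metric": "auc",
            "tree_method": "hist"
          },
          "data_loader_id": "dataloader",
          "verbose_eval": true
        }
      }
    }
  ],
  "components": [
    {
      "id": "dataloader",
      "path": "my_project.data_loader.MyXGBDataLoader",
      "args": {}
    }
  ]
}
```

The "dataloader" component ID in "components" must match the data_loader_id argument so the executor can resolve its XGBDataLoader at run time.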

get_adaptor(fl_ctx: FLContext)[source]

Get the adaptor to be used by this executor. This default implementation retrieves the adaptor based on the configured adaptor_component_id. A subclass of XGBExecutor may obtain the adaptor in a different way.

Parameters:

fl_ctx – the FL context

Returns:

An XGBClientAdaptor object
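A minimal sketch of the lookup-by-ID pattern described above and of how a subclass could override it. All classes here are simplified stand-ins defined locally for illustration; they are not the real NVFlare types, whose interfaces are richer.

```python
class FLContext:
    """Stand-in for nvflare.apis.fl_context.FLContext (simplified)."""

    def __init__(self, components):
        self._components = components

    def get_component(self, component_id):
        # Resolve a configured component by its ID.
        return self._components.get(component_id)


class XGBClientAdaptor:
    """Stand-in for the real XGBClientAdaptor base class."""


class XGBExecutorSketch:
    """Stand-in executor showing the default get_adaptor behavior."""

    def __init__(self, adaptor_component_id="xgb_adaptor"):
        self.adaptor_component_id = adaptor_component_id

    def get_adaptor(self, fl_ctx):
        # Default: look the adaptor up via the configured component ID.
        return fl_ctx.get_component(self.adaptor_component_id)


class CustomExecutor(XGBExecutorSketch):
    """A subclass may obtain the adaptor differently, e.g. build it directly."""

    def get_adaptor(self, fl_ctx):
        return XGBClientAdaptor()


ctx = FLContext({"xgb_adaptor": XGBClientAdaptor()})
print(type(XGBExecutorSketch().get_adaptor(ctx)).__name__)  # XGBClientAdaptor
print(type(CustomExecutor().get_adaptor(ctx)).__name__)     # XGBClientAdaptor
```

Either way, the caller receives an XGBClientAdaptor object; only the resolution strategy differs.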