wandb


class WandbLogger(train_interval: int = 1000, test_interval: int = 1, update_interval: int = 1000, info_interval: int = 1, save_interval: int = 1000, write_flush: bool = True, project: str | None = None, name: str | None = None, entity: str | None = None, run_id: str | None = None, config: Namespace | dict | None = None, monitor_gym: bool = True)

Weights and Biases logger that sends data to https://wandb.ai/.

This logger creates three panels with plots: train, test, and update. Make sure to select the correct x-axis (step metric) for each panel in Weights and Biases.

Example of usage:

from torch.utils.tensorboard import SummaryWriter
from tianshou.trainer import OnpolicyTrainer
from tianshou.utils import WandbLogger

logger = WandbLogger()
logger.load(SummaryWriter(log_path))
result = OnpolicyTrainer(policy, train_collector, test_collector,
                         logger=logger).run()
Parameters:
  • train_interval – the log interval in log_train_data(). Defaults to 1000.

  • test_interval – the log interval in log_test_data(). Defaults to 1.

  • update_interval – the log interval in log_update_data(). Defaults to 1000.

  • info_interval – the log interval in log_info_data(). Defaults to 1.

  • save_interval – the save interval in save_data(). Defaults to 1000 (a value of 1 saves at the end of each epoch).

  • write_flush – whether to flush the TensorBoard writer after each add_scalar operation. Defaults to True.

  • project (str) – W&B project name. If None, defaults to “tianshou”.

  • name (str) – W&B run name. Defaults to None, in which case a random name is assigned.

  • entity (str) – W&B team/organization name. Defaults to None.

  • run_id (str) – id of a W&B run to resume. Defaults to None.

  • config (argparse.Namespace or dict) – experiment configuration. Defaults to None.
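
For illustration, a minimal sketch of constructing the logger with explicit options; the project name, run name, log directory, and config fields below are placeholders, not values prescribed by the API:

import argparse
from torch.utils.tensorboard import SummaryWriter
from tianshou.utils import WandbLogger

config = argparse.Namespace(task="CartPole-v1", lr=1e-3)  # hypothetical experiment config
logger = WandbLogger(
    project="my-project",       # placeholder W&B project
    name="ppo-cartpole-seed0",  # placeholder run name
    run_id=None,                # pass an existing run id here to resume that run
    config=config,              # stored in the W&B run config
)
logger.load(SummaryWriter("log/ppo"))  # attach the TensorBoard writer that W&B syncs from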

load(writer: SummaryWriter) → None

restore_data() → tuple[int, int, int]

Return the metadata from the existing log.

If nothing is found or an error occurs during the recovery process, the default values are returned.

Returns:

epoch, env_step, gradient_step.
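
For illustration, resuming bookkeeping from a previously saved run could look like the sketch below; the run id and log directory are placeholders, and restore_data() only finds metadata if save_data() was called with a save_checkpoint_fn in the earlier run:

from torch.utils.tensorboard import SummaryWriter
from tianshou.utils import WandbLogger

logger = WandbLogger(run_id="1a2b3c4d")  # placeholder id of the W&B run to resume
logger.load(SummaryWriter("log/ppo"))
epoch, env_step, gradient_step = logger.restore_data()
print(f"resuming from epoch={epoch}, env_step={env_step}, gradient_step={gradient_step}")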

save_data(epoch: int, env_step: int, gradient_step: int, save_checkpoint_fn: Callable[[int, int, int], str] | None = None) → None

Use writer to log metadata when calling save_checkpoint_fn in trainer.

Parameters:
  • epoch – the epoch in trainer.

  • env_step – the env_step in trainer.

  • gradient_step – the gradient_step in trainer.

  • save_checkpoint_fn (function) – a hook defined by the user; see the trainer documentation for details. A sketch follows this list.
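
The trainer calls save_data() internally; the following is a hedged sketch of a user-defined save_checkpoint_fn hook, where the policy object, log directory, and step values are assumptions for illustration:

import os
import torch

def save_checkpoint_fn(epoch: int, env_step: int, gradient_step: int) -> str:
    # user-defined hook: persist the policy and return the checkpoint path
    ckpt_path = os.path.join("log/ppo", f"checkpoint_{epoch}.pth")
    torch.save(policy.state_dict(), ckpt_path)  # `policy` is assumed to exist in scope
    return ckpt_path

# normally invoked by the trainer; an equivalent direct call:
logger.save_data(epoch=3, env_step=30000, gradient_step=1500,
                 save_checkpoint_fn=save_checkpoint_fn)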

write(step_type: str, step: int, data: dict[str, int | Number | number | ndarray | float]) → None

Specify how the writer is used to log data.

Parameters:
  • step_type (str) – namespace to which the data dict belongs.

  • step – the ordinate (x-axis value) of the data dict.

  • data – the data to write, in the format {key: value}.
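
The trainer normally triggers write() through the log_*_data() methods; a direct call might look like the sketch below, where the step namespace and metric names are illustrative assumptions rather than required keys:

logger.write(
    step_type="train/env_step",  # namespace the data dict belongs to
    step=10000,                  # ordinate (x-axis value)
    data={"train/reward": 195.2, "train/length": 200.0},  # illustrative metrics
)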