base#


class BaseLogger(train_interval: int = 1000, test_interval: int = 1, update_interval: int = 1000, info_interval: int = 1)[source]#

The base class for any logger compatible with the trainer.

Override the write() method to use your own writer.

Parameters:
  • train_interval – the log interval in log_train_data(). Defaults to 1000.

  • test_interval – the log interval in log_test_data(). Defaults to 1.

  • update_interval – the log interval in log_update_data(). Defaults to 1000.

  • info_interval – the log interval in log_info_data(). Defaults to 1.
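The interval parameters throttle how often data is actually written. The following self-contained sketch illustrates the gating behavior for train_interval; it is an illustrative re-implementation, not the library's actual code, and the IntervalLogger class and its written attribute are hypothetical names introduced here.

```python
class IntervalLogger:
    """Illustrative sketch: write only after train_interval steps have elapsed."""

    def __init__(self, train_interval: int = 1000) -> None:
        self.train_interval = train_interval
        self.last_log_train_step = -1
        self.written: list[tuple[str, int, dict]] = []  # record of actual writes

    def write(self, step_type: str, step: int, data: dict) -> None:
        self.written.append((step_type, step, data))

    def log_train_data(self, log_data: dict, step: int) -> None:
        # Skip logging until train_interval steps have passed since the last write.
        if step - self.last_log_train_step >= self.train_interval:
            self.write("train", step, log_data)
            self.last_log_train_step = step


logger = IntervalLogger(train_interval=1000)
for step in (0, 500, 1000, 1500, 2000):
    logger.log_train_data({"reward": float(step)}, step)
# Only steps 1000 and 2000 clear the interval threshold, so only those
# two calls reach write().
```

Calls at intermediate steps are silently dropped, which keeps the log volume bounded regardless of how often the trainer invokes the log methods.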

log_info_data(log_data: dict, step: int) None[source]#

Use the writer to log global statistics.

Parameters:
  • log_data – a dict containing information collected at the end of an epoch.

  • step – the timestep at which the training info is logged.

log_test_data(log_data: dict, step: int) None[source]#

Use the writer to log statistics generated during evaluation.

Parameters:
  • log_data – a dict containing the information returned by the collector during the evaluation step.

  • step – the timestep at which the collector result is logged.

log_train_data(log_data: dict, step: int) None[source]#

Use the writer to log statistics generated during training.

Parameters:
  • log_data – a dict containing the information returned by the collector during the train step.

  • step – the timestep at which the collector result is logged.

log_update_data(log_data: dict, step: int) None[source]#

Use the writer to log statistics generated during policy updating.

Parameters:
  • log_data – a dict containing the information returned during the policy update step.

  • step – the timestep at which the policy training data is logged.

static prepare_dict_for_logging(input_dict: dict[str, Any], parent_key: str = '', delimiter: str = '/', exclude_arrays: bool = True) dict[str, int | Number | number | ndarray | float][source]#

Flattens and filters a nested dictionary by recursively traversing all levels and compressing the keys.

Filtering is performed with respect to valid logging data types.

Parameters:
  • input_dict – The nested dictionary to be flattened and filtered.

  • parent_key – The parent key used as a prefix before the input_dict keys.

  • delimiter – The delimiter used to separate the keys.

  • exclude_arrays – Whether to exclude numpy arrays from the output.

Returns:

A flattened dictionary where the keys are compressed and values are filtered.
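The flattening and filtering described above can be sketched in plain Python. This is a hedged re-implementation of the documented behavior for illustration (the function name flatten_dict is introduced here), not the library's actual source:

```python
from numbers import Number
from typing import Any


def flatten_dict(
    input_dict: dict[str, Any], parent_key: str = "", delimiter: str = "/"
) -> dict[str, Any]:
    """Recursively compress nested keys with a delimiter and keep only
    numeric leaf values (non-numeric entries are filtered out)."""
    result: dict[str, Any] = {}
    for key, value in input_dict.items():
        # Prefix the child key with the accumulated parent key.
        new_key = f"{parent_key}{delimiter}{key}" if parent_key else key
        if isinstance(value, dict):
            result.update(flatten_dict(value, new_key, delimiter))
        elif isinstance(value, Number):
            result[new_key] = value
    return result


nested = {"train": {"reward": 1.5, "length": {"mean": 20}}, "note": "skipped"}
flat = flatten_dict(nested)
# {'train/reward': 1.5, 'train/length/mean': 20} — the string entry is dropped.
```

Compressed keys such as "train/length/mean" map naturally onto the hierarchical tags most writers (e.g. TensorBoard) expect.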

abstract restore_data() tuple[int, int, int][source]#

Return the metadata from existing log.

If it finds nothing or an error occurs during the recovery process, it will return the default parameters.

Returns:

epoch, env_step, gradient_step.
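A concrete implementation of this abstract method might look like the following sketch, which assumes a hypothetical JSON metadata file (the path and key names are illustrative, not part of the documented API) and falls back to default zeros on any recovery failure:

```python
import json


def restore_data(path: str) -> tuple[int, int, int]:
    """Sketch: read saved metadata, or return defaults if nothing is found
    or the log is corrupt."""
    try:
        with open(path) as f:
            meta = json.load(f)
        return meta["epoch"], meta["env_step"], meta["gradient_step"]
    except (OSError, KeyError, json.JSONDecodeError):
        # Nothing found or an error occurred during recovery:
        # return the default parameters.
        return 0, 0, 0
```

Calling it with a missing file simply yields (0, 0, 0), so the trainer can start fresh without special-casing the first run.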

abstract save_data(epoch: int, env_step: int, gradient_step: int, save_checkpoint_fn: Callable[[int, int, int], str] | None = None) None[source]#

Use the writer to log metadata when save_checkpoint_fn is called in the trainer.

Parameters:
  • epoch – the epoch in trainer.

  • env_step – the env_step in trainer.

  • gradient_step – the gradient_step in trainer.

  • save_checkpoint_fn (function) – a user-defined hook; see the trainer documentation for details.

abstract write(step_type: str, step: int, data: dict[str, int | Number | number | ndarray | float]) None[source]#

Specify how the writer is used to log data.

Parameters:
  • step_type (str) – the namespace to which the data dict belongs.

  • step – the ordinate (x-axis value) of the data dict.

  • data – the data to write, in the format {key: value}.
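Since write() is the single abstract hook a subclass must provide, a minimal implementation can simply record entries per namespace. The HistoryWriter class below is a hypothetical sketch; a real subclass would typically forward to a backend such as TensorBoard instead:

```python
from collections import defaultdict


class HistoryWriter:
    """Hypothetical write() implementation keeping an in-memory history
    per namespace (step_type)."""

    def __init__(self) -> None:
        self.history: dict[str, list[tuple[int, dict]]] = defaultdict(list)

    def write(self, step_type: str, step: int, data: dict) -> None:
        # step_type is the namespace (e.g. "train", "test"),
        # step is the x-axis ordinate, data holds the scalar values.
        self.history[step_type].append((step, data))


w = HistoryWriter()
w.write("train", 1000, {"train/reward": 3.0})
w.write("test", 1000, {"test/reward": 5.0})
```

Keying the history by step_type mirrors how the base logger separates train, test, update, and info statistics into distinct namespaces.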

class DataScope(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]#

An enumeration of the data scopes used as logging namespaces.

INFO = 'info'#

TEST = 'test'#

TRAIN = 'train'#

UPDATE = 'update'#

class LazyLogger[source]#

A logger that does nothing. Used as a placeholder in the trainer.

restore_data() tuple[int, int, int][source]#

Return the metadata from existing log.

If it finds nothing or an error occurs during the recovery process, it will return the default parameters.

Returns:

epoch, env_step, gradient_step.

save_data(epoch: int, env_step: int, gradient_step: int, save_checkpoint_fn: Callable[[int, int, int], str] | None = None) None[source]#

Use the writer to log metadata when save_checkpoint_fn is called in the trainer.

Parameters:
  • epoch – the epoch in trainer.

  • env_step – the env_step in trainer.

  • gradient_step – the gradient_step in trainer.

  • save_checkpoint_fn (function) – a user-defined hook; see the trainer documentation for details.

write(step_type: str, step: int, data: dict[str, int | Number | number | ndarray | float]) None[source]#

The LazyLogger writes nothing.