dgs.models.module.BaseModule

class dgs.models.module.BaseModule(config: dict[str, any], path: list[str])[source]

Base class for all custom modules.

Description

Every Module is a building block that can be replaced with other building blocks. This class defines the base that all of those building blocks inherit from. It should not be instantiated directly and should only be inherited by other classes.

Every module has access to the global configuration for parameters like the modules’ device(s). Additionally, every module has its own parameters (params), which are a sub-node of the overall configuration.

Configuration

device (Device):

The torch device to run this module and tracker on.

is_training (bool):

Whether the general torch modules should train or evaluate. Modes of different modules can be set individually using ‘.eval()’, ‘.train()’, or the functions from dgs.utils.torchtools.

name (str):

The name of this configuration. Mostly used for printing, logging, and file saving.

Optional Configuration

print_prio (str, optional):

How much information should be printed while running. “INFO” will print status reports but no debugging information. Default: DEF_VAL.base.print_prio.

description (str, optional):

The description of the overall configuration. Default: DEF_VAL.base.description.

log_dir (FilePath, optional):

Path to the directory where all the files of this run are saved. The date will be added to the path if log_dir_add_date is True. Default: DEF_VAL.base.log_dir.

log_dir_add_date (bool, optional):

Whether to append the date to the log_dir. If True, a subdirectory representing today’s date (“./YYYYMMDD/”) will be added to the log directory. Default: DEF_VAL.base.log_dir_add_date.

log_dir_suffix (str, optional):

Suffix to add to the log directory. Default: DEF_VAL.base.log_dir_suffix.

precision (Union[type, str, torch.dtype], optional):

The precision at which this module should operate. Default: DEF_VAL.base.precision.
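To make the configuration keys above concrete, here is a hypothetical configuration dict; all values are illustrative (the real defaults come from DEF_VAL.base.*), and the date-appending logic at the end is only a sketch of the documented log_dir_add_date behavior, not the actual implementation.

```python
import datetime

# Hypothetical configuration using the keys documented above.
config = {
    # required keys
    "device": "cuda",          # torch device to run the module and tracker on
    "is_training": True,       # whether the torch modules should train or evaluate
    "name": "my tracker",      # used for printing, logging, and file saving
    # optional keys
    "print_prio": "INFO",      # status reports but no debugging information
    "description": "demo run",
    "log_dir": "./results/",
    "log_dir_add_date": True,  # append a "./YYYYMMDD/" subdirectory
    "log_dir_suffix": "",
    "precision": "float32",    # may also be a type or a torch.dtype
}

# Sketch of how log_dir_add_date is described to work:
log_dir = config["log_dir"]
if config["log_dir_add_date"]:
    log_dir += datetime.date.today().strftime("%Y%m%d") + "/"
```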

config

The overall configuration of the whole algorithm.

params

The parameters for this specific module.

_path

Location of params within config as a node path.

param config:

The overall configuration of the whole algorithm.

param path:

Keys of config leading to the parameters of the current module. For example, the parameters for the pose estimator will be located in a pose-estimator subgroup of the config. These key-based paths may be even deeper; just make sure that only information about this specific module is stored in params.
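The path resolution described above can be sketched as follows. The config contents and the "pose_estimator"/"backbone" key names are made up for illustration, and the actual BaseModule implementation may resolve the path differently.

```python
from functools import reduce
from operator import getitem

# Hypothetical global configuration; the subgroup names are made up.
config = {
    "device": "cpu",
    "pose_estimator": {
        "backbone": {"weights": "<path to weights>"},
    },
}

# A shallow path selects a whole subgroup as this module's params ...
params = reduce(getitem, ["pose_estimator"], config)

# ... while a deeper path selects a nested node within it.
backbone_params = reduce(getitem, ["pose_estimator", "backbone"], config)
```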

Methods

__init__(config: dict[str, any], path: list[str])[source]
configure_torch_module(module: torch.nn.Module, train: bool | None = None) → torch.nn.Module[source]

Set compute mode and send model to the device or multiple parallel devices if applicable.

Parameters:
  • module – The torch module instance to configure.

  • train – Whether to train or eval this module, defaults to the value set in the base config.

Returns:

The module on the specified device or in parallel.
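The behavior described above can be sketched roughly as follows. A stand-in class replaces torch.nn.Module so the example is self-contained; the real method operates on actual torch modules and may additionally wrap them for multiple parallel devices.

```python
class _FakeModule:
    """Stand-in exposing the torch.nn.Module methods this sketch needs."""

    def __init__(self):
        self.training = True
        self.device = None

    def train(self):
        self.training = True
        return self

    def eval(self):
        self.training = False
        return self

    def to(self, device):
        self.device = device
        return self


def configure_module(module, device, train=None, default_train=False):
    """Set compute mode, then send the module to the given device."""
    # ``train`` falls back to the value from the base config
    # (represented here by ``default_train``).
    is_training = default_train if train is None else train
    module.train() if is_training else module.eval()
    return module.to(device)


m = configure_module(_FakeModule(), device="cuda", train=False)
```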

terminate() → None[source]

Terminate this module and all of its submodules.

If nothing has to be done, simply pass. This is used for terminating parallel execution and threads in specific models.

validate_params(validations: dict[str, list[str | type | tuple[str, any] | Callable[[any, any], bool]]], attrib_name: str = 'params') → None[source]

Given per key validations, validate this module’s parameters.

Throws exceptions on invalid or nonexistent params.

Parameters:
  • attrib_name – The name of the attribute to validate. This should be “params”; only the base class itself validates “config”.

  • validations

    Dictionary with the name of the parameter as key and a list of validations as value. Every validation in this list has to pass for the parameter to be valid.

    The value for the validation can have multiple types:
    • A lambda function or other type of callable

    • A string as reference to a predefined validation function with one argument

    • None for existence

    • A tuple with a string as reference to a predefined validation function with one additional argument

    • It is possible to write nested validations, but then every nested validation has to be a tuple, or a tuple of tuples. For convenience, there are implementations for “any”, “all”, “not”, “eq”, “neq”, and “xor”. Those can have data which is a tuple containing other tuples or validations, or a single validation.

    • Lists and other iterables can be validated using “forall” running the given validations for every item in the input. A single validation or a tuple of (nested) validations is accepted as data.

Example

This example is an excerpt of the validation for the BaseModule-configuration.

>>> validations = {
        "device": [
            str,
            ("any",
                [
                    ("in", ["cuda", "cpu"]),
                    ("instance", torch.device),
                ]
            ),
        ],
        "print_prio": [("in", PRINT_PRIORITY)],
        "callable": [lambda value: value == 1],
    }

And within the class __init__() call:

>>> self.validate_params(validations)

Attributes

device

Get the device of this module.

is_training

Get whether this module is set to training-mode.

name

Get the name of the module.

name_safe

Get the escaped name of the module usable in filepaths by replacing spaces and underscores.

precision

Get the (floating point) precision used in multiple parts of this module.