dgs.models.dataset.posetrack21.PoseTrack21_ImageHistory

class dgs.models.dataset.posetrack21.PoseTrack21_ImageHistory(*args: Any, **kwargs: Any)[source]

A PoseTrack21 dataset that creates combined states from a current state and its history.

Methods

__init__(config: dict[str, any], path: list[str])[source]
arbitrary_to_ds(a: list[any], idx: int) → list[State][source]

Convert raw PoseTrack21 annotations to a list of State objects.

configure_torch_module(module: torch.nn.Module, train: bool | None = None) → torch.nn.Module

Set compute mode and send model to the device or multiple parallel devices if applicable.

Parameters:
  • module – The torch module instance to configure.

  • train – Whether to train or eval this module, defaults to the value set in the base config.

Returns:

The module on the specified device or in parallel.
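In essence, this amounts to moving the module to the configured device and setting its compute mode; a minimal sketch, assuming a CUDA device string and a boolean train flag (both illustrative, not the actual implementation):

>>> import torch
>>> module = module.to(device=torch.device("cuda"))      # send to the configured device
>>> module = module.train() if train else module.eval()  # set train or eval mode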

get_image_crops(ds: State) → None

Add the image crops and local key-points to a given state. Works for single or batched State objects. This function modifies the given State in place.

Precomputed image crops will be loaded if self.params["crops_folder"] is set.
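
A short usage sketch, assuming dataset is an instance of this class and s is a single or batched State it produced (both names are illustrative); note that nothing is returned:

>>> dataset.get_image_crops(s)  # returns None; s now carries the image crops and local key-points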

get_path_in_dataset(path: str) → str

Given an arbitrary file- or directory-path, return its absolute path.

  1. check whether the path is a valid absolute path

  2. check whether the path is a valid project path

  3. check whether the path is an existing path within self.params["dataset_path"]

Returns:

The absolute path that was found for the file or directory.

Raises:

FileNotFoundError – If the path is not found.
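
A hedged sketch of the three-step lookup described above, using plain os.path semantics; it only illustrates the resolution order and is not the actual implementation:

>>> import os
>>> def resolve_path_sketch(path: str, dataset_path: str) -> str:
        if os.path.isabs(path) and os.path.exists(path):   # 1. valid absolute path
            return path
        project_path = os.path.abspath(path)                # 2. project path, here relative to the cwd
        if os.path.exists(project_path):
            return project_path
        in_dataset = os.path.join(dataset_path, path)       # 3. path within self.params["dataset_path"]
        if os.path.exists(in_dataset):
            return os.path.abspath(in_dataset)
        raise FileNotFoundError(f"Could not find path: {path}")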

terminate() → None

Terminate this module and all of its submodules.

If nothing has to be done, this simply passes. It is used for terminating parallel execution and threads in specific models.

static transform_crop_resize() → torchvision.transforms.v2.Compose

Given a single image with its corresponding bounding boxes and key-points, obtain a cropped image for every bounding box, with the key-points localized to each crop.

This transform expects a custom structured input as a dict.

>>> structured_input: dict[str, any] = {
    "image": tv_tensors.Image,
    "box": tv_tensors.BoundingBoxes,
    "keypoints": torch.Tensor,
    "output_size": ImgShape,
    "mode": str,
}
Returns:

A composed torchvision function that accepts a dict as input.

After calling this transform function, some values will have different shapes:

image

Now contains the image crops as tensor of shape [N x C x H x W].

bboxes

Zero, one, or multiple bounding boxes for this image as a tensor of shape [N x 4]. The bounding boxes are additionally converted to the XYWH format.

coordinates

Now contains the joint coordinates of every detection in local coordinates in shape [N x J x 2|3].
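
A hedged usage sketch, assuming image, boxes, and keypoints are already loaded as the respective torchvision and torch types; the variable names, the output size, and the mode value are illustrative:

>>> crop_transform = PoseTrack21_ImageHistory.transform_crop_resize()
>>> structured_input = {
        "image": image,             # tv_tensors.Image
        "box": boxes,               # tv_tensors.BoundingBoxes with N boxes
        "keypoints": keypoints,     # torch.Tensor of shape [N x J x 2|3]
        "output_size": (256, 192),  # illustrative target crop size
        "mode": "zero-pad",         # illustrative resize mode
    }
>>> result = crop_transform(structured_input)
>>> result["image"].shape           # crops of shape [N x C x H x W]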

static transform_resize_image() → torchvision.transforms.v2.Compose

Given an image, bboxes, and key-points, resize them with custom modes.

This transform expects a custom structured input as a dict.

>>> structured_input: dict[str, any] = {
    "image": tv_tensors.Image,
    "box": tv_tensors.BoundingBoxes,
    "keypoints": torch.Tensor,
    "output_size": ImgShape,
    "mode": str,
}
Returns:

A composed torchvision function that accepts a dict as input.
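
The same structured-input convention applies; a hedged one-liner, assuming structured_input is built as in the previous sketch:

>>> resized = PoseTrack21_ImageHistory.transform_resize_image()(structured_input)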

validate_params(validations: dict[str, list[str | type | tuple[str, any] | Callable[[any, any], bool]]], attrib_name: str = 'params') → None

Given per-key validations, validate this module's parameters.

Throws exceptions on invalid or nonexistent params.

Parameters:
  • attrib_name – Name of the attribute to validate. This should be "params"; only the base class uses "config".

  • validations

    Dictionary with the name of the parameter as key and a list of validations as value. Every validation in this list has to be true for the validation to be successful.

    The value for the validation can have multiple types:
    • A lambda function or other type of callable

    • A string as reference to a predefined validation function with one argument

    • None for existence

    • A tuple with a string as reference to a predefined validation function with one additional argument

    • It is possible to write nested validations, but then every nested validation has to be a tuple, or a tuple of tuples. For convenience, there are implementations for "any", "all", "not", "eq", "neq", and "xor". Those can have data which is a tuple containing other tuples or validations, or a single validation.

    • Lists and other iterables can be validated using "forall", running the given validations for every item in the input. A single validation or a tuple of (nested) validations is accepted as data.

Example

This example is an excerpt of the validation for the BaseModule-configuration.

>>> validations = {
        "device": [
            str,
            ("any",
                [
                    ("in", ["cuda", "cpu"]),
                    ("instance", torch.device)
                ]
            )
        ],
        "print_prio": [("in", PRINT_PRIORITY)],
        "callable": (lambda value: value == 1),
    }

And within the class __init__() call:

>>> self.validate_params(validations)
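
For the nested "forall" case described above, a minimal hedged sketch; the parameter name "paths" and its constraints are illustrative:

>>> validations = {
        "paths": [
            list,                           # the parameter itself must be a list
            ("forall", ("instance", str)),  # every item must be a string
        ],
    }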

Attributes

bbox_format

The format of the bounding boxes.

device

Get the device of this module.

is_training

Get whether this module is set to training-mode.

module_name

Get the name of the module.

module_type

name

Get the name of the module.

name_safe

Get the escaped name of the module usable in filepaths by replacing spaces and underscores.

nof_kps

The number of key points.

precision

Get the (floating point) precision used in multiple parts of this module.

skeleton_name

The format of the skeleton.

data

A dict mapping the

annos


img_shape

The size of the images in the dataset.

dataset_path

The base path to the dataset.