dgs.models.embedding_generator.pose_based.KeyPointConvolutionPBEG

class dgs.models.embedding_generator.pose_based.KeyPointConvolutionPBEG(*args: Any, **kwargs: Any)[source]

A small torch Module with one convolutional layer that reduces the key points using relational information, followed by an arbitrary number of hidden fully connected layers.

Module Name

KeyPointConvolutionPBEG

Description

First, the key points are convolved using a given number of nof_kernels kernels, which yields J values after flattening the convolution output. These values are then concatenated with the four bounding-box values and passed into the first fully connected layer. From this point on, there can be an arbitrary number of hidden FC-layers.

Model Input: [B x J x j_dim] and [B x 4]

Model Output: [B x self.embedding_size]
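The data flow described above can be sketched as a plain PyTorch module. This is a minimal illustration, not the library's actual implementation: the Conv1d configuration, the reduction back to J values, and the layer sizes are all assumptions.

```python
import torch
from torch import nn


class KeyPointConvSketch(nn.Module):
    """Sketch: convolve key points, flatten to J values, concat bbox, FC head."""

    def __init__(self, nof_joints: int = 17, joint_dim: int = 2,
                 nof_kernels: int = 8, embedding_size: int = 16):
        super().__init__()
        # Treat the joint coordinates as channels and the joints as the sequence,
        # so the convolution can use relational information between joints.
        self.conv = nn.Conv1d(joint_dim, nof_kernels, kernel_size=3, padding=1)
        # Cast the flattened convolution output down to J values (an assumption
        # about how "J values after flattening" is obtained).
        self.reduce = nn.Linear(nof_kernels * nof_joints, nof_joints)
        # The first FC layer receives the J key-point values plus 4 bbox values.
        self.head = nn.Sequential(
            nn.Linear(nof_joints + 4, 64),
            nn.ReLU(),
            nn.Linear(64, embedding_size),
        )

    def forward(self, kps: torch.Tensor, bbox: torch.Tensor) -> torch.Tensor:
        # kps: [B x J x j_dim] -> [B x j_dim x J] for Conv1d
        x = self.conv(kps.transpose(1, 2))       # [B x nof_kernels x J]
        x = self.reduce(x.flatten(start_dim=1))  # [B x J]
        x = torch.cat([x, bbox], dim=1)          # [B x (J + 4)]
        return self.head(x)                      # [B x embedding_size]


kps = torch.rand(2, 17, 2)   # B=2, J=17 joints, j_dim=2
bbox = torch.rand(2, 4)      # four bounding-box values per sample
out = KeyPointConvSketch()(kps, bbox)
print(out.shape)  # torch.Size([2, 16])
```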

Params

joint_shape: (tuple[int, int])

Number of joints and number of dimensions of the joints as tuple.

Optional Params

hidden_layers_kp: (Union[list[int], tuple[int, …], None], optional)

Respective size of every hidden layer after the convolution of the key points. Can be None to use a single convolution layer to cast the inputs before adding the bboxes. Default DEF_VAL.embed_gen.pose.KPCPBEG.hidden_layers_kp.

hidden_layers: (Union[list[int], tuple[int, …], None], optional)

Respective size of every hidden layer after adding the bounding boxes. Can be None to use a single linear layer to cast the convolved key points and bboxes to the outputs. Default DEF_VAL.embed_gen.pose.KPCPBEG.hidden_layers.

bias: (bool, optional)

Whether to use a bias term in the linear layers. Default DEF_VAL.embed_gen.pose.KPCPBEG.bias.

nof_kernels: (int, optional)

Define the number of kernels to use for convolution. Default DEF_VAL.embed_gen.pose.KPCPBEG.nof_kernels.

bbox_format: (Union[str, tv_tensors.BoundingBoxFormat], optional)

The target format of the bounding-box coordinates. This has an influence on the results. Default DEF_VAL.embed_gen.pose.KPCPBEG.bbox_format.

Important Inherited Params

embedding_size: (int)

Output shape or size of the embedding.
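Putting the parameter list together, a configuration for this generator might look as follows. This is a hypothetical sketch: the key names mirror the parameters documented above, but the exact configuration structure expected by the dgs package is an assumption.

```python
# Hypothetical config dict mirroring the documented parameters.
config = {
    "joint_shape": (17, 2),     # required: (number of joints, joint dims)
    "embedding_size": 16,       # required inherited param: output size
    "hidden_layers_kp": None,   # optional: no extra FC layers after the conv
    "hidden_layers": [64, 32],  # optional: two hidden FC layers after the bboxes
    "bias": True,               # optional: use bias in the linear layers
    "nof_kernels": 8,           # optional: number of convolution kernels
    "bbox_format": "XYWH",      # optional: target bounding-box format
}
```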

__init__(*args, **kwargs)

Methods

configure_torch_module(module[, train])

Set compute mode and send model to the device or multiple parallel devices if applicable.

embedding_key_exists(s)

Return whether the embedding_key of this model exists in a given state.

forward(ds)

Forward pass of the custom key point convolution model.

terminate()

Terminate this module and all of its submodules.

validate_params(validations[, attrib_name])

Given per-key validations, validate this module's parameters.

Attributes

device

Get the device of this module.

is_training

Get whether this module is set to training-mode.

module_name

Get the name of the module.

module_type

name

Get the name of the module.

name_safe

Get the escaped name of the module usable in filepaths by replacing spaces and underscores.

precision

Get the (floating point) precision used in multiple parts of this module.

embedding_size

The size of the embedding.

nof_classes

The number of classes in the dataset / embedding.