dgs.utils.state.State¶
- class dgs.utils.state.State(*args, bbox: torchvision.tv_tensors.BoundingBoxes, validate: bool = True, **kwargs)[source]¶
Class for storing one or multiple samples of data as a ‘State’.
Batch Size¶
Even if the batch size of a State is 1, or even zero (!), the dimension containing the batch size should always be present.
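For illustration, a minimal sketch of a single-sample State that still carries the leading batch dimension; the box coordinates, the XYWH format, and the canvas size are placeholder values, not requirements of the class:

import torch
from torchvision import tv_tensors

from dgs.utils.state import State

# a single detection still keeps the batch dimension: shape [1 x 4], not [4]
bbox = tv_tensors.BoundingBoxes(
    torch.tensor([[10.0, 20.0, 50.0, 80.0]]),  # placeholder box
    format="XYWH",
    canvas_size=(480, 640),  # placeholder (H, W) of the full image
)
state = State(bbox=bbox)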
Validation¶
By default, this object validates all new inputs. Validation can be turned off if you validate elsewhere, use an existing dataset, or simply want to avoid the overhead for performance reasons.
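For example, validation can be disabled through the constructor flag; a short sketch reusing the placeholder bbox from above:

# inputs are assumed to be valid already, e.g. because they come from an existing dataset
state = State(bbox=bbox, validate=False)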
Additional Values¶
A State might be given additional values during initialization, or at any time afterwards through the provided setters or the get_item call. Additionally, the object can compute or load further values on demand.
All args and keyword args can be accessed through the State's properties. Alternatively, the underlying dict structure (‘self.data’) can be used, but this bypasses validation and the on-the-fly computation of additional values, so handle both yourself if needed. A construction sketch follows the parameter list below.
- keypoints (torch.Tensor): The key points for this bounding box as torch tensor in global coordinates. Shape: [B x J x 2|3]
- filepath (FilePaths): The respective filepath(s) of every image. Length: B
- person_id (torch.Tensor): The person id, only required for training and validation. Shape: [B]
- class_id (torch.Tensor): The class id, only required for training and validation. Shape: [B]
- device (Device): The torch device to use. If the device is not given, the device of bbox is used as the default.
- heatmap (torch.Tensor): The heatmap of this bounding box. Currently not used. Shape: [B x J x h x w]
- image (Images): A list of length B containing the original image(s) as tv_tensors.Image objects of shape [1 x C x H x W].
- image_crop (Image): The content of the original image cropped using the bbox. Shape: [B x C x h x w]
- joint_weight (torch.Tensor): Some kind of joint or key-point confidence, e.g. the joint confidence score (JCS) of AlphaPose or the joint visibility of PoseTrack21. Shape: [B x J x 1]
- keypoints_local (torch.Tensor): The key points for this bounding box as torch tensor in local coordinates. Shape: [B x J x 2|3]
- param bbox: One or more bounding boxes (one per sample) as torchvision bounding boxes in global coordinates. Shape: [B x 4]
- type bbox: tv_tensors.BoundingBoxes
- param kwargs: Additional keyword arguments as shown in the ‘Additional Values’ section.
- __init__(*args, bbox: torchvision.tv_tensors.BoundingBoxes, validate: bool = True, **kwargs) → None[source]¶
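A hedged construction sketch combining the required bbox with a few of the additional values listed above. The tensor contents are random placeholders, the filepaths are hypothetical, and passing them as a tuple of strings is an assumption about the FilePaths type:

import torch
from torchvision import tv_tensors

from dgs.utils.state import State

B, J = 2, 17  # two samples, 17 joints per skeleton
bbox = tv_tensors.BoundingBoxes(
    torch.rand(B, 4) * 100.0,  # placeholder boxes, shape [B x 4]
    format="XYWH",
    canvas_size=(480, 640),
)
state = State(
    bbox=bbox,
    keypoints=torch.rand(B, J, 2),  # global key points, shape [B x J x 2]
    joint_weight=torch.rand(B, J, 1),  # e.g. joint confidence scores
    person_id=torch.tensor([0, 1]),  # only needed for training / validation
    filepath=("./imgs/001.jpg", "./imgs/002.jpg"),  # hypothetical paths, length B
)

# access via the properties (validated, possibly computed on the fly) ...
kp = state.keypoints
# ... or via the underlying dict structure, which skips validation
kp_raw = state.data["keypoints"]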
Methods
- cast_joint_weight([dtype, decimals, overwrite]): Cast and return the joint weight as tensor.
- clean([keys]): Given a state, remove one or more keys to free up memory.
- clear()
- copy(): Obtain a copy of this state.
- extract(idx): Extract the i-th State from a batch B of states.
- fromkeys(iterable[, value])
- get(k[, d])
- items()
- keypoints_and_weights_from_paths(paths[, ...]): Given a tuple of paths, load the (local) key-points and weights from these paths.
- keys()
- load_image([store]): Load the images using the filepaths of this object.
- load_image_crop([store]): Load the image crops using the crop_paths of this object.
- pop(k[, d]): Remove the given key and return its value; if the key is not found, d is returned if given, otherwise a KeyError is raised.
- popitem(): Remove and return a (key, value) pair as a 2-tuple; raise a KeyError if the state is empty.
- setdefault(k[, d])
- split(): Given a batched State object, split it into a list of single State objects.
- to(*args, **kwargs): Override torch.Tensor.to() for the whole object.
- update([E, ]**F): If E is present and has a .keys() method: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method: for (k, v) in E: D[k] = v. In either case, this is followed by: for k, v in F.items(): D[k] = v.
- values()
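A short usage sketch of some of the batch-level helpers listed above, assuming the batched state from the construction sketch. The keyword names follow the signatures shown in this list; whether to() and the other calls return a copy or modify in place is not asserted here:

import torch

single_states = state.split()  # list of B single-sample States
first = state.extract(0)  # the 0-th State of the batch

# move every tensor of the State at once, mirroring torch.Tensor.to()
device = "cuda" if torch.cuda.is_available() else "cpu"
state = state.to(device=device)

# cast the joint weights to another dtype (the decimals argument is assumed to control rounding)
weights = state.cast_joint_weight(dtype=torch.float32, decimals=2)

# free memory by dropping keys that are no longer needed (assumes keys accepts a list of names)
state.clean(keys=["joint_weight"])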
Attributes
- Get the batch size.
- Get the number of joints in every skeleton.
- Get this State's bounding box.
- Get the bounding box coordinates in relation to the width and height of the full image.
- Get the class-ID of the bounding boxes.
- Get the path to the image crops.
- Get the device of this State.
- If the filepath data has a single entry, return the filepath as a string; otherwise return the list.
- Get the original image(s) of this State.
- Get the image crop(s) of this State.
- Get the dimensionality of the joints.
- Get the weight of the joints.
- Get the key points.
- Get the local key points.
- Get the global key points in coordinates relative to the full image size.
- Get the ID of the respective person shown in the bounding box.
- Get the IDs of the tracks associated with the respective bounding boxes.
- Whether to validate the inputs into this state.
- All the data in this state as a dict.