dgs.utils.utils.extract_crops_and_save

dgs.utils.utils.extract_crops_and_save(img_fps: list[str] | tuple[str, ...], boxes: torchvision.tv_tensors.BoundingBoxes, new_fps: list[str] | tuple[str, ...], key_points: torch.Tensor | None = None, **kwargs) → tuple[torchvision.tv_tensors.Image | torch.Tensor, torch.Tensor]

Given a list of original image paths and a list of target image-crop paths, use the given bounding boxes to extract the respective image regions as crops and save them as new images.

This only works if all original images have the same size, because otherwise the bounding boxes would no longer match their images.

Notes

It is expected that img_fps, new_fps, and boxes have the same length.

Parameters:
  • img_fps – An iterable of absolute paths pointing to the original images.

  • boxes – The bounding boxes as tv_tensors.BoundingBoxes of arbitrary format.

  • new_fps – An iterable of absolute paths pointing to the image crops.

  • key_points – Key points of the respective images. The key points will be transformed together with the images. The default of None means that a placeholder is passed internally.

Keyword Arguments:
  • crop_size (ImgShape) – The target shape of the image crops. Defaults to DEF_VAL.images.crop_size.

  • transform (tvt.Compose) – A torchvision transform, given as a Compose, used to obtain the crops from the original images. Defaults to a cleaner version of CustomCropResize.

  • crop_mode (str) – Defines the resize mode used in the transform function. Must be one of the modes of CustomToAspect. Defaults to DEF_VAL.images.mode.

  • quality (int) – The JPEG quality to save the crops with. torchvision's default is 75. Defaults to DEF_VAL.images.quality.

Returns:

The computed image crops and their respective key points, on the device specified in kwargs. The crops have already been saved to disk, so in most cases the return values can be ignored.

Return type:

crops, key_points

Raises:

ValueError – If input lengths don’t match.
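
Example

A minimal usage sketch. The file paths, canvas size, and the keyword-argument values below are assumptions chosen for illustration; crop_size is assumed to take an (H, W) shape.

import torch
from torchvision import tv_tensors

from dgs.utils.utils import extract_crops_and_save

# Two detections in the same source image; paths are placeholders.
img_fps = ["/data/images/frame_0001.jpg", "/data/images/frame_0001.jpg"]
new_fps = ["/data/crops/frame_0001_00.jpg", "/data/crops/frame_0001_01.jpg"]

# Bounding boxes in XYWH format on an assumed 1080x1920 (H, W) canvas.
boxes = tv_tensors.BoundingBoxes(
    torch.tensor([[100.0, 200.0, 150.0, 300.0],
                  [400.0, 250.0, 120.0, 280.0]]),
    format="XYWH",
    canvas_size=(1080, 1920),
)

# The crops are written to new_fps; the returned tensors can usually be ignored.
crops, key_points = extract_crops_and_save(
    img_fps,
    boxes,
    new_fps,
    crop_size=(256, 192),  # assumed (H, W) target shape of the crops
    quality=90,            # JPEG quality passed through to torchvision
)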