annotations_to_mask

hybrid_learning.datasets.custom.coco.keypoints_processing.annotations_to_mask(*, annotation_wh, annotations, keypoint_idxs, skeleton, pt_radius=0.025, link_width=None)[source]

Create a mask of the linked keypoints from the annotations list. Keypoints are specified by their keypoint_idxs. The original image referenced by annotations is assumed to have size annotation_wh, which is also used as the size of the created mask.

Parameters
  • annotation_wh (Tuple[int, int]) – size (in the PIL.Image.Image sense) of the original image assumed by the annotations, and output size of the mask; format: (width, height) in pixels

  • annotations (List[dict]) – annotations from which to create the mask

  • keypoint_idxs (Union[Sequence[int], Iterable[Iterable[int]]]) – starting indices of the keypoints to process; keypoints are stored in a flat list of the form [x1, y1, v1, x2, y2, v2, ...], so to process keypoints 1 and 2 one needs keypoint_idxs=[0, 3]; keypoint_idxs should be given as a list of “parts”, where each part is a list of keypoint indices that should be connected by a link line in the mask; if just a list of int values is given, these are assumed to form a single part (see the usage sketch below)

  • skeleton (Sequence[Tuple[int, int]]) – list of links as tuples (kpt1_idx, kpt2_idx); this is the skeleton list from the COCO annotations, with each entry reduced by 1 (COCO skeleton entries use 1-based keypoint numbers)

  • pt_radius (float) – radius of a point relative to the height of the annotated person (as returned by annotation_to_tot_height()); if the height cannot be estimated, the image height is used instead

  • link_width (Optional[float]) – width of a link line relative to the person height (see pt_radius); defaults to twice pt_radius

Returns

mask of the same size as the original image

Return type

Image
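
A minimal usage sketch, assuming a single COCO-style person annotation with the standard 17 keypoints. The concrete coordinates, the two-keypoint part, and the one-link skeleton are illustrative only, and any additional annotation fields the person-height estimation may consult (e.g. bbox) are omitted; if the height cannot be estimated, the image height is used as documented above.

from hybrid_learning.datasets.custom.coco.keypoints_processing import \
    annotations_to_mask

# One COCO-style annotation: flat keypoints list [x1, y1, v1, x2, y2, v2, ...]
annotation = {"keypoints": [0] * (17 * 3)}  # 17 keypoints, all unlabeled
# Mark left shoulder (keypoint 6, 0-based 5) and left elbow (keypoint 8,
# 0-based 7) as labeled and visible (illustrative coordinates):
annotation["keypoints"][5 * 3: 5 * 3 + 3] = [120, 80, 2]   # left shoulder
annotation["keypoints"][7 * 3: 7 * 3 + 3] = [110, 140, 2]  # left elbow

# keypoint_idxs holds the starting positions in the flat list:
# keypoint 6 starts at 5*3=15, keypoint 8 at 7*3=21.
# Wrapping both in one inner list makes them a single "part" whose
# keypoints are connected by link lines in the mask.
keypoint_idxs = [[15, 21]]

# COCO skeleton entries are 1-based; reduce each by 1,
# e.g. the COCO link [6, 8] (left shoulder -> left elbow) becomes (5, 7):
skeleton = [(5, 7)]

mask = annotations_to_mask(
    annotation_wh=(640, 480),   # (width, height) of the original image
    annotations=[annotation],
    keypoint_idxs=keypoint_idxs,
    skeleton=skeleton,
    pt_radius=0.025,
)
# Assuming the returned Image is a PIL image, it has size (640, 480):
mask.show()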