annotation_to_sizes

hybrid_learning.datasets.custom.coco.keypoints_processing.annotation_to_sizes(annotation, all_keypoint_names=('nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle'), assumed_height=1.7, factors=<pandas.DataFrame with index ['slope', 'intersect'] and one column per anatomic size ('bbox_width', 'bbox_height', ..., 'upper_arm', 'lower_arm'; 2 rows x 11 columns)>)[source]

Estimate the body size in pixels of a person in an image from skeletal keypoints and the linear relations given in factors.

The factors argument holds the parameters of linear functions, each of which estimates the physical body height of a person in meters from one anatomical size in meters (see hybrid_learning.datasets.custom.person_size_estimation.FACTORS). Assuming the person has a real height of assumed_height meters, missing sizes are inferred using these linear relations.
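
The exact computation is not part of this reference; the following is a minimal sketch of the assumed arithmetic for one anatomic size, with the 'lower_arm' slope and intersect copied from the default factors and all other values made up:

    # Hedged sketch of the assumed per-size scaling, not the library's implementation.
    # Linear relation from factors: body_height_m = slope * size_m + intersect
    assumed_height = 1.7             # assumed real body height in meters
    slope, intersect = 4.46, 0.5694  # 'lower_arm' column of the default factors
    lower_arm_px = 60.0              # hypothetical keypoint distance in pixels

    size_m = (assumed_height - intersect) / slope  # lower-arm length implied by assumed_height
    px_per_m = lower_arm_px / size_m               # image scale derived from that single size
    tot_height_px = assumed_height * px_per_m      # estimated total body height in pixels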

Parameters
  • annotation (Dict[str, Any]) – MS COCO style annotation dict for a single instance holding skeletal keypoint information

  • all_keypoint_names (Sequence[str]) – names of the keypoints occurring in the annotation, in their order of occurrence

  • assumed_height – the assumed total height of the person in meters

  • factors – parameters of the linear relations between anatomic sizes and total body height in meters, as in hybrid_learning.datasets.custom.person_size_estimation.FACTORS (see the construction sketch below)
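
A minimal construction sketch for a custom factors frame, assuming it mirrors the layout of the default (a 'slope' and an 'intersect' row, one column per anatomic size); the numeric values are copied from the visible columns of the default:

    import pandas as pd

    # Hedged sketch: layout mirrors the default factors frame shown in the signature.
    factors = pd.DataFrame(
        {
            "bbox_width":  [1.0000, 0.0000],  # bbox sizes relate 1:1 to body height
            "bbox_height": [1.0000, 0.0000],
            "upper_arm":   [3.7200, 0.4486],  # body_height_m = 3.72 * upper_arm_m + 0.4486
            "lower_arm":   [4.4600, 0.5694],
            # ... further anatomic size columns as in person_size_estimation.FACTORS
        },
        index=["slope", "intersect"],
    )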

Returns

a pandas.DataFrame representing a mapping from anatomic size identifiers (the columns) each to a mapping with the following keys (the index):

  • 'len': the value of the anatomic size in pixels

  • 'tot_height': the total body size of the person in pixels, assuming a real body height of assumed_height meters and estimated from that single anatomic size via the corresponding relation from factors

Return type

Mapping[str, Mapping[str, float]]
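
A hedged usage sketch follows. The keypoint coordinates are made-up illustration values, 'lower_arm' is assumed to be one of the returned size identifiers (it is a column of the default factors), and whether the 'bbox' entry is consulted for the bbox-based sizes is an assumption:

    from hybrid_learning.datasets.custom.coco.keypoints_processing import annotation_to_sizes

    # MS COCO encodes keypoints as a flat list [x1, y1, v1, x2, y2, v2, ...] with one
    # (x, y, visibility) triple per entry of all_keypoint_names; values here are made up.
    keypoint_triples = [(300 + 2 * i, 100 + 25 * i, 2) for i in range(17)]
    annotation = {
        "keypoints": [value for triple in keypoint_triples for value in triple],
        "bbox": [250, 80, 140, 430],  # [x, y, width, height] in pixels (assumed to be read)
    }

    sizes = annotation_to_sizes(annotation, assumed_height=1.7)
    lower_arm = sizes["lower_arm"]    # entry for one anatomic size identifier
    print(lower_arm["len"])           # that size in pixels
    print(lower_arm["tot_height"])    # total body height in pixels estimated from it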