
Parameter Estimation Facility#

Overview#

The estimation facility is the only multi-purpose facility in FaceEngine. It is designed as a collection of tools that help to estimate various images or depicted object properties. These properties may be used to increase the precision of algorithms implemented by other FaceEngine facilities or to accomplish custom user tasks.

Best shot selection functionality#

Eyes Estimation#

The estimator is trained to work with warped images (see Chapter "Image warping" for details).

This estimator aims to determine:

  • Eye state: Open, Closed, Occluded;
  • Precise eye iris location as an array of landmarks;
  • Precise eyelid location as an array of landmarks.

You can only pass a warped image with a detected face to the estimator interface. Better image quality leads to better results.

Eye state classifier supports three categories: "Open", "Closed", "Occluded". Poor quality images or ones that depict obscured eyes (think eyewear, hair, gestures) fall into the "Occluded" category. It is always a good idea to check eye state before using the segmentation result.

The precise locations allow iris and eyelid segmentation. The estimator outputs iris and eyelid shapes as arrays of points that together form an ellipse. You should only use segmentation results if the state of that eye is "Open".
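The gating rule above can be sketched as follows. This is a standalone illustration, not the SDK API: the `EyeState` enum mirrors the three categories named in the text, and `Landmark`/`usableIris` are hypothetical helpers.

```cpp
#include <cassert>
#include <vector>

// Hypothetical mirror of the SDK's eye state categories (names taken
// from the text; the real EyesEstimation types live in the SDK headers).
enum class EyeState { Open, Closed, Occluded };

struct Landmark { float x, y; };

// Sketch: gate segmentation results on eye state, returning the iris
// landmarks only when the eye is reported "Open".
std::vector<Landmark> usableIris(EyeState state,
                                 const std::vector<Landmark>& iris) {
    if (state != EyeState::Open)
        return {};  // Closed/Occluded: segmentation is unreliable
    return iris;
}
```

The same check applies to the eyelid landmarks before any downstream use.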

The estimator:

  • Implements the estimate() function that accepts warped source image (see Chapter "Image warping") and warped landmarks, either of type Landmarks5 or Landmarks68. The warped image and landmarks are received from the warper (see IWarper::warp());
  • Classifies the eye state and detects iris and eyelid landmarks for each eye;
  • Outputs EyesEstimation structures.

Orientation terms 'left' and 'right' refer to the way you see the image as it is shown on the screen. The left eye is not necessarily the left eye from the person's point of view, but the one on the left side of the screen; consequently, the right eye is the one on the right side of the screen. More formally, the label 'left' refers to the eye with the smaller x coordinate (and similarly for the right eye), such that xleft < xright.

EyesEstimation::EyeAttributes presents eye state as enum EyeState with possible values: Open, Closed, Occluded.

Iris landmarks are presented with a template structure Landmarks that is specialized for 32 points.

Eyelid landmarks are presented with a template structure Landmarks that is specialized for 6 points.

BestShotQuality Estimation#

The BestShotQuality estimator evaluates image quality so that the best image can be chosen before descriptor extraction.

The estimator (see IBestShotQualityEstimator in IEstimator.h):

  • Implements the estimate() function that accepts an fsdk::Image in R8G8B8 format, an fsdk::Detection structure for the corresponding source image (see section "Detection structure" in chapter "Face detection facility"), an fsdk::IBestShotQualityEstimator::EstimationRequest structure, and an fsdk::IBestShotQualityEstimator::EstimationResult to store the estimation result;
  • Implements an estimate() overload that accepts a span of fsdk::Image in R8G8B8 format, a span of fsdk::Detection structures for the corresponding source images, an fsdk::IBestShotQualityEstimator::EstimationRequest structure, and a span of fsdk::IBestShotQualityEstimator::EstimationResult to store the estimation results.

Before using this estimator, you can choose which of the listed attributes to estimate. For this purpose, the estimate() method takes one of the estimation requests:

  • fsdk::IBestShotQualityEstimator::EstimationRequest::estimateAGS to make only the AGS estimation;
  • fsdk::IBestShotQualityEstimator::EstimationRequest::estimateHeadPose to make only the Head Pose estimation;
  • fsdk::IBestShotQualityEstimator::EstimationRequest::estimateAll to make both the AGS and Head Pose estimations.
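One common way such request values work is as combinable flags, with estimateAll covering both estimations. The sketch below models that pattern with assumed values; the real fsdk enum values and semantics may differ.

```cpp
#include <cassert>

// Hypothetical model of the estimation request flags (values assumed;
// consult the actual fsdk::IBestShotQualityEstimator::EstimationRequest
// declaration in IEstimator.h).
enum EstimationRequest : unsigned {
    estimateAGS      = 1u << 0,
    estimateHeadPose = 1u << 1,
    estimateAll      = estimateAGS | estimateHeadPose
};

bool wantsAGS(unsigned request)      { return (request & estimateAGS) != 0; }
bool wantsHeadPose(unsigned request) { return (request & estimateHeadPose) != 0; }
```

With this model, passing estimateAll asks the estimator for both attributes in a single call.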

The description of attributes returned by the estimate() method is given below.

AGS#

AGS (garbage score) rates the suitability of the source image for further descriptor extraction and matching.

The estimation output is a float score normalized to the range [0..1]. The closer the score is to 1, the better the matching result obtained with the image.

When you have several images of a person, it is better to save the image with the highest AGS score.

The recommended threshold for the AGS score is 0.2, but it can be adjusted depending on the use case. Consult VisionLabs about the recommended threshold value for this parameter.
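Put together, best-shot selection over a sequence of frames reduces to keeping the highest AGS score that clears the threshold. This is a standalone sketch; in the SDK the scores would come from the EstimationResult structures, and `bestShotIndex` is a hypothetical helper.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch: pick the index of the best shot by AGS score, requiring it to
// clear the recommended 0.2 threshold. Returns -1 if no frame qualifies.
int bestShotIndex(const std::vector<float>& agsScores, float threshold = 0.2f) {
    int best = -1;
    for (std::size_t i = 0; i < agsScores.size(); ++i) {
        if (agsScores[i] >= threshold &&
            (best < 0 || agsScores[i] > agsScores[best]))
            best = static_cast<int>(i);
    }
    return best;
}
```

A returned index of -1 signals that the whole sequence should be rejected rather than sent to descriptor extraction.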

Head Pose#

Head Pose determines person head rotation angles in 3D space, namely pitch, yaw and roll.

[Figure: Head pose rotation angles]

Since 3D head translation is hard to determine reliably without camera-specific calibration, only the 3D rotation component is estimated.

Head pose estimation characteristics:

  • Units (degrees);
  • Notation (Euler angles);
  • Precision (see table below).

Prediction precision decreases as a rotation angle increases. We present typical average errors for different angle ranges in the table below.

"Head pose prediction precision"

Range -45°...+45° < -45° or > +45°
Average prediction error (per axis) Yaw ±2.7° ±4.6°
Average prediction error (per axis) Pitch ±3.0° ±4.8°
Average prediction error (per axis) Roll ±3.0° ±4.6°

Zero position corresponds to a face placed orthogonally to camera direction, with the axis of symmetry parallel to the vertical camera axis.
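Since prediction error grows beyond ±45° per axis, a practical use of the estimate is to gate frames on near-frontal pose before best-shot selection. The sketch below assumes a plain struct of angles in degrees; `HeadPose` and `isNearFrontal` are hypothetical, not SDK types.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical container for the estimated angles, in degrees.
struct HeadPose { float pitch, yaw, roll; };

// Sketch: treat a frame as near-frontal when all three angles fall
// inside the higher-precision -45°..+45° range from the table above.
bool isNearFrontal(const HeadPose& p, float maxAbsDeg = 45.0f) {
    return std::fabs(p.pitch) <= maxAbsDeg &&
           std::fabs(p.yaw)   <= maxAbsDeg &&
           std::fabs(p.roll)  <= maxAbsDeg;
}
```

Frames rejected here can be skipped entirely, saving the cost of descriptor extraction on poorly posed faces.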
