
Face and image parameters#

This section lists the general parameters of faces and images estimated by LUNA PLATFORM 5 and how to get them.

You can get parameters using several tools and resources. In general, parameters are obtained using the following methods:

  1. Extract gender and age from the image. Gender and age belong to the concept of basic attributes.

    This method uses "/extractor", "/sdk" resources and "extract_policy" of "/handlers" and "/verifiers" resources.

    To extract these parameters using the "/extractor" resource, you should first create a sample using the "/detector" resource. In response to the "/extractor" request, the gender and age of the person are returned. The extracted data has a TTL (time to live) and will be removed from the database after the specified period.

    See the detailed description of extracting basic attributes in the "Descriptor extraction and attribute creation" section.

    When estimating parameters using the "/sdk" resource, you should send the source image to LUNA PLATFORM 5 and specify the "estimate_basic_attributes" parameter in the query parameters. In response to the request, the gender and age of the person will be received. These parameters will not be saved to the database.

    To extract these parameters using the "/handlers" and "/verifiers" resources, you should use the "extract_basic_attributes" parameter of the "extract_policy".

  2. Perform estimation of face and image parameters.

    Various resources are used to estimate the parameters. The resources mainly used are "/detector", "/sdk", "/handlers" and "/verifiers".

    When estimating parameters using the "/detector" resource, you need to send the source image to LUNA PLATFORM 5 and specify the estimation of the required face or image parameters in the query parameters. In response to request, a face sample will be created and the specified parameters will be given. The estimated parameters will not be saved in the database.

    The method for getting parameters using the "/sdk" resource is similar to the method described above, however, the sample will not be created. The estimated parameters will also not be saved to the database.

    To estimate parameters using "/handlers" and "/verifiers" resources, you should use the "detect_policy" with required parameters.

  3. Perform check of face and image parameters according to ISO/IEC 19794-5 or custom conditions.

    The ability to perform such checks is controlled by a special parameter in the LUNA PLATFORM 5 license key.

    The "/iso" resource and the "face_quality" check group of the "detection_policy" of the "/handlers" and "/verifiers" requests are used to perform the check.

    The responses to requests show the overall result of passing all checks ("0" or "1"), as well as the results of each check.

    See the detailed description of this functionality in the "Image check" section.
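The aggregation of check results described in item 3 can be sketched as follows. This is an illustrative helper, not platform code; it assumes the response maps check names to "0"/"1" values as described above.

```python
def overall_check_result(checks):
    """Overall result is 1 only when every individual check returned 1."""
    return int(all(value == 1 for value in checks.values()))

# Hypothetical per-check results; the names mirror checks described below.
results = {"left_eye": 1, "right_eye": 1, "glasses": 0}
print(overall_check_result(results))  # 0: one check failed
```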

All returned values and the response format depend on the resource where the estimation is performed.

In order to get results when sending requests to "/handlers" or "/verifiers" resources, you need to generate an event and perform verification on the specified handlers. See the Handler and Verifier objects section for more information about working with these resources.
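As an illustration of item 1, an "/sdk" request URL enabling basic attribute estimation can be sketched as below. The base URL is a placeholder; only the "estimate_basic_attributes" query parameter name comes from this section.

```python
from urllib.parse import urlencode

def build_sdk_url(base_url):
    """Build an "/sdk" request URL that enables basic attribute estimation.

    The base URL is a placeholder; the source image itself would be sent
    in the request body.
    """
    params = {"estimate_basic_attributes": 1}
    return f"{base_url}/sdk?{urlencode(params)}"

print(build_sdk_url("http://127.0.0.1:5000/6"))
# http://127.0.0.1:5000/6/sdk?estimate_basic_attributes=1
```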

Gender and age#

This estimation determines the basic attributes (gender and age) of a person in the image.

For details on basic attributes, see Attribute object.

Resources where the estimation is performed:

  • "/extractor"

    Estimation name - "extract_basic_attributes".

  • "/handlers"

    Estimation name - "policies" > "extract_policy" > "extract_basic_attributes".

  • "/verifiers"

    Estimation name - "policies" > "extract_policy" > "extract_basic_attributes".

  • "/sdk"

    Estimation name - "estimate_basic_attributes".

Face parameters#

Eyes attributes#

This estimation determines the following states for each eye:

  • open;
  • closed;
  • occluded.

Poor-quality images, or images in which the eyes are obscured (for example, by eyewear, hair, or gestures), fall into the "occluded" category.

Iris landmarks are determined. An array of 34 landmarks is returned for each eye.

Resources where the estimation is performed:

  • "/detector"

    Estimation name - "estimate_eyes_attributes".

  • "/handlers"

    Estimation name - "policies" > "detect_policy" > "estimate_eyes_attributes".

  • "/verifiers"

    Estimation name - "policies" > "detect_policy" > "estimate_eyes_attributes".

  • "/sdk"

    Estimation name - "estimate_eyes_attributes".

Eyes attributes estimation using image checking tools:

  • "/iso" (see sections "7.2.3 Expression", point "a", "7.2.11 Visibility of pupils and irises" and "7.2.13 Eye patches" of the standard ISO/IEC 19794-5:2011).

    Check names - "left_eye", "right_eye".

  • "detect_policy" > "face_quality" group of checks in the "/handlers" and "/verifiers" resources.

    Check names - "left_eye", "right_eye".

Distance between eyes#

Note. It is not possible to use a sample as an input image for this estimation.

It is possible to estimate the distance between the centers of the eyes in pixels.

This estimation can only be performed using image checking tools:

  • "/iso" (see section "5.6.5 Eye and nostril centre Landmark Points" of the standard ISO/IEC 19794-5:2011).

    Check name - "eye_distance".

  • "detect_policy" > "face_quality" group of checks in the "/handlers" and "/verifiers" resources.

    Check name - "eye_distance".

Red eyes effect#

This estimation determines the presence of the red eyes effect, where:

  • "0" - the red eyes effect is not present in the image;
  • "1" - the red eyes effect is present in the image.

Image requirements:

For correct checking results, the following requirements should be met:

  • image quality:

    • illumination = [0.61...1]
    • specularity = [0.57...1]
    • blurriness = [0.5...1]
    • dark = [0.1...1]
    • light = [0.1...1]
  • natural light:

    • natural_light = [0.5...1]
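A minimal pre-check of these requirements could look like the sketch below. The score names and ranges are copied from the list above; the response layout (a flat dictionary of quality scores) and inclusive bounds are assumptions for illustration.

```python
# Acceptable ranges for the red eyes check, as listed above.
RED_EYES_PREREQUISITES = {
    "illumination": (0.61, 1.0),
    "specularity": (0.57, 1.0),
    "blurriness": (0.5, 1.0),
    "dark": (0.1, 1.0),
    "light": (0.1, 1.0),
    "natural_light": (0.5, 1.0),
}

def meets_red_eyes_prerequisites(scores):
    """Return True when every quality score falls into its acceptable range."""
    return all(
        low <= scores.get(name, 0.0) <= high
        for name, (low, high) in RED_EYES_PREREQUISITES.items()
    )
```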

Resources where the estimation is performed:

Red eyes effect estimation is available only using image checking tools:

  • "/iso" (see section "7.3.4 Unnatural colour" of the standard ISO/IEC 19794-5:2011)

    Check name - "red_eyes".

  • "detect_policy" > "face_quality" group of checks in the "/handlers" and "/verifiers" resources

    Check name - "red_eyes".

Gaze#

This estimation determines the gaze. The gaze is represented by the following angles for both eyes at once:

  • pitch;
  • yaw.

The zero position corresponds to a gaze directed orthogonally to the face plane, with the axis of symmetry parallel to the vertical camera axis.

Resources where the estimation is performed:

  • "/detector"

    Estimation name - "estimate_gaze".

  • "/handlers"

    Estimation name - "policies" > "detect_policy" > "estimate_gaze".

  • "/verifiers"

    Estimation name - "policies" > "detect_policy" > "estimate_gaze".

  • "/sdk"

    Estimation name - "estimate_gaze".

Gaze estimation using image checking tools:

  • "/iso" (see section "7.2.3 Expression" point "e" of the standard ISO/IEC 19794-5:2011)

    Check names - "gaze_yaw", "gaze_pitch".

  • "detect_policy" > "face_quality" group of checks in the "/handlers" and "/verifiers" resources

    Check names - "gaze_yaw", "gaze_pitch".

Glasses#

This estimation determines the predominant state of the glasses from the following states:

  • sun_glasses;
  • glasses;
  • no_glasses.

Resources where the estimation is performed:

  • "/sdk"

    Estimation name - "estimate_glasses".

Glasses estimation using image checking tools:

  • "/iso" (see section "7.2.9 Eye glasses" of the standard ISO/IEC 19794-5:2011)

    Check name - "glasses".

  • "detect_policy" > "face_quality" group of checks in the "/handlers" and "/verifiers" resources

    Check name - "glasses".

Eyebrows#

This estimation determines the predominant state of the eyebrows from the following states:

  • neutral - eyebrows are in the usual position;
  • raised - eyebrows are raised;
  • squinting - eyes are narrowed, eyebrows are lowered;
  • frowning - eyebrows are frowned.

It is possible to specify several eyebrow states as acceptable.

Eyebrows state. From left to right - neutral, raised, squinting, frowning

Image requirements:

For correct checking results, the following requirements should be met:

Resources where the estimation is performed:

Eyebrows estimation is available only using image checking tools:

  • "/iso" (see section "7.2.3 Expression", points "d", "f" and "g" of the standard ISO/IEC 19794-5:2011)

    Check name - "eyebrows_state".

  • "detect_policy" > "face_quality" group of checks in the "/handlers" and "/verifiers" resources

    Check name - "eyebrows_state".

Mouth attributes#

This estimation determines a probabilistic score for each of the following parameters in the range [0..1]:

  • opened;
  • smile;
  • occluded.

The probability that the mouth is opened is also determined.

Resources where the estimation is performed:

  • "/detector"

    Estimation name - "estimate_mouth_attributes".

  • "/handlers"

    Estimation name - "policies" > "detect_policy" > "estimate_mouth_attributes".

  • "/verifiers"

    Estimation name - "policies" > "detect_policy" > "estimate_mouth_attributes".

  • "/sdk"

    Estimation name - "estimate_mouth_attributes".

Mouth attributes estimation using image checking tools:

  • "/iso" (see section "7.2.3 Expression" point "a", "b" and "c" of the standard ISO/IEC 19794-5:2011)

    Check names - "mouth_smiling", "mouth_occluded", "mouth_open".

  • "detect_policy" > "face_quality" group of checks in the "/handlers" and "/verifiers" resources

    Check names - "mouth_smiling", "mouth_occluded", "mouth_open".

Recommended thresholds for check:

  • mouth_occluded: 0.5
  • mouth_smiling: 0.5
  • mouth_opened: 0.5
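As a sketch, the recommended thresholds can be applied to the estimated probabilities as below. It assumes each threshold is the maximum acceptable probability (for an ISO-style neutral, unoccluded face); the flat score dictionary is also an assumption for illustration.

```python
# Recommended thresholds from the list above, treated as maximum
# acceptable probabilities (an assumption for this sketch).
MOUTH_THRESHOLDS = {
    "mouth_occluded": 0.5,
    "mouth_smiling": 0.5,
    "mouth_opened": 0.5,
}

def mouth_checks(scores):
    """Return 1 (passed) or 0 (failed) for each mouth parameter."""
    return {
        name: int(scores.get(name, 0.0) <= limit)
        for name, limit in MOUTH_THRESHOLDS.items()
    }

print(mouth_checks({"mouth_occluded": 0.7,
                    "mouth_smiling": 0.1,
                    "mouth_opened": 0.2}))
```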

Smile state#

This estimation determines the predominant state of a smile from the following states:

  • none - no smile found, so no additional parameters are determined
  • smile_lips - the usual smile with closed lips
  • smile_teeth - smile with open teeth

If necessary, you can specify several smile states as acceptable.

Image requirements:

For correct checking results, the following requirements should be met:

Resources where the estimation is performed:

Smile state estimation is only available using image checking tools:

  • "/iso" (see section "7.2.3 Expression" point "a", "b" and "c" of the standard ISO/IEC 19794-5:2011)

    Check name - "smile_properties".

  • "detect_policy" > "face_quality" group of checks in the "/handlers" and "/verifiers" resources

    Check name - "smile_properties".

Image quality#

This estimation determines a probabilistic score for each of the following parameters in the range [0..1], where 0 corresponds to low quality and 1 to high quality:

  • darkness - shows whether the image is underexposed;
  • light - shows whether the image is overexposed;
  • blur - shows whether the image is blurred;
  • illumination - illumination uniformity of the face in the image. The lower the difference between light and dark zones of the face, the higher the estimated value. When the illumination is evenly distributed across the face, the value is close to "1";
  • specularity - the ability of the face to reflect light. The higher the estimated value, the lower the specularity and the better the image quality. A low value means there are bright glares on the face.

Image quality is determined using a specially trained VisionLabs neural network. If necessary, you can estimate the illumination of the face in the image using an algorithm that performs the estimation in accordance with the ICAO standard (see the section "Illumination uniformity according to ICAO standard").

Examples are presented in the images below. Good quality images are shown on the right.

Blurred image (left), not blurred image (right)
Dark image (left), good quality image (right)
Light image (left), good quality image (right)
Image with uneven illumination (left), image with even illumination (right)
Image with specularity - image contains flares on face (left), good quality image (right)

The most important image quality parameters for face recognition are darkness, light, and blur, so you should select them carefully.

The illumination and specularity parameters enable you to select images of better visual quality. Face recognition is not greatly affected by uneven illumination or glares.

Resources where the estimation is performed:

  • "/detector"

    Estimation name - "estimate_quality".

  • "/handlers"

    Estimation name - "policies" > "detect_policy" > "estimate_quality".

  • "/verifiers"

    Estimation name - "policies" > "detect_policy" > "estimate_quality".

  • "/sdk"

    Estimation name - "estimate_quality".

Image quality estimation using image checking tools:

  • "/iso" (see sections "7.2.7 Subject and scene lighting", "7.3.2 Contrast and saturation", "7.3.3 Focus and depth of field", "7.2.8 Hot spots and specular reflections", "7.2.12 Lighting artefacts", "7.2.7 Subject and scene lighting" and "7.2.12 Lighting artefacts" of the standard ISO/IEC 19794-5:2011)

    Check names - "illumination_quality", "specularity_quality", "blurriness_quality", "dark_quality", "light_quality".

  • "detect_policy" > "face_quality" group of checks in the "/handlers" and "/verifiers" resources

    Check names - "illumination_quality", "specularity_quality", "blurriness_quality", "dark_quality", "light_quality".

Acceptable ranges for specifying thresholds using "face_quality":

  • illumination_quality - [0...0.3]
  • specularity_quality - [0...0.3]
  • blurriness_quality - [0.57...0.65]
  • dark_quality - [0.45...0.52]
  • light_quality - [0.44...0.61]
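For illustration, a "face_quality" fragment choosing thresholds inside the acceptable ranges might look like the sketch below. The JSON field layout ("checks", "threshold", "min") is an assumption; only the check names and the acceptable ranges come from this section.

```python
import json

# Illustrative face_quality fragment for a handler's detect_policy.
# Each chosen threshold lies inside the acceptable range listed above.
face_quality = {
    "estimate": 1,
    "checks": {
        "illumination_quality": {"threshold": {"min": 0.3}},   # [0...0.3]
        "specularity_quality": {"threshold": {"min": 0.3}},    # [0...0.3]
        "blurriness_quality": {"threshold": {"min": 0.61}},    # [0.57...0.65]
        "dark_quality": {"threshold": {"min": 0.5}},           # [0.45...0.52]
        "light_quality": {"threshold": {"min": 0.5}},          # [0.44...0.61]
    },
}
print(json.dumps(face_quality, indent=2))
```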

Illumination uniformity according to ICAO standard#

It is possible to estimate the uniformity of illumination according to the requirements specified in ICAO standard.

In accordance with the standard, it is recommended to use color images. When using black and white images, the results may be unexpected.

The uniformity of illumination according to the ICAO standard is only available using image checking tool - "detect_policy" > "face_quality" group of checks in the "/handlers" and "/verifiers" resources.

Check name - "illumination_uniformity".

Dynamic range according to ICAO standard#

This estimation determines the ratio of brightness of the lightest and darkest areas of the face according to the requirements specified in ICAO standard.

Dynamic range estimation is only available using image checking tool - "detect_policy" > "face_quality" group of checks in the "/handlers" and "/verifiers" resources.

Check name - "dynamic_range".

Natural light#

This estimation determines whether there is natural lighting on the face, where:

  • "0" - the lighting is unnatural;
  • "1" - the lighting is natural.

Lighting is unnatural (left), lighting is natural (right)

Image requirements:

For correct checking results, the following requirements should be met:

Resources where the estimation is performed:

Natural light estimation is available only using image checking tools:

  • "/iso" (see section "7.3.4 Unnatural colour" of the standard ISO/IEC 19794-5:2011)

    Check name - "natural_light".

  • "detect_policy" > "face_quality" group of checks in the "/handlers" and "/verifiers" resources

    Check name - "natural_light".

Face color type#

This estimation determines the most likely type of face color from the following:

  • color;
  • grayscale;
  • infrared (near infrared range).

Resources where the estimation is performed:

Face color type estimation is only available using image checking tools:

  • "/iso" (see section "7.4.4 Use of near infra-red cameras" of the standard ISO/IEC 19794-5:2011)

    Check name - "face_color_type".

  • "detect_policy" > "face_quality" group of checks in the "/handlers" and "/verifiers" resources

    Check name - "face_color_type".

Head pose#

This estimation determines the head pose. The pose is defined by three angles:

  • pitch;
  • roll;
  • yaw.

The angle values are specified in the range from "-180" to "180".

Head pose

In all the resources listed below, with the exception of "/iso", it is possible to filter out detections by head pose.

In the "/detector", "/handlers", "/verifiers" and "/sdk" resources, the threshold value is specified in the range from "0" to "180". The default value is "180", which means that the head in the image can be rotated at any angle from "-180" to "180". When you set any other value (for example, "30"), all detections with an estimated angle less than or equal to "-30" or greater than or equal to "30" will be filtered out.

For the "face_quality" field of the "/handlers" and "/verifiers" resources, the minimum and maximum thresholds are set in separate fields.

Resources where the estimation is performed:

  • "/detector"

    Estimation name - "estimate_head_pose".

  • "/handlers"

    Estimation name - "policies" > "detect_policy" > "estimate_head_pose".

  • "/verifiers"

    Estimation name - "policies" > "detect_policy" > "estimate_head_pose".

  • "/sdk"

    Estimation name - "estimate_head_pose".

Below are the recommended thresholds for estimation in the "/detector", "/handlers", "/verifiers" and "/sdk" resources.

Recommended maximum estimation thresholds for cooperative mode:

  • roll_threshold: 30
  • pitch_threshold: 15
  • yaw_threshold: 15

Recommended maximum estimation thresholds for non-cooperative mode:

  • roll_threshold: 30
  • pitch_threshold: 30
  • yaw_threshold: 30

Head pose estimation using image checking tools:

  • "/iso" (see section "7.2.2 Pose" of the standard ISO/IEC 19794-5:2011)

    Check names - "head_roll", "head_pitch", "head_yaw".

  • "detect_policy" > "face_quality" group of checks in the "/handlers" and "/verifiers" resources

    Check names - "head_roll", "head_pitch", "head_yaw".

Below are the recommended thresholds to check with "face_quality".

Vertical and horizontal face position#

Note. It is not possible to use a sample as an input image for these estimations.

These estimations determine the position of the head center point, vertically and horizontally, relative to the image.

Resources where the estimation is performed:

Vertical and horizontal face position estimation is available only using image checking tools:

  • "/iso" (see sections "8.3.2 Horizontally centred face" and "8.3.3 Vertical position of the face" of the standard ISO/IEC 19794-5:2011)

    Check names - "head_horizontal_center", "head_vertical_center".

  • "detect_policy" > "face_quality" group of checks in the "/handlers" and "/verifiers" resources

    Check names - "head_horizontal_center", "head_vertical_center".

Vertical and horizontal head sizes#

Note. It is not possible to use a sample as an input image for these estimations.

These estimations determine the vertical and horizontal head size relative to the size of the image.

Resources where the estimation is performed:

Vertical and horizontal head sizes estimation is available only using image checking tools:

  • "/iso" (see sections "8.3.4 Width of head" and "8.3.5 Length of head" of the standard ISO/IEC 19794-5:2011)

    Check names - "head_width", "head_height".

  • "detect_policy" > "face_quality" group of checks in the "/handlers" and "/verifiers" resources

    Check names - "head_width", "head_height".

Face width and height#

Note. It is not possible to use a sample as an input image for these estimations.

These estimations determine the face width and height in pixels.

Resources where the estimation is performed:

Face width and height estimations are only available using image checking tool - "detect_policy" > "face_quality" group of checks in the "/handlers" and "/verifiers" resources.

Check names - "face_width", "face_height".

Indents from image edges#

Note. It is not possible to use a sample as an input image for these estimations.

This estimation determines the indents from the image borders (left, right, top, bottom) to the corresponding face borders, in pixels.

Resources where the estimation is performed:

Indents from image edges estimation is only available using image checking tool - "detect_policy" > "face_quality" group of checks in the "/handlers" and "/verifiers" resources.

Check names - "indent_upper", "indent_lower", "indent_right", "indent_left".
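For illustration, the four indents can be derived from an image size and a face bounding box as follows. The (x, y, width, height) rectangle layout with the origin at the top-left corner is an assumption of this sketch.

```python
def indents(image_w, image_h, face_x, face_y, face_w, face_h):
    """Indents in pixels from each image edge to the face bounding box.

    Assumes the face rectangle is (x, y, width, height) with the origin
    at the top-left corner of the image.
    """
    return {
        "indent_left": face_x,
        "indent_upper": face_y,
        "indent_right": image_w - (face_x + face_w),
        "indent_lower": image_h - (face_y + face_h),
    }

# A 240x280 face in a 640x480 image, offset by (200, 100).
print(indents(640, 480, 200, 100, 240, 280))
```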

Mask (medical or fabric)#

This estimation determines a probabilistic score for each of the following parameters in the range [0..1]:

  • medical_mask - the probability that the person wears a medical mask;
  • missing - the probability that the person does not wear a mask;
  • occluded - the probability that the face is occluded by an object other than a medical mask.

The predominant state of the mask is also determined.

Resources where the estimation is performed:

  • "/detector"

    Estimation name - "estimate_mask".

  • "/handlers"

    Estimation name - "policies" > "detect_policy" > "estimate_mask".

  • "/verifiers"

    Estimation name - "policies" > "detect_policy" > "estimate_mask".

  • "/sdk"

    Estimation name - "estimate_mask".

Emotions#

This estimation determines a probabilistic score for each of the following parameters in the range [0..1]:

  • anger;
  • disgust;
  • fear;
  • happiness;
  • neutral;
  • sadness;
  • surprise.

The predominant emotion is also estimated.

Emotions can be saved in the event object during the event creation.

Resources where the estimation is performed:

  • "/detector"

    Estimation name - "estimate_emotions".

  • "/handlers"

    Estimation name - "policies" > "detect_policy" > "estimate_emotions".

  • "/verifiers"

    Estimation name - "policies" > "detect_policy" > "estimate_emotions".

  • "/sdk"

    Estimation name - "estimate_emotions".

Headwear#

This estimation determines the predominant type of headwear from the following:

  • none
  • baseball_cap;
  • beanie;
  • peaked_cap;
  • shawl;
  • hat_with_ear_flaps;
  • helmet;
  • hood;
  • hat;
  • other.

It is possible to specify several types of headwear as acceptable.

Image requirements:

For correct checking results, the following requirements should be met:

Resources where the estimation is performed:

Headwear estimation is available only using image checking tools:

  • "/iso" (see section "B.2.7 Head coverings" of the standard ISO/IEC 19794-5:2011)

    Check name - "headwear_type".

  • "detect_policy" > "face_quality" group of checks in the "/handlers" and "/verifiers" resources

    Check name - "headwear_type".

Radial distortion (Fisheye effect)#

This estimation determines the presence of the Fisheye effect, where:

  • "0" - the Fisheye effect is not present in the image;
  • "1" - the Fisheye effect is present in the image.

Image requirements:

For correct checking results, the following requirements should be met:

Resources where the estimation is performed:

Fisheye effect estimation is available only using image checking tools:

  • "/iso" (see section "7.3.6 Radial distortion of the camera lens" of the standard ISO/IEC 19794-5:2011)

    Check name - "radial_distortion".

  • "detect_policy" > "face_quality" group of checks in the "/handlers" and "/verifiers" resources

    Check name - "radial_distortion".

Image parameters#

Image format#

This estimation determines the correspondence of the incoming image format to one of the following formats - "JPEG", "JPEG2000", "PNG".

Resources where the estimation is performed:

Image format estimation is available only using image checking tools:

  • "/iso" (see section "7.5 Format requirements for the Frontal Image Type" of the standard ISO/IEC 19794-5:2011)

    Check name - "image_format".

  • "detect_policy" > "face_quality" group of checks in the "/handlers" and "/verifiers" resources

    Check name - "image_format".

Image size#

This estimation determines the image size in bytes.

Resources where the estimation is performed:

Image size estimation is only available using image checking tool - "detect_policy" > "face_quality" group of checks in the "/handlers" and "/verifiers" resources.

Check name - "image_size".

Image width and height#

These estimations determine the image width and height in pixels.

Resources where the estimation is performed:

Image width and height estimations are available only using image checking tools:

  • "/iso" (see sections "5.7.4 Width" and "5.7.5 Height" of the standard ISO/IEC 19794-5:2011)

    Check names - "image_height", "image_width".

  • "detect_policy" > "face_quality" group of checks in the "/handlers" and "/verifiers" resources

    Check names - "image_height", "image_width".

Aspect ratio#

This estimation determines the proportional ratio of the image width to height.

Resources where the estimation is performed:

Aspect ratio estimation is only available using image checking tool - "detect_policy" > "face_quality" group of checks in the "/handlers" and "/verifiers" resources.

Check name - "aspect_ratio".

EXIF metadata#

When the EXIF estimation is enabled, all image tags are parsed and their names and values are returned. For details, refer to the JEITA CP-3451 EXIF specification. The following data is returned:

  • make;
  • model;
  • orientation;
  • latitude;
  • longitude;
  • artist;
  • software;
  • dateTime;
  • digitalZoomRatio;
  • flash;
  • uid.
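A sketch of reducing an already-parsed tag dictionary to the fields listed above (the platform performs the actual tag parsing per JEITA CP-3451; this hypothetical helper only filters a result):

```python
# EXIF fields returned by the platform, as listed above.
EXIF_FIELDS = (
    "make", "model", "orientation", "latitude", "longitude",
    "artist", "software", "dateTime", "digitalZoomRatio", "flash", "uid",
)

def select_exif(tags):
    """Keep only the documented EXIF fields from a parsed tag dictionary."""
    return {name: value for name, value in tags.items() if name in EXIF_FIELDS}

print(select_exif({"make": "Canon", "model": "EOS", "ISOSpeedRatings": 400}))
# {'make': 'Canon', 'model': 'EOS'}
```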

Resources where the estimation is performed:

  • "/detector"

    Estimation name - "extract_exif".

  • "/handlers"

    Estimation name - "policies" > "detect_policy" > "extract_exif".

  • "/verifiers"

    Estimation name - "policies" > "detect_policy" > "extract_exif".

  • "/sdk"

    Estimation name - "use_exif_info".