
Liveness facility#

Overview#

The liveness detection facility is responsible for determining whether a living person is present in an image sequence. By image sequence we mean a video stream from a camera or a local video file.

The liveness facility contains liveness detection algorithms implemented as state machines. These machines change their state each time the update() function is called. Combined with the FaceEngine descriptor processing facility, the liveness facility becomes a powerful tool for user authentication.

A liveness detection structure can be created via the createLiveness(), createUnifiedLiveness() and createComplexLiveness() methods for the simple, unified and complex liveness types, respectively.
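
A minimal creation sketch is shown below. It assumes that a liveness engine object is first obtained from an initialized FaceEngine instance via a createLivenessEngine() factory and that the scenario is selected with an algorithm-type enumerator; these names, the CLA_DEPTH enumerator and the smart-pointer aliases are assumptions based on typical LivenessEngine usage, so check the Doxygen API documentation delivered with the SDK for the authoritative signatures.

```cpp
#include <fsdk/FaceEngine.h>
#include <lsdk/LivenessEngine.h>   // assumed umbrella header for the liveness facility

// Sketch only: the createLivenessEngine() factory, the algorithm-type
// enumerators and the smart-pointer aliases are assumptions, not verified
// against a specific LivenessEngine release.
void createLivenessObjects(fsdk::IFaceEngine* faceEngine, const char* dataPath) {
    // The liveness engine is assumed to be built on top of an initialized FaceEngine instance.
    auto livenessEngine = lsdk::createLivenessEngine(faceEngine, dataPath);

    // Simple liveness: a single RGB video sequence and one scenario (here: left head turn).
    auto simple = livenessEngine->createLiveness(lsdk::LA_YAW_LEFT);

    // Unified liveness: tracks the main scenario plus additional anti-fraud attributes.
    auto unified = livenessEngine->createUnifiedLiveness(lsdk::LA_MOUTH);

    // Complex liveness: requires extra data besides RGB frames (currently a 16-bit depth map).
    auto depth = livenessEngine->createComplexLiveness(lsdk::CLA_DEPTH);  // CLA_DEPTH is a hypothetical enumerator

    (void)simple; (void)unified; (void)depth;
}
```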

Liveness facility architecture#

Liveness types are implemented using an inheritance-based architecture.

Liveness architecture

Coordinate system#

LivenessEngine uses a three-dimensional coordinate system. The center of the coordinate system is shown in the image below.

Coordinate system midpoint

Actions required in liveness scenarios are performed with respect to this coordinate system (for example, a "left turn" is a counterclockwise head rotation around the vertical axis). Graphical illustrations in the next chapters provide a visual representation of each scenario.

Liveness types#

Implemented liveness classes are divided into Simple and Complex types.

Simple liveness#

These liveness types require a single video sequence for operation. Frames must be in R8G8B8 format. Refer to the FaceEngine Handbook for additional information about the fsdk::Image structure. All simple liveness tests share a common interface represented by the ILiveness structure.
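
As a rough illustration of that common interface, the loop below feeds R8G8B8 frames into update() until the state machine reports a final result. The grabNextFrame() helper is hypothetical, and the exact result type returned by update() (assumed here to expose isOk() and a boolean value) is an assumption; see include/lsdk/ILiveness.h for the actual declaration.

```cpp
#include <fsdk/FaceEngine.h>
#include <lsdk/LivenessEngine.h>

// Hypothetical helper: fills `dst` with the next R8G8B8 camera frame,
// returns false when the stream ends.
bool grabNextFrame(fsdk::Image& dst);

// Sketch under assumptions: lsdk::ILivenessPtr is an assumed smart-pointer alias,
// and update() is assumed to return a result carrying LSDKError plus a bool verdict.
bool runSimpleLiveness(lsdk::ILivenessPtr liveness) {
    fsdk::Image frame;
    while (grabNextFrame(frame)) {
        auto result = liveness->update(frame);  // advances the internal state machine by one frame
        if (result.isOk())
            return result.getValue();           // true: the user performed the requested action
        // NotReady / PreconditionFailed: keep feeding frames (see the "Error Codes" section)
    }
    return false;                               // stream ended before the test completed
}
```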

Basic liveness#

Each liveness type is inherited from the basic liveness class, which implements a generic execution cycle and performs common tasks such as:

  • basic initialization;
  • face detection;
  • additional data extraction / calculation;
  • face tracking analysis;
  • using detection rectangles;
  • using landmark points;
  • state change.

Each liveness test traces and analyzes the attribute estimated primarily for that test and produces a result. The result is positive if the user correctly alters the attribute; otherwise, the result is negative.

Angle liveness#

Angle liveness additionally performs head pose estimation. Refer to the FaceEngine Handbook (chapter "Parameter estimation facility", section "Head pose estimation") for more information on angle estimation.

Angle liveness types are the following:

  1. Pitch angle

a. The nod scenario requires a smooth head tilt in the positive direction until the required threshold is exceeded.

Head tilt in a positive direction

b. The head raise scenario requires a smooth head tilt in the negative direction until the required threshold is exceeded.

Head tilt in a negative direction

  2. Yaw angle

c. The left turn scenario requires a smooth head rotation in the positive direction until the required threshold is exceeded.

Smooth head rotation in a positive direction

d. The right turn scenario requires a smooth head rotation in the negative direction until the required threshold is exceeded.

Smooth head rotation in a negative direction
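
If the angle scenarios map onto algorithm-type enumerators the way the PreconditionFailed table at the end of this page suggests (LA_PITCH_DOWN, LA_PITCH_UP, LA_YAW_LEFT, LA_YAW_RIGHT), selecting one of them could look like the sketch below; the factory signature and pointer aliases are assumptions.

```cpp
#include <lsdk/LivenessEngine.h>

// Sketch only: the enumerator names are taken from the PreconditionFailed table
// in the "Error Codes" section; the factory signature and aliases are assumed.
lsdk::ILivenessPtr makeAngleLiveness(lsdk::ILivenessEnginePtr engine,
                                     bool yawAxis, bool positiveDirection) {
    if (yawAxis)   // yaw: positive direction is a left turn, negative is a right turn
        return engine->createLiveness(positiveDirection ? lsdk::LA_YAW_LEFT
                                                        : lsdk::LA_YAW_RIGHT);
    // pitch: positive direction is a nod (down), negative is a head raise (up)
    return engine->createLiveness(positiveDirection ? lsdk::LA_PITCH_DOWN
                                                    : lsdk::LA_PITCH_UP);
}
```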

Mouth liveness#

Mouth liveness performs mouth landmark analysis. In this scenario, the distance between mouth landmarks must increase until the required threshold is exceeded (i.e., the user opens their mouth).

Mouth liveness

Refer to the FaceEngine Handbook for more information on face alignment.

Eyes liveness#

Eyes liveness performs eye state estimation and analysis. In this scenario the user should blink, i.e., both eyes are open, then closed, then open again simultaneously.

Eyes liveness

Refer to the FaceEngine Handbook (chapter "Parameter estimation facility", section "Eyes estimation") for more information on eyes estimation.

Eyebrows liveness#

Eyebrow liveness performs eyebrow landmark analysis. This scenario requires the distance between the eyebrow and eye landmarks to increase (i.e., the user raises their eyebrows) until the required threshold is exceeded.

Eyebrow liveness

Refer to the FaceEngine Handbook (chapter "Face detection facility", section "Face alignment") for more information on face alignment.

Flow liveness#

Flow liveness performs optical flow analysis. This scenario requires a smooth increase of the face detection rectangle area in order to obtain the number of frames required to calculate the optical flow.

Flow liveness

This liveness type is designed for mobile phones, and the results may be erroneous on other platforms.

Smile liveness#

Smile liveness performs face warp analysis. This scenario requires the user to smile until the probability calculated by the neural network exceeds the threshold specified in the configuration file.


Infrared liveness#

Infrared liveness performs face warp analysis using an image acquired from an infrared camera. This scenario requires the user to simply appear in front of the camera until the probability calculated by the neural network exceeds the threshold specified in the configuration file.

Note: Most infrared cameras provide a 1-channel grayscale image; such an image should be converted to a 3-channel grayscale image before being passed to infrared liveness.
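
A minimal sketch of such a conversion is shown below. It replicates the single grayscale channel into three channels and assumes the usual fsdk::Image accessors (a width/height/format constructor, getData()) and tightly packed rows without padding; if your FaceEngine version provides a ready-made conversion helper, prefer that instead.

```cpp
#include <cstdint>
#include <fsdk/FaceEngine.h>

// Sketch only: replicates a single grayscale channel into three channels so the
// frame matches the R8G8B8 input expected by the liveness facility. Assumes
// tightly packed rows (no padding) and the listed fsdk::Image accessors.
fsdk::Image toThreeChannelGray(const fsdk::Image& ir) {
    const int w = ir.getWidth();
    const int h = ir.getHeight();
    fsdk::Image out(w, h, fsdk::Format::R8G8B8);
    const uint8_t* src = static_cast<const uint8_t*>(ir.getData());
    uint8_t* dst = static_cast<uint8_t*>(out.getData());
    for (int i = 0; i < w * h; ++i)
        dst[3 * i + 0] = dst[3 * i + 1] = dst[3 * i + 2] = src[i];  // R = G = B = gray value
    return out;
}
```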

Unified liveness#

Unified liveness combines the algorithms of the previous types, with the exception of the flow and blink types. Apart from tracking the main entity, which is specified at creation time, it performs additional calculations and analysis in order to detect fraud attempts.

Calculated and tracked entities:

  • angles;
  • mouth landmarks distance;
  • eyebrow landmarks distance;
  • eye states (blinks);
  • smile probability.

The rigidity of fraud tracking is set by configuration parameters. The user can adjust fraud checking by enabling or disabling additional verifications with the consider method.

The main action scenario is taken from the corresponding simple liveness type at creation time; however, the user should not perform any actions except the main one, because extra actions are considered a fraud attempt.
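
Assuming the consider method takes an identifier of the tracked entity and an on/off flag, toggling the additional verifications could look like the sketch below; both the lsdk::Check enumeration and the two-argument signature are assumptions, since this document only states that such a method exists.

```cpp
#include <lsdk/LivenessEngine.h>

// Sketch only: how the `consider` argument is expressed (an enumerator per
// tracked entity plus a flag) is an assumption; consult the Doxygen
// documentation delivered with LivenessEngine for the real declaration.
void configureUnifiedChecks(lsdk::ILivenessPtr unified) {
    unified->consider(lsdk::Check::Smile, false);  // hypothetical: skip the smile-probability check
    unified->consider(lsdk::Check::Eyes, true);    // hypothetical: keep blink tracking enabled
}
```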

Complex liveness#

These liveness types require additional, non-standard data for analysis. Such data cannot be obtained with a common RGB camera, so complementary devices are required for operation.

Depth liveness#

Currently, depth liveness is the only supported complex type. It requires a 16-bit depth matrix that contains the distance (in millimeters) of scene surfaces from the viewpoint. For correct operation, the face should be placed at a distance of 0.5 to 4.5 meters.

This liveness type does not require any actions because it analyzes the face region of interest in the depth map using neural networks. For additional information, refer to the Doxygen API documentation that is delivered with LivenessEngine.

The RGB image and the 16-bit depth map must be aligned.
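
A rough sketch of feeding aligned frame pairs into a complex (depth) liveness object is shown below. The two-image update() overload, the ILivenessComplexPtr alias and the grabAlignedPair() helper are assumptions; verify the real interface in include/lsdk/ILiveness.h.

```cpp
#include <fsdk/FaceEngine.h>
#include <lsdk/LivenessEngine.h>

// Hypothetical helper: returns a pixel-aligned RGB frame and 16-bit depth map
// (distances in millimeters) captured at the same moment; false when the stream ends.
bool grabAlignedPair(fsdk::Image& rgb, fsdk::Image& depth);

// Sketch under assumptions: lsdk::ILivenessComplexPtr and the two-image update()
// overload are assumed, not verified against a specific release.
bool runDepthLiveness(lsdk::ILivenessComplexPtr liveness) {
    fsdk::Image rgb, depth;
    while (grabAlignedPair(rgb, depth)) {
        auto result = liveness->update(rgb, depth);  // both images must describe the same scene pixel-to-pixel
        if (result.isOk())
            return result.getValue();                // true: a live face was confirmed in the depth ROI
    }
    return false;
}
```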

Error Codes#

The LSDKError structure contains the possible return codes.

| LSDKError code | Description |
|---|---|
| Ok | Ok. |
| NotInitialized | Liveness is not initialized. |
| NotReady | Liveness is not ready; more updates are required. |
| PreconditionFailed | The starting condition is not met; liveness has not started yet. |
| Internal | Internal error. |
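
In application code, the distinction between "keep feeding frames" and "real failure" could be handled as in the sketch below; the scoped-enum spelling lsdk::LSDKError::Ok and the way the code and verdict are extracted from the update() result are assumptions.

```cpp
#include <lsdk/LivenessEngine.h>

enum class Verdict { Alive, Spoof, KeepFeedingFrames, Failed };

// Sketch only: assumes LSDKError is exposed as listed above and that the caller
// has already extracted the error code and the boolean verdict from update().
Verdict interpret(lsdk::LSDKError code, bool success) {
    switch (code) {
    case lsdk::LSDKError::Ok:
        return success ? Verdict::Alive : Verdict::Spoof;
    case lsdk::LSDKError::NotReady:            // the state machine needs more update() calls
    case lsdk::LSDKError::PreconditionFailed:  // starting condition not met yet, keep trying
        return Verdict::KeepFeedingFrames;
    case lsdk::LSDKError::NotInitialized:
    case lsdk::LSDKError::Internal:
    default:
        return Verdict::Failed;
    }
}
```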

The most common error state is PreconditionFailed. Its interpretation differs depending on the liveness type. Possible options are listed below (details in include/lsdk/ILiveness.h).

| Liveness type | PreconditionFailed interpretation |
|---|---|
| LA_PITCH_DOWN | Yaw, pitch or roll absolute value is larger than the expected value. |
| LA_PITCH_UP | Yaw, pitch or roll absolute value is larger than the expected value. |
| LA_YAW_LEFT | Yaw, pitch or roll absolute value is larger than the expected value. |
| LA_YAW_RIGHT | Yaw, pitch or roll absolute value is larger than the expected value. |
| LA_SMILE | Smiling score is less than 0.0. |
| LA_MOUTH | The currentDistance value is less than the expected value. |
| LA_EYEBROW | Yaw, pitch or roll absolute value is larger than the expected value. |
| LA_EYE | Eyes state is not "open". |
| LA_FLOW | The detected face rectangle's sides ratio is greater than the expected value. |
| LA_INFRARED | IR estimation score is less than 0.0. |

The PreconditionFailed result may also be related to the detection rectangle. Before liveness estimation starts, the detection rectangle is additionally verified by checkBorder. The checkBorder method checks whether the detection rectangle is inside the allowed area. If it is not, a log message like the following appears:

[30.08.2021 13:40:42] [Debug]  [BasicLiveness] checkBorder failed! Detection rectangle is not inside allowed area: x,y{4, 4} w,h{479, 463}

Note: Details about the values for each liveness type mentioned above can be found in data/livenessengine.conf. Note that the message above is logged at verbosity level 4 (Debug).