Component description#

RSEngine component#

RSEngine coordinates the interaction between the following libraries within the system:

  • VisionLabs LUNA SDK
  • RealSense2 SDK
  • VLS LUNA CAMERA 3D SDK
  • VLS LUNA CAMERA 2D SDK

This integration ensures efficient and reliable communication between these libraries, enabling advanced functionalities for image and video processing.

VisionLabs LUNA SDK component#

VisionLabs LUNA SDK is a comprehensive software development kit that includes specialized libraries and neural networks designed for advanced image analysis. Its key capabilities include:

  • Face detection: Identifying faces in images and locating key facial landmarks.
  • Best shot selection: Automatically selecting the highest-quality frame from a video stream for further processing.
  • Image attribute estimation: Analyzing image attributes for further Liveness estimations.
  • Liveness estimation: Evaluating faces in images using Liveness algorithms to prevent spoofing attacks.

Note: All estimations described below are performed to ensure that the image meets Liveness requirements. These estimations are internal, and their results are not transmitted externally. Results are only displayed in cases of errors, such as when an image or face attribute is unsuitable for Liveness estimation. For more details on error codes and their descriptions, see Appendix 2: Status Codes and Error Descriptions.

RealSense2 SDK component#

RealSense2 SDK is a component that provides the following functionalities:

  • Image acquisition: Receives incoming images from Intel RealSense cameras.
  • Parameter configuration: Allows you to configure detection parameters to suit specific requirements.
  • Camera control: Enables turning the camera on or off and adjusting various settings, such as:
      • Laser backlight brightness
      • Auto exposure
      • Brightness levels
  • Automatic connection management:
      • Automatically updates the connection with the camera.
      • If the connection is lost, the system attempts to reconnect to the camera.
      • If reconnection fails, a soft reset of the connection cable is performed.
      • If all recovery operations are unsuccessful, the issue is logged in the camera status report within the system logs.

VLS LUNA CAMERA 3D SDK component#

VLS LUNA CAMERA 3D SDK is a component that provides the following functionalities:

  • Image acquisition: Receives incoming images from VLS LUNA CAMERA 3D or VLS LUNA CAMERA 3D Embedded devices.
  • Parameter configuration: Allows you to configure detection parameters to meet specific requirements.
  • Camera control: Provides the ability to turn the camera on or off and adjust various settings, such as:
      • Laser illumination brightness
      • Auto exposure
      • Brightness levels

VLS LUNA CAMERA 2D SDK component#

VLS LUNA CAMERA 2D SDK is a component that provides the following functionalities:

  • Image acquisition: Receives incoming images from VLS LUNA CAMERA 2D infrared cameras.
  • Parameter configuration: Allows you to configure detection parameters to meet specific requirements.
  • Camera control: Provides the ability to turn the camera on or off as needed.
  • Frame rotation adjustment: Enables changing the rotation angle of the camera's video frame.

Camera functions#

Face detection#

The detector employs advanced face detection algorithms to address the following tasks:

  • Face detection: Identifying faces within an image.
  • Key point localization: Locating five key facial landmarks: two for the eyes, one for the tip of the nose, and two for the corners of the mouth.
  • Detection quality estimation: Evaluating the probability that the detected object is indeed a face, ensuring high accuracy and reliability.

Image quality estimation#

The quality of an image is evaluated based on the following parameters:

  • Blurriness
  • Lightness or overexposure
  • Darkness or underexposure

Mouth estimation#

The mouth estimation evaluates the following parameters:

  • Open: Indicates whether the mouth is open.
  • Occluded: Detects if the mouth is blocked or covered by an external object.
  • Smiling: Identifies the presence of a smile.

Eye state estimation#

The eye state estimation evaluates the following parameters:

  • Closed: Eyes are shut.
  • Open: Eyes are open.
  • Occluded: Eyes are covered, for example, by sunglasses or other objects.

Head pose estimation#

The head pose estimation determines a person's head rotation angles in 3D space, specifically along the pitch, yaw, and roll axes:

  • Pitch: Measures the vertical tilt of the head, that is, rotation about the X-axis.
  • Yaw: Measures the horizontal rotation of the head, that is, rotation about the Y-axis.
  • Roll: Measures the lateral tilt of the head, that is, rotation about the Z-axis.
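For illustration, a check of the estimated angles against acceptance limits might look like the following Python sketch. The threshold values and helper name are assumptions for the example, not values taken from this document:

```python
# Sketch: checking head pose angles against acceptance limits.
# The limits below are illustrative assumptions.

PITCH_LIMIT = 20.0  # max |vertical tilt|, degrees (assumed)
YAW_LIMIT = 20.0    # max |horizontal rotation|, degrees (assumed)
ROLL_LIMIT = 20.0   # max |lateral tilt|, degrees (assumed)

def head_pose_acceptable(pitch: float, yaw: float, roll: float) -> bool:
    """Return True if all three rotation angles are within limits."""
    return (abs(pitch) <= PITCH_LIMIT
            and abs(yaw) <= YAW_LIMIT
            and abs(roll) <= ROLL_LIMIT)

print(head_pose_acceptable(5.0, -10.0, 2.0))   # frontal enough: True
print(head_pose_acceptable(35.0, 0.0, 0.0))    # pitched too far: False
```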

Depth Liveness estimation#

The "vitality" of a person in the image is verified using a depth map.

The process involves analyzing a 16-bit depth matrix. It contains detailed information about the distances of scene objects (such as faces) relative to the camera's viewpoint. This analysis helps determine whether the subject is a live person or a spoof, such as a photograph or mask.
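The input of this analysis can be illustrated with a short Python sketch that unpacks a raw frame into a 16-bit depth matrix. The frame dimensions, little-endian byte order, and millimeter unit are assumptions for illustration; real frames come from the camera SDK:

```python
# Sketch: interpreting a raw depth frame as a 16-bit depth matrix.
import struct

def decode_depth_frame(raw: bytes, width: int, height: int):
    """Unpack little-endian uint16 depth values into a row-major matrix."""
    assert len(raw) == width * height * 2, "unexpected frame size"
    values = struct.unpack("<%dH" % (width * height), raw)
    return [list(values[r * width:(r + 1) * width]) for r in range(height)]

# A tiny 2x2 frame: distances in millimeters (assumed unit).
raw = struct.pack("<4H", 500, 650, 700, 65535)
matrix = decode_depth_frame(raw, width=2, height=2)
print(matrix)  # [[500, 650], [700, 65535]]
```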

IR Liveness estimation#

The "vitality" of a person in the image is verified through infrared (IR) image analysis. This process ensures that the subject is a live human and not a spoof, such as a printed photo or mask.

Note: The camera must be equipped with infrared illumination to perform this check effectively.

LivenessOneShotRGB estimation#

The LivenessOneShotRGB estimation determines whether a person's face is real or fake by detecting and mitigating various types of spoofing attacks. These include:

  • Printed photo attack: One or more printed photos of another person are used.
  • Video replay attack: A pre-recorded video of another person is displayed to trick the camera.
  • Printed mask attack: An imposter cuts out a face from a photo and uses it to cover their own face.
  • 3D mask attack: An imposter wears a 3D mask designed to resemble the face of another person.

Estimation configuration#

Regardless of the platform, you can configure the LivenessOneShotRGB settings via the faceengine.conf file in the LivenessOneShotRGBEstimator::Settings section:

| Parameter | Description |
|---|---|
| version | Specifies the algorithm version (10 or 11). |
| deny2XLmode | Enables or disables the 2XL or XL mode. |

On Linux, in the rsengine.conf file, you can configure the following parameters:

| Parameter | Description |
|---|---|
| liveness-depth-osl | Enables or disables LivenessOneShotRGB estimation. The default value is 1 (enabled). |
| liveness-depth-osl-threshold | Specifies the LivenessOneShotRGB threshold value. The default value is 0.7. |

On Windows, in the registry under HKLM\Software\VisionLabs\RSEServer, you can configure the following parameters:

| Parameter | Description |
|---|---|
| LivenessDepthOSL | Enables or disables LivenessOneShotRGB estimation. The default value is 1 (enabled). |
| LivenessDepthOSLThreshold | Specifies the LivenessOneShotRGB threshold value. The default value is 0.7. |
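As an illustration, a hypothetical rsengine.conf fragment with the two Linux parameters might look like this. Only the parameter names and default values come from this section; the key = value file syntax is an assumption:

```ini
; Hypothetical rsengine.conf fragment (Linux); syntax is assumed.
liveness-depth-osl = 1
liveness-depth-osl-threshold = 0.7
```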

Camera monitoring component#

Camera monitoring checks the status of the camera by querying the following parameters:

  • Firmware data
  • Infrared camera operating status (on/off)
  • RGB camera operating status (on/off)
  • Camera serial number
  • Operating status of the entire camera (on/off)
  • Camera temperature
  • Date of the last update

An example of the contents of the registry in the monitoring section is shown below (Figure 4).

Figure 4. Example of registry contents in the monitoring section

RSE Server component#

The RSE Server is a WebSocket server that processes commands from external systems.

The RSE Server accepts requests and sends responses via WebSocket.

Request Format:

  • Operation request code (1 byte)
  • Additional payload (MessagePack or string)

Example of a request:

  • GET ws://127.0.0.1:4444/ establishes the WebSocket connection.
  • A message with content 0 starts the session.

Response Format:

  • Operation response code (1 byte)
  • Additional payload (MessagePack or string)

Only one request can be processed at a time.
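The framing described above can be sketched in a few lines of Python. This is an illustrative sketch that assumes the operation code is sent as a single raw byte; the helper names are invented for the example, and the codes 0 and 50 come from the request and response tables in this section:

```python
# Sketch: "1-byte operation code + optional payload" framing.

RSE_START_CAPTURE = 0  # request code from Table 5
RSE_STOP = 1           # request code from Table 5

def build_request(code: int, payload: bytes = b"") -> bytes:
    """First byte is the operation code; the rest is the payload."""
    return bytes([code]) + payload

def parse_response(frame: bytes):
    """Split a response frame into its code and payload."""
    return frame[0], frame[1:]

start = build_request(RSE_START_CAPTURE)
print(start)  # b'\x00', the session-start message

code, payload = parse_response(b"\x32")  # 0x32 = 50 = RSE_STOP_OK
print(code)  # 50
```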

Depending on the type of integration required (selected at the discretion of the external system developer), you can configure RSE Server in the following ways:

  • To make RSE Server wait for requests from the external system (presented in Table 5) before connecting to the camera, set the cs_communication = msg-pack parameter.
  • To make RSE Server start receiving the video stream and detecting faces as soon as the WebSocket connection is established, set the cs_communication = json parameter.

Table 5. Description of requests to RSE Server

| Request name | Request code | Description | Payload | Possible responses |
|---|---|---|---|---|
| RSE_START_CAPTURE | 0 | Starts receiving the video stream and detecting faces | No | RSE_CAPTURE_OK (54), RSE_CAPTURE_META (55) |
| RSE_STOP | 1 | Stops all running processes | No | RSE_STOP_OK (50) |

Depending on the type of integration selected, the server response can arrive in one of two formats:

  • if the external system developer has set cs_communication = msg-pack, each response arrives in MessagePack format and contains the messageType field with the response code plus the additional data fields (payloads) described in Table 6;
  • if the external system developer has set cs_communication = json, each response arrives in JSON format and falls into one of the message types described in Table 7.

Table 6. Responses to RSE Server requests with MessagePack response format

| Response name | Code | Description | Payload |
|---|---|---|---|
| RSE_CAPTURE_OK | 54 | Captured set of video frames | rgbFrame: RGB frame as a uint8 array; rgbFrameWidth: RGB frame width in pixels (int); rgbFrameHeight: RGB frame height in pixels (int); irFrame: IR frame as a uint8 array; depthFrame: depth frame as a uint8 array |
| RSE_CAPTURE_META | 55 | Metadata of detected persons | gotBestshot (bool): true if a bestshot was received, false otherwise; failureReason (int): status or error code of the Liveness checks (see "Appendix 2. Status codes and error descriptions"); bestshot: RGB frame as a uint8 array; if gotBestshot=true, it contains the frame that passed all checks, otherwise the field is empty |
| RSE_STOP_OK | 50 | All processing has stopped; RSE Server is ready for new requests | No payload |
| RSE_UNKNOWN | 51 | The request was not recognized | No payload |
| RSE_INTERNAL_ERROR | 52 | An error occurred while processing the request | No payload |
| RSE_BUSY | 53 | The request was denied because the server is busy | No payload |

Table 7. Responses to RSE Server requests with JSON response format

| Message type | Description | Payload |
|---|---|---|
| visual | Used for broadcasting the video stream to the user | msg_type: type of the returned message (visual); img_b64: camera frame in base64 format; metadata: parameters of the returned image, containing frame_size (h: image height in pixels; w: image width in pixels), detections (h: height of the detected face frame; w: width of the detected face frame; x: x-coordinate of the upper-left corner of the detected face frame; y: y-coordinate of the upper-left corner of the detected face frame), progress: progress of the Liveness checks for the detected person (percentage), and track_id: track identifier |
| bestshot | Returned when a person is successfully found. This frame can be used for subsequent processing (for example, in an external face recognition system) | msg_type: type of the returned message (bestshot); img_b64: face from the camera frame in base64 format; metadata: same structure as for the visual message (frame_size, detections, progress, track_id) |
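A JSON message of the shape described in Table 7 can be parsed as in the following Python sketch. The sample message is constructed for illustration, not captured from a server:

```python
# Sketch: parsing a JSON "visual" message and extracting the face box.
import json

sample = json.dumps({
    "msg_type": "visual",
    "img_b64": "<base64 frame omitted>",
    "metadata": {
        "frame_size": {"h": 720, "w": 1280},
        "detections": {"h": 180, "w": 150, "x": 400, "y": 120},
        "progress": 60,
        "track_id": 7,
    },
})

msg = json.loads(sample)
if msg["msg_type"] == "visual":
    det = msg["metadata"]["detections"]
    box = (det["x"], det["y"], det["w"], det["h"])  # upper-left corner + size
    print(box)  # (400, 120, 150, 180)
```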

WebSocket Client component#

WebSocket Client is an external JavaScript library for interacting with RSE Server via WebSocket. When the server returns responses in MessagePack format, the client uses the compact binary serialization format MessagePack to encode and decode messages.
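To make the MessagePack framing concrete, the following Python sketch hand-rolls encoding and decoding for a tiny subset of the format (fixmap, fixstr, positive fixint), just enough to carry a messageType field; a real client would use a MessagePack library rather than this illustration:

```python
# Sketch: a minimal MessagePack subset, for illustration only.

def encode_small_map(d: dict) -> bytes:
    """Encode a map of short string keys to small non-negative ints."""
    assert len(d) < 16
    out = bytes([0x80 | len(d)])  # fixmap header
    for key, value in d.items():
        kb = key.encode()
        assert len(kb) < 32 and 0 <= value < 128
        out += bytes([0xA0 | len(kb)]) + kb  # fixstr header + bytes
        out += bytes([value])                # positive fixint
    return out

def decode_small_map(data: bytes) -> dict:
    """Decode the same subset back into a dict."""
    assert data[0] & 0xF0 == 0x80, "expected a fixmap"
    count, pos, result = data[0] & 0x0F, 1, {}
    for _ in range(count):
        klen = data[pos] & 0x1F  # fixstr length
        key = data[pos + 1:pos + 1 + klen].decode()
        pos += 1 + klen
        result[key] = data[pos]  # positive fixint value
        pos += 1
    return result

encoded = encode_small_map({"messageType": 50})  # 50 = RSE_STOP_OK
print(decode_small_map(encoded))  # {'messageType': 50}
```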