
Streams management configuration#

Parameters for stream management are set in the LUNA Streams service. The service enables you to create and store streams in the LUNA Streams database.

Important: LUNA Streams has two API versions. This documentation describes the second version of the API. See the description of the first version of the API in the documentation of FaceStream v.5.1.49 and below.

Important: Streams management settings are not stored in the LUNA Configurator service and can only be set using HTTP requests to the LUNA Streams service. The LUNA Streams settings set in LUNA Configurator are described in the section "LUNA Streams settings".

A basic description of the parameters is given in the LUNA Streams OpenAPI specification. This section provides extended descriptions for some parameters.

All stream management settings can be divided into the categories described below.

General stream settings#

General settings enable you to define the key parameters necessary for authorization, control, and characterization of the stream.

General stream settings:

  • "account_id" — Sets the value of the required field "account_id", which is sent in the request header in LUNA PLATFORM 5 to the LUNA API service. The parameter is used to bind the received data to a specific user.
  • "name" — Stream name. Serves to identify the source of sent frames.
  • "description" — Custom description of the stream.
  • "location" — Sets information about the location of the video source (city, region, district, etc.).
  • "autorestart" — Setting for automatically restarting the stream. See "Streams automatic restart" for more information.
  • "status" — Stream status at the start of processing.
  • "group_name" and "group_id" - Parameters for linking a stream to group.

The "data" section is also available, which specifies the main settings for processing the stream. See "Stream processing settings" for more details.

Stream processing settings#

To begin processing, FaceStream must receive the stream type, its address, and some additional information. The corresponding parameters are set in the "data" section.

Basic settings in the "data" section:

  • "type" — Stream transmission type:
    • TCP network protocol
    • UDP network protocol
    • set of images
    • video file
  • "reference" — Source of the stream (link, path to a video file or set of images, etc.)
  • "roi" — Region of interest in which the detection and tracking of a face in the frame occurs.
  • "ffmpeg_threads_number" — Number of threads for video decoding using FFMPEG.
  • "real_time_mode_fps" — Number of FPS with which the video file will be processed.
  • "frame_processing_mode" — Parameter that determines whether the full or scaled frame will be processed.
  • "rotation" — Angle of rotation of the image from the source. Used if the incoming stream is rotated, for example if the camera is installed on the ceiling. The rotation is performed clockwise.
  • "preferred_program_stream_frame_width" — Enables you to automatically select the optimal channel from several in the stream.
  • "endless" — Control of stream restart when receiving a network error.
  • "mask" — Mask of file names in the directory with images.

Analytics settings are also specified in the "data" section. See "Analytics settings" for more details.
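
As an orientation, a hypothetical "data" section for an RTSP stream received over TCP might look like the sketch below. Only a subset of parameters is shown, the values are illustrative, and the contents of "analytics" are described in "Analytics settings".

"data": {
  "type": "tcp",
  "reference": "rtsp://some_stream_address",
  "roi": { "x": 0, "y": 0, "width": 0, "height": 0, "mode": "abs" },
  "rotation": 0,
  "endless": true,
  "analytics": { ... }
}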

type#

Stream transfer type. After selecting the stream transfer type, you must specify the path to the source/images/USB device, etc. in the "reference" setting.

FaceStream can use one of the following stream transfer types:

  • tcp — Transport layer network protocol to receive video data.
  • udp — Transport layer network protocol to receive video data.
  • images — Set of frames as separate image files.
  • videofile — Video file.

Only transport layer protocols (TCP or UDP) are specified in FaceStream, so it is necessary to understand which transport layer protocol the application layer protocol (HTTP, RTSP, HLS, etc.) is based on. See "Protocols" for details.

The TCP protocol implements an error control mechanism that minimizes the loss of information and the skipping of reference frames at the cost of increased network delay. Reference (key) frames are the basis of various compression algorithms used in video codecs (for example, H.264). Only reference frames contain enough information to restore (decode) the image completely, while intermediate frames contain only the differences between adjacent reference frames.

When broadcasting over a network, there is a risk of packet loss due to imperfect communication channels. If a packet containing reference frame data is lost, the stream fragment cannot be decoded correctly. As a result, distinctive artifacts appear that are easily visible. These artifacts prevent the face detector from operating in normal mode.

The UDP protocol does not implement an error control mechanism, so the stream is not protected from damage. The use of this protocol is recommended only if there is a high-quality network infrastructure.

With a large number of streams (10 or more), it is strongly recommended to use the UDP protocol. When using the TCP protocol, there may be problems with reading streams.

FaceStream processes data of the "images" and "videofile" types only once. After processing all images or the video file, a message indicating the completion of processing is displayed in the FaceStream logs. If the set of images or the video file has been changed, you need to restart stream processing, after which FaceStream will process all the images or the video file once again.

reference#

Full path to the source (for "tcp"/"udp" type):

"reference": "rtsp://some_stream_address"

USB device path (for "tcp"/"udp" type):

"reference": "/dev/video0"

To use a USB device, specify the --device flag with the address of the USB device when launching the FaceStream Docker container. See the "Launching keys" section of the FaceStream installation manual.

Full path to the video file (for "videofile" type):

"reference": "/example/path/to/video/video.mp4"

Full path to the directory with the images (for "images" type):

"reference": "/example/path/to/images/"

To use video files and images, you should first move them into the Docker container.

roi#

This parameter is used only for working with faces.

ROI specifies the region of interest in which the face detection and tracking are performed.

The specified rectangular area is cut out from the frame and FaceStream performs further processing using this image.

Correct use of the "roi" parameter significantly improves the performance of FaceStream.

ROI on the source frame is specified by the "x", "y", "width", "height" and "mode" parameters, where:

  • "x", "y" – Coordinates of the upper left point of the ROI area of interest.
  • "width" and "height" – Width and height of the processed area of the frame.
  • "mode" – Mode for specifying "x", "y", "width" and "height". Two modes are available:

    • "abs" – Parameters "x", "y", "width" and "height" are set in pixels.
    • "percent" – Parameters "x", "y", "width" and "height" are set as percentages of the current frame size.

    If the "mode" field is not specified in the request body, then the value "abs" will be used.

With width and height values of "0", the entire frame is considered the region of interest.

The coordinate system on the image is set similarly to the figure below.

Figure: ROI coordinate system

Below is an example of calculating ROI as a percentage:

Figure: Example of calculating ROI as a percentage
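
For example, a hypothetical "roi" that restricts detection and tracking to the central part of the frame, specified in "percent" mode, might look like this:

"roi": {
  "x": 25,
  "y": 25,
  "width": 50,
  "height": 50,
  "mode": "percent"
}

Here the region starts at 25% of the frame width and height and covers 50% of each, i.e., the central quarter of the frame area.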

frame_processing_mode#

This parameter is used for "tcp", "udp" and "videofile" types only.

This parameter is similar to the "convert_full_frame" parameter, but is set for a specific FaceStream instance.

If the value is set to "full", the frame is immediately converted to RGB image of the required size after decoding. This results in a better image quality and reduces the speed of frames processing.

When set to "scale", the image is scaled according to the settings in the TrackEngine configuration (standard behavior for releases 3.2.4 and earlier).

The default value is "auto". In this case, one of the two modes is selected automatically.

real_time_mode_fps#

This parameter is used for "videofile" type only.

This parameter enables you to set the number of FPS with which the video stream will be processed.

If the video has a high FPS value and FaceStream cannot process the specified number of frames per second, frames are skipped.

Thus, the video file emulates a stream from a real video camera. This can be useful for performance tuning. The video is played at the specified speed, which is convenient for load testing and subsequent analysis.

This parameter is disabled when set to "0".

ffmpeg_threads_number#

The parameter enables you to specify the number of threads for decoding video using FFmpeg.

The number of processor cores involved in decoding process increases according to the number of threads. An increase in the number of threads is recommended when processing high-resolution video (4K or higher).

preferred_program_stream_frame_width#

This parameter is used for the "tcp" and "udp" types only.

This parameter is intended to work with protocols that imply the presence of several channels with different bitrates and resolutions (for example, HLS).

If the stream has several such channels, this parameter enables you to select the channel whose frame width is closest to the specified value.

For example, there are 4 channels whose frame widths are 100, 500, 1000 and 1400. If the parameter "preferred_program_stream_frame_width" is equal to "800", then a channel with a frame width of 1000 will be selected.

If the stream has only one channel, this parameter is ignored. You can set a very large value to deliberately select the channel with the largest frame width.

In the FaceStream logs (tag "ffmpeg"), the selected value is indicated in a message:

Url: [url] selected preferred stream #[res], [width]x[height] preferred width: [preferredProgramStreamFrameWidth]

The default value is 800.
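
For example, to prefer the channel whose frame width is closest to 800 pixels:

"preferred_program_stream_frame_width": 800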

endless#

This parameter enables you to control the restart of the stream when a network error is received (the error is determined by the system as an "eof" (end-of-file) marker).

The parameter is available only for the "udp" and "tcp" source types.

If the "endless" parameter takes the value "true", then in case of receiving "eof" and successful reconnection, the processing of the stream will continue. If all reconnection attempts failed (see the "healthcheck" section), then the stream will take the "failure" status. If the parameter takes the value "false", then the processing of the stream will not continue and the status of the stream will take the "done" status.

When using a video file as a "tcp" or "udp" source, it is recommended to use the value "false". This avoids re-processing an already processed fragment of the video file when "eof" is received. If the value of the "endless" parameter is "true" when using a video file, then after processing is completed, the video file is processed from the beginning.

mask#

This parameter is used for images type only.

A mask of file names in the directory with images. The mask allows FaceStream to understand which files from the specified folder should be used and in what order.

If you set the mask "Img_%02d.jpg", FaceStream will take from the folder the files whose names consist of: prefix (Img_) + two-digit number (%02d) + format (.jpg).

The following images will be taken in turn:

  • Img_00.jpg
  • Img_01.jpg
  • Img_02.jpg
  • Img_03.jpg

Another example of a mask is "Photo-%09d.jpg". The following images will be taken:

  • Photo-000000000.jpg
  • Photo-000000001.jpg
  • Photo-000000002.jpg
  • Photo-000000003.jpg

FaceStream processes files in numerical order and does not skip nonexistent files. If a file is missing from the sequence, FaceStream stops processing files.

The mask "example1_%04d.jpg" in the example below results in processing images whose names consist of the "example1_" prefix and a sequential four-digit frame number (for example: example1_0001.jpg, example1_0002.jpg, etc.).

"mask": "example1_%04d.jpg"

Analytics settings#

Analytics refers to analyzing the content of a video stream to extract key data and characteristics. The analytics settings also enable you to configure data filtering and sending the data to an external system. The corresponding parameters are set in the "analytics" section.

General settings in the "analytics" section:

  • "enabled" — Enable analytics.
  • "mode" — Analytics mode (1 — only faces, 2 — only bodies, 3bodies and faces).
  • "droi" — Source frame selection area.
  • "filtering" — Objects for filtering images and sending the resulting best shots.
  • "sending" — Setting up parameters related to compiling a collection of the best shots, as well as the period during which the frames will be analyzed to select the best shot.
  • "event_handler" — Setting parameters related to integration with an external system for subsequent frame processing.
  • "healthcheck" — Setting parameters responsible for reconnecting to the stream if errors occur during video streaming.

Also in the "analytics" section, settings for the Liveness check and the Primary Track policy are specified. As a rule, these parameters are rarely used in cooperative mode. See sections "Liveness Settings" and "Primary Track Policy Settings" for more information

Best shot selection region#

This parameter is used only for working with faces.

The region of interest within the source frame or ROI is specified using the "droi" parameter. If an ROI is used, face detection is performed in the ROI, but the best shot is selected only in the DROI. For a frame to be considered a best shot, the face detection must lie entirely within the DROI zone; no side of the detection may extend beyond the DROI area even slightly.

DROI is recommended when working with Access Control Systems and when the "use_mask_liveness_filtration" mode is enabled.

For example, it can be used if there are several turnstiles close to each other and their cameras should find faces only in a small area while simultaneously performing the Liveness check. Using DROI enables you to limit the area of best shot selection without losing information about the background.

DROI on the source frame is specified by the "x", "y", "width", "height" and "mode" parameters, where:

  • "x", "y" – Coordinates of the upper left point of the DROI.
  • "width" and "height" – Width and height of the processed area of the frame.
  • "mode" – Mode for specifying "x", "y", "width" and "height". Two modes are available:

    • "abs" – Parameters "x", "y", "width" and "height" are set in pixels.
    • "percent" – Parameters "x", "y", "width" and "height" are set as percentages of the current frame size.

    If the "mode" field is not specified in the request body, then the value "abs" will be used.

When calculating DROI, keep in mind that this region of interest is calculated relative to the original frame, not relative to the ROI.

Figure: DROI coordinate system

When the ROI size is changed and the DROI size remains the default (0, 0, 0, 0), the DROI is not considered. If you change the size of the DROI, it will be considered when choosing the best shot.

Below is an example of calculating DROI as a percentage:

Figure: Example of calculating DROI as a percentage
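
For example, a hypothetical "droi" that restricts best shot selection to a zone in front of a turnstile, specified in "percent" mode relative to the source frame, might look like this:

"droi": {
  "x": 30,
  "y": 20,
  "width": 40,
  "height": 60,
  "mode": "percent"
}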

Frame filtering#

The "filtering" section describes objects for filtering images and sending the resulting best shots.

min_score_face#

This parameter sets the threshold for filtering face detections sent to the server.

For each detection, the AGS (Approximate Garbage Score) and the general detection quality are calculated.

Detections whose AGS value is less than the value specified in the "min_score_face" threshold will not be considered acceptable for further work. Next, the best shots will be selected according to the filtered detections in accordance with the general detection quality.

If the "min_score_face" parameter is set to "0", then the best shot will be determined by the general detections quality.

Default value is 0.5187.

The recommended value was established through research and analysis of detections on various face and body images.

min_score_body#

This parameter sets the threshold for filtering body detections sent to the server.

For each detection, the general detection quality is calculated.

Detections whose overall quality is less than the value specified in the "min_score_body" threshold will not be considered acceptable for further work. Next, the best shots will be selected from the filtered detections.

If the "min_score_body" parameter is set to "0", then the best shots will be determined by the general detections quality.

Default value is 0.5.

The recommended value was established through research and analysis of detections on various face and body images.

detection_yaw_threshold#

This parameter is used only for working with faces.

This parameter sets the maximum head yaw angle relative to the camera.

If the head yaw angle in a frame exceeds this value, the frame is considered not appropriate for further analysis.

To disable this filtering, you must set the value "180".

detection_pitch_threshold#

This parameter is used only for working with faces.

This parameter sets the maximum head pitch angle relative to the camera.

If the head pitch angle in a frame exceeds this value, the frame is considered not appropriate for further analysis.

To disable this filtering, you must set the value "180".

detection_roll_threshold#

This parameter is used only for working with faces.

This parameter sets the maximum head roll angle relative to the camera.

If the head roll angle in a frame exceeds this value, the frame is considered not appropriate for further analysis.

Figure: Head pose

To disable this filtering, you must set the value "180".

yaw_number#

This parameter is used only for working with faces.

This parameter defines the number of frames used for image filtration based on the head yaw angle. This filter removes images where the head yaw angle was most likely estimated incorrectly.

How it works:

The parameter specifies the number of frames to analyze. A special algorithm analyzes the head yaw angles on each of these frames. If on one of them the angle differs significantly from the average value of the angles, that frame will not be considered as a candidate for the best shot.

Example: the parameter value is set to "7", meaning 7 frames will be analyzed. If on six of the frames the rotation angle is in the range of 50-60 degrees, and the angle on the seventh frame is estimated at 0, the angle on the seventh frame is most likely estimated incorrectly: a person cannot turn their head so abruptly in such a short period of time. The seventh frame will not be considered for the best shot.

By default, the parameter is disabled (the value is "1"). The recommended value is "7".

yaw_collection_mode#

This parameter is used only for working with faces.

If this parameter is enabled, the system first collects the number of frames specified in the "yaw_number" parameter and only then analyzes the head yaw angle.

If "yaw_collection_mode" parameter is disabled, the system will analyze the frames sequentially, meaning it analyzes one frame, then two, then three and so on. Maximum number of frames to analyze is set in "yaw_number" parameter.

Parameter is disabled by default.

The purpose of utilizing "yaw_number" and "yaw_collection_mode" parameters is to increase the accuracy of best shot selection from a track.

mouth_occlusion_threshold#

This parameter is used only for working with faces.

This parameter determines how much the mouth can be obscured in the frame.

For example, when the value is "0.5", up to 50% of the mouth can be occluded.

If mouth occlusion of a face in a frame exceeds the value of this threshold, the frame is considered as not appropriate for further analysis.

Filtration is performed when the set value is "0.3" or higher. When the value is lower, filtration is disabled.

min_body_size_threshold#

The parameter sets the minimum body detection size; detections smaller than this value will not be considered for the best shot. The size is calculated as the square root of the product of the body detection height (in pixels) and its width (in pixels).

Example: min_body_size_threshold = sqrt (64*128) = 90.5

If the value is "0", then filtering of body detection by size will not be performed.

Sending to external system#

In the "sending" section, the period during which the frames will be analyzed to select the best shot is determined, and all parameters associated with compiling a collection of the best shots are also defined.

Figure: Best shots

Sending body data settings#

Along with the source frames of bodies, detections with the coordinates of the person’s body can be sent so that the person’s path can be tracked. The settings for sending body data are set in the "body" section.

Two parameters are available:

  • "delete_track_after_sending" — Enable deletion of the best shots and detections after sending data. If the value is "false" (default), then the data will remain in memory.
  • "send_only_full_set" — Enable sending data to an external system only if a certain number of best shots is collected (parameter "body_bestshots_to_send") and only if a certain number of detections with the coordinates of the human body is collected (parameter "minimal_body_track_length_to_send" of the FaceStream settings).

Sending best shots settings#

The settings for sending the best shots are regulated in the "bestshot_settings" section.

The following options are available:

  • "type" — Mode for sending the best shots:
    • Only the best facial shots
    • Only the best body shots
    • Best shots of faces and bodies in one request
    • Best shots of faces and bodies in different requests
  • "face_bestshots_to_send" and "body_bestshots_to_send" - Number of best shots that the user wants to get from a track or from a certain period of time on this track.

    Using this parameter involves creating a collection of the best shots of a track or a time period of a track specified in the "time_period_of_searching" parameter. This collection will be sent to the external system.

    Increasing the value increases the probability of correct object recognition, but affects the network load.

full_frame_settings#

Important: Currently, the face and body detection collaborative mode only sends source frames of faces and bodies.

This parameter specifies the mode for sending source frames:

  • Only source frames of faces
  • Only source frames of bodies
  • Source frames of faces and bodies in one request
  • Source frames of faces and bodies in different requests

Important: Sending source frames must be enabled in the "send_source_frame" parameter in FaceStream settings.

time_period_of_searching#

Interval in the track after the end of which a best shot is sent to the server (the period starts with the first detection, when the person appears in the frame). Lowering this parameter speeds up recognition but decreases precision.

Figure: Sending period

The measurement type is set in the "type" parameter (see below). If the value equals "-1" (by default), analysis is conducted on all frames until the end of track. Once the track is over (person leaves the frame), best shot is sent to an external service.

silent_period#

This parameter is used only for working with faces.

Interval between periods. Once the analysis period is over, the system holds this silent period before starting the next period of frame analysis.

Figure: Silent period

The measurement type is set in the "type" parameter (see below). If the value equals "-1", the system holds the silent period indefinitely.

Figure: Endless waiting period

type#

The parameter specifies the measurement type for the "silent_period" and "time_period_of_searching" parameters. It can take two values: "frames" or "sec".
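
For illustration, a hypothetical "sending" section combining the parameters above might look like the sketch below. The nesting of "body", "bestshot_settings" and "full_frame_settings" follows the descriptions in this section; the mode values in angle brackets are placeholders, and the exact enumerations are given in the OpenAPI specification.

"sending": {
  "time_period_of_searching": -1,
  "silent_period": -1,
  "type": "sec",
  "body": {
    "delete_track_after_sending": false,
    "send_only_full_set": true
  },
  "bestshot_settings": {
    "type": "<best shots sending mode>",
    "face_bestshots_to_send": 1,
    "body_bestshots_to_send": 1
  },
  "full_frame_settings": {
    "type": "<source frames sending mode>"
  }
}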

Integration with external system settings#

Frames sent by FaceStream can be processed by an external system. To do this, you need to specify the parameters described below.

frame_store#

This parameter sets a URL for saving the source frames of faces or bodies in LUNA PLATFORM 5.

As the URL, you can specify either the address to the LUNA Image Store service bucket, or the address to the "/images" resource of the LUNA API service. When specifying the address to the "/images" resource, the source frame will be saved under the "image_id".

The "send_source_frame" parameter should be enabled for sending source frames.

Example of address to LUNA Image Store bucket:

"frame_store": "http://127.0.0.1:5020/1/buckets/<frames>/images"

Here:

  • 127.0.0.1 - IP address where the LUNA Image Store service is deployed.
  • 5020 - Default Image Store service port.
  • 1 - API version of the LUNA Image Store service.
  • <frames> - Name of the LUNA Image Store bucket where the source image of face or body should be saved. The bucket should be created in advance.

An example of the "source-images" bucket creation:

curl -X POST "http://127.0.0.1:5020/1/buckets?bucket=source-images"

Example of address to "/images" resource of LUNA API service:

"frame_store": "http://127.0.0.1:5000/6/images"

Here:

  • 127.0.0.1 - IP address where the LUNA API service is deployed.
  • 5000 - Default port of the LUNA API service.
  • 6 - API version of the LUNA API service.

See the LUNA PLATFORM 5 administrator's manual for more information about buckets and the "/images" resource.

authorization#

In this section, either a token or an "account_id" is set to make requests to the LUNA API service.

The "event_handler" > "authorization" > "account_id" parameter must match the "account_id" parameter specified in the request. If the authorization field is not filled in, then the "account_id" specified when the stream was created will be used.

See the LUNA PLATFORM 5 administrator manual for details on LUNA PLATFORM authorization.
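
For example, a hypothetical "event_handler" section that saves source frames to a LUNA Image Store bucket under a specific account might look like this (other "event_handler" fields are omitted; see the OpenAPI specification for the full schema):

"event_handler": {
  "frame_store": "http://127.0.0.1:5020/1/buckets/<frames>/images",
  "authorization": {
    "account_id": "<account_id used when creating the stream>"
  }
}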

Healthcheck settings#

The section is used only for the "tcp", "udp" and "videofile" types.

In this section, you can set the parameters for reconnecting to the stream when errors occur while the video is streamed.

max_error_count#

The maximum number of errors when playing the stream.

The parameter works in conjunction with the "period" and "retry_delay" parameters. After receiving the first error, the wait specified in the "retry_delay" parameter is performed, and then the connection to the stream is retried. If during the time specified in the "period" parameter, the number of errors greater than or equal to the number specified in "max_error_count" was accumulated, then the processing of the stream will be terminated and its status will change to "failure".

An error occurs, for example, when a frame cannot be retrieved or decoded. Such errors can be caused by network problems or the inaccessibility of the video.

period#

The parameter represents the period during which the number of errors is calculated. The value is set in seconds.

The parameter works in conjunction with the "retry_delay" and "max_error_count" parameters. See the description of working with this parameter in the "max_error_count" section.

retry_delay#

The parameter specifies the period after which the reconnection attempt is performed. The value is set in seconds.

The parameter works in conjunction with the "period" and "max_error_count" parameters. See the description of working with this parameter in the "max_error_count" section.

timeout#

The parameter specifies the timeout in milliseconds for reading the encoded packet.

Using this parameter, it is possible to provide more flexible processing of the video stream, control the speed of reading video packets and prevent data buffer overflow.
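
Combining the four parameters, a hypothetical "healthcheck" section might look like this (all values are illustrative):

"healthcheck": {
  "max_error_count": 10,
  "period": 3600,
  "retry_delay": 5,
  "timeout": 5000
}

With these values, a reconnection attempt is made 5 seconds after each error, and if 10 or more errors accumulate within 3600 seconds, the stream takes the "failure" status.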

Liveness settings#

Liveness is not used for the "images" type.

Liveness is used only for working with faces.

Liveness is used to check whether a person in the frame is real and prevents fraud when printed photos or images on the phone are used to pass the Liveness check.

Liveness settings are set in the "data" > "analytics" > "liveness" section.

It is recommended to use this functionality only after discussing it with the VisionLabs team.

General recommendations for Liveness usage#

Liveness can be used at access control checkpoints only, i.e., in cases where a person does not stay in front of the camera for more than ten seconds.

Liveness is used to minimize the risk of fraud when someone is trying to enter a secured area using a printed photo or a photo on a phone of someone who has the access rights.

Liveness returns a value that defines the degree of the system's certainty that the person in the frame is real. The value is in the range from 0 to 1.

Camera placement requirements#

The following conditions must be met when setting up the Liveness check:

  • Face should remain within a frame. The distance from left and right edges of the frame should be greater than or equal to the width of the face, the distance from the top and bottom edges of the frame should be greater than or equal to the height of the face;

  • The frame should include the chest region;

  • A camera should be located about waist height and should look upwards capturing the body and head;

  • The frame should not include rectangular elements framing the face area from all four sides (such as doorways or windows).

An example of the correct camera location is given in the image below.

Figure: Proper camera placement for Liveness

When the camera is placed properly, FS starts collecting frames and selecting the best shot at a distance of 3-4 meters.

Foreign objects and people who do not pass through the turnstile do not get into the camera view zone.

FS sends the best shot when a person is at a distance of 1 meter from the camera. At this distance, the face reaches the size required for sending.

An example of inappropriate camera placement is given in the image below.

Figure: Inappropriate camera placement for Liveness

If the camera is not configured correctly:

  • The person gets into the frame too late. FS does not have time to get the required number of frames for processing.
  • The person looks down at the camera. This degrades the quality of the frame for subsequent processing.
  • The camera field of view covers the area outside the area of interest. This space may contain people or objects that interfere with the correct operation of the FS.

Recommendations for configuring FS#

The recommended values for the "Liveness" section parameters are given below.

"use_mask_liveness_filtration": true,
"use_flying_faces_liveness_filtration": true,
"liveness_mode": 1,
"number_of_liveness_checks": 10,
"liveness_threshold": 0.8,
"livenesses_weights": [0.0, 0.25, 0.75],
"mask_backgrounds_count": 300

We do not recommend changing these settings.

The "best_shot_min_size" parameter should be set based on the fact that the person is at a distance of 3-4 meters from the turnstile.

The "best_shot_proper_size" parameter should be set based on the fact that the person is at a distance of 1 meter from the turnstile.

To control the selection of the right person, use the "droi" parameter. The rectangle is selected so that people who intend to approach the turnstile appear in it as early as possible. This is relevant for turnstiles located close to each other, where people from neighboring queues can get into the view zone of the cameras.

FAQ Liveness#

Stream processing is slow when using Liveness

When the camera resolution is 1920x1080 or higher, Mask Liveness works slowly.

To solve the problem, manually reduce the camera resolution to 720p. This will not affect the quality of recognition or the work of Liveness, because they work without loss of quality with faces that are approximately 100 pixels in size.

People cannot pass the Liveness check under the default FS settings

Possible causes:

  • The default settings in the Liveness section have been changed.

    Do not change the settings in the Liveness section, except for the "liveness_threshold" setting.

    The value of the "liveness_threshold" parameter can be reduced, but it should not be lower than "0.6".

  • Liveness is not applied to the target case.

    FS Liveness is not intended for authorization processes and cases of a long stay in front of the camera.

  • Unacceptable objects fall into the camera's view zone.

    For example, if there is a screen broadcasting a video in the background, Liveness will not work.

  • The camera is set to the wrong resolution.

    Check the camera resolution. See "Stream processing is slow when using Liveness".

  • There is a delay in the transmission of frames.

    If the camera does not transmit frames in real-time, then the frames may arrive with a delay.

  • The value "best_shot_min_size" is set incorrectly.

    If the "best_shot_min_size" parameter is too high, Liveness does not have time to accumulate the required number of different frames.

The Primary Track Policy is often used together with Liveness. See FaceStream activity diagram when using Liveness and Primary Track Policy.

use_mask_liveness_filtration#

The parameter enables checking the presence of a real person in the frame based on backgrounds.

The check performance depends on the size of the video frames. If the processing speed decreases when the parameter is enabled, reduce the video resolution in the camera settings (e.g., to 1280x720).

use_flying_faces_liveness_filtration#

The parameter enables checking the presence of a real person in the frame based on the facial surrounding.

liveness_mode#

This parameter enables you to specify which frames from a track will undergo the Liveness check. There are three options for selecting frames:

  • 0 — First N frames.
  • 1 — Last N frames before the best shot sending (recommended value).
  • 2 — All frames in a track.

N value is specified in the "number_of_liveness_checks" parameter.

number_of_liveness_checks#

The parameter enables you to specify the number of frames to check for Liveness. The specified value is used by the "liveness_mode" parameter.

It is not recommended to set a value less than 10.

liveness_threshold#

The "liveness_threshold" parameter value is used to define the presence of a real person in a frame. The system confirms that it is a real person in the frame, and not a photo, only if Liveness returned a value higher than the one specified in the parameter.

The recommended value is "0.8". It is not recommended to set a value lower than "0.6".

livenesses_weights#

The parameter determines the contribution of each Liveness check type (mask and flying_faces) to the resulting estimation of the presence of a real person in the frame.

The user must specify two values assigned to the different types of Liveness. The values are specified as decimals in the following order:

  • "use_mask_liveness_filtration"
  • "use_flying_faces_liveness_filtration"

Values are indicated as fractions of one. In the example below, 0.25 gives a 25% weight to "mask_liveness" and 0.75 a 75% weight to "flying_faces_liveness". The ratio is always scaled based on the given numbers, regardless of whether they add up to one or which Liveness methods are enabled.

"livenesses_weights": [0.25, 0.75]

mask_backgrounds_count#

The number of background frames that are used for the corresponding checks.

Do not change this parameter.

Primary Track policy settings#

Primary Track policy is used only for working with faces.

Primary Track policy is not used for the "images" type.

Important: Primary Track policy is currently not supported for face and body detection collaborative mode.

This section is designed to work with Access Control Systems (ACS, turnstiles at the entrances to banks/office buildings) to simplify the control and the introduction of facial recognition technology at the entrance to a protected area.

The Primary Track policy settings are set in the "data" > "analytics" > "primary_track_policy" section.

Liveness is often used together with the Primary Track policy. See FaceStream activity diagram when using Liveness and Primary Track Policy.

use_primary_track_policy#

If the parameter value is "true", the primary track implementation mode is enabled.

Out of all detections, the one with the biggest size is selected, and its track becomes the primary one. Further analysis is conducted on this track. The best shot from this track is then sent to the server.

All other tracks are processed in regular mode. However, the best shot is sent only from the primary track.

As soon as another face reaches a larger size than the face from the primary track, this face track becomes primary and the processing is performed for it.

When using this parameter at an access control checkpoint, only the best shots of the person closest to the turnstile are sent to the server (this is the person whose detection has the biggest size).

best_shot_min_size#

The parameter is used when "use_primary_track_policy" parameter is enabled.

The "best_shot_min_size" parameter sets the minimal height of detection in pixels at which the analysis of frames and best shot definition begins.

best_shot_proper_size#

The parameter is used when "use_primary_track_policy" parameter is enabled.

The "best_shot_proper_size" parameter sets the height of detection in pixels for Primary track policy. When the detection size reaches the specified value, FaceStream determines the best shots before the end of the track, and then sends them to the LP.