
Streams management configuration#

The application supports simultaneous work with several stream sources.

Parameters for stream management are set in the LUNA Streams service. The service enables you to create and store streams in the LUNA Streams database.

Several types of sources are supported:

  • "tcp", "udp" – Real-time video signal sources. These can be both USB cameras and IP cameras (via RTSP protocol).

  • "videofile" – Video files.

  • "images" – Set of frames in the form of separate image files.

Streams management settings are not stored in the LUNA Configurator service and can only be set using HTTP requests to the LUNA Streams service. A detailed description of requests and example requests can be found in the Open API specification of LUNA Streams service. The LUNA Streams settings set in LUNA Configurator are described in the section "LUNA Streams settings".
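
For illustration, a stream with minimal settings might be created with a request like the one below. This is a hedged sketch: the service port, resource path, and exact field set should be verified against the LUNA Streams OpenAPI specification, and the placeholders in angle brackets must be replaced with real values.

curl -X POST http://127.0.0.1:5160/1/streams \
-H "Content-Type: application/json" \
-d '{
    "account_id": "<account_id>",
    "name": "entrance_camera",
    "description": "Camera at the main entrance",
    "data": {
        "type": "udp",
        "reference": "rtsp://some_stream_address"
    }
}'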

account_id#

The parameter specifies the mandatory "account_id" field, which is passed to LUNA PLATFORM 5 service API in the request header.

Account ID is set in the UUID4 format. You can find the requirements for the Account ID in the LUNA PLATFORM 5 documentation.

The parameter is used to bind the received data to a specific user.

name#

Source name. It is used to identify the source of the sent frames.

Recorded in the "source" field of the LP event.

description#

User description of the stream.

This parameter is not recorded to the LP event.

data#

The general parameters required to configure a stream are listed below.

type#

Stream transfer type. After selecting the stream transfer type, you must specify the path to the source/images/USB device, etc. in the "reference" setting.

FaceStream can use one of the following stream transfer types:

  • tcp - Transport layer network protocol to receive video data.
  • udp - Transport layer network protocol to receive video data.
  • images - Set of frames as separate image files.
  • videofile - Video file.

Only transport layer protocols (TCP or UDP) are specified in FaceStream. It is necessary to understand which transport layer protocol the application layer protocol (HTTP, RTSP, HLS, etc.) is based on. See "Protocols" for details.

The TCP protocol implements an error control mechanism that minimizes information loss and skipped reference frames at the cost of increased network delay. Reference (key) frames are the basis of the compression algorithms used in video codecs (for example, H.264). Only reference frames contain enough information to restore (decode) the image completely, while intermediate frames contain only the differences between adjacent reference frames.

When broadcasting over a network, there is a risk of packet loss due to imperfect communication channels. If a packet containing key frame data is lost, the stream fragment cannot be decoded correctly. As a result, easily visible artifacts appear in the image. These artifacts prevent the face detector from operating in normal mode.

The UDP protocol does not implement an error control mechanism, so the stream is not protected from damage. The use of this protocol is recommended only if there is a high-quality network infrastructure.

With a large number of streams (10 or more), it is strongly recommended to use the UDP protocol. When using the TCP protocol, there may be problems with reading streams.

reference#

Full path to the source (for "tcp"/"udp" type):

"reference": "rtsp://some_stream_address"

USB device number (for "tcp"/"udp" type):

"reference": "/dev/video0"

To use USB device, you should specify the --device flag with the address of the USB device when launching the FaceStream Docker container. See the "Launching keys" section of the FaceStream installation manual.

Full path to the video file (for "videofile" type):

"reference": "/example/path/to/video/video.mp4"

Full path to the directory with the images (for "images" type):

"reference": "/example/path/to/images/"

To use video files and images, you should first move them into the FaceStream Docker container.

roi#

This parameter is used only for working with faces.

ROI specifies the region of interest in which the face detection and tracking are performed.

The specified rectangular area is cut out from the frame and FaceStream performs further processing using this image.

Correct use of the "roi" parameter significantly improves FaceStream performance.

ROI on the source frame is specified by the "x", "y", "width", "height" and "mode" parameters, where:

  • "x", "y" – Coordinates of the upper left point of the ROI area of interest.
  • "width" and "height" - Width and height of the processed area of the frame.
  • "mode" – Mode for specifying "x", "y", "width" and "height". Two modes are available:

    • "abs" - Parameters "x", "y", "width" and "height" are set in pixels.
    • "percent" - Parameters "x", "y", "width" and "height" are set as percentages of the current frame size.

    If the "mode" field is not specified in the request body, then the value "abs" will be used.

With width and height values of "0", the entire frame is considered the region of interest.

The coordinate system on the image is set similarly to the figure below.

ROI coordinate system

Below is an example of calculating ROI as a percentage:

Example of calculating ROI as a percentage
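
For example, an ROI covering the central quarter of the frame might look as follows (a sketch, assuming the "roi" object nests the listed fields directly; the values are illustrative):

"roi": {
    "x": 25,
    "y": 25,
    "width": 50,
    "height": 50,
    "mode": "percent"
}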

droi#

This parameter is used only for working with faces.

The parameter specifies a region of interest within the ROI zone. Face detection is performed in the ROI, but the best shot is selected only in the DROI area. A face detection must lie completely within the DROI zone for the frame to be considered as a best shot; no side of the detection may extend beyond the DROI area.

It is recommended to use DROI when working with Access Control Systems and when the "use_mask_liveness_filtration" mode is enabled.

For example, it can be used if there are several turnstiles close to each other and their cameras should find faces only in a small area while simultaneously performing the Liveness check. Using DROI enables you to limit the area of best shot selection without losing information about the background.

DROI on the source frame is specified by the "x", "y", "width", "height" and "mode" parameters, where:

  • "x", "y" – Coordinates of the upper left point of the DROI.
  • "width" and "height" - Width and height of the processed area of the frame.
  • "mode" – Mode for specifying "x", "y", "width" and "height". Two modes are available:

    • "abs" - Parameters "x", "y", "width" and "height" are set in pixels.
    • "percent" - Parameters "x", "y", "width" and "height" are set as percentages of the current frame size.

    If the "mode" field is not specified in the request body, then the value "abs" will be used.

When calculating DROI, one must take into account that this region of interest is calculated relative to the original frame, and not relative to ROI.

The coordinate system is set in the same way as shown in the figure below.

DROI coordinate system

When the ROI size is changed and the DROI size remains the default (0, 0, 0, 0), the DROI is not considered. If you change the size of the DROI, it will be considered when choosing the best shot.

Below is an example of calculating DROI as a percentage:

Example of calculating DROI as a percentage
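
A DROI fragment might look as follows (a sketch with illustrative values; note that the coordinates are relative to the original frame, not to the ROI):

"droi": {
    "x": 30,
    "y": 30,
    "width": 40,
    "height": 40,
    "mode": "percent"
}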

rotation#

The rotation angle of the image source. It is used when the incoming stream is rotated, for example, if the camera is installed on the ceiling. Rotation is clockwise.
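
For example, for a camera mounted sideways, the fragment might look as follows (assuming the angle is set in degrees):

"rotation": 90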

preferred_program_stream_frame_width#

This parameter is used only for tcp or udp types and is intended to work with protocols that imply the presence of several channels with different bitrates and resolutions (for example, HLS).

If the stream has several such channels, then this parameter enables you to select the channel whose frame width is closest to the value specified in this parameter.

For example, there are 4 channels whose frame widths are 100, 500, 1000 and 1400. If the parameter "preferred_program_stream_frame_width" is equal to "800", then a channel with a frame width of 1000 will be selected.

If the stream has only one channel, this parameter will be ignored.

The default value is 800.

endless#

This parameter enables you to control the restart of the stream when a network error is received (the error is determined by the system as an "eof" (end-of-file) marker).

The parameter is available only for the "udp" and "tcp" source types.

If the "endless" parameter takes the value "true", then in case of receiving "eof" and successful reconnection, the processing of the stream will continue. If all reconnection attempts failed (see the "healthcheck" section), then the stream will take the "failure" status. If the parameter takes the value "false", then the processing of the stream will not continue and the status of the stream will take the "done" status.

When using a video file as a "tcp" or "udp" source, it is recommended to set the value to "false". This avoids re-processing an already processed fragment of the video file when "eof" is received. If the "endless" parameter is set to "true" when using a video file, then after processing is completed, the video file will be processed again from the beginning.

mask#

This parameter is used only for the images type and is a mandatory parameter.

A mask of file names in the directory with images. The mask allows FaceStream to understand which files from the specified folder should be used and in what order.

If you set the mask "Img_%02d.jpg", then FaceStream will take files from the folder whose names consist of: prefix (Img_) + two-digit number (%02d) + extension (.jpg)

The following images will be taken in turn:

  • Img_00.jpg
  • Img_01.jpg
  • Img_02.jpg
  • Img_03.jpg

Another example of a mask is Photo-%09d.jpg. The following images will be taken:

  • Photo-000000000.jpg
  • Photo-000000001.jpg
  • Photo-000000002.jpg
  • Photo-000000003.jpg

FaceStream processes files in numerical order and does not skip nonexistent files. If there is a missing file in the file sequence, FaceStream stops processing files.

The mask "example1_%04d.jpg" specified in the example below will result in the processing of images whose names are composed of the "example1_" prefix and a sequential four-digit frame number (for example: example1_0001.jpg, example1_0002.jpg, etc.).

"mask": "example1_%04d.jpg"

event_handler#

This section defines the parameters of the handler created in the LUNA PLATFORM, with which video streams will be processed. Different handlers should be used for faces and bodies. The handler should be created in LUNA PLATFORM 5 in advance.

For more information about handlers, see the LUNA PLATFORM administrator manual.

origin#

The full network path to the API service of the deployed LUNA PLATFORM, which includes the LUNA Handlers and LUNA Events services necessary to generate an event by handler.

"origin": "http://luna_address:port/"

Here:

  • luna_address - LUNA API service address.
  • port - Port used by the LUNA API service. The default port is 5000.

api_version#

The API version for generating events in the LUNA PLATFORM. Currently, version 6 of the API is supported.

bestshot_handler > handler_id#

The parameter enables you to use an external LUNA PLATFORM handler, specified by its "handler_id", to process face or body samples according to the specified rules. When using this handler, LUNA PLATFORM generates an event that contains all the information received from FaceStream and processes it in accordance with the handler's processing rules.

The handler should be created in LUNA PLATFORM 5 in advance.

frame_store#

This parameter sets a URL for saving the source frames of faces or bodies in LUNA PLATFORM 5.

As the URL, you can specify either the address to the LUNA Image Store service bucket, or the address to the "/images" resource of the LUNA API service. When specifying the address to the "/images" resource, the source frame will be saved under the "image_id".

The "send_source_frame" parameter should be enabled for sending source frames.

Example of address to LUNA Image Store bucket:

"frame_store": "http://127.0.0.1:5020/1/buckets/<frames>/images"

Here:

  • 127.0.0.1 - IP address where the LUNA Image Store service is deployed.
  • 5020 - Default Image Store service port.
  • 1 - API version of the LUNA Image Store service.
  • <frames> - Name of the LUNA Image Store bucket where the source image of face or body should be saved. The bucket should be created in advance.

An example of the "source-images" bucket creation:

curl -X POST http://127.0.0.1:5020/1/buckets?bucket=source-images

Example of address to "/images" resource of LUNA API service:

"frame_store": "http://127.0.0.1:5000/6/images"

Here:

  • 127.0.0.1 - IP address where the LUNA API service is deployed.
  • 6 - API version of the LUNA API service.
  • 5000 - Default port of the API service.

See the LUNA PLATFORM 5 administrator's manual for more information about buckets and the "/images" resource.

authorization#

In this section, either a token or an account_id is set to make requests to the LUNA API service.

The "event_handler" > "authorization" > "account_id" parameter must match the "account_id" parameter specified in the request. If the authorization field is not filled in, then the "account_id" specified when the stream was created will be used (see Account_id parameter).

See the LUNA PLATFORM 5 administrator manual for details on LUNA PLATFORM authorization.
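
A sketch of an "event_handler" section is given below. The addresses and identifiers are placeholders, and authorization can alternatively be set with a token; verify the exact field set against the LUNA Streams OpenAPI specification.

"event_handler": {
    "origin": "http://127.0.0.1:5000",
    "api_version": 6,
    "bestshot_handler": {
        "handler_id": "<handler_id>"
    },
    "frame_store": "http://127.0.0.1:5000/6/images",
    "authorization": {
        "account_id": "<account_id>"
    }
}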

policies#

sending#

This section defines a period during which frames will be analyzed to select the best shot, as well as all parameters associated with compiling a collection of the best shots.

Best shot

Example:

"sending": {
    "time_period_of_searching": -1,
    "silent_period" : 0,
    "type" : "sec",
    "number_of_bestshots_to_send": 1
    "send_only_full_set": true
    "delete_track_after_sending" false
},

time_period_of_searching#

Interval in the track after the end of which a best shot is sent to the server (the period starts with the first detection, when a person appears in the frame). Lowering this parameter speeds up recognition but decreases precision.

Sending period

The measurement type is set in the "type" parameter (see below). If the value equals "-1" (default), analysis is conducted on all frames until the end of the track. Once the track is over (the person leaves the frame), the best shot is sent to an external service.

silent_period#

Interval between periods. Once the analysis period is over, the system waits for this silent period before starting the next period of frame analysis.

Silent period

The measurement type is set in the "type" parameter (see below). If the value equals "-1", the system holds the silent period indefinitely.

Endless waiting period

type#

The parameter specifies the measurement type for the "silent_period" and "time_period_of_searching" parameters. It can take two values - "frames" or "sec".

number_of_bestshots_to_send#

Number of best shots that the user sets to receive from the track or certain periods of this track. This parameter enables the collection of best shots from a track or from a certain period of the track set in the "time_period_of_searching" parameter.

Increasing the parameter's value increases the probability of correct object recognition but affects the network load.

send_only_full_set#

This parameter is used only for working with bodies.

This parameter enables sending data only if the required number of best shots (the "number_of_bestshots_to_send" parameter of the FaceStream settings) and detections with human body coordinates (the "minimal_body_track_length_to_send" parameter of the FaceStream settings) have been collected.

delete_track_after_sending#

This parameter is used only for working with bodies.

This parameter enables deleting the best shots and detections with human body coordinates after the data is sent. If the value is "false" (default), the data will remain in memory.

primary_track_policy#

This section is used only for working with faces.

This section is designed to work with Access Control Systems (ACS, turnstiles at the entrances to banks/office buildings) to simplify the control and the introduction of facial recognition technology at the entrance to a protected area. The parameters group is not used for the "images" type.

Liveness is often used together with the Primary Track policy. See FaceStream activity diagram when using Liveness and Primary Track Policy.

use_primary_track_policy#

This parameter is used in cases of Access Control Systems (turnstiles/gates at the office/bank entrances) for easier control and face recognition implementation in a secured area.

If the parameter value is "true", the primary track implementation mode is enabled.

Out of all detections, the one with the biggest size is selected and its track becomes the primary one. Further analysis is conducted on this track. The best shot from this track is then sent to the server.

All other tracks are processed in regular mode. However, the best shot is sent only from the primary track.

As soon as another face reaches a larger size than the face from the primary track, this face track becomes primary and the processing is performed for it.

When using this parameter at an access control checkpoint, only the best shots of the person closest to the turnstile will be sent to the server (this person satisfies the biggest detection size condition).

best_shot_min_size#

The parameter is used when "use_primary_track_policy" parameter is enabled.

The "best_shot_min_size" parameter sets the minimal height of detection in pixels at which the analysis of frames and best shot definition begins.

best_shot_proper_size#

The parameter is used when "use_primary_track_policy" parameter is enabled.

The "best_shot_proper_size" parameter sets the height of detection in pixels for Primary track policy. When the detection size reaches the specified value, FaceStream determines the best shots before the end of the track, and then sends them to the LP.

liveness#

This section is used only for working with faces.

Liveness is used to check whether a person in the frame is real and prevents fraud when printed photos or images on the phone are used to pass the Liveness check.

It is recommended to use this functionality only after discussing it with the VisionLabs team.

The parameters group is not used for the "images" type.

General recommendations for Liveness usage#

Liveness can be used at access control checkpoints only. This is a case when a person does not stay in front of the camera for more than ten seconds.

Liveness is used to minimize the risk of fraud when someone is trying to enter a secured area using a printed photo or a photo on a phone of someone who has the access rights.

Liveness returns a value, which defines the degree of the system certainty on whether the person in the frame is real. The value is in the range of 0 to 1.

Camera placement requirements

The following conditions must be met for Liveness check set up:

  • Face should remain within a frame. The distance from left and right edges of the frame should be greater than or equal to the width of the face, the distance from the top and bottom edges of the frame should be greater than or equal to the height of the face;

  • The frame should include the chest region;

  • The camera should be located at about waist height and should look upwards, capturing the body and head;

  • The frame should not include rectangular elements framing the face area from all four sides (such as doorways or windows).

An example of the correct camera location is given in the image below.

Proper camera placement for Liveness

FS starts collecting frames and selecting the best shot at a distance of 3-4 meters when a camera is placed properly.

Foreign objects and people who do not pass through the turnstile do not get into the camera view zone.

FS sends the best shot when a person is at a distance of 1 meter from the camera. At this distance, the face reaches the size required for sending.

An example of inappropriate camera placement is given in the image below.

Inappropriate camera placement for Liveness

If the camera is not configured correctly:

  • The person gets into the frame too late. FS does not have time to get the required number of frames for processing.
  • The person looks down at the camera. This degrades the quality of the frame for subsequent processing.
  • The camera field of view covers the area outside the area of interest. This space may contain people or objects that interfere with the correct operation of the FS.

Recommendations for configuring FS

The recommended values for the "Liveness" section parameters are given below.

"use_mask_liveness_filtration": true,
"use_flying_faces_liveness_filtration": true,
"liveness_mode": 1,
"number_of_liveness_checks": 10,
"liveness_threshold": 0.8,
"livenesses_weights": [0.0, 0.25, 0.75],
"mask_backgrounds_count": 300

We do not recommend changing these settings.

The "best_shot_min_size" parameter should be set based on the fact that the person is at a distance of 3-4 meters from the turnstile.

The "best_shot_proper_size" parameter should be set based on the fact that the person is at a distance of 1 meter from the turnstile.

To control the selection of the right person, use the "droi" parameter. The rectangle is selected so that people who have the intention to approach this turnstile appear in the rectangle as early as possible. This is true for turnstiles located close to each other. People from neighboring queues can get into the view zone of the cameras of such turnstiles.

FAQ Liveness

Stream processing is slow when using Liveness

When the camera resolution is 1920 x 1080 and higher, Mask Liveness is working slowly.

To solve the problem, you should manually reduce the resolution in the camera to 720p. This will not affect the quality of recognition and the work of Liveness, because they work without loss of quality with faces that are approximately 100 pixels in size.

People cannot pass the Liveness check under the default FS settings

Possible causes:

  • The default settings in the Liveness section have been changed.

    Do not change the settings in the Liveness section, except for the "liveness_threshold" setting.

    The value of the "liveness_threshold" parameter can be reduced, but it should not be lower than "0.6".

  • Liveness is not applied to the target case.

    FS Liveness is not intended for authorization processes and cases of a long stay in front of the camera.

  • Unacceptable objects fall into the camera's view zone.

    For example, if there is a screen broadcasting a video in the background, Liveness will not work.

  • The camera is set to the wrong resolution.

    Check the camera resolution. See "Stream processing is slow when using Liveness".

  • There is a delay in the transmission of frames.

    If the camera does not transmit frames in real-time, then the frames may arrive with a delay.

  • The value "best_shot_min_size" is set incorrectly.

    If the "best_shot_min_size" parameter is too high, Liveness does not have time to accumulate the required number of different frames.

The Primary Track Policy is often used together with Liveness. See FaceStream activity diagram when using Liveness and Primary Track Policy.

use_mask_liveness_filtration#

The parameter enables checking the presence of a real person in the frame based on backgrounds.

The check performance depends on the size of the video frames. If the processing speed decreases when the parameter is enabled, it is necessary to reduce the video resolution in the camera settings (e.g., to 1280x720).

use_flying_faces_liveness_filtration#

The parameter enables checking the presence of a real person in the frame based on the facial surrounding.

liveness_mode#

This parameter enables you to specify which frames from a track will undergo the Liveness check. There are three options for selecting frames:

  • 0 - First N frames.
  • 1 - Last N frames before the best shot sending (recommended value).
  • 2 - All frames in a track.

N value is specified in the "number_of_liveness_checks" parameter.

number_of_liveness_checks#

The parameter enables you to specify the number of frames to check for Liveness. The specified value is used in the "liveness_mode" parameter.

It is not recommended to set a value less than 10.

liveness_threshold#

The "liveness_threshold" parameter value is used to define the presence of a real person in a frame. The system confirms that it is a real person in the frame, and not a photo, only if Liveness returned a value higher than the one specified in the parameter.

The recommended value is "0.8". It is not recommended to set a value lower than "0.6".

livenesses_weights#

The parameter determines the involvement of each liveness check type (mask and flying_faces) in the resulting estimation of the presence of a human face in the frame.

Three values assigned to the different types of liveness must be specified. The values are specified as decimals in the following order:

  • "use_shoulders_liveness_filtration" (Deprecated. Any value will be considered 0.0).
  • "use_mask_liveness_filtration".
  • "use_flying_faces_liveness_filtration".

In the example below (which is the default setting), the value 0.0 has no effect because the Shoulders Liveness check is declared Deprecated, 0.25 assigns 25% of the weight to Mask Liveness, and 0.75 assigns 75% to Flying Faces Liveness.

"livenesses_weights": [0.0, 0.25, 0.75]

The ratio is always calculated based on "livenesses_weights" values, even if they don’t add up to one, or not all liveness types are active.
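
For example, assuming the weights are normalized by their sum: with the values [0.0, 0.5, 1.0], the sum of the weights is 1.5, so the Mask Liveness estimate would contribute 0.5 / 1.5 ≈ 33% and the Flying Faces Liveness estimate 1.0 / 1.5 ≈ 67% to the resulting value.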

mask_backgrounds_count#

The number of background frames that are used for the corresponding checks.

Do not change this parameter.

filtering#

The section describes the filter object parameters and modes of sending the resulting best shots.

Example:

"filtering": {
    "min_score": 0.5187,
    "detection_yaw_threshold": 40,
    "detection_pitch_threshold": 40,
    "detection_roll_threshold": 30,
    "yaw_number": 1,
    "yaw_collection_mode": false,
    "mouth_occlusion_threshold" : 0.0
},

min_score#

This parameter sets the threshold for filtering face or body detections sent to the server.

The value of this parameter depends on the FaceStream scenario - with faces or with bodies.

Working with faces

For each detection, the AGS (Approximate Garbage Score) and the general detection quality are calculated.

Detections whose AGS value is less than the value specified in the "min_score" threshold will not be considered acceptable for further work. Next, the best shots will be selected from the filtered detections according to the general detection quality.

If the "min_score" parameter is set to "0", then the best shot will be determined by the general detections quality.

Default value - 0.5187.

Working with bodies

For each detection, the general detection quality is calculated.

Detections whose overall quality is less than the value specified in the "min_score" threshold will not be considered acceptable for further work. Next, the best shots will be selected from the filtered detections.

If the "min_score" parameter is set to "0", then the best shots will be determined by the general detection quality.

Default value - 0.5.

The recommended values were established through research and analysis of detections on various face and body images.

detection_yaw_threshold#

This parameter is used only for working with faces.

This parameter sets the maximum value of head yaw angle in relation to camera.

If, in a frame, head yaw angle is above the value of this parameter, the frame is considered as not appropriate for further analysis.

To disable this filtering, you must set the value "180".

detection_pitch_threshold#

This parameter is used only for working with faces.

This parameter sets the maximum value of head pitch angle in relation to camera.

If, in a frame, head pitch angle is above the value of this parameter, the frame is considered as not appropriate for further analysis.

To disable this filtering, you must set the value "180".

detection_roll_threshold#

This parameter is used only for working with faces.

This parameter sets the maximum value of head roll angle in relation to the camera.

If, in a frame, head roll angle is above the value of this parameter, the frame is considered as not appropriate for further analysis.

Head pose

To disable this filtering, you must set the value "180".

yaw_number#

This parameter is used only for working with faces.

This parameter defines the number of frames for image filtration based on head yaw angle. This filter removes images where head’s yaw angle is too high.

How it works:

The parameter specifies the number of frames to analyze. A special algorithm analyzes head yaw angles on each of those frames. If on one of them the angle is significantly different from the average value of the angles, the frame will not be considered as a candidate for the best shot.

Example: the parameter value is set to "7", meaning 7 frames will be analyzed. If on six of the frames the yaw angle is in the range of 50-60 degrees and the angle on the seventh frame is estimated at 0, the angle on the seventh frame is most likely estimated incorrectly: a person cannot turn their head so abruptly in such a short period of time. The seventh frame will not be considered for the best shot.

By default, the parameter is disabled, the value is "1". The recommended value is "7".

yaw_collection_mode#

This parameter is used only for working with faces.

When this parameter is enabled, the system first collects the number of frames specified in the "yaw_number" parameter and only then analyzes the head yaw angle.

If "yaw_collection_mode" parameter is disabled, the system will analyze the frames sequentially, meaning it analyzes one frame, then two, then three and so on. Maximum number of frames to analyze is set in "yaw_number" parameter.

Parameter is disabled by default.

The purpose of utilizing "yaw_number" and "yaw_collection_mode" parameters is to increase the accuracy of best shot selection from a track.

mouth_occlusion_threshold#

This parameter is used only for working with faces.

This parameter determines how much the mouth can be obscured in the frame.

For example, when the value is equal to "0.5", 50% of the mouth can be occluded.

If mouth occlusion of a face in a frame exceeds the value of this threshold, the frame is considered as not appropriate for further analysis.

The filtration is performed when the set value is "0.3" or higher. When the value is lower, the filtration is disabled.

min_body_size_threshold#

The parameter sets the minimum body detection size; detections smaller than this value will not be sent for processing. The size is calculated as the square root of the product of the body detection height (in pixels) and its width (in pixels).

Example: min_body_size_threshold = sqrt (64*128) = 90.5

If the value is "0", then filtering of body detection by size will not be performed.

frame_processing_mode#

This parameter is used for "tcp", "udp" and "videofile" types only.

This parameter is similar to convert_full_frame, but is set for a specific FaceStream instance.

If the value is set to "full", the frame is immediately converted to RGB image of the required size after decoding. This results in a better image quality and reduces the speed of frames processing.

When set to "scale", the image is scaled according to the settings in the TrackEngine configuration (standard behavior for releases 3.2.4 and earlier).

The default value is "auto". In this case, one of the two modes is selected automatically.

real_time_mode_fps#

This parameter is used for "videofile" type only.

This parameter enables you to set the number of FPS with which the video stream will be processed.

If a video has high FPS value and FaceStream cannot work with the specified number of frames per second, frames are skipped.

Thus, the video file emulates a stream from a real video camera. This can be useful for performance tuning. The video will be played at the specified speed, which is convenient for load testing and subsequent analysis.

This parameter is disabled when set to "0".
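
For example, to emulate a 25 FPS camera from a video file, the fragment might look as follows (the value is illustrative):

"real_time_mode_fps": 25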

ffmpeg_threads_number#

The parameter enables you to specify the number of threads for decoding video using FFmpeg.

The number of processor cores involved in decoding process increases according to the number of threads. An increase in the number of threads is recommended when processing high-resolution video (4K or higher).

healthcheck#

The section is used only for the "tcp", "udp" and "videofile" types.

In this section, you can set the parameters for reconnecting to the stream when errors occur while the video is streamed.

max_error_count#

The maximum number of errors when playing the stream.

The parameter works in conjunction with the "period" and "retry_delay" parameters. After receiving the first error, the wait specified in the "retry_delay" parameter is performed, and then the connection to the stream is retried. If during the time specified in the "period" parameter, the number of errors greater than or equal to the number specified in "max_error_count" was accumulated, then the processing of the stream will be terminated and its status will change to "failure".

Errors occur, for example, when a frame cannot be retrieved or decoded. Network problems or inaccessibility of the video can cause such errors.

period#

The parameter represents the period during which the number of errors is calculated. The value is set in seconds.

The parameter works in conjunction with the "retry_delay" and "max_error_count" parameters. See the description of working with this parameter in the "max_error_count" section.

retry_delay#

The parameter specifies the period after which the reconnection attempt is performed. The value is set in seconds.

The parameter works in conjunction with the "period" and "max_error_count" parameters. See the description of working with this parameter in the "max_error_count" section.

timeout#

The parameter specifies the timeout in milliseconds for reading the encoded packet.

Using this parameter, it is possible to provide more flexible processing of the video stream, control the speed of reading video packets and prevent data buffer overflow.
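
A "healthcheck" fragment might look as follows (the values are illustrative, not defaults; "period" and "retry_delay" are set in seconds, "timeout" in milliseconds):

"healthcheck": {
    "max_error_count": 10,
    "period": 3600,
    "retry_delay": 5,
    "timeout": 10000
}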

location#

This section includes information about the location of the video source.

  • "city"
  • "area"
  • "district"
  • "street"
  • "house_number"
  • "geo_position" - Latitude and longitude in degrees. Geo position is considered as properly specified if both longitude and latitude are set.

The "send_location_data" parameter enables the sending of location data of the video source.

"location": {
    "send_location_data" : false,
    "city": "Moscow",
    "area": "CAO",
    "district": "Arbat",
    "street": "Arbat",
    "house_number": "37",
    "geo_position": {
    "longitude": 36.616,
    "latitude": 55.752
    }
}

This parameter is used to generate events in the LUNA PLATFORM (see the LUNA PLATFORM documentation).

autorestart#

This section enables you to configure the automatic restart of the stream. Three parameters are available:

  • "restart" - Whether to use automatic restart of the stream.
  • "attempt_count" - Number of attempts to automatically restart the thread (default 10).
  • "delay" - Stream automatic restart delay, in seconds (default 60 seconds).

See "Streams automatic restart" for details on how automatic stream restarting works.

status#

The status at the start of processing. Two states are available - "pending" and "pause".

In addition to the two states at the start of processing, other states that occur during FaceStream operation are also available (see the "Stream distribution in LUNA Streams" section).

group_name and group_id#

Parameters for linking a stream to a group. You can specify either the "group_id" or "group_name".
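
For example, a stream can be linked to an existing group by name (the group name below is a hypothetical placeholder):

"group_name": "acs_entrance_cameras"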