
Overview#

FaceStream performs several functions:

  • Stream reading

Web cameras, USB and IP cameras (via the RTSP protocol), video files, and images can act as data sources.

  • Stream processing

It searches for faces and bodies in the stream and tracks them until they leave the frame or are blocked.

  • Liveness check

Liveness check is performed on one or more frames of the track.

  • Sending face or body best shots as HTTP requests to an external service

VisionLabs Software LUNA PLATFORM 5 acts as an external service.

The FaceStream workflow depends on the settings of five configurations.

  • Streams management configuration set in LUNA Streams

    Here you can specify settings for stream sources, such as the source type, source address, filtering settings, etc. The settings are set by sending requests with a body in JSON format to the LUNA Streams service. FaceStream takes the settings from LUNA Streams for further processing. A detailed description of how FaceStream works with LUNA Streams is given in the "Interaction of FaceStream with LUNA Streams" section.

  • FaceStream settings set in LUNA Configurator

    Here you can set general FaceStream settings, such as logging, sending images from FaceStream to external services, debugging, etc.

  • TrackEngine settings set in LUNA Configurator

    Here you can set general TrackEngine settings regarding the face or body detection and tracking.

  • LUNA Streams settings set in LUNA Configurator

    Here you can set general settings for the LUNA Streams service, such as logging, database settings, address of the LUNA Licenses service, etc.

  • FaceEngine settings set in the "faceengine.conf" configuration file and transferred during the launch of the FaceStream container.

    Here you can set parameters for face and body recognition. It is recommended to change the parameters of this configuration only in consultation with VisionLabs employees.

The following features are also available when working with FaceStream:

  • Dynamic creation, editing, and deletion of stream sources via API requests.
  • Real-time video stream preview in a browser for streams with the specified parameters.
  • Stream metrics (number of streams, number of errors, number of faces, number of skipped frames, FPS).

FaceStream can be configured to work with faces, with bodies, or with both (see the next section).

FaceStream workflow with faces and bodies#

FaceStream can handle both faces and bodies. Each object has its own scheme of operation and its own set of parameters described below.

The required minimum parameters for working with both objects can be found in the section "Priority parameters list".

FaceStream workflow with faces#

FaceStream application workflow with faces is shown in the image below:

FaceStream workflow with faces
  1. FaceStream receives video from a source (IP or USB camera, web-camera, video file) or images. FaceStream can work with several sources of video streams (the number is set by the license). Sources and additional stream management settings are specified in the HTTP-request body to the LUNA Streams service. These settings are then retrieved by the FaceStream application.

  2. FaceStream decodes video frames.

  3. The ROI area is cut out from the frame if the "roi" (streams management configuration) parameter is specified.

  4. The received image is scaled to the "scale-result-size" (TrackEngine configuration) size if the "detector-scaling" (TrackEngine configuration) is set.

  5. Faces are detected in the frame.

  6. The face is redetected in the frame instead of detection if the "detector-step" parameter (TrackEngine configuration) is set.

  7. A track is created for each new face in the stream; then it is reinforced with new detections of this face from the subsequent frames. The track is interrupted if the face disappears from the frame. You can set the "skip-frames" parameter (TrackEngine configuration) so the track will not be interrupted immediately, and the system will wait for the face to appear in the area for several frames.

  8. FaceStream filters the frames of low quality and selects best shots. There are several algorithms for choosing the best detection(s) in the track. See the "Frame filtering" section.

  9. If the frame is a best shot, it is added to the collection of best shots. Depending on the "face_bestshots_to_send" (streams management configuration) setting, one or several best detections are collected from each track.

  10. Optional. If the "warp" type is set in the "portrait_type" (streams management configuration) parameter, the best shots are normalized to the LUNA PLATFORM standard, and normalized images are created. A normalized image is better suited for processing in LUNA PLATFORM.

  11. The best shots, source images (optional) and additional information from stream management configuration are sent to the LUNA PLATFORM in the form of an HTTP request to the resource "/6/handlers/{handler_id}/stream_events" to generate an event.

    The general parameters of the video stream (data transfer protocol, path to the source, region of interest on the frame, etc.) are set in the "data" (streams management configuration) section.

    The frequency of images sending is specified in the "sending" (streams management configuration) section.

    The LUNA PLATFORM address is specified in the "lunaplatform" (FaceStream configuration) section.

FaceStream workflow with bodies#

FaceStream application workflow with bodies is shown in the image below:

FaceStream workflow with bodies
  1. FaceStream receives video from a source (IP or USB camera, web-camera, video file) or images. FaceStream can work with several sources of video streams (the number is set by the license). Sources and additional stream management settings are specified in the HTTP-request body to the LUNA Streams service. These settings are then retrieved by the FaceStream application.

  2. FaceStream decodes video frames.

  3. The received image is scaled to the "scale-result-size" (TrackEngine configuration) size if the "detector-scaling" (TrackEngine configuration) is set.

  4. Bodies are detected in the frame.

  5. The body is redetected in the frame instead of detection if the "detector-step" parameter (TrackEngine configuration) is set.

  6. A track is created for each new body in the stream; then it is reinforced with new detections of this body from the subsequent frames. The track is interrupted if the body disappears from the frame. You can set the "skip-frames" parameter (TrackEngine configuration) so the track will not be interrupted immediately, and the system will wait for the body to appear in the area for several frames.

  7. FaceStream filters low quality frames and selects the best shots. See "min_score_body" (streams management configuration).

  8. If the frame is a best shot, it is added to the collection of best shots. Depending on the "body_bestshots_to_send" (streams management configuration) parameter, one or several best detections are collected from each track.

  9. The best shots are normalized to the LUNA PLATFORM standard, and normalized images are created. A normalized image is better suited for processing in LUNA PLATFORM.

  10. The best shots, source images (optional) and additional information from stream management configuration are sent to the LUNA PLATFORM in the form of an HTTP request to the resource "/6/handlers/{handler_id}/stream_events" to generate an event. Along with the best shots, detections with the coordinates of the human body are sent. The number of detections is set in the "minimal_body_track_length_to_send" parameter (streams management configuration).

    The general parameters of the video stream (data transfer protocol, path to the source, region of interest on the frame, etc.) are set in the "data" (streams management configuration) section.

    The frequency of images sending is specified in the "sending" (streams management configuration) section.

    The LUNA PLATFORM address is specified in the "lunaplatform" (FaceStream configuration) section.

Interaction of FaceStream with LUNA Streams#

To work with FaceStream, you should first launch an additional service — LUNA Streams (the default port is 5160). In the "create stream" request body to the LUNA Streams service, settings for stream management are specified. After sending the request, a stream is created, whose settings are taken by FaceStream for further processing. See the LUNA Streams Open API Specification for request examples.
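
Below is a minimal sketch of a "create stream" request body. It is illustrative only: the "data", "type", "roi" and "endless" parameter names are taken from this document, while the "name" and "reference" field names and all values are placeholders; the authoritative schema is given in the LUNA Streams OpenAPI specification.

{
    "name": "<stream name>",
    "data": {
        "type": "udp",
        "reference": "rtsp://<camera-address>",
        "roi": "<optional region of interest>",
        "endless": true
    }
}

After such a request is processed, LUNA Streams returns a "stream_id" and FaceStream picks the stream up from the processing queue, as described in the sequence diagram below.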

LUNA Streams has its own user interface designed to work with streams. For more information, see "LUNA Streams user interface".

To use the LUNA Streams service, you should use the LUNA PLATFORM 5 services — LUNA Licenses and LUNA Configurator, as well as PostgreSQL or Oracle and Influx.

The Influx database is needed for the purposes of monitoring the status of LUNA PLATFORM services. If necessary, monitoring can be disabled.

The FaceStream documentation does not describe the use of an Oracle database.

If necessary, you can launch LUNA Streams without LUNA Configurator. This method is not described in the documentation.

FaceStream is licensed using the LUNA PLATFORM 5 key, which contains information about the maximum number of streams that LUNA Streams can process. The license is regulated by the LUNA Licenses service.

See the FaceStream installation manual for detailed information on activating the LUNA Streams license.

The PostgreSQL/Oracle database stores all the data of LUNA Streams.

LUNA Streams API versions#

LUNA Streams has two API versions.

To switch between API versions, you need to update the corresponding value in the "api_version" parameter of the "lunastreams" section in the FaceStream settings.

The first API version is used by default. This documentation describes the second API version. See the description of the first API version in the FaceStream documentation v.5.1.49 and earlier.

Important: Attempts to execute requests to the second version of the API with the value "api_version" = "1" and vice versa will lead to errors, since for the first version of the API the LUNA PLATFORM address is specified in the body of the request to create a stream, and for the second version the address is specified in the "lunaplatform" parameter group in the FaceStream settings.
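
A hedged sketch of the related FaceStream settings in LUNA Configurator is shown below. The "lunastreams" > "api_version" parameter and the "lunaplatform" section are named in this document; the "origin" field name and the address value are placeholders and should be checked against the actual FaceStream settings.

{
    "lunastreams": {
        "api_version": 2
    },
    "lunaplatform": {
        "origin": "http://<luna-platform-address>:5000"
    }
}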

Stream creation and processing sequence diagram#

The stream creation and processing sequence diagram is shown below:

Stream creation and processing sequence diagram

1․ The user sets the settings of FaceStream, TrackEngine and the settings of the LUNA Streams service in the LUNA Configurator service.

LUNA Streams service settings contain only service settings (LUNA Streams database address, InfluxDB address, etc.) and do not contain video stream management settings (parameters for sending the best shots, LP address, handler IDs, etc.). Such settings are set using a separate HTTP request after starting FaceStream (see point 4).

If necessary, the user can set the FaceEngine settings in a separate configuration file.

2․ When starting FaceStream, the application reads the settings from the LUNA Configurator service.

3․ FaceStream switches to the waiting mode for the stream to appear.

4․ The user sends an HTTP request "create stream" to the LUNA Streams service containing stream management settings in the request body.

5․ The LUNA Streams service checks in the LUNA Licenses service whether the license feature that regulates the number of streams for LUNA Streams operation is available. If any streams are already being processed at that moment, the number of streams being processed at the time of the request is additionally checked using the FaceStream report (not reflected in the diagram, see point 17).

6․ The LUNA Licenses service returns a response.

7․ If there is no license feature or the maximum number of available streams is being processed at the time of stream creation, the corresponding error is returned to the user.

8․ If the license feature is present and the maximum number of available streams is not processed yet at the time of stream creation, the LUNA Streams service creates a stream and records the stream management settings in the LUNA Streams database.

9․ The LUNA Streams service receives the "stream_id" from the database.

10․ The LUNA Streams service returns the unique "stream_id" to the user.

11․ The LUNA Streams service adds a stream to the internal queue.

The queue is implemented in the LUNA Streams service itself and is not external.

12․ The LUNA Streams service updates the status of the stream to "pending".

13․ The LUNA Streams service updates the status of the stream in the LUNA Streams database.

14․ The LUNA Streams service receives a response.

If FaceStream is disabled at the time of stream creation, then only the number of streams with the "pending" status stipulated by the license can be created. After FaceStream is launched, the created streams will be accepted for processing in queue order.

You can view the streams in the queue by filtering them in a certain way using the "streams/processing/queue" GET request.

Streams can be created with the status "pause". In this case, they will be added to the database and will wait for a manual status update to "pending".

15․ FaceStream retrieves the stream(s) from the queue with the status "pending".

16․ FaceStream starts sending data to the main LUNA PLATFORM 5 services for further processing of frames according to the specified handler and generating events.

17․ FaceStream starts sending stream processing reports to LUNA Streams.

The report sending period is fixed and cannot be changed.

18․ The LUNA Streams service updates the status of processed streams to "in_progress".

19․ The LUNA Streams service removes the processed stream from the queue.

20․ The LUNA Streams service updates the status of the stream in the LUNA Streams database.

21․ The LUNA Streams service receives a response.

Completion of processing for a set of images, a video file, or a finite video stream:

22․ FaceStream stops sending data to LUNA PLATFORM.

For a video stream to be treated as finite, the "endless" parameter must be set to "false".

23․ FaceStream sends the latest report to the LUNA Streams service.

If the report says that some stream has been processed, the FaceStream handler takes the parameters of the next stream with the "pending" status from the LUNA Streams queue, and the service changes the status of that stream from "pending" to "in_progress", removing it from the queue. If, for some reason, the report was not delivered, the streams are re-queued.

24․ The LUNA Streams service updates the status of the stream to "done".

25․ The LUNA Streams service updates the status of the stream in the LUNA Streams database.

26․ The LUNA Streams service receives a response.

Endless video stream:

27․ FaceStream will send data to LUNA PLATFORM and reports to LUNA Streams until the video stream is interrupted.

For a more detailed description of stream processing in LUNA Streams, see the "Stream processing pipeline" section.

Requests to LUNA Streams#

Requests to the LUNA Streams service are available for working with streams: creating, getting, updating, and deleting streams, as well as managing stream groups.

A detailed description of requests and example requests can be found in the Open API specification of LUNA Streams service.

Stream distribution in LUNA Streams#

As mentioned earlier, multiple streams can be processed at the same time.

Each stream is assigned a current status:

  • "pending" — Stream is waiting for FaceStream worker.
  • "in_progress" — Stream processing is in progress.
  • "done" — Stream processing is completed (relevant for stream transmission types "videofile"/"images" or "tcp/udp" with "endless" set to "false").
  • "pause" — Stream processing is paused by user (not applicable for stream transmission types "videofile" or "images").
  • "restart" — Stream processing is restarted by server.
  • "cancel" — Stream processing is cancelled by user.
  • "failure" — Stream processing is failed by FaceStream worker.
  • "handler_lost" — Stream processing worker is lost, needs to be passed to another worker (not applicable for stream transmission types "videofile" or "images").
  • "not_found" — Stream was removed during the processing.
  • "deleted" — Stream was removed intentionally.

Statuses "pause" and "cancel" can be specified when updating a stream using the "update stream" request.

Statuses "restart", "handler_lost" are transient. With these statuses, it is impossible to receive a stream, however, the transition through these statuses is logged as usual. The "restart" status can only occur when using the "autorestart" section (see the "Streams automatic restart" section below).

The "not_found" status is internal and will be sent back for feedback if the stream was removed during processing. With this status, it is impossible to receive a stream.

The "deleted" status is virtual. Stream with this status cannot exist, but this status can be seen in the stream logs.

Statuses transition table#

The following table shows statuses that may be received after each listed status.

The "+" symbol means that the status listed in the first row may occur after the status in the first column. An empty field means that there are no cases when the status may occur.

The "-" symbol means that there is no stream in the system (it was not created or it was already deleted).

None pending in_progress done restart pause cancel failure handler_lost
None + +
pending + + + + +
in_progress + + + + +* + + +
done + + + +
restart + +
pause + + + +
cancel + + + +
failure + + + +
handler_lost +

* not supported for stream transmission types "videofile" or "images"

Stream processing pipeline#

By default, a new stream is created with the "pending" status and immediately enters the processing queue. Stream processing can be postponed by specifying the "pause" status at creation.

As soon as a free stream worker requests a pool of streams from the queue, the stream is accepted for processing and is assigned the "in_progress" status.

After the stream has been processed by the worker, it is assigned the "done" status in case of success (relevant for stream transmission types "videofile"/"images" or "tcp/udp" with "endless" set to "false"), or the "failure" status if any errors have occurred. However, the stream processing status may be downgraded from "in_progress" for the following reasons:

  • No feedback from the stream worker: the status will be downgraded by the server, and a record with the "handler_lost" status will be added to the stream logs.

  • Replacement of the stream by the user: a record with the "restart" status will be added to the stream logs.

For stream transmission types "tcp" or "udp" with "endless" set to "true", the status cannot change to "done".

During the processing routine, any change in the stream status is logged. Thus, you can restore the stream processing pipeline from the logs.

The number of simultaneously processed streams (statuses "pending" and "in_progress") is regulated by the license, but the LUNA Streams database can store an unlimited number of streams with other statuses, for example, "pause".

Streams with "failure" status can be automatically restarted.

Streams automatic restart#

The ability to automatically restart streams is relevant only for streams with a "failure" status. Automatic restart options (restart possibility, maximum number of restart attempts, delay between attempts) are specified by the user for each stream in the "autorestart" section of stream management settings. The parameters and automatic restart status can be received using the "get stream" request.
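
A hedged sketch of the "autorestart" section is shown below. The "restart" and "attempt_count" parameter names appear in this section; the "delay" field name and all values are illustrative assumptions, and "1" is used here to mean that restart is enabled.

{
    "autorestart": {
        "restart": 1,
        "attempt_count": 10,
        "delay": 60
    }
}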

The automatic restart statuses are listed below:

  • "disabled" — Stream automatic restart is disabled by user ("restart" parameter is disabled).
  • "enabled" — Automatic restart is enabled but is not currently active because the stream is not in the "failure" status.
  • "in_progress" — Automatic restart in progress.
  • "failed" — Allowed number of automatic restart attempts was exceeded and none of the attempts were successful.
  • "denied" — Automatic restart is allowed by the user, but not possible due to a fatal error* received in the FaceStream report.

* The "Failed to authorize in Luna Platform" error is considered a fatal error.

The process of processing streams with automatic restart enabled is described below.

When an attempt is made to automatically restart the stream, the following changes occur:

  • Stream status changes first to "restart" and then to "pending".
  • Counter of automatic restart attempts "current_attempt" increases by 1.
  • Time record of the last attempt "last_attempt_time" is updated to reflect the current time.

In order for a restart to occur, the following conditions must be met:

  • Status of the stream should be in the "failure" state.
  • Automatic restart of the stream must be enabled (the "restart" parameter).
  • Value of the current automatic restart attempt "current_attempt" must be equal to "null" or less than the maximum number of attempts "attempt_count".
  • Time of the last attempt of automatic restart "last_attempt_time" should be equal to "null" or the difference between the current time and the time of the last attempt should be greater than or equal to the delay.

If the conditions below are met, then the automatic restart of the stream will fail (stopping restart attempts):

  • Stream status is in the "failure" state.
  • Status of the automatic restart of the stream is in the "in_progress" state.
  • Value of the current automatic restart attempt "current_attempt" is equal to the value of the maximum number of attempts "attempt_count".

If the conditions below are met, then the automatic restart of the stream will be completed:

  • Status of the stream is not equal to the "failure" status.
  • Status of the automatic restart of the stream is in the "in_progress" state.
  • Time of the last attempt of the automatic restart "last_attempt_time" is "null" or the difference between the current time and the time of the last attempt is greater than or equal to the delay.

The completion of the automatic restart of the stream means changing the status of the automatic restart of the stream to "enabled" and resetting the values of the current attempt of automatic restart "current_attempt" and the time of the last attempt of automatic restart "last_attempt_time".

Streams logs automatic deletion#

If necessary, you can enable automatic deletion of stream logs. Automatic deletion helps keep the "log" table of the LUNA Streams database from accumulating a large number of unnecessary records.

The most recent entry for each stream will not be deleted.

Automatic deletion of stream logs is configured using the following parameters from the "LUNA_STREAMS_LOGS_CLEAR_INTERVAL" section:

  • "interval" — Sets the interval for deleting logs. Logs older than this interval will be deleted.
  • "interval_type" — Sets the interval type (weeks, days, hours, minutes, seconds).
  • "check_interval" — Sets the frequency of checking logs for deletion (seconds).
  • "active" — Enables/disables automatic deletion of stream logs.

By default, automatic deletion of logs is disabled ("active" = false).

The rest of the default settings check the stream logs in the database every 180 seconds ("check_interval" = 180) and delete logs older than 7 days ("interval" = 7 and "interval_type" = "days").

Example of checking logs for deletion every 5 minutes and deleting logs older than 4 weeks:

{
    "interval": 4,
    "interval_type": "weeks",
    "check_interval": 300,
    "active": true
}

Streams grouping#

Streams can be grouped. Grouping is intended to combine streams from multiple cameras into logical groups. For example, you can group streams by location.

A stream can be linked to several groups.

The group is created using the "create group" request. To create a group, you need to specify the required parameters "account_id" and "group_name". If necessary, you can specify a description of the group.
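
A hedged sketch of a "create group" request body is shown below, using the required parameters named above; the "description" field name and all values are placeholders, and the exact schema is given in the LUNA Streams OpenAPI specification.

{
    "account_id": "<account_id>",
    "group_name": "office-entrance-cameras",
    "description": "<optional group description>"
}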

Stream can be linked to a group in two ways:

  • Using the "group_name" or "group_id" parameters during stream creation ("create stream" request).
  • Using the "linker" request. In the request, you should specify the streams IDs and the group to which they need to be linked.

Using the "linker" request you can also unlink streams from a group.

If the stream was linked to a group, then the "get stream" or "get streams" requests will show the group in the "groups" field.

LUNA Streams database description#

The LUNA Streams database general scheme is shown below.

LUNA Streams database

See "Streams management configuration" for a description of the database data.

Recommendations for FaceStream configuration#

This section provides general guidelines for setting up FaceStream.

This section mentions the name of the configuration in which each described parameter is set.

Before starting configuration#

You should configure FaceStream separately for each camera used. FaceStream should work with the stream of a camera located in its standard operating conditions. These requirements stem from the following reasons:

  • Frames with different cameras may differ by:
    • Noise level
    • Frame size
    • Light
    • Blurring
    • Etc.
  • FaceStream settings depend on the lighting conditions and therefore will be different for cameras placed in a dark room and in a light one.
  • FaceStream performance depends on the number of faces or bodies in the frame. Therefore, the settings for a camera that detects one face every 10 seconds will be different from the settings for a camera detecting 10 faces per second.
  • The number of detected faces and bodies and the quality of these detections depend on the correct placement of the camera. When the camera is at a wrong angle, faces are not detected in frames. Moreover, head angles can exceed acceptable values, so a frame with a detected face cannot be used for further processing.
  • Faces and bodies in the zone of camera view can be partially or completely blocked by some objects. There can be background objects that can prevent the proper functioning of recognition algorithms.

The camera can be positioned so that the lighting or shooting conditions change throughout the day. It is recommended to test FaceStream operation under different conditions and choose the mode that provides reliable FaceStream operation under any of them.

You can specify the FPS for video processing using the "real_time_mode_fps" parameter.

FaceStream performance configuration#

The parameters described below have the greatest impact on FaceStream performance.

Reduction of face search area#

Not all the areas of the frame contain faces. Besides, not all the faces in the frame have the required size and quality. For example, the sizes of faces in the background may be too small, and the faces near the edge of the frame may have unacceptable pitch, roll, or yaw angles.

The "roi" parameter (streams management configuration, section "data"), enables you to specify a rectangular area to search for faces.

Source frame with DROI area specified

The specified rectangular area is cut out from the frame and FaceStream performs further processing using this image.

Cropped image processed by FaceStream

The smaller the search area, the less resources are required for processing each frame.

Correct use of the "roi" parameter significantly improves the performance of FaceStream.

The parameter should be used only when working with faces.
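
A hedged sketch of the "roi" parameter inside the "data" section is shown below. The coordinate fields and the "mode" value are illustrative assumptions; the actual format should be checked against the LUNA Streams OpenAPI specification.

{
    "data": {
        "roi": {
            "x": 200,
            "y": 100,
            "width": 1280,
            "height": 720,
            "mode": "abs"
        }
    }
}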

Frame scaling#

Frame scaling before processing can significantly increase the performance of FaceStream. Frame scaling can be enabled in the following settings:

| Operating mode | Parameter name | Default value | Default state |
|---|---|---|---|
| Only with face detector | "minFaceSize" (FaceEngine settings) | 50 (reduce by 2.5 times) | Enabled |
| Only with body detector | "ImageSize" (FaceEngine settings) | 640 (reduce to 640 pixels on the largest side) | Enabled |
| With face detector and with body detector | "detector-scaling" and "scale-result-size" (TrackEngine settings) | 640 (reduce to 640) | Only with body detector |
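
A hedged sketch of the TrackEngine scaling settings from the last row of the table is shown below; "1" is used here to mean that scaling is enabled, and the accepted value format should be checked in the TrackEngine configuration.

{
    "detector-scaling": 1,
    "scale-result-size": 640
}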

Defining area with movement#

Three parameters, set in the TrackEngine configuration, are responsible for determining the area with movement:

  • "frg-subtractor"
  • "frg-regions-alignment"
  • "frg-regions-square-alignment"

Below are the recommended values for these settings when using CPU and GPU:

| | frg-subtractor | frg-regions-alignment | frg-regions-square-alignment |
|---|---|---|---|
| Recommended value when utilizing CPU | 1 | 0 | 0 |
| Recommended value when utilizing GPU | 1 | 360 | 0 |
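
For reference, a hedged sketch of these TrackEngine parameters with the CPU-recommended values from the table above:

{
    "frg-subtractor": 1,
    "frg-regions-alignment": 0,
    "frg-regions-square-alignment": 0
}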

Batch processing of frames#

Three parameters set in the TrackEngine configuration are responsible for batch processing of frames.

General configuration information#

Working with track#

A new track is created for each detected face or body. Best shots are defined and sent for each track.

In general, the track is interrupted when the face can no longer be found in the frame. If a track was interrupted and the same person appears in the frame again, a new track is created.

There can be a situation when two faces or bodies overlap in a frame (one person is behind the other). In this case, the tracks for both persons are interrupted, and new tracks are created.

There can be a situation when a person turns away, or a face or body is temporarily blocked. In this case, you can specify the "skip-frames" parameter (TrackEngine configuration) instead of interrupting the track immediately. The parameter sets the number of frames during which the system will wait for the face to reappear in the area where it disappeared.

When working with the track, it is also useful to use the "detector-step" parameter, which enables you to specify the number of frames on which face redetection will be performed in the specified area before face detection is performed.
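
A hedged sketch of these two TrackEngine parameters is shown below; the values are purely illustrative and are not recommended defaults.

{
    "detector-step": 7,
    "skip-frames": 36
}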

Best shot sending#

The "sending" parameters group (streams management configuration) enables you to set parameters for the best shot sending. FaceStream sends the received best shots to LUNA PLATFORM (see "Priority parameters list").

You can send several best shots for the same face or body to increase the recognition accuracy. You should enable the "face_bestshots_to_send" or "body_bestshots_to_send" (streams management configuration) parameters in this case.

LUNA PLATFORM enables you to aggregate the best shots and create a single descriptor of a better quality using them.

If the required number of best shots has not been collected during the specified period or when the track is interrupted, the collected best shots are sent.

The "time_period_of_searching" and "silent_period" parameters (streams management configuration) can be specified in seconds or in frames. Use the "type" parameter to choose the type.

Note: The "silent_period" parameter only works when the face detection mode is enabled. It will not work when working with bodies or in collaborative mode.

The general options for configuring the "time_period_of_searching" and "silent_period" parameters of the "sending" section from the streams management configuration when working with faces are listed below (a combined sketch of the "sending" section is given after the list).

  • The best shot is sent after the track is interrupted and the person left the video camera zone of view.

    All the frames with the person's face or body are processed and the best shot is selected.

    time_period_of_searching = -1
    silent_period = 0

  • It is required to quickly receive the best shot and then send best shots with the specified frequency.

    For example, it is required to send a best shot soon after an intruder entered the shop. The intruder will be identified by the blacklist.

    The mode is also used for the demonstration of FaceStream capabilities in real-time.

    The best shot will be sent after the track is interrupted even if the specified period has not elapsed.

    time_period_of_searching = 3
    silent_period = 0

  • It is required to quickly send the best shot and then send the best shot only if the person is in the frame for a long time.

    time_period_of_searching = 3
    silent_period = 20

  • It is required to quickly send the best shot and never send the best shot from this track again.

    time_period_of_searching = 3
    silent_period = -1
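
A hedged sketch of the "sending" section combining the parameters above is shown below, using the "quick first best shot, then periodic sending" scenario; the "sec" value of the "type" parameter and the exact nesting are assumptions to be checked against the LUNA Streams OpenAPI specification.

{
    "sending": {
        "time_period_of_searching": 3,
        "silent_period": 0,
        "type": "sec"
    }
}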

Frames filtration#

The filtration of face frames is performed by three main criteria (they are all set in the streams management configuration):

The "yaw_number" and "yaw_collection_mode" parameters are additionally set for the yaw angle. The parameters reduce the possibility of the error occurrence when the "0" angle is returned instead of a large angle.

If a frame did not pass at least one of the specified filters, it cannot be selected as a best shot.

If the "face_bestshots_to_send" or "body_bestshots_to_send" parameter is set, the frame is added to the array of best shots to send. If the required number of best shots to send was already collected, the one with the lowest frame quality score is replaced with the new best shot if its quality is higher.

The filtration of body frames is performed by a single criterion, "min_score_body".
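
A hedged sketch of the filtering parameters named in this document is shown below; the "filtering" grouping and all values are illustrative assumptions, and the actual parameter placement is defined in the streams management configuration.

{
    "filtering": {
        "min_score_face": 0.5,
        "detection_yaw_threshold": 40,
        "detection_pitch_threshold": 40,
        "detection_roll_threshold": 30,
        "yaw_number": 3,
        "yaw_collection_mode": false,
        "min_score_body": 0.5
    }
}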

Working with ACMS#

Work with ACMS is performed only with faces.

Use the "primary_track_policy" settings when working with ACMS. The settings enables you to activate the mode for working with a single face, which has the largest size. It is considered, that the face of interest is close to the camera.

The track of the largest face in the frame becomes primary. Other faces in the frame are detected but they are not processed. Best shots are not sent for these faces.

As soon as another face reaches a larger size than the face from the primary track, this face track becomes primary and the processing is performed for it.

The mode is enabled using the "use_primary_track_policy" parameter.

Best shots are determined only after the vertical size of the face reaches the value specified in the "best_shot_min_size" parameter. Frames with smaller faces cannot be best shots. When the vertical size of the face detection reaches the value set in the "best_shot_proper_size" parameter, the best shot is sent at once.

The "best_shot_min_size" and "best_shot_proper_size" are set depending on the video camera used and its location.

The examples below show configuration of the "sending" group parameters from streams management configuration for working with ACMS.

  • The turnstile will only open once. To re-open the turnstile you should interrupt the track (move away from the video camera zone of view).
time_period_of_searching = -1
silent_period  = 0
  • The turnstile will open at certain intervals (in this case, every three seconds) if a person stands directly in front of it.
time_period_of_searching = 3    
silent_period  = 0

If the "use_primary_track_policy" parameter is enabled, the best shot is never sent when the track is interrupted.

Formats, video compression standards, and protocols#

FaceStream utilizes the FFMPEG library to convert videos and get a stream using various protocols. All the main formats, video compression standards, and protocols that were tested when working with FaceStream are listed in this section.

FFMPEG supports more formats and video compression standards. They are not listed in this section, because they are rarely used when working with FaceStream.

Video formats#

Video formats that are processed using FaceStream:

  • AVI
  • MP4
  • MOV
  • MKV
  • FLV

Encodings (codecs)#

Basic video compression standards that FaceStream works with:

  • MPEG1, MPEG2, MPEG3, MPEG4
  • MS MPEG4
  • MS MPEG4v2
  • MJPEG
  • H.264
  • H.265
  • VC1
  • HEVC
  • VP8
  • VP9
  • AV1
  • Other

Protocols#

Basic transport layer protocols used by FaceStream to receive data:

  • TCP
  • UDP

Basic application layer protocols used by FaceStream to receive data:

  • HTTP (based on the transport layer protocol TCP)
  • HTTPS (based on the transport layer protocol TCP)
  • RTP (based on the transport layer protocol UDP)
  • RTSP (based on transport layer protocols TCP or UDP)
  • RTMP (based on the transport layer protocol TCP)
  • HLS (based on the transport layer protocol TCP)

To use application layer protocols, you must specify the appropriate transport layer protocol in the "type" parameter of the streams management settings.

Memory consumption when running FaceStream#

This section lists the reasons for increasing RAM consumption when running FaceStream.

  1. Each stream increases memory consumption. The amount of consumed memory depends on the FaceStream settings (see the sketch after this list):

     • Number of FFmpeg threads in the "ffmpeg_threads_number" parameter (streams management configuration).

     • Image cache size in the "stream_images_buffer_max_size" parameter (FaceStream configuration).

     • Buffer sizes set in the "frames-buffer-size" parameter (TrackEngine configuration).

  2. If the number of threads specified in the "ffmpeg_threads_number" parameter (streams management configuration) is greater than "1", memory consumption increases significantly. At the same time, the increase in consumption is extremely slow and can be noticed only after several hours of operation.

     For RTSP streams, you can set the "ffmpeg_threads_number" parameter to "0" or "1" (streams management configuration). In this case, memory growth is not observed.

  3. Memory consumption increases after FaceStream starts. The growth occurs within 1-2 hours and is related to cache filling (see point 1). If no new streams are created and point 2 does not apply, memory consumption stops growing.

  4. Memory consumption increases when the settings in the Debug section are enabled (FaceStream and TrackEngine configurations).
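
A hedged sketch of the memory-related parameters listed above is shown below; the top-level keys only indicate which configuration each parameter belongs to and are not real setting names, and all values are illustrative.

{
    "streams management configuration": {
        "ffmpeg_threads_number": 1
    },
    "FaceStream configuration": {
        "stream_images_buffer_max_size": 40
    },
    "TrackEngine configuration": {
        "frames-buffer-size": 20
    }
}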

Stream playback interface#

FaceStream provides the ability to view a stream in real time. To view the stream, enter the following address in the browser address bar after FaceStream starts processing the stream:

http://127.0.0.1:34569/api/1/streams/preview/<stream_id>.

When objects appear in the camera's field of view, FaceStream displays them in a certain way.

A yellow bounding box appears if a detection fails at least one of the "detection_yaw_threshold", "detection_pitch_threshold" or "detection_roll_threshold" checks.

A red bounding box appears if the detection acceptance score is lower than the value specified in the "min_score_face" or "min_score_body" parameters.

A blue bounding box appears when an object is detected (redetected) or tracked.

A green bounding box appears in all other cases, when all conditions are met.

An orange bounding box appears when using ROI.

Face and body detection collaborative mode#

A collaborative face and body detection mode is available in FaceStream.

Important: The functionality is in beta testing. Some functions may not work.

Collaborative mode is enabled by simultaneously enabling the "use-face-detector" and "use-body-detector" settings. The use of collaborative mode is controlled by the "data" > "analytics" > "mode" setting in the stream management settings.

Also, for the collaborative mode to work, you need to use the V2 API for LUNA Streams. The API version for LUNA Streams is controlled by the "lunastreams" > "api_version" FaceStream settings.

Note: The key feature of the V2 API is that the LUNA PLATFORM address is specified in the "lunaplatform" section of the FaceStream settings, and not in the body of the stream creation request.
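
A hedged sketch of the settings involved in enabling collaborative mode is shown below; the top-level keys only indicate which configuration each parameter belongs to, "1" is used to mean "enabled", and the "mode" value is a placeholder.

{
    "TrackEngine configuration": {
        "use-face-detector": 1,
        "use-body-detector": 1
    },
    "streams management settings": {
        "data": {
            "analytics": {
                "mode": "<collaborative mode value>"
            }
        }
    },
    "FaceStream settings": {
        "lunastreams": {
            "api_version": 2
        }
    }
}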

When you enable collaborative mode, FaceStream sends the following data to the handlers/{handler_id}/stream_events LUNA PLATFORM resource:

  • Face and body best shots
  • Detection time
  • Time relative to the beginning of the video file
  • Coordinates of faces and bodies
  • Source images

In collaborative mode, the threshold "min_score_body" is used.

Periodic sending of best shots in collaborative mode works similarly to working with bodies. This means that the "silent_period" parameter will not work.

Unsupported features#

The following features are not currently used in collaborative mode:

  • Primary Track Policy.
  • Parameter "data" > "analytics" > "send" > "full_frame_settings" that determines which source frames will be sent (face, body, or face and body). Currently, only source face and body frames are sent.
  • Periodic sending of faces. At the moment, the scenario of periodic sending of bodies is working (see above).
  • Parameter "data" > "analytics" > "sending" > "bestshot_settings" > "type" that determines which images will be sent best (face, body, or face and body). At this time, only the best shots of faces and bodies will be featured.