FaceStream configuration#

Settings can be configured in one of the following ways: using configuration files or via the Configurator service.

The settings below are divided into logical blocks according to the main functions each block performs.

Logging section#

This section configures the application logging process. It is responsible for outputting messages about errors and/or the current state of the application.

Severity parameter#

The Severity parameter defines which information the user receives in logs. There are three information filter options:

  • 0 – outputs all the information;

  • 1 – outputs system warnings only;

  • 2 – outputs errors only.

"severity": 
{
    "value": 1,
    "description": "Logging severity levels … "
}

Tags parameter#

Tags enable you to receive information about frame processing and errors only for the FaceStream processes of interest.

This parameter lists the tags associated with logging of the relevant information.

If a corresponding tag is not specified, the information is not logged.

Information on the specified tags is displayed according to the severity parameter value.

The log text includes the corresponding tag, which can be used for log filtering.

Errors are always written to the log and do not require additional tags.

Tags description

| Tag | Description |
|-----|-------------|
| streams | Information about LUNA Streams operation |
| common | General information |
| ffmpeg | Information about FFMPEG library operation |
| gstreamer | Information about GStreamer library operation |
| liveness | Information about the presence of a living person in the frame ("liveness" section): whether there is enough information for the liveness check and whether the frame passes it |
| primary-track | Information about the primary track ("primary_track_policy" section): whether the frame passed the specified thresholds and which track is selected as primary |
| bestshot | Information about best shot selection: best shot occurrence, its change, and sending to an external service |
| angles | Information about filtration by head pose |
| ags | Information about frame quality. The information is used for further processing in LUNA PLATFORM |
| mouth-occlusion | Information about mouth occlusion |
| statistics | Information about performance: the number of frames processed per second and the number of frames skipped |
| image | Information about frame processing |
| http_api | Information about API requests sent to FaceStream in server mode and the received responses |
| client | Information about sending messages to LUNA PLATFORM and the responses received |
| json | Information about processing parameters from configuration files and the Configurator service |
| debug | Debug information. It is recommended to use this tag only when debugging and not during normal FaceStream operation, as it produces a large amount of debugging information |

"tags" : {
    "value" : ["common", "ffmpeg", "gstreamer", "bestshot", "primary-track", "image", "http_api", "client", "json", "streams"], 
    "description" : "Logging specificity tags, full set: [streams, common, ffmpeg, gstreamer, liveness, primary-track, bestshot, angles, ags, mouth-occlusion, statistics, image, http_api, client, json, debug]"
},

Mode parameter#

The Mode parameter sets the logging mode of the application: file or console. There are three modes available:

  • "l2c" – output information to console only;

  • "l2f" – output information to file only;

  • "l2b" – output to both console and file.

"mode":
{
    "value": "l2b",
    "description": " The mode of logging … "
}

When FaceStream works with configuration files, you can set the directory where log files are saved (when output to a file is enabled) using the --log-dir launch parameter.
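
As a sketch, the parameters above might be combined in the logging configuration as follows. The section key name "logging" and the exact nesting are assumptions based on the snippets above; the values are the defaults shown:

"logging" :
{
    "severity" : {
        "value": 1,
        "description": "Logging severity levels"
    },
    "tags" : {
        "value" : ["common", "ffmpeg", "gstreamer", "bestshot", "primary-track", "image", "http_api", "client", "json", "streams"],
        "description" : "Logging specificity tags"
    },
    "mode" : {
        "value": "l2b",
        "description": "The mode of logging"
    }
}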

Sending section#

This section configures sending portraits from FaceStream to external services as HTTP requests.

Request_type section#

Request_type defines the type of request used to send portraits to external services. Two types are supported (for working with different versions of LUNA):

  • "jpeg" is used to send normalized images to VisionLabs LUNA PLATFORM;

  • "json" may be used to send portraits to custom user services or VisionLabs FaceStreamManager for further image processing.

"request_type":
{
    "value": "jpeg",
    "description": " Type of request to server with portrait ..."
},

For a detailed description of the requests, see the table below.

Request types

| Format | Request type | Authorization headers | Body |
|--------|--------------|-----------------------|------|
| JSON | PUT | Authorization: Basic, login/password (Base64) | Media type: application/json; frame – the original frame in Base64 (if the send_source_frame option is enabled); data – a portrait in Base64; identification – the Cid parameter value. JSON example: {"frame":"","data":"image_in_base_64","identification":"camera_1"} |
| JPEG | POST | Authorization: Basic, login/password (Base64) or X-Auth-Token: 11c59254-e83f-41a3-b0eb-28fae998f271 (UUID4) | Media type: image/jpeg |
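
For readability, here is the JSON request body from the table above in formatted form (the field values are placeholders):

{
    "frame": "<source_frame_in_Base64>",
    "data": "<portrait_in_Base64>",
    "identification": "camera_1"
}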

Portrait_type parameter#

This parameter is used only for working with faces.

The Portrait_type parameter defines the format in which a detected face is sent to an external service. Possible values:

  • "warp" - use a normalized image;

  • "gost" - do not apply the transformation; crop the detected area from the source frame with margins.

Properties of the normalized image (warp):

  • size of 250x250 pixels;

  • face is centered;

  • the face is aligned so that an imaginary line connecting the corners of the eyes is close to horizontal.

When working with LUNA PLATFORM, this image format offers the following advantages:

  • a constant, minimal, and predictable amount of data for network transfer;

  • the face detection phase in LUNA PLATFORM is automatically skipped for such images, which significantly reduces interaction time.

"portrait_type":
{
    "value": "warp",
    "description": "Image format type..."
}

Send_source_frame parameter#

This parameter enables sending the full source frame in which the face was detected.

When sending images to LUNA PLATFORM, you should specify the URL of the LUNA Image Store service in the "frame_store" parameter.

"send_source_frame" :
{
    "value": false,
    "description": "Send source frame for portrait from stream (false by default)."
}
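
A possible sketch of the "frame_store" parameter mentioned above, assuming it follows the same value/description structure as the other parameters; the URL is purely illustrative:

"frame_store" :
{
    "value": "http://127.0.0.1:5020",
    "description": "URL of the LUNA Image Store service used to store source frames (illustrative)."
}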

Send_detection_path parameter#

This parameter is used only for working with bodies.

The "send_detection_path" parameter enables you, along with the bestshots, to send a certain number of detections with the coordinates of the human body - x, y, width and height (see the "save event" request in the OpenAPI LUNA PLATFORM document). The maximum number of sent detections is regulated by the "detection_path_length" parameter, the minimum - by the "minimal_body_track_length_to_send" parameter.

The parameter should be used in conjunction with the detection handler > handler_id parameter. When this parameter is enabled, in addition to generating the general event, one more event will be created, associated with the general one by "track_id". This event will contain only the coordinates of the human body.

"send_detection_path" : {
    "value" : false,
    "description" : "Send detection path for Luna api version 6 or higher ('false' by default)."
}

Detection_path_length parameter#

This parameter is used only for working with bodies.

This parameter sets the maximum number of detections for the "send_detection_path" parameter. Values from 1 to 100 inclusive are available.

"detection_path_length" : {
    "value" : 100,
    "description" : "Maximum length of detection path allowed for sending to Luna ('100' by default)."
}

Minimal_body_track_length_to_send parameter#

This parameter is used only for working with bodies.

This parameter sets the minimum number of detections for the "send_detection_path" parameter; if fewer detections are collected, they are not sent. Values from 1 to 100 inclusive are available.

"minimal_body_track_length_to_send" : {
    "value": 3,
    "description" : "Minimal body track length to send"
}
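
As a sketch, the three body detection parameters above might be used together as follows (with "send_detection_path" enabled; the surrounding structure is an assumption, and the other values are the defaults shown above):

"send_detection_path" : {
    "value" : true,
    "description" : "Send detection path for Luna api version 6 or higher."
},
"detection_path_length" : {
    "value" : 100,
    "description" : "Maximum length of detection path allowed for sending to Luna."
},
"minimal_body_track_length_to_send" : {
    "value" : 3,
    "description" : "Minimal body track length to send"
}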

Async_requests parameter#

The parameter specifies whether requests to LUNA PLATFORM are executed asynchronously or synchronously.

By default, asynchronous mode is enabled, in which all requests to LUNA PLATFORM are performed in parallel.

"async_requests" : {
    "value" : true,
    "description" : "Asynchronous requests to Luna server (true by default)."
},

Aggregate_attr_requests parameter#

The "aggregate_attr_requests" parameter enables the bestshots aggregation to receive a single descriptor in LUNA PLATFORM.

Aggregation is performed if there is more than one bestshot sent. The number of frames to send is set by the "number_of_bestshots_to_send" parameter.

The accuracy of face and body recognition is higher when using an aggregated descriptor.

"aggregate_attr_requests" : 
{            
    "value" : true,            
    "description" : "Set aggregate attributes in request to luna api 6 if there are more than one bestshot (true by default)."        
},

Jpeg_quality_level parameter#

The parameter sets the JPEG quality for sending source frames:

  • 'best' - compression is not performed
  • 'good' - 75% of source quality
  • 'normal' - 50% of source quality
  • 'average' - 25% of source quality
  • 'bad' - 10% of source quality

The 'best' quality is set by default.

Sending high-quality images can affect the frame processing speed.

"jpeg_quality_level" : {
    "value" : "best",
    "description" : "Level of jpeg quality for source frames ['best', 'good', 'normal', 'average', 'bad'] ('best' by default)."
}

Lunastreams section#

This section configures the interaction of FaceStream with the LUNA Streams service via HTTP requests.

See "Interaction of FaceStream with LUNA Streams" section for details on how LUNA Streams works with FaceStream.

Origin parameter#

The address and port of the server where the LUNA Streams service is running.

"origin": {
    "value": "http://127.0.0.1:5160",
    "description": "LunaStreams url address."
}

Api_version parameter#

The parameter specifies the API version of the LUNA Streams service. At the moment, the API version "1" is supported.

"api_version": {
    "value": 1,
    "description": "Api version."
}

The current version of the API can always be found in the API service documentation.

Max_number_streams parameter#

The parameter sets the upper bound on the number of streams. The value must be greater than 0.

"max_number_streams": {
    "value": 50,
    "description": "Upper bound on the number of streams FS processes. Value must be greater than 0."
}

Request_stream_period parameter#

The parameter sets the time period between requests to receive new streams from LUNA Streams in the range from 0.1 to 3600 seconds.

The default value is 1 second.

"request_stream_period": {
    "value": 1.0,
    "description": "Time period for requesting new streams from LUNA Streams. Available range [0.1, 3600]. Default value 1 second."

Send_feedback_period parameter#

The parameter sets the time period between sending reports on processed streams to LUNA Streams in the range from 1.0 to 3600 seconds.

The default value is 5 seconds.

The value of this parameter should not exceed the value of the "STREAM_STATUS_OBSOLETING_PERIOD" parameter, set in the LUNA Streams service settings.

"send_feedback_period": {
    "value": 5.0,
    "description": "Time period for sending report of streams. Available range [1.0, 3600]. Default value 5 seconds. Must not be larger than STREAM_STATUS_OBSOLETING_PERIOD in LUNA Streams."
}

Max_feedback_delay parameter#

The parameter sets the maximum report sending delay in the range from 1.0 to 3600 seconds. If the report has not been sent within the given time, then FaceStream will stop processing the current streams.

The default value is 10 seconds.

The value of this parameter should not be less than the value of the [send_feedback_period](7-facestream-configuration.md#send_feedback_period-parameter) parameter and should not exceed the value of the "STREAM_STATUS_OBSOLETING_PERIOD" parameter set in the LUNA Streams service settings.

"max_feedback_delay": {
    "value" : 10.0,
    "description": Max feedback sending delay after which processing streams stops. Available range [1.0, 3600]. Default value 10 seconds. Must not be less than send_feedback_period and larger than STREAM_STATUS_OBSOLETING_PERIOD in LUNA Streams."
}
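
Putting the LUNA Streams parameters together, the section might look like the sketch below. The section key name "lunastreams" is an assumption; the values are the defaults shown above, and "send_feedback_period" does not exceed "max_feedback_delay", as required:

"lunastreams" :
{
    "origin": {
        "value": "http://127.0.0.1:5160",
        "description": "LunaStreams url address."
    },
    "api_version": {
        "value": 1,
        "description": "Api version."
    },
    "max_number_streams": {
        "value": 50,
        "description": "Upper bound on the number of streams FS processes."
    },
    "request_stream_period": {
        "value": 1.0,
        "description": "Time period for requesting new streams from LUNA Streams."
    },
    "send_feedback_period": {
        "value": 5.0,
        "description": "Time period for sending report of streams."
    },
    "max_feedback_delay": {
        "value": 10.0,
        "description": "Max feedback sending delay after which processing streams stops."
    }
}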

Performance section#

Stream_images_buffer_max_size parameter#

The parameter specifies the maximum size of the image buffer for a single stream.

Increasing the parameter value improves FaceStream performance, but higher values require more memory.

We recommend setting this parameter to 40 when working with GPU, if there is enough GPU memory.

"stream_images_buffer_max_size" : {
   "value" : 40,
   "description" : "Max images buffer size for a single stream. Higher value provides better perfomance, but increases memory consumption. When set to 0 buffer is not used. (40 by default)"
   }

Enable_gpu_processing parameter#

This parameter enables you to utilize GPU instead of CPU for calculations.

GPU enables you to speed up calculations, but it increases the consumption of RAM.

GPU calculations are supported for FaceDetV3 only. See the "defaultDetectorType" parameter in the FaceEngine configuration ("faceengine.conf").

"enable_gpu_processing" : {
            "value" : false,
            "description" : "When 'true' the processing is performed using GPU instead of CPU. GPU could provide better perfomance, but increases memory consumption. ('false' by default)"
        },

Convert_full_frame parameter#

If this parameter is enabled, the frame is immediately converted to an RGB image of the required size after decoding. This results in a better image quality but reduces the speed of frames processing.

If this parameter is disabled, the image is scaled according to the settings in the Trackengine configuration (standard behavior for releases 3.2.4 and earlier).

This parameter is similar to the frame_processing_mode parameter, but it applies to all FaceStream instances at once.

"convert_full_frame" : {
            "value" : true,
            "description" : "Enables converting full raw frame from decoder to rgb for processing. If value is 'true', then better quality is achieved. 'false' value provides better perfomance. ('true' by default)"
        }
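
A sketch of the performance parameters combined; the section key name "performance" is an assumption, and the values are the defaults shown above:

"performance" :
{
    "stream_images_buffer_max_size" : {
        "value" : 40,
        "description" : "Max images buffer size for a single stream."
    },
    "enable_gpu_processing" : {
        "value" : false,
        "description" : "When 'true' the processing is performed using GPU instead of CPU."
    },
    "convert_full_frame" : {
        "value" : true,
        "description" : "Enables converting full raw frame from decoder to rgb for processing."
    }
}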

Debug section#

This section is used to configure and debug the application. The settings in this section are not recommended for use in a production environment, since they consume significant resources and negatively affect performance.

Save_debug_info parameter#

The Save_debug_info parameter makes it possible to save information about the detector operation and the recognition results. If the value is "true", the information is saved and used for debugging purposes to analyze the quality of the system.

"save_debug_info" :
{
    "value": false,
    "description": "Save information for quality analysis ..."
}

Save_only_jpegs_with_honest_detections parameter#

The parameter enables saving only the frames with detected faces. It is used for debugging purposes in frame-by-frame analysis.

This setting can significantly save hard disk space if faces rarely appear in the frame.

"save_only_jpegs_with_honest_detections" : 
{
    "value": false,
    "description": "Filter for save_jpegs flag to save only jpegs with honest detections ('false' by default)."
},

Save_jpegs parameter#

The save_jpegs flag is used to save the frames received for processing by the application. The parameter is used for debugging purposes for repeated frame-by-frame analysis.

Saved frames from the original stream may require considerable space on the hard disk.

"save_jpegs" :
{
    "value": false,
    "description": "Save jpegs for research visualization ..."
}
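
For example, a debug configuration that saves frames and detector information for frame-by-frame analysis might look like the sketch below; the section key name "debug" is an assumption, and all values are deliberately set to "true" for illustration:

"debug" :
{
    "save_debug_info" : {
        "value": true,
        "description": "Save information for quality analysis."
    },
    "save_jpegs" : {
        "value": true,
        "description": "Save jpegs for research visualization."
    },
    "save_only_jpegs_with_honest_detections" : {
        "value": true,
        "description": "Filter for save_jpegs flag to save only jpegs with honest detections."
    }
}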