FaceStream configuration#
Settings can be configured in one of the following ways:
- by editing the JSON file .conf/configs/fs3Config.conf (the file location can be changed via the launch arguments);
- by editing parameters in the Configurator service (see "Use FaceStream with Configurator").
All settings listed below are divided into settings common to faces and bodies and settings individual to each, and are grouped into logical blocks according to the functions they control.
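As an illustration, a configuration value such as the logging severity can be read from the JSON file with a few lines of Python. This is only a sketch: the exact nesting of sections inside fs3Config.conf is an assumption here, and the sample below mirrors the parameter layout shown in this document rather than the real file.

```python
import json

# Hypothetical fragment mirroring the "value"/"description" layout of
# the parameters documented below; the real fs3Config.conf may nest
# its sections differently.
sample = """
{
    "logging": {
        "severity": {
            "value": 1,
            "description": "Logging severity levels ..."
        }
    }
}
"""

config = json.loads(sample)
severity = config["logging"]["severity"]["value"]
print(severity)  # 1
```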
Common FaceStream settings for faces and bodies#
Below are the common settings that need to be configured when working with faces or bodies.
Logging#
This section configures application logging. It controls the output of messages about errors and/or the current state of the application.
Severity#
The severity parameter defines which information the user receives in the logs. There are three filter options:
- 0: output all information;
- 1: output system warnings only;
- 2: output errors only.
"severity":
{
"value": 1,
"description": "Logging severity levels … "
}
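The effect of the three severity values can be sketched as a simple gate. The level names below are illustrative, not FaceStream internals:

```python
# Sketch of how the three severity settings gate log output.
# ERROR/WARNING/INFO names are illustrative only.
ERROR, WARNING, INFO = 0, 1, 2

def should_log(severity_setting: int, message_level: int) -> bool:
    """severity 0: everything; 1: warnings and errors; 2: errors only."""
    if severity_setting == 0:
        return True
    if severity_setting == 1:
        return message_level in (ERROR, WARNING)
    return message_level == ERROR

print(should_log(1, INFO))   # False
print(should_log(1, ERROR))  # True
```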
Tags#
Tags enable you to get information about the processing of frames and errors that occur only for FaceStream processes of interest.
This parameter enables you to list tags that are associated with logging of relevant information.
If a corresponding tag is not specified, the information is not logged.
Information on specified tags is displayed according to the severity parameter value.
The log text includes the corresponding tag, which can be used for log filtering.
Errors are always written to the log and do not require additional tags.
Tags description
| Tag | Description |
|---|---|
| common | General information |
| ffmpeg | Information about FFMPEG library operation |
| gstreamer | Information about GStreamer library operation |
| liveness | Information about the presence of a living person in the frame ("liveness" section): whether there is enough information for the liveness check and whether the frame passes it |
| primary-track | Information about the primary track ("primary_track_policy" section): whether the frame passed the specified thresholds and which track is selected as primary |
| bestshot | Information about best shot selection: best shot occurrence, its change, and sending to an external service |
| angles | Information about filtration by head pose |
| ags | Information about frame quality, used for further processing in LUNA PLATFORM |
| mouth-occlusion | Information about mouth occlusion |
| statistics | Information about performance: the number of frames processed per second and the number of frames skipped |
| image | Information about frame processing |
| http_api | Information about API requests sent to FaceStream in server mode and the responses received |
| client | Information about messages sent to LUNA PLATFORM and the responses received |
| json | Information about processing parameters from configuration files and the Configurator service |
| debug | Debug information. It is recommended to use this tag only when debugging and not during normal FaceStream operation, as it produces a large amount of output |
"tags" : {
"value" : ["common", "ffmpeg", "gstreamer", "bestshot", "primary-track", "image", "http_api", "client", "json"],
"description" : "Logging specificity tags, full set: [common, ffmpeg, gstreamer, liveness, primary-track, bestshot, angles, ags, mouth-occlusion, statistics, image, http_api, client, json, debug]"
},
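Because each log line carries its tag, a saved log can also be filtered after the fact. The sketch below assumes a "[tag] message" line format, which is an assumption for illustration; check the actual layout of your logs:

```python
# Post-hoc filtering of a saved log by tag. The "[tag] message"
# line format is assumed for illustration only.
log_lines = [
    "[common] stream 1 started",
    "[bestshot] new best shot for track 42",
    "[ffmpeg] decoder initialized",
]

def filter_by_tags(lines, tags):
    """Keep only lines whose leading "[tag]" token is in the given set."""
    wanted = {f"[{t}]" for t in tags}
    return [ln for ln in lines if ln.split(" ", 1)[0] in wanted]

print(filter_by_tags(log_lines, ["bestshot"]))
```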
Mode#
The mode parameter sets the logging output of the application: console, file, or both. Three modes are available:
- "l2c": output information to console only;
- "l2f": output information to file only;
- "l2b": output to both console and file.
"mode":
{
"value": "l2b",
"description": " The mode of logging … "
}
When outputting information to a file, you can configure the directory for saving logs using the --log-dir launch parameter.
Sending#
This section is used to send portraits as HTTP requests from FaceStream to external services.
Request_type#
The request_type parameter sets the type of request used to send portraits to external services. Two types are supported (for working with different versions of LUNA):
- "jpeg" is used to send normalized images to VisionLabs LUNA PLATFORM;
- "json" can be used to send portraits to custom user services or to VisionLabs FaceStreamManager for further image processing.
"request_type":
{
"value": "jpeg",
"description": " Type of request to server with portrait ..."
},
For a detailed description of the requests, see the table below.
Request types
| Format | Request type | Authorization headers | Body |
|---|---|---|---|
| JSON | PUT | Authorization: Basic, login/password (Base64) | Media type: application/json; frame: the original frame in Base64 (if the send_source_frame option is on); data: a portrait in Base64; identification: the Cid parameter value. JSON example: {"frame":"","data":"image_in_base_64","identification":"camera_1"} |
| JPEG | POST | Authorization: Basic, login/password (Base64) or X-Auth-Token: 11c59254-e83f-41a3-b0eb-28fae998f271 (UUID4) | Media type: image/jpeg |
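To make the table concrete, the body and headers of the JSON variant can be assembled with the standard library alone. This is a sketch of what a custom receiving service would see, not FaceStream's actual sending code; the helper names are hypothetical:

```python
import base64
import json

def build_json_body(portrait_bytes: bytes, camera_id: str,
                    frame_bytes: bytes = b"") -> str:
    """Assemble the JSON body from the table above.

    "frame" is filled only when the send_source_frame option is on."""
    body = {
        "frame": base64.b64encode(frame_bytes).decode() if frame_bytes else "",
        "data": base64.b64encode(portrait_bytes).decode(),
        "identification": camera_id,
    }
    return json.dumps(body)

def basic_auth_header(login: str, password: str) -> dict:
    """Build the Authorization: Basic header (login:password in Base64)."""
    token = base64.b64encode(f"{login}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

print(build_json_body(b"\xff\xd8jpeg-bytes", "camera_1"))
print(basic_auth_header("user", "pass"))
```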
Send_source_frame#
This parameter enables sending the full source frame in which the face was detected.
When sending images to LUNA PLATFORM, specify the Image Store URL in the "image_store_url" parameter.
"send_source_frame" :
{
"value": false,
"description": "Send source frame for portrait from video stream (false by default)."
}
Luna_api#
The version of LUNA PLATFORM API is specified in this parameter. The version is required to create requests to LUNA PLATFORM. API version 6 is supported.
You can always find the current API version in the corresponding documentation of the API service.
"luna_api" : {
"value" : 6,
"description" : "Luna server api version (6 by default)."
},
Async_requests#
The parameter specifies whether to execute requests asynchronously or synchronously in LUNA PLATFORM.
By default, the asynchronous mode is set, in which all requests to the LUNA PLATFORM are performed in parallel.
"async_requests" : {
"value" : true,
"description" : "Asynchronous requests to Luna server (true by default)."
},
Aggregate_attr_requests#
The "aggregate_attr_requests" parameter enables aggregation of best shots to receive a single descriptor in LUNA PLATFORM.
Aggregation is performed if more than one best shot is sent. The number of frames to send is set by the "number_of_bestshots_to_send" parameter.
The accuracy of face and body recognition is higher when an aggregated descriptor is used.
"aggregate_attr_requests" :
{
"value" : true,
"description" : "Set aggregate attributes in request to luna api 6 if there are more than one bestshot (true by default)."
},
Jpeg_quality_level#
JPEG quality for sending source frames:
- 'best': compression is not performed;
- 'good': 75% of source quality;
- 'normal': 50% of source quality;
- 'average': 25% of source quality;
- 'bad': 10% of source quality.
The 'best' quality is set by default.
Sending high-quality images can affect frame processing speed.
"jpeg_quality_level" : {
"value" : "best",
"description" : "Level of jpeg quality for source frames ['best', 'good', 'normal', 'average', 'bad'] ('best' by default)."
}
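The level-to-quality relation above can be captured as a simple lookup table. The mapping below restates the documented percentages; it is an illustration, not FaceStream's internal representation:

```python
# Illustrative mapping of jpeg_quality_level values to the approximate
# JPEG quality percentages described above. 'best' skips re-compression,
# so it is represented as None here.
JPEG_QUALITY = {
    "best": None,     # compression is not performed
    "good": 75,
    "normal": 50,
    "average": 25,
    "bad": 10,
}

level = "good"
quality = JPEG_QUALITY[level]
print(quality)  # 75
```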
In the "Sending" section, individual parameters are also available for face settings or body settings.
Performance#
Stream_images_buffer_max_size#
The parameter specifies the maximum size of the image buffer for a single stream.
Increasing the value improves FaceStream performance; the higher the value, the more memory is required.
We recommend setting this parameter to 40 when working with GPU, if there is enough GPU memory.
"stream_images_buffer_max_size" : {
"value" : 40,
"description" : "Max images buffer size for a single stream. Higher value provides better perfomance, but increases memory consumption. When set to 0 buffer is not used. (40 by default)"
}
Enable_gpu_processing#
This parameter enables you to use GPU instead of CPU for calculations.
GPU speeds up calculations but increases memory consumption.
GPU calculations are supported for FaceDetV3 only. See the "defaultDetectorType" parameter in "faceengine.conf".
"enable_gpu_processing" : {
"value" : false,
"description" : "When 'true' the processing is performed using GPU instead of CPU. GPU could provide better perfomance, but increases memory consumption. ('false' by default)"
},
Convert_full_frame#
If this parameter is enabled, the frame is immediately converted to an RGB image of the required size after decoding. This results in a better image quality but reduces the speed of frames processing.
If this parameter is disabled, the image is scaled according to the settings in the "trackengine.conf" file (standard behavior for releases 3.2.4 and earlier).
This parameter is similar to the frame_processing_mode parameter, but it is set for all FaceStream instances at once.
"convert_full_frame" : {
"value" : true,
"description" : "Enables converting full raw frame from decoder to rgb for processing. If value is 'true', then better quality is achieved. 'false' value provides better perfomance. ('true' by default)"
}
Debug#
This section is used for configuring and debugging the application. The settings in this section are not recommended for use in a production environment, since they consume significant resources and negatively affect performance.
Save_debug_info#
The save_debug_info parameter allows you to save information about detector operation and recognition results. It is used for debugging, for calculating various statistics, and for quality analysis of the system.
"save_debug_info" :
{
"value": false,
"description": "Save information for quality analysis ..."
}
Individual FaceStream settings for faces#
Below are the settings that need to be configured when working only with faces.
Debug#
Save_only_jpegs_with_honest_detections#
This parameter enables saving only the frames with detected faces. It is used for debugging purposes in frame-by-frame analysis.
This setting can significantly save hard disk space if faces rarely appear in the frame.
"save_only_jpegs_with_honest_detections" :
{
"value": false,
"description": "Filter for save_jpegs flag to save only jpegs with honest detections ('false' by default)."
},
Save_jpegs#
The save_jpegs flag is used to save the frames received for processing in the application. The parameter is used for debugging purposes, for repeated frame-by-frame analysis.
Saved frames from the original video stream may require considerable space on the hard disk.
"save_jpegs" :
{
"value": false,
"description": "Save jpegs for research visualization ..."
}
Sending#
Portrait_type#
The portrait_type parameter defines the format of a detected face sent to an external service. Possible values:
- "warp": use a normalized image;
- "gost": do not apply the transformation; crop the detected area from the source frame, taking margins into account.
Properties of the normalized image (warp):
- size of 250x250 pixels;
- the face is centered;
- the face is aligned so that an imaginary line connecting the corners of the eyes is close to horizontal.
When working with LUNA PLATFORM, this image format offers the following advantages:
- a constant, minimal, predictable amount of data for network transfer;
- the face detection phase in LUNA PLATFORM is automatically skipped for such images, which significantly reduces interaction time.
"portrait_type":
{
"value": "warp",
"description": "Image format type..."
}
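The eye-alignment property of a warp can be checked numerically: given the coordinates of the two eye corners, the angle of the line through them should be close to zero. This is a sketch for verifying landmarks you already have, not part of the FaceStream API:

```python
import math

def eye_line_angle(left_eye, right_eye):
    """Angle (in degrees) of the line through the eye corners
    relative to the horizontal axis of the image."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

# In an aligned 250x250 warp the eyes sit at nearly the same height,
# so the angle is close to 0 degrees. Coordinates here are made up.
print(abs(eye_line_angle((95, 110), (155, 112))))
```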
Individual FaceStream settings for bodies#
Below are the settings that need to be configured when working only with bodies.
Sending#
Send_detection_path#
The send_detection_path parameter enables you, along with the best shots, to send a certain number of detections with the coordinates of the human body: x, y, width, and height (see the "save event" request in the OpenAPI LUNA PLATFORM document). The maximum number of detections sent is set by the detection_path_length parameter, and the minimum by the minimal_body_track_length_to_send parameter.
The parameter should be used in conjunction with the luna_dynamic_human_handler_id parameter. When this parameter is enabled, in addition to the general event, one more event is created, associated with the general one by "track_id". This event contains only the coordinates of the human body.
"send_detection_path" : {
"value" : false,
"description" : "Send detection path for Luna api version 6 or higher ('false' by default)."
}
Detection_path_length#
This parameter sets the maximum number of detections for the send_detection_path parameter.
Values from 1 to 100 inclusive are available.
"detection_path_length" : {
"value" : 100,
"description" : "Maximum length of detection path allowed for sending to Luna ('100' by default)."
}
Minimal_body_track_length_to_send#
This parameter sets the minimum number of detections for the send_detection_path parameter: if a track contains fewer detections than this value, they are not sent.
Values from 1 to 100 inclusive are available.
"minimal_body_track_length_to_send" : {
"value": 3,
"description" : "Minimal body track length to send"
}
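The interplay of detection_path_length and minimal_body_track_length_to_send can be sketched as follows. The truncation-from-the-end behavior is an assumption made for illustration; only the maximum/minimum semantics come from the documentation above:

```python
def detections_to_send(track, detection_path_length=100,
                       minimal_body_track_length_to_send=3):
    """Sketch of how the two limits interact (assumed semantics):
    tracks shorter than the minimum send nothing; longer tracks are
    truncated to at most detection_path_length detections."""
    if len(track) < minimal_body_track_length_to_send:
        return []
    return track[-detection_path_length:]

# A made-up track of body detections with x, y, width, height.
track = [{"x": i, "y": 2 * i, "width": 50, "height": 120} for i in range(5)]
print(len(detections_to_send(track, detection_path_length=4)))  # 4
```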