FaceStream Configuration#

FaceStream settings can be configured in several ways.

The settings below are grouped into logical sections according to their function.

Logging section#

This section configures application logging. It is responsible for the output of messages about errors and the current state of the application.

Severity#

The severity parameter defines which information is written to the logs. There are three filter levels:

  • 0 – output all information;

  • 1 – output system warnings only;

  • 2 – output errors only.

"severity":
{
    "value": 1,
    "description": "Logging severity levels … "
}

Tags#

Tags enable you to get information about the processing of frames and errors that occur only for FaceStream processes of interest.

This parameter enables you to list tags that are associated with logging of relevant information. If a tag is not specified, the information is not logged. Information on specified tags is displayed according to the severity parameter value.

Table: Logging tags

Tag | Description
ffmpeg | Information about FFMPEG library operation is logged
bestshot | Information about best shot selection is logged: best shot occurrence, its change, and sending to an external service
primary-track | Information about the primary track is logged (the "primary_track_policy" section): whether the frame passed the specified thresholds and which track is selected as primary
liveness | Information about the presence of a living person in the frame is logged (the "liveness" section): whether there is enough information for the liveness check and whether the frame passes it
angles | Information about filtering by head pose is logged
mouth-occlusion | Information about mouth occlusion is recorded to the log file
ags | Information about frame quality is logged. This information is used for further processing in LUNA PLATFORM
statistics | Information about performance is logged: the number of frames processed per second and the number of frames skipped
"tags" : {
    "value" : ["ffmpeg", "bestshot", "primary-track"],
    "description" : "Logging specificity tags, full set: [ffmpeg, bestshot, primary-track, liveness, angles, ags]"
},

Mode#

The mode parameter sets the logging output of the application: file, console, or both. Three modes are available:

  • "l2c" – output information to console only;

  • "l2f" – output information to file only;

  • "l2b" – output to both, console and file.

"mode":
{
    "value": "l2b",
    "description": "The mode of logging … "
}

When outputting information to a file, you can configure the directory for saving logs using the -Ld launch parameter.

Sending#


This section is used to send portraits as HTTP requests from FaceStream to external services.

Request-type#

Request-type is the type of request used to send portraits to external services. Two types are supported (for working with different versions of LUNA):

  • "jpeg" is used to send normalized images to VisionLabs LUNA PLATFORM;

  • "json" may be used to send portraits to custom user services or VisionLabs FaceStreamManager for further image processing.

"request-type":
{
"value": "jpeg",
"description": " Type of request to server with portrait ..."
},

For a detailed description of the requests, see the table below.

Table: Request types

Format | Request type | Authorization headers | Body
JSON | PUT | Authorization: Basic, login/password (Base64) | Media type: application/json. Fields: "frame" – the original frame in Base64 (if the send-source-frame option is enabled); "data" – the portrait in Base64; "identification" – the Cid parameter value. JSON example: {"frame":"","data":"image_in_base_64","identification":"camera_1"}
JPEG | POST | Authorization: Basic, login/password (Base64) or X-Auth-Token: 11c59254-e83f-41a3-b0eb-28fae998f271 (UUID4) | Media type: image/jpeg
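As an illustration of the JSON request described above, the body can be assembled as follows. This is a sketch under assumptions: the helper name, placeholder bytes, and credentials are examples, not part of FaceStream.

```python
import base64
import json

def build_json_body(portrait: bytes, source_frame: bytes = b"", cid: str = "camera_1") -> str:
    """Assemble the application/json body for the PUT request (sketch)."""
    return json.dumps({
        # Original frame in Base64; empty unless send-source-frame is enabled
        "frame": base64.b64encode(source_frame).decode() if source_frame else "",
        # Portrait in Base64
        "data": base64.b64encode(portrait).decode(),
        # Cid parameter value identifying the source
        "identification": cid,
    })

# Basic authorization header: login/password pair encoded in Base64
auth_header = "Basic " + base64.b64encode(b"login:password").decode()

print(build_json_body(b"portrait bytes"))
```

An external service receiving such a request only needs to Base64-decode the "frame" and "data" fields to recover the images.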

Portrait-type#

The portrait-type parameter defines the format of the detected face sent to an external service. Possible values:

  • "warp" - use a normalized image;

  • "gost" - do not transform the image; crop the detected area from the source frame with a margin.

Properties of the normalized image (warp):

  • size of 250x250 pixels;

  • the face is centered;

  • the face is aligned so that an imaginary line connecting the corners of the eyes is close to horizontal.

When working with LUNA PLATFORM, this image format offers the following advantages:

  • a constant, minimal, predictable amount of data for network transfer;

  • the face detection phase in LUNA PLATFORM is automatically skipped for such images, which significantly reduces interaction time.

"portrait-type":
{
"value": "warp",
"description": "Image format type..."
}

Send-source-frame#

The send-source-frame parameter enables sending the full source frame in the request in addition to the portrait image.

Available only if request-type = json.

"send-source-frame":
{
"value": false,
"description": "Send source frame for portrait from video stream (false by default)."
}

Luna-api#

The version of LUNA PLATFORM API is specified in this parameter. The version is required to create requests to LUNA PLATFORM.

Table: LUNA PLATFORM API versions

Version | Value
LUNA PLATFORM 2 | 3
LUNA PLATFORM 3 | 4
LUNA PLATFORM 4 | 5

You can find the current API version in the corresponding documentation of the API service.

"luna-api" : {
    "value" : 3,
    "description" : "Luna server api version (3 by default)."
},
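The version mapping in the table above can be expressed as a small lookup; the helper name is hypothetical and not part of FaceStream:

```python
# LUNA PLATFORM generation -> "luna-api" config value (per the table above)
LUNA_API_VALUES = {2: 3, 3: 4, 4: 5}

def luna_api_value(platform_version: int) -> int:
    """Return the "luna-api" value for a given LUNA PLATFORM generation."""
    if platform_version not in LUNA_API_VALUES:
        raise ValueError(f"Unknown LUNA PLATFORM version: {platform_version}")
    return LUNA_API_VALUES[platform_version]

print(luna_api_value(4))  # → 5
```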

Async-requests#

The parameter specifies whether requests to LUNA PLATFORM are executed asynchronously or synchronously. It works with all versions of LUNA PLATFORM.

By default, the asynchronous mode is set, in which all requests to the platform are performed in parallel.

"async-requests" : {
"value" : true,
"description" : "Asynchronous requests to Luna server (true by default)."
},
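The difference between the two modes can be illustrated with a thread pool; this is purely a sketch, not FaceStream's implementation, and send_request is a stand-in for an HTTP call to LUNA PLATFORM:

```python
from concurrent.futures import ThreadPoolExecutor

def send_request(payload: str) -> str:
    # Stand-in for an HTTP request to LUNA PLATFORM
    return f"sent {payload}"

def dispatch(payloads, async_requests=True):
    if async_requests:
        # Asynchronous mode: requests to the platform run in parallel
        with ThreadPoolExecutor() as pool:
            return list(pool.map(send_request, payloads))
    # Synchronous mode: one request at a time
    return [send_request(p) for p in payloads]

print(dispatch(["a", "b"], async_requests=True))
```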

Aggregate-attr-requests#

The "aggregate-attr-requests" parameter enables aggregation of best shots into a single descriptor in LUNA PLATFORM 4 (API version 5). Aggregation is performed when more than one best shot is sent.

The accuracy of face recognition is higher when using an aggregated descriptor.

To use this setting, specify the aggregation parameter ("aggregate_attributes=1") in the query parameters of the "generate events" request.

/handlers//events?aggregate_attributes=1

See the "Url" section.

"aggregate-attr-requests" : {
    "value" : true,
    "description" : "Set aggregate attributes in request to luna api 5 if there are more than one bestshot (true by default)."
},

Luna-account-id#

The parameter specifies the mandatory "luna-account-id" field, which is passed to LUNA PLATFORM 4 in the request header.

Account ID is set in the UUID4 format. You can find the requirements for the Account ID in the LUNA PLATFORM 4 documentation.

The parameter is used to bind the received data to a specific user.

"luna-account-id" : {
    "value" : "aaba1111-2111-4111-a7a7-5caf86621b5a",
    "description" : "Luna-account-Id header value for Luna api version 5 or higher ('' by default)."
}
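Since the account ID must be a UUID4, it can be validated before being placed in the request header. A sketch using Python's standard uuid module; the helper name and header composition are assumptions, not FaceStream's code:

```python
import uuid

def account_headers(account_id: str) -> dict:
    """Validate a UUID4 account ID and build the request header (sketch)."""
    # uuid.UUID(..., version=4) coerces version bits, so compare the
    # canonical form back against the input to reject non-UUID4 strings
    parsed = uuid.UUID(account_id, version=4)
    if str(parsed) != account_id.lower():
        raise ValueError("Account ID is not a canonical UUID4 string")
    return {"Luna-Account-Id": account_id}

print(account_headers("aaba1111-2111-4111-a7a7-5caf86621b5a"))
```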

Web_tasks#

The section contains settings for working with the message queue for the server mode.

    "web_tasks": {
        "concurrent-max-count" : {
            "value" : 3,
            "description" : "Max count of concurrent tasks being processed, (3 by default)."
        },
        "max-file-size" : {
            "value" : 52428800,
            "description" : "Max file size (in bytes) downloaded for web task. (52428800 by default)."
        }
    },

Concurrent-max-count#

The parameter determines the maximum number of concurrently processed tasks.

"concurrent-max-count" : {
"value" : 3,
"description" : "Max count of concurrent tasks being processed, (3 by default)."
},

Max-file-size#

The parameter specifies the maximum size (in bytes) of a file that can be downloaded in server mode. The default value of 52428800 bytes equals 50 MB.

"max-file-size" : {
       "value" : 52428800,
       "description" : "Max file size (in bytes) downloaded for web task. (52428800 by default)."
    }

Performance#

Stream-images-buffer-max-size#

The parameter specifies the maximum size of buffer with images for a single stream.

Increasing this value improves FaceStream performance but also increases memory consumption.

We recommend setting this parameter to 20 when working with GPU, if there is enough GPU memory.


"stream-images-buffer-max-size" : {
    "value" : 10,
    "description" : "Max images buffer size for a single stream. Higher value provides better perfomance, however it increases memory consumption. When set to 0 buffer is not used. (10 by default)"
    }

Enable-gpu-processing#

This parameter enables using the GPU instead of the CPU for calculations.

The GPU speeds up calculations but increases RAM consumption.

GPU calculations are not supported on Windows.

GPU calculations are supported for FaceDetV3 only. See "defaultDetectorType" parameter in "faceengine.conf".

"enable-gpu-processing" : {
            "value" : false,
            "description" : "When 'true' the processing is performed using GPU instead of CPU. GPU could provide better perfomance, however it increases memory consumption. ('false' by default)"
        },

Convert-full-frame#

If this parameter is enabled, the frame is converted to an RGB image of the required size immediately after decoding. This gives better image quality but reduces frame processing speed.

If this parameter is disabled, the image is scaled according to the settings in the "trackengine.conf" file (the standard behavior for releases 3.2.4 and earlier).

This parameter is similar to Frame-processing-mode, but it applies to all FaceStream instances at once.

"convert-full-frame" : {
            "value" : true,
            "description" : "Enables converting full raw frame from decoder to rgb for processing. If value is 'true', then better quality is achieved. 'false' value provides better perfomance. ('true' by default)"
        }

Debug#

This section is used for configuring and debugging the application. The settings in this section are not recommended for a production environment, since they consume significant resources and negatively affect performance.

Draw-face-points#

Enables drawing the key face points (nose, eyes, mouth) on the resulting image. Points are drawn only on the image displayed during visualization. This parameter takes effect only when show-window = true.

"draw-face-points":
{
"value": false,
"description": "Draw face point on image … "
}

Show-window#

If this setting is enabled, the application displays the video stream from the camera in a GUI window together with the detector results.

This functionality is available only on an OS with a GUI. FaceStream must be launched from a console within the graphical environment.

"show-window":
{
"value": false,
"description": " Show video window (false by default)."
}

Frames-per-second#

The maximum number of frames per second (FPS) rendered by the application. Frames above this limit are skipped. This parameter takes effect only when show-window = true.

"frames-per-second":
{
    "value": 20,
    "description": "Maximum frames per second (all other will be skipped)  … "
}

Show-tracker-detection#

The show-tracker-detection parameter manages the visualization of detections in tracks. If the detector finds no confirmation for an existing track in the current frame, the application predicts the track's detection position based on information from previous frames. This parameter takes effect only when show-window = true.

Possible values:

  • true - all detections are displayed in the visualization window;

  • false - only approved detections are displayed in the visualization window.

"show-tracker-detection":
{
    "value": false,
    "description": " Show tracker detections (false by default)."
}

Similarity-level-for-recognition#

The similarity-level-for-recognition parameter controls the display of recognition results in the application GUI. Available only if request-type = jpeg and show-window = true.

Possible values:

  • value < 0 – no recognition results are shown;

  • value in [0, 1] – recognition results are displayed when the match is equal to or above similarity-level-for-recognition.

"similarity-level-for-recognition":
{
    "value": "-1.0",
    "description": "Value [0 .. 1] - show request result in show-windows ..."
}

Save-debug-info#

Save-debug-info saves information about detector operation and recognition results. This parameter is used for debugging, for calculating various statistics, and for quality analysis of the system.

"save-debug-info":
{
"value": false,
"description": "Save information for quality analysis ..."
}

Save-jpegs#

The save-jpegs flag saves the frames received for processing by the application. The parameter is used for debugging purposes, for repeated frame-by-frame analysis.

Saved frames from the original video-stream may require considerable space on the hard disk.

"save-jpegs":
{
"value": false,
"description": "Save jpegs for research visualization ..."
}

Save-only-jpegs-with-honest-detections#

This parameter restricts frame saving to frames with detected faces. It is used for debugging purposes in frame-by-frame analysis.

"save-only-jpegs-with-honest-detections": {
    "value": false,
    "description": "Filter for save-jpegs flag to save only jpegs with honest detections ('false' by default)."
},

Use-smoothed-rects#

The use-smoothed-rects parameter provides better detection visualization. If the parameter is enabled (its value equals "true"), the rectangles (bounding boxes around detected faces) smoothly follow the face during visualization.

"use-smoothed-rects":
{
       "value": false,
       "description": "Draw smoothed detection rectangles for visualization ('false' by default)."
}

Visualize-liveness#

The visualize-liveness parameter enables displaying Liveness information in the GUI.

The "Alive" or "Fake" text is displayed for each face in the frame according to the Liveness results.

Information about the primary track is displayed in the upper left corner, including the number of the track that is currently primary:

  • HAVE BESTSHOTS – indicates that the primary track has a best shot that can be sent to the server;

  • BESTSHOTS WERE SENDED – indicates that the best shot has been sent to the server.

Two columns of numbers show the last ten Liveness results, separately for the head and shoulders: the left column displays the results for the head area, the right column for the shoulders area.

To use the visualize-liveness parameter, set the "show-window" parameter to "true".

In the ./data/input.json file, you should set true for the parameters use_primary_track_policy and use_liveness_filtration.

"visualize-liveness" : {
    "value" : false,
    "description" : "Visualize liveness activity on video window. ('false' by default)."
}