Additional Information#

Liveness description#

The liveness algorithm enables LUNA PLATFORM to detect presentation attacks. A presentation attack is a situation when an imposter tries to use a video or photos of another person to circumvent the recognition system and gain access to the person's private data.

There are the following general types of presentation attacks:

  • Printed Photo Attack. One or several photos of another person are used.
  • Video Replay Attack. A video of another person is used.
  • Printed Mask Attack. An imposter cuts out a face from a photo and covers his face with it.

Switch Liveness type#

There are two Liveness mechanisms available: Liveness V1 and Liveness V2. You can utilize only one Liveness at a time.

The Liveness mechanism used is specified in the license. The following values can be set in the license for the Liveness feature:

  • 0 - Liveness feature is not used.
  • 1 - Liveness V1 is used.
  • 2 - Liveness V2 is used.

Liveness V1 is launched as a separate service, whereas Liveness V2 is a part of the Handlers service. As Liveness V1 is a separate service, it should be enabled using the "liveness" option of the "ADDITIONAL_SERVICES_USAGE" section in the Configurator service. You can perform the Liveness V1 check using the "/liveness" resource only, while the Liveness V2 check can be performed using the "/handlers", "/verifiers", "/sdk" and "/liveness" resources.

The tables below show the system behavior when different license values are set.

Relations between set options and Liveness used for the "/liveness" resource:

| License value | "liveness" option | Used liveness/error |
|---------------|-------------------|---------------------|
| 0 | true | Error 403 is returned |
| 0 | false | Error 403 is returned |
| 1 | true | Liveness V1 is used |
| 1 | false | Error 403 is returned |
| 2 | true | Error 403 is returned |
| 2 | false | Liveness V2 is used |

To use Liveness V1 with the "/liveness" resource, the license value must be set to "1" and the "liveness" option must be set to "true".

To use Liveness V2 with the "/liveness" resource, the license value must be set to "2" and the "liveness" option must be set to "false".

All the other combinations lead to the 403 error when requesting the "/liveness" resource.

Relations between license value and Liveness used for the "/handlers", "/verifiers" and "/sdk" resources:

| License value | Used liveness/error |
|---------------|---------------------|
| 0 | Error 403 is returned |
| 1 | Error 403 is returned |
| 2 | Liveness V2 is used |

When estimate_liveness=1 is set for these resources, Liveness V2 must be enabled in the license, and the "liveness" option of the "ADDITIONAL_SERVICES_USAGE" section must be disabled. In all other cases, the 403 error is returned.
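For clarity, below is a minimal sketch of a Liveness V2 check via the "/liveness" resource using the Python requests library. The origin, API version (6) and account ID are illustrative values taken from the examples in this manual; the exact request schema is defined in the OpenAPI specification.

import requests

# Illustrative values: adjust the origin and account ID to your installation.
ORIGIN = "http://127.0.0.1:5000"
API_VERSION = 6
ACCOUNT_ID = "6d071cca-fda5-4a03-84d5-5bea65904480"

with open("image.jpg", "rb") as f:
    response = requests.post(
        f"{ORIGIN}/{API_VERSION}/liveness",
        headers={
            "Luna-Account-Id": ACCOUNT_ID,
            "Content-Type": "image/jpeg",
        },
        data=f.read(),
    )

# 403 is returned when the license value and the "liveness" option
# do not match the combinations from the table above.
print(response.status_code)
print(response.json())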

Liveness check results#

The Liveness algorithm uses a single image for processing and returns the following data:

  • Liveness probability [0..1]. Here 1 means real person, 0 means spoof. The parameter shows the probability of a live person being present in the image, i.e. it is not a presentation attack. In general, the estimated probability must exceed the "Liveness Threshold".

  • Image quality [0..1]. Here 1 means good quality, 0 means bad quality. The parameter describes the integral value of image, facial, and environmental characteristics. In general, the estimated quality must exceed the "Image Threshold".

  • Prediction. Based on the above data, LUNA PLATFORM issues one of the following predictions:

    • 0 (spoof). The check revealed that the person is not real.
    • 1 (real). The check revealed that the person is real.
    • 2 (unknown). The result of the check is unknown. Such a prediction may be returned if the quality of the checked image is below the "Image Threshold".
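The mapping between the returned values and the prediction can be illustrated with the following sketch; the thresholds are assumed to be the Liveness V2 defaults described in the "Changing threshold on different resources" section below.

IMAGE_THRESHOLD = 0.5     # "Image Threshold", Liveness V2 default (50%)
LIVENESS_THRESHOLD = 0.5  # "Liveness Threshold", default (50%)

def prediction(liveness_probability: float, image_quality: float) -> int:
    """Return 0 (spoof), 1 (real) or 2 (unknown)."""
    if image_quality < IMAGE_THRESHOLD:
        return 2  # image quality is too low to perform the check
    return 1 if liveness_probability >= LIVENESS_THRESHOLD else 0

print(prediction(0.98, 0.82))  # -> 1 (real)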

Liveness V2#

Liveness V2 is a part of the Handlers service and is used in the "/liveness", "/sdk", "/handlers" and "/verifiers" resources.

You can filter events by liveness in the "/handlers/{handler_id}/events" and "/verifiers/{verifier_id}/verifications" resources, i.e. you can exclude "spoof", "real" or "unknown" results from image processing.

Filtering by liveness is available for the following scenarios:

You can also specify the liveness estimation parameter when manually creating and saving events in the "/handlers/{handler_id}/events/raw" resource.

For multiple uploaded images, you can aggregate the liveness results to obtain more accurate data.

For Liveness V2, the ability to license the number of completed transactions is available. When licensing Liveness V2, you can choose between an unlimited license and a license with a limited number of transactions. After the transaction limit is exhausted, it becomes impossible to use the Liveness V2 estimation in requests. Requests that do not use Liveness V2, or in which the Liveness V2 estimation is disabled, are not affected by the exhaustion of the limit and continue to work as usual.

Liveness V2 requirements#

Liveness estimation is not supported for samples (warped images) as they do not meet the requirements for incoming images.

The following requirements are related to Liveness V2 only.

This estimator supports images taken on mobile devices or webcams (PC or laptop).

Image resolution minimum requirements:

  • Mobile devices - 720 × 960 px
  • Webcam (PC or laptop) - 1280 × 720 px

There should be only one face in the image. An error occurs when there are two or more faces in the image.

The minimum face detection size must be 200 pixels.

Yaw, pitch, and roll angles should be no more than 25 degrees in either direction.

The minimum indent between the face and the image borders should be 10 pixels.
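Some of these requirements can be pre-checked on the client side before sending the image. Below is an illustrative sketch using the Pillow library (not part of LUNA PLATFORM); it covers only the resolution requirement from the list above.

from PIL import Image

# Minimum resolution (width, height) in pixels, per the requirements above.
MIN_SIZES = {
    "mobile": (720, 960),
    "webcam": (1280, 720),
}

def meets_resolution(path: str, source: str) -> bool:
    """Check that the image satisfies the minimum resolution requirement."""
    min_w, min_h = MIN_SIZES[source]
    with Image.open(path) as img:
        width, height = img.size
    return width >= min_w and height >= min_h

print(meets_resolution("image.jpg", "webcam"))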

Liveness V1#

Liveness V1 is used in the "/liveness" resource only. If this liveness is enabled and you use other resources with Liveness estimation (e. g., "/sdk"), the 403 error is returned.

Additional request parameters#

Liveness V1 provides additional request parameters.

You can specify the device OS type in the "OS" field of the "meta" object in the request:

  • IOS
  • ANDROID
  • DESKTOP
  • UNKNOWN

The parameter can decrease the overall error rate.
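A hedged sketch of passing this parameter is shown below. How exactly the "meta" object is transferred in the "/liveness" request body is defined in the OpenAPI specification, so treat the field layout as an assumption.

import json
import requests

# Assumption: the "meta" object with the "OS" field is sent together with
# the image; check the OpenAPI specification for the exact body schema.
meta = {"OS": "ANDROID"}  # one of IOS, ANDROID, DESKTOP, UNKNOWN

with open("image.jpg", "rb") as f:
    response = requests.post(
        "http://127.0.0.1:5000/6/liveness",
        headers={"Luna-Account-Id": "6d071cca-fda5-4a03-84d5-5bea65904480"},
        files={"image": f},
        data={"meta": json.dumps(meta)},
    )

print(response.status_code)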

Liveness V1 requirements#

Liveness estimation is not supported for samples (warped images) as they do not meet the requirements for incoming images.

There are certain requirements for image quality and face alignment that must be met to get correct results.

Face requirements:

  • A face should be fully uncovered, without any occlusions. The more the face area is occluded, the lower the liveness estimation accuracy.
  • A face should be fully visible within a frame and should have padding around it (the distance between the face and the image boundaries). The default minimum padding is 25 pixels. Cropping is not allowed.
  • Yaw and pitch angles should be no more than 20 degrees in either direction.
  • The roll angle should be no more than 30 degrees in either direction.
  • The minimal distance between the eyes should be about 90 pixels (it is forbidden to set the value lower than 80 pixels).
  • There should be a single face in the image. It is recommended to avoid several faces being present in the image.
  • No sunglasses.

Capture requirements:

  • No blur (increases BPCER).
  • No texture filtering (increases APCER).
  • No spotlights on the face and close surroundings (increases BPCER).
  • No colored light (increases BPCER).
  • The face in the image must not be too light or too dark (increases BPCER).
  • No fish-eye lenses.

APCER (Attack Presentation Classification Error Rate) — the rate of undetected attacks where algorithms identified the attack as a real person.

BPCER (Bona Fide Presentation Classification Error Rate) — the rate of incorrectly identified people where algorithms identified real people as spoofs.

Image requirements:

  • Horizontally and vertically oriented images of 720p and 1080p.

  • Minimal image height: 480 px.

  • No or minimal image compression. Compression highly influences the liveness algorithms.

Changing threshold on different resources#

By default, two thresholds are used to check Liveness:

  • "Image Threshold" - the threshold of the processed image quality, lower which no check will be performed and the result "unknown" will be given. This threshold is set in the system and does not imply a manual change of the value, however, it can be changed in the query parameters of "/handlers" and "/verifiers" requests. The default is 50% for Liveness V2 and 20% for Liveness V1.
  • "Liveness Threshold" - the threshold lower which the system will consider the result as a presentation attack ("spoof"). This threshold is set in the Configurator service settings for each Liveness version, and can also be changed directly in the query parameters of "/handlers" and "/verifiers" requests. The default is 50% for both versions of Liveness.

The specifics of setting thresholds in various resources are described below.

Changing the threshold is not a mandatory action. To check Liveness, you can use the thresholds set by default.

Changing thresholds on "/handlers" and "/verifiers" resources#

Resources "/handlers" or "/verifiers" are preferred for checking Liveness V2. You can set the following query parameters for them:

  • "quality_threshold" - sets the "Image Threshold"
  • "liveness_threshold" - sets the "Liveness Threshold"

Setting these thresholds redefines the standard LUNA PLATFORM values for Liveness described above.
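For example, both thresholds can be overridden in a "generate events" request. The sketch below uses the Python requests library; the handler ID and origin are illustrative values.

import requests

handler_id = "0dde5158-e643-45a6-8a4d-ad42448a913b"  # illustrative ID

with open("image.jpg", "rb") as f:
    response = requests.post(
        f"http://127.0.0.1:5000/6/handlers/{handler_id}/events",
        params={
            "quality_threshold": 0.5,   # "Image Threshold"
            "liveness_threshold": 0.6,  # "Liveness Threshold"
        },
        headers={
            "Luna-Account-Id": "6d071cca-fda5-4a03-84d5-5bea65904480",
            "Content-Type": "image/jpeg",
        },
        data=f.read(),
    )

print(response.status_code)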

Changing thresholds on "/liveness" and "/sdk" resources#

For resources "/liveness" and "/sdk", manual change of the "Image Threshold" value is not implied, i.e. when using these resources, the threshold will always be 50% for Liveness V2 and 20% for Liveness V1.

The "Liveness Threshold" value is set in the Configurator service using the "LUNA_HANDLERS_LIVENESS_SETTINGS" settings for Liveness V2 and "LIVENESS_THRESHOLD" for Liveness V1.

Note. If the quality of the checked image is below the standard "Image Threshold" (50% or 20%), setting thresholds in the "LUNA_HANDLERS_LIVENESS_SETTINGS" or "LIVENESS_THRESHOLD" settings has no effect, because the result of the Liveness check will be "unknown".

General information about services#

Worker processes#

You can set the number of worker processes to use additional central processing units for request handling. A service automatically spins up multiple processes and routes traffic between them.

Note the number of available cores on your server when utilizing this feature.

Worker processes are an alternative way of linear service scaling. It is recommended to use additional worker processes instead of increasing the number of service instances on the same server.

It is not recommended to use additional worker processes for the Handlers service when it utilizes GPU. Problems may occur if there is not enough GPU memory, and the workers will interfere with each other.

You can change the number of workers in Docker containers of services using the WORKER_COUNT parameter during the service container launch.

Automatic configurations reload#

LP services support the auto-reload of configurations. When a setting is changed, it is automatically updated for all the instances of the corresponding services. When this feature is enabled, no manual restart of services is required.

This feature is available for all the settings provided for each Python service. You should enable the feature manually each time a service is launched. See the "Enable automatic configuration reload" section.

Starting with version 5.5.0, the configuration reload for the Faces and Python Matcher services is performed mostly by restarting the appropriate processes.

Restrictions#

A service can work incorrectly while new settings are being applied. It is strongly recommended not to send requests to the service when you change important settings (database settings, the list of active plugins, and others).

Applying new settings may lead to a service restart and cache resets (e. g., the Python Matcher service cache). For example, changing the default descriptor version will lead to an LP restart. Changing the logging level does not cause a service restart (if a valid setting value was provided).

Enable automatic configuration reload#

You can enable this feature by specifying a --config-reload option in the command line. In Docker containers, the feature is enabled using the "RELOAD_CONFIG" option.

You can specify the configuration check period in the --pulling-time command line argument. The value is set to 10 seconds by default. In Docker containers, the period is set using the "RELOAD_CONFIG_INTERVAL" option.

Configurations update process#

LP services periodically receive settings from the Configurator service or from configuration files, depending on how a particular service receives its configuration.

Each service compares its existing settings with the received settings:

  • If the service settings have changed, they will be pulled and applied.

    • If the configuration pulling has failed, the service will continue working without applying any changes to the existing configurations.

    • If connection checks with the new settings have failed, the service will retry pulling the new configurations after 5 seconds. The service will shut down after 5 failed attempts.

  • If the current settings and the new pulled settings are the same, the Configurator service will not perform any actions.

Database migration execution#

You should execute migration scripts to update your database structure when upgrading to new LP builds. By default, migrations are automatically applied when running the db_create script.

This method may be useful when you need to roll back to the previous LUNA PLATFORM build or upgrade the database structure without changing the stored data. In any case, it is recommended to create a backup of your database before applying any changes.

You can run migrations from a container or use a single command.

Single command#

The example is given for the Tasks service.

docker run \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/tasks:/srv/logs \
--rm \
--network=host \
dockerhub.visionlabs.ru/luna/luna-tasks:v.3.8.11 \
alembic -x luna-config=http://127.0.0.1:5070/1 upgrade head

Running from container#

To run migrations from a container, follow these steps (the example is given for the Configurator service):

  • Go to the service docker container. See the "Enter container" section in "LP_Docker_Installation_Manual" document.

  • Run the migrations.

For most of the services, the configuration parameters should be received from the Configurator service and the command is the following:

alembic -x luna-config=http://127.0.0.1:5070/1 upgrade head

-x luna-config=http://127.0.0.1:5070/1 - specifies that the configuration parameters for migrations should be received from the Configurator service.

For the Configurator service, the parameters are received from the "/srv/luna_configurator/configs/config.conf" file.

You should use the following command for the Configurator service:

alembic upgrade head

  • Exit the container. The container will be removed after you exit.

exit

Nuances of working with services#

When working with different services, it is necessary to take into account some nuances that will be described in this section.

Auto-orientation of rotated image#

It is not recommended to send rotated images to LP as they are not processed properly. There are two methods to auto-orient a rotated image - based on EXIF image data (query parameters) and using LP algorithms (Configurator setting). Both methods for automatic image orientation can be used together.

If auto-orientation is not used, the sample creation mechanism may rotate the image incorrectly and produce a sample with a random rotation angle.

Auto-orientation based on EXIF data#

This method of image orientation is controlled in the query parameters using the "use_exif_info" parameter. This parameter enables or disables auto-orientation of the image based on EXIF data.

This parameter is available and enabled by default in the following resources:

The "use_exif_info" parameter cannot be used with samples. When the "warped_image" or "image_type" query parameter is set to the appropriate value, the parameter is ignored.

Auto-orientation based on Configurator setting#

This method of image orientation is performed in the Configurator using the "LUNA_HANDLERS_USE_AUTO_ROTATION" setting. If this setting is enabled and the input image is rotated by 90, 180 or 270 degrees, then LP rotates the image to the correct angle. If this setting is enabled, but the input image is not rotated, then LP does not rotate the image.

Performing auto-orientation consumes a significant amount of server resources, so it is disabled by default.

The "LUNA_HANDLERS_USE_AUTO_ROTATION" setting cannot be used with samples. If the "warped_image" or "image_type" query parameter is set to the appropriate value and the input image is a sample and rotated, then the "LUNA_HANDLERS_USE_AUTO_ROTATION" setting will be ignored.

Saving source images#

The URL to the source image can be saved in the "image_origin" field of the created events when processing the "/handlers/{handler_id}/events" request.

To do this, you should specify the "store_image" parameter in the "image_origin_policy" when creating a handler.

Then you should set an image for processing in the "generate events" request.

If "use_external_references=0" and the URL to an external image was transferred in the "generate events" request, then this image will be saved to the Image Store storage, and the ID of the saved image will be added in the "image_origin" field of the generated event.

The "use_external_references" parameter enables you to save an external link instead of saving the image itself:

  • If use_external_references=1 and the URL to an external image was transferred in the "generate events" request, then that URL will be added in the "image_origin" field. The image itself will not be saved to the Image Store.

  • If use_external_references=1, the sample was provided in the "generate events" request and "face_sample_policy > store_sample" is enabled, the URL to the sample in the Image Store will be saved in the "image_origin" field. The duplication of the sample image will be avoided. If an external URL is too long (more than 256 symbols), the service will store the image to the Image Store.

You can also provide the URL to the source image directly using the "/handlers/{handler_id}/events" resource. To do this, you should use the "application/json" or "multipart/form-data" body schema in the request. The URL should be specified in the "image_origin" field of the request.

If the "image_origin" is not empty, the provided URL will be used in the created event regardless of the "image_origin_policy" policy.

The image provided in the "image_origin" field will not be processed in the request. It is used as a source image only.
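The policy fragment below is a hedged sketch of the relevant part of a handler creation body. The "store_image" and "use_external_references" field names follow the text above, while the surrounding nesting ("storage_policy" > "image_origin_policy") is an assumption to be checked against the OpenAPI specification.

# A hedged sketch of a handler creation body fragment; verify the nesting
# against the OpenAPI specification.
storage_policy = {
    "image_origin_policy": {
        "store_image": 1,              # save the source image reference
        "use_external_references": 1,  # keep the external URL instead of the image
    },
}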

Neural networks information#

Switch to 52 neural network#

This section describes switching to version 52 of the neural network. It is required when the user utilized this version in the previous LP version and does not want to upgrade to a new neural network version.

The neural network is not included in the distribution package. It is provided separately upon request to VisionLabs. There are two separate archives: one for CPU with AVX2 and one for GPU. You should download the required archive. You should also download the archive with the configuration file of this neural network.

The neural network archive contains the *.plan format neural network. The archive with the configuration file contains a configuration file of *.conf format.

After downloading the archive with the neural network and the archive with its configuration, you should perform the following steps:

  • unzip the archives
  • copy the neural network and its configuration file to the launched Handlers container
  • follow the steps described in the "Switch neural network version" section

Unzip neural networks#

Go to the directory with the archives and unzip them.

unzip fsdk_plans_*.zip
unzip conf_files_46_52.zip

Copy neural network and configuration file to Handler container#

Copy the neural network and its configuration file to the Handlers container using the following commands.

docker cp fsdk_plans_*/cnn52b*.plan luna-handlers:/srv/fsdk/data/
docker cp cnndescriptor_52.conf luna-handlers:/srv/fsdk/data/

luna-handlers - the name of the launched Handlers container. This name may differ in your installation.

Check that the required model for the required device (CPU or GPU) was successfully loaded:

docker exec -t luna-handlers ls /srv/fsdk/data/

Switch neural network version#

When changing the neural network version used, one should:

  • perform the re-extraction task so the already existing descriptors can be extracted using the new neural network. You should not change the default neural network version before finishing the re-extraction task.
  • set a new neural network version in LP configurations (see "Change neural network version").

Launch re-extraction task#

The re-extraction task performs the extraction of descriptors using the new neural network version. It should be launched using the Admin service to be applied to all the descriptors created.

Re-extraction can be performed for faces and events objects. Basic attributes, face descriptors, and body descriptors (for events) can be re-extracted. You should specify the corresponding objects in the re-extraction request.

It is highly recommended not to perform any requests changing the state of databases during the descriptor version updates. It can lead to data loss.

The default descriptor version in the "DEFAULT_FACE_DESCRIPTOR_VERSION" parameter (for faces) or the "DEFAULT_HUMAN_DESCRIPTOR_VERSION" (for bodies) in the Configurator service should be set to the current neural network version used for the descriptors extraction, not to the new NN version. New neural network version should be set after the re-extraction was successfully finished.

Samples are required for the re-extraction of descriptors using a new neural network. Descriptors of a new version will not be extracted for the faces and events that do not have samples.

Create backups of LP databases and the Image Store storage before launching the re-extraction task.

The re-extraction task can be launched using one of the following ways:

  • using the request to the Admin API. See the "/additional_extract" resource for details
  • using the Admin GUI

Re-extraction using Admin GUI:

  • Go to the Admin GUI: http://<admin_server_ip>:5010/tasks.

  • Run the re-extract task using the following button.

Run re-extract task

  • Set the object type (Faces or Events), descriptor type (Face or Body), and new neural network version in the appeared window and press "Run".

Set required settings

You can see the information about the task processing using the "View details" button.

You can download the log with all the processed samples and occurred errors using the "download" button in the "Result" column.

Change neural network version#

You should set the new version of the neural network in the configurations of services. Use the Configurator GUI for this purpose:

  • Go to http://<configurator_server_ip>:5070.
  • Set the required neural network in the "DEFAULT_FACE_DESCRIPTOR_VERSION" parameter (for faces) or the "DEFAULT_HUMAN_DESCRIPTOR_VERSION" (for bodies).
  • Save changes using the "Save" button.
  • Wait until the setting is applied to all the LP services.

General information about requests creation#

All information about LP API can be found in the following directory:

"./docs/ReferenceManuals/"

API specifications are provided in two formats: HTML and YML.

OpenAPI specification is the only valid document providing up-to-date information about the service API.

The specification can be used:

  • By documentation generation tools to visualize the API (e. g., https://editor.swagger.io/).
  • By code generation tools. You can import the file to an external application for requests creation (e. g., Postman).

All the documents and code generated using this specification can include inaccuracies and should be carefully checked.

OpenAPI specification can be received using one of the following ways:

  • using the "/docs/spec" resource. The "Accept" header should be set to "application/x-yaml".
  • in the distribution package of LUNA PLATFORM in "ReferenceManuals" directory. The document us in YAML format.

The documents in HTML format provide a visual representation of API specifications and may be incomplete.

Specification includes:

  • Required resources and methods for requests sending.
  • Request parameters description.
  • Response description.
  • Examples of the requests and responses.

HTML and YML documents corresponding to the same service API have the same names.

When performing a request that changes the database, it is required to specify the "Luna-Account-Id" header. The created data will be related to the specified account ID.

You should use the account ID when requesting information from LP to receive the information related to the account.

The account ID is created according to the UUID format. There are plenty of UUID generators available on the Internet.

For testing purposes, the account ID from the API requests examples can be used.
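For example, an account ID can be generated with the Python standard library:

import uuid

# Generate a random UUID suitable for use as an account ID.
account_id = str(uuid.uuid4())
print(account_id)  # e.g. "6d071cca-fda5-4a03-84d5-5bea65904480"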

OpenAPI documentation

The HTML document includes the following elements:

  1. Requests, divided into groups.
  2. Request method and URL example. You should use it with your protocol, IP-address, and port to create a request. Example: POST http://<IP>:<PORT>/<Version>/matcher.
  3. Description of request path parameters, query parameters, header parameters, body schema.
  4. Example of the request body.
  5. Description of responses.
  6. Examples of responses.

General requests to LP are sent via API service, using its URL:

http://<API server IP-address>:<API port>/<API Version>/

You can send requests via CURL or Postman to test LP.

You can expand descriptions for request body parameters or response parameters using the corresponding icon.

Expand descriptions

You can select the required example for request body or response in corresponding windows.

Select required example

Response example

When specifying filters for requests you must use a full value, unless otherwise noted. The possibility of using part of the value is indicated in the description.

Image check#

LUNA PLATFORM enables you to perform various front-type image checks. Checks can be performed either with thresholds conforming to the ISO/IEC 19794-5 standard, or by manually specifying thresholds and selecting the necessary checks.

The results of the checks are not stored in the database, they are returned only in the response.

ISO/IEC 19794-5 checks are performed using the "/iso" resource (see detailed description in the "Image check according to ISO/IEC 19794-5" section below).

Checks with manually specified thresholds are performed using the "face_quality" group of checks of the "detection_policy" policy of the "/handlers" and "/verifiers" resources (see the detailed description in the "Image check according to specified conditions" section below).

The possibility of performing such checks is regulated by a special parameter in the license file.

The result of all checks is determined by the "status" parameter, where:

  • "0" - any of the checks were not passed
  • "1" - all checks have been passed

The result of each check is also displayed next to it (the "result" parameter).

You can enable checking for multiple faces in a photo using the "multiface_policy" parameter. Checks are performed for each face detection found in the photo. Check results are not aggregated.

For some checks, certain requirements should be met. For example, in order to get correct results for the eyebrow state, the head angles must be in a certain range and the face width must be at least 80 pixels. The requirements for the checks are listed in the section "Face and image parameters". If any requirements are not met when checking a certain parameter, the results of checking this parameter may be incorrect.

The set of checks for "face_quality" and the "/iso" resource is different (see the difference between checks in "Comparison table of available checks" section).

Image check according to ISO/IEC 19794-5#

By default, images with one face present are checked. For each of the found faces, the estimates and coordinates of the found face will be returned. It should be noted that many ISO checks assume the presence of one face in the frame, so not all checks for multiple faces will be performed successfully.

The order of the returned responses after processing corresponds to the order of the transferred images.

You can additionally enable the extraction of EXIF data of the image in the request.

For each check, thresholds are set that comply with the ISO requirements. The values of the thresholds for each check are given in the sample response to the "/iso" request in the OpenAPI documentation.

Some checks are united under one ISO requirement. For example, to successfully pass the eye status check, the statuses of the left and right eyes should take the value "open".

The following information is returned in the response body:

  • Verdict on passing checks, which is 1 if all checks are successful.

  • Results of each of the checks. This enables you to determine which particular check was not passed. The following values are returned:

    • The name of the check.
    • The value obtained after performing the check.
    • The default threshold. The thresholds are set in the system by the requirements of the ISO/IEC 19794-5 standard and cannot be changed by the user.
    • The result of this check. When passing the thresholds, 1 is returned.
  • The coordinates of the face.

If an error occurs for one of the processed images, for example, if the image is damaged, an error will be displayed in the response. Processing of the rest of the images will continue as usual.

Image check according to specified conditions#

The principle of operation is similar to the check according to the ISO standard, but the user decides which checks need to be performed and which thresholds to set.

To enable checks, you should specify the value "1" in the "estimate" field for "face_quality". Image check is disabled by default. To disable a certain check, you need to set "0" in the "estimate" field for this check. By default, all checks are enabled and will be performed when "face_quality" is enabled.

Depending on the type of check, you can specify the minimum and maximum values of the threshold, or allowable values for this check. For this, the "threshold" field is used. If the minimum or maximum threshold is not set, the minimum or maximum allowable value will be automatically selected as the unset threshold. If the maximum value is unlimited (for example, ">=0"), then the value "null" will be returned in the "max" field in the event response body. If both thresholds are not set, the check will be performed according to the standard thresholds set in the system (see the values of the standard thresholds in the OpenAPI documentation).

Default thresholds are selected by VisionLabs specialists to obtain optimal results. These thresholds may vary depending on shooting conditions, equipment, etc.

When setting thresholds for the checks "Image quality" and "Head pose", it is recommended to take into account the standard thresholds preset in the system settings. For example, to check the image for blurriness (the "blurriness_quality" parameter), it is recommended to set the threshold in the range [0.57...0.65]. When setting a threshold outside this range, the results may be unpredictable. When choosing the angles of the head position, you need to pay attention to the recommended maximum thresholds for estimation in cooperative and non-cooperative modes. Information about these thresholds is provided in the relevant sections of the administrator manual.

It is recommended to consider the results of mouth state checks ("mouth_smiling", "mouth_occluded", "mouth_open", "smile_properties") together. So, for example, if the check revealed that the face is occluded with something, then the rest of the results of the mouth check will not be useful.

It is possible to enable filtering based on check results ("filter" parameter). If one of the "face_quality" checks for the detection fails, then the results and the reason for filtering will be returned. No further policies will be performed for this detection.
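A hedged sketch of a "face_quality" configuration fragment is given below. The "estimate", "threshold" and "filter" field names follow the text above; the exact nesting of individual checks is defined in the OpenAPI specification and should be verified there.

# A hedged sketch of a "face_quality" configuration fragment.
face_quality = {
    "estimate": 1,  # enable the group of checks
    "filter": 1,    # filter out detections that fail any enabled check
    "blurriness_quality": {
        "estimate": 1,
        # The recommended range for the blurriness threshold is [0.57...0.65].
        "threshold": {"min": 0.61},
    },
    "glasses": {
        "estimate": 0,  # disable a particular check
    },
}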

In addition, the "face_quality" group contains some checks that are not available in the image check according to the standard (see below).

Comparison table of available checks#

The following checks are available for the "/iso" resource and the "face_quality" group of checks of the "/handlers" and "/verifiers" resources:

| Check description | Check names | "/iso" | "face_quality" |
|-------------------|-------------|--------|----------------|
| Image quality check | illumination_quality, specularity_quality, blurriness_quality, dark_quality, light_quality | + | + |
| Illumination uniformity check according to the ICAO standard | illumination_uniformity | - | + |
| Head pose check | head_yaw, head_pitch, head_roll | + | + |
| Gaze check | gaze_yaw, gaze_pitch | + | + |
| Mouth attributes check | mouth_smiling, mouth_occluded, mouth_open | + | + |
| Smile state check | smile_properties (none, smile_lips, smile_teeth) | + | +* |
| Glasses state check | glasses | + | + |
| Eyes attributes check | left_eye (open, occluded, closed), right_eye (open, occluded, closed) | + | + |
| Distance between eyes check | eye_distance | + | + |
| Natural light check | natural_light (0, 1) | + | + |
| Radial distortion check (Fisheye effect) | radial_distortion (0, 1) | + | + |
| Red eyes effect check | red_eyes (0, 1) | + | + |
| Eyebrows state check | eyebrows_state (neutral, raised, squinting, frowning) | + | +* |
| Headwear type check | headwear_type (none, baseball_cap, beanie, peaked_cap, shawl, hat_with_ear_flaps, helmet, hood, hat, other) | + | +* |
| Vertical/horizontal face position checks | head_horizontal_center, head_vertical_center | + | + |
| Vertical/horizontal head size checks | head_width, head_height | + | + |
| Image format check | image_format | + | + |
| Face color type check | face_color_type (color, grayscale, infrared) | + | + |
| Image size check | image_size | - | + |
| Indents from image edges checks | indent_upper, indent_lower, indent_left, indent_right | - | + |
| Image width/height checks | image_width, image_height | - | + |
| Image aspect ratio check | aspect_ratio | - | + |
| Face width/height check | face_width, face_height | - | + |
| Dynamic range check | dynamic_range | - | + |

* Several parameters can be specified for these checks.

Upload images from folder#

The "folder_uploader.py" script uploads images from the specified folder and processes uploaded images according to the preassigned parameters.

General information about the script#

The "folder_uploader.py" script can be utilized for downloading images using the API service only.

You cannot specify the Backport 4 address and port for utilizing this script. You can use the data downloaded to the LP 5 API in Backport 4 requests.

You cannot use the "folder_uploader.py" script to download data to Backport 3 service as the created objects for Backport 3 differs (e. g. "person" object is not created by the script).

Script usage#

Script pipeline:

  1. Search images of the allowed type ('.jpg', '.jpeg', '.png', '.bmp', '.ppm', '.tif', '.tiff') in the specified folder (source).
  2. Start asynchronous image processing according to the specified parameters (see section "Script launching").

Image processing pipeline:

  1. Detect faces and create samples.
  2. Extract attributes.
  3. Create faces and link them to a list.
  4. Add record to the log file.

If an image was loaded successfully, a record is added to the success log file _success_log.txt. The record has the following structure:

    {
    "image name": ..., 
    "face id": [...]
    }.

If errors occur at any step of the script processing, the image processing routine is terminated and a record is added to the error log file _error_log.txt. The record has the following structure:

    {
    "image name": ..., 
    "error": ...
    }

Install dependencies#

Before launching the script, you must install all the required dependencies.

It is strongly recommended to create a virtual environment for python dependencies installation.

Install Python packages (version 3.7 and later) before launching installation. The packages are not provided in the distribution package and their installation is not described in this manual:

  • python3.7
  • python3.7-devel

Install gcc.

yum -y install gcc

Go to the directory with the script

cd /var/lib/luna/current/extras/utils/folder_uploader

Create a virtual environment

python3.7 -m venv venv

Activate the virtual environment

source venv/bin/activate

Install the tqdm library.

pip3.7 install tqdm 

Install luna3 libraries.

pip3.7 install ../luna3*.whl

Deactivate virtual environment

deactivate

Script launching#

Use the command to run the script (the virtual environment must be activated):

python3.7 folder_uploader.py --account_id 6d071cca-fda5-4a03-84d5-5bea65904480 --source "Images/" --warped 0 --descriptor 1 --origin http://127.0.0.1:5000 --avatar 1  --list_id 0dde5158-e643-45a6-8a4d-ad42448a913b --name_as_userdata 1  

Make sure that the --descriptor parameter is set to 1 so descriptors are created.

The API version of the service is set to 6 by default in the script, and it cannot be changed using arguments.

--source "Images/" - "Images/" is the folder with images located near the "folder_uploader.py" script. Or you can specify the full path to the directory

--list_id 0dde5158-e643-45a6-8a4d-ad42448a913b - specify your existing list here

--account_id 6d071cca-fda5-4a03-84d5-5bea65904480 - specify the required account ID

--origin http://127.0.0.1:5000 - specify your current API service address and port here

See help for more information about available script arguments:

python3.7 folder_uploader.py --help

Command line arguments:

  • account_id: an account ID used in requests to LUNA PLATFORM (required)

  • source: a directory with images to load (required)

  • warped: are images warped or not (0,1) (required)

  • descriptor: whether to extract descriptor (0,1); default - 0

  • origin: origin; default - "http://127.0.0.1:5000"

  • avatar: whether to set sample as avatar (0,1); default - 0

  • list_id: list ID to link faces with (a new LUNA list will be created if list_id is not set and list_linked=1); default - None

  • list_linked: whether to link faces with list (0,1); default - 1

  • list_userdata: userdata for list to link faces with (for newly created list); default - None

  • pitch_threshold: maximum deviation pitch angle [0..180];

  • roll_threshold: maximum deviation roll angle [0..180];

  • yaw_threshold: maximum deviation yaw angle [0..180];

  • multi_face_allowed: whether to allow several face detection from single image (0,1); default - 0

  • get_major_detection: whether to choose major face detection by sample Manhattan distance from single image (0,1); default - 0

  • basic_attr: whether to extract basic attributes (0,1); default - 1

  • score_threshold: descriptor quality score threshold (0..1); default - 0

  • name_as_userdata: whether to use image name as user data (0,1); default - 0

  • concurrency: parallel processing image count; default - 10

Client library#

General information#

The archive with the client library for LUNA PLATFORM 5 is provided in the distribution package: /var/lib/luna/current/extras/utils/luna3-*.whl

This Python library is an HTTP client for all LUNA PLATFORM services.

You can find the examples of the library utilization in the /var/lib/luna/current/docs/ReferenceManuals/APIReferenceManual.html document.

Luna3 library usage example
Luna3 library usage example

The example shows the request for faces matching. The luna3 library is utilized for the request creation. See "matcher" > "matching faces" in "APIReferenceManual.html":

# This example is written using luna3 library

from luna3.common.http_objs import BinaryImage
from luna3.lunavl.httpclient import LunaHttpClient
from luna3.python_matcher.match_objects import FaceFilters
from luna3.python_matcher.match_objects import Reference
from luna3.python_matcher.match_objects import Candidates

luna3client = LunaHttpClient(
    accountId="8b8b5937-2e9c-4e8b-a7a7-5caf86621b5a",
    origin="http://127.0.0.1:5000",
)

# create sample
sampleId = luna3client.saveSample(
    image=BinaryImage("image.jpg"),
    raiseError=True,
).json["sample_id"]

attributeId = luna3client.extractAttrFromSample(
    sampleIds=[
        sampleId,
    ],
    raiseError=True,
).json[0]["attribute_id"]

# create face
faceId = luna3client.createFace(
    attributeId=attributeId,
    raiseError=True,
).json["face_id"]

# match
candidates = Candidates(
    FaceFilters(
        faceIds=[
            faceId,
        ]
    ),
    limit=3,
    threshold=0.5,
)
reference = Reference("face", faceId)

response = luna3client.matchFaces(
    candidates=[candidates], references=[reference],
    raiseError=True,
)

print(response.statusCode)
print(response.json)

Library installation example#

In this example a virtual environment is created for luna3 installation.

You can use this Python library on Windows, Linux, MacOS.

Install Python packages (version 3.7 and later) before launching installation. The packages are not provided in the distribution package and their installation is not described in this manual:

  • python3.7
  • python3.7-devel

Install gcc.

yum -y install gcc

Go to the directory with any script, for example, folder_uploader.py

cd /var/lib/luna/current/extras/utils/folder_uploader

Create a virtual environment

python3.7 -m venv venv

Activate the virtual environment

source venv/bin/activate

Install luna3 libraries.

pip3.7 install ../luna3*.whl

Deactivate virtual environment

deactivate

Plugins#

Plugins are used to perform secondary actions for the user's needs. For example, you can create your own resource based on the abstract class, or you can describe what needs to be done in some resource in addition to the standard functionality.

Files with base abstract classes are located in the .plugins/plugins_meta folder of a specific service.

Plugins should be written in the Python programming language.

There are three types of plugins:

  • Event plugin
  • Background plugin
  • Matching plugin

Event plugins#

The first type is triggered when an event occurs. The plugin should implement a callback function. This function is called on each event of the corresponding type. The set of event types is defined by the service developers. There are two types of event plugins available for the Handlers service:

  • Monitoring event
  • Sending event

For other services, only monitoring event type is available.

For examples of monitoring and sending plugins, see the development manual.

Background plugins#

The second type of plugin is intended for background work. The background plugin can implement:

  • custom request for a specific resource (route),
  • background monitoring of service resources,
  • collaboration of an event plugin and a background plugin (batching monitoring points),
  • connection to other data sources (Redis, RabbitMQ) and their data processing.

For examples of background plugins, see the development manual.

Matching plugins#

The third type of plugins enables you to significantly speed up the processing of matching requests.

Note that plugins are not provided as a ready-made solution for matching. It is required to implement the logic required for solving particular business tasks.

Matching plugins are enabled in Python Matcher Proxy service. This service is not installed by default. You should run it to work with plugins. See the Python Matcher Proxy launching command in the "Use Python Matcher with Python Matcher Proxy" section of the LUNA PLATFORM installation manual.

In the default LUNA PLATFORM operation with Python Matcher Proxy, all matching requests are processed by the Python Matcher Proxy service, which redirects them to the Python Matcher service. Matching request processing may be slower than needed for several reasons, including:

  • a large amount of data and the inability to speed up requests by any database configuration changes, e.g. by creating an index in the database that speeds up the request;

  • the way of data storage - the descriptor and the entity ID (face_id/event_id) are kept in different database tables. Filters specified in the matching request can also be present in a separate database table, which slows down the request processing speed;

  • internal database-specific restrictions.

It is possible to separate some groups of requests and improve their processing speed by utilizing matching plugins, including by transferring data to another storage whose specific data layout makes matching faster in comparison to the default way (see "Plugin data source").

Examples:

  • matching requests where all faces (let’s say that all matching candidates are faces) are linked to one list and no other filters are specified in the request.

    In this case, it is possible to duplicate data for these matching candidates to a data storage other than the default one and create a matching plugin, which will only match the specified references with these candidates, but not with any other entities.

    The matching request processing will be faster in comparison to the default way because the plugin will not spend time separating faces that are linked to the list from all the faces stored in the database.

  • matching requests where all candidates are events, only one filter ("event_ids") is specified, and it is required to match only by bodies.

    In this case, it is possible to duplicate all event_id values and their body descriptors to a data storage other than the default one and create a matching plugin that will match the specified references with these candidates, but not with any other entities.

    The matching request processing will be faster in comparison to the default way because the plugin will not spend time separating events with bodies from all events and checking the filters.

You can use built-in matching plugins or create user matching plugins.

One built-in matching plugin is available - the Thin event plugin.

"Thin event" is used for rapid comparison of face descriptors with descriptors of simplified events. Simplified events contain fewer fields compared to events from the "luna_events" database. All the data for them is stored in the same table.

Requirements for the launch of "Thin event" are provided in its documentation. By default, the plugin is not used.

See the detailed description of the built-in plugins and instructions for writing user plugins in the developer manual.

General plugin processing pipeline#

Each matching request is presented in the form of all possible combinations of candidates and references; each such combination is then processed as a separate sub-request (below, a sub-request means a combination of a reference and candidates):

  • Get the sub-request matching cost (see "Matching cost").

  • Choose the way for the sub-request processing using the lowest estimated matching cost: a matching plugin or the Python Matcher service.

    • If the Python Matcher service was selected in the previous step, it processes the sub-request and returns the response to the Python Matcher Proxy service.

    • If a matching plugin was selected in the previous step, it processes the sub-request. If the sub-request was successfully processed, the response is returned to the Python Matcher Proxy service. If the sub-request was not successfully processed, the Python Matcher service will try to process it.

  • If the sub-request was successfully processed by a matching plugin and the plugin does not have access to all the matching targets specified in the sub-request, the Python Matcher Proxy service will enrich the data before the next step (see "Matching targets" for details).

  • The Python Matcher Proxy service collects results from all sub-requests, sorts them in the right order, and replies to the user.

Matching cost#

Matching cost is a float value expressing the complexity of processing a matching request using a plugin. Matching cost is necessary to choose the best way to process a matching request: the Python Matcher service or one or more plugins.

The matching cost value for the Python Matcher service is 100. If there are several plugins, the matching cost value is calculated for each plugin. The matching plugin with the lowest matching cost will be used if its matching cost is lower than the Python Matcher matching cost. All requests with matching costs greater than 100 will be processed by the Python Matcher service. If there are no plugins, Python Matcher will be used for the request processing.
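The selection rule can be illustrated with a short sketch (plugin names and costs are examples only):

PYTHON_MATCHER_COST = 100.0  # baseline cost of the Python Matcher service

def choose_processor(plugin_costs: dict) -> str:
    """Pick the plugin with the lowest matching cost, or fall back to Python Matcher."""
    if plugin_costs:
        best_plugin = min(plugin_costs, key=plugin_costs.get)
        if plugin_costs[best_plugin] < PYTHON_MATCHER_COST:
            return best_plugin
    return "python-matcher"

print(choose_processor({"thin-event": 25.0, "custom": 140.0}))  # -> "thin-event"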

Matching targets#

The Python Matcher service has access to all data of matching entities, so it can process matching requests with all targets. Matching plugins may not have access to the data specified in the request targets. In this case, the Python Matcher Proxy service will enrich the plugin response with the missing target data, e.g.:

  • the matching request contains the targets face_id, user_data and similarity, and the chosen matching plugin does not have access to the user_data field:

    • the matching plugin matches the reference with the specified face_ids and returns the matching response, which contains only pairs of face_id and similarity, to the Python Matcher Proxy service;

    • for every match candidate in the result, the Python Matcher Proxy service gets user_data from the main database by face_id and merges face_id and similarity with user_data;

    • the enriched response with the specified targets is returned to the user.

  • the matching request contains the targets age and gender (all candidates are events’ faces), and the chosen matching plugin has access only to the event_id, descriptor, and age fields:

    • the matching plugin matches the reference and returns the matching response, which contains only event_id, age and similarity, to the Python Matcher Proxy service;

    • for every match candidate in the result, the Python Matcher Proxy service gets gender from the main database by event_id and merges event_id with gender; after that, it drops the non-required event_id and similarity from the response;

    • the prepared response with the specified targets is returned to the user.

The workflow of matching plugins is shown below:

Matching plugin workflow

Plugin data source#

To speed up request processing, each matching plugin may use a separate data source instead of the default one (Events DB, Faces DB, or Attributes DB; see the "Database description" section), for example, a separate database, a new table in an existing database, an in-memory cache, etc.

For more information about matching plugins, see the developer manual.

Plugins usage#

Adding plugins to the directory manually#

This method can be used when the plugin does not require any additional dependencies that are not provided in the service Docker container.

There are two steps required to use a plugin with the service in Docker container:

  • Add the plugin file to the container.
  • Specify the plugin usage in the container configurations.

When starting the container, you need to forward the plugin file to the plugins folder of the specific service. For example, for the Handlers service it is the /srv/luna_handlers/plugins folder.

This can be done in any convenient way. For example, you can mount the folder with plugins to the required service directory during service launching (see service launching commands in the installation manual):

You should add the following volume if all the required plugins for the service are stored in the "/var/lib/luna/handlers/plugins" directory:

-v /var/lib/luna/handlers/plugins:/srv/luna_handlers/plugins/ \

The command is given for the case of manual service launching.

Next, you should add the filename to the "LUNA_HANDLERS_ACTIVE_PLUGINS" configuration in the Configurator service.

LUNA_HANDLERS_ACTIVE_PLUGINS = ["luna_handlers_plugin"]

The list should contain filenames without extension (.py).

After completing these steps, LP will automatically use the plugin.

More information about plugins for a specific service can be found in the "API Development Manual" documentation in "../ServiceManuals/".

Building new Docker container with plugin#

This method can be used when additional dependencies are required for the plugin utilization or when the container with the plugin is required for production usage.

You should create your docker container based on the basic service container.

Add "Dockerfile" with the following structure to your CI:

FROM dockerhub.visionlabs.ru/luna/luna-handlers:v.2.3.3
USER root
...
USER luna

FROM should include the address of the basic service container that will be used.

USER root changes the privileges to the root user to perform the following actions.

Then the commands for adding the plugin and its dependencies should be listed. They are not given in this manual. Check the Docker documentation.

USER luna - after all the commands are executed, the user should be changed back to "luna".

Add the plugin filename to the "LUNA_HANDLERS_ACTIVE_PLUGINS" configuration in the Configurator service.

You can:

  • Update settings manually in the Configurator service as described above.

  • Create a dump file with LP plugin settings and add them to the Configurator service after its launch.

An example of the dump file with the Handlers plugin settings is given below.

{
    "settings":[
        {
            "value": ["luna_handlers_plugin"],
            "description": "list active plugins",
            "name": "LUNA_HANDLERS_ACTIVE_PLUGINS",
            "tags": []
        }
    ]
}

Then the file is applied using the following command. For example the file is stored in "/var/lib/luna/". The dump filename is "luna_handlers_plugin.json".

docker run \
-v /var/lib/luna/luna_handlers_plugin.json:/srv/luna_configurator/used_limitations/luna_handlers_plugin.json \
--network=host \
--rm \
--entrypoint=python3.9 \
dockerhub.visionlabs.ru/luna/luna-configurator:v.2.0.35 \
./base_scripts/db_create.py --dump-file /srv/luna_configurator/used_limitations/luna_handlers_plugin.json

Monitoring#

Monitoring is implemented as sending data to InfluxDB. Monitoring is enabled in the services by default.

There are two types of events that are monitored: request (all requests) and error (failed requests only).

Every event is a point in the time series. The point is represented using the following data:

  • series name (requests or errors)
  • timestamp of the request start
  • tags
  • fields

A tag is indexed data in the storage. It is represented as a dictionary, where:

  • keys are string tag names,
  • values are strings, integers or floats.

A field is non-indexed data in the storage. It is represented as a dictionary, where:

  • keys are string field names,
  • values are strings, integers or floats.

Below is an example of tags and fields for the Luna API service. These tags are unique for each service. You can find information about monitoring a specific service in the relevant documentation:

Saving data for requests is triggered on every request. Each point contains data about the corresponding request (execution time, etc.).

  • tags

| Tag name | Description |
|----------|-------------|
| service | always "luna-api" |
| account_id | account ID or none |
| route | concatenation of a request method and a request resource (POST:/extractor) |
| status_code | HTTP status code of the response |

  • fields

| Field name | Description |
|------------|-------------|
| request_id | request ID |
| execution_time | request execution time |

Saving data for errors is triggered when a request fails. Each point contains the error_code of the LUNA error.

  • tags

| Tag name | Description |
|----------|-------------|
| service | always "luna-api" |
| account_id | account ID or none |
| route | concatenation of a request method and a request resource (POST:/extractor) |
| status_code | HTTP status code of the response |
| error_code | LUNA PLATFORM error code |

  • fields

| Field name | Description |
|------------|-------------|
| request_id | request ID |

Every handler can add additional tags or fields. For example, the handler of the "/handlers/{handler_id}/events" resource adds the handler_id tag.
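An illustrative point of the "requests" series, assembled from the tags and fields listed above (all values are examples):

# An illustrative monitoring point; all values are examples only.
point = {
    "series": "requests",
    "time": "2021-09-01T12:00:00Z",  # timestamp of the request start
    "tags": {
        "service": "luna-api",
        "account_id": "6d071cca-fda5-4a03-84d5-5bea65904480",
        "route": "POST:/extractor",
        "status_code": 201,
    },
    "fields": {
        "request_id": "example-request-id",
        "execution_time": 0.42,  # seconds
    },
}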

With InfluxDB version 2 you can visualize monitoring data using Luna Dashboards tool.

Requests and estimations statistics gathering#

LUNA PLATFORM gathers the number of completed requests and estimations per month based on monitoring data if statistics gathering is enabled. Statistics gathering works only when monitoring is enabled and InfluxDB version 2.0.8 or later is installed.

To get statistics, use the "/luna_sys_info" resource, or open the "help" tab in the Admin service GUI and click "Get LUNA PLATFORM system info". The necessary information is contained in the "stats" section.

This section contains two fields - "estimators_stats" and "routes_stats".

The first field contains a list of performed estimations. Three fields are displayed for each estimation:

  • name - name of estimation performed (for example, estimate_emotions)
  • count - total number of completed estimations of the same type
  • month - month for which gathering was made (for example, 2021-09)

The second field contains a list of services to which requests were made. Five fields are displayed for each service:

  • service - name of service (for example, luna-api)
  • route - method and request (for example, GET:/version)
  • month - month for which gathering was made
  • errors - number of requests performed with a specific error (for example, [ { "count": 1, "error_code": "12012" } ] )
  • request_stats - number of successful requests (for example, [ { "count": 1, "status_code": "200" } ])

The information is impersonal and contains only quantitative data.
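
Putting the fields above together, the "stats" section might look as follows. The values are taken from the examples above and are purely illustrative:

{
    "stats": {
        "estimators_stats": [
            { "name": "estimate_emotions", "count": 1, "month": "2021-09" }
        ],
        "routes_stats": [
            {
                "service": "luna-api",
                "route": "GET:/version",
                "month": "2021-09",
                "errors": [ { "count": 1, "error_code": "12012" } ],
                "request_stats": [ { "count": 1, "status_code": "200" } ]
            }
        ]
    }
}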

Statistics are gathered in InfluxDB based on data from the "luna_monitoring" bucket and stored in the "luna_monitoring_aggregated" bucket. Both buckets are created in InfluxDB. Do not delete data from these buckets, otherwise it will be impossible to get statistics.

Statistics are gathered once a day, so they are not displayed immediately after LP is launched.

Tasks for gathering statistics can be found on the "Tasks" tab in the InfluxDB GUI. There you can start them manually.

To enable this functionality, run the following command after starting the Admin service (see the installation manual):

python influx2_cli.py create_usage_task --luna-config http://127.0.0.1:5070/1

The command automatically creates the necessary "luna_monitoring_aggregated" bucket. If this command is not performed, the "/luna_sys_info" response will not display statistics.

If necessary, you can disable statistics gathering by deleting or disabling the corresponding tasks on the "Tasks" tab in the InfluxDB GUI.

Luna Dashboards#

The Luna Dashboards tool is intended to visualize monitoring data. Based on the Grafana web application, Luna Dashboards creates a set of dashboards for analyzing the state of individual services, as well as two summary dashboards that can be used to evaluate the state of the system as a whole.

Use "http://IP_ADDRESS:3000" to go to the Grafana web interface.

API service dashboard when starting testing

A data source is configured in Grafana, with the help of which Grafana communicates with InfluxDB, where monitoring data is received during the operation of all LP services.

Luna Dashboards workflow

Luna Dashboards can be useful:

  • To monitor the state of the system;
  • For error analysis;
  • To get statistics on errors;
  • To analyze the load on individual services and on the platform as a whole, load by days of the week, by time of day, etc.;
  • To analyze statistics of request execution, i.e. what resources account for what proportion of requests for the entire platform;
  • To analyze the dynamics of request execution time;
  • To evaluate the average value of the execution time of requests for a specific resource;
  • To analyze changes in the indicator over time.

After installing the dashboards (see below), the "platform_5" directory becomes available in the Grafana web interface, which contains the following dashboards:

  • Luna Platform Heatmap,
  • Luna Platform Summary,
  • Dashboards for individual services.

Luna Dashboards structure

Luna Platform Heatmap enables you to evaluate the overall load on the system without reference to a specific resource. The heatmap statistics show at what times the system is under load.

Luna Platform Summary enables you to get statistics on requests for all services in one place, as well as evaluate RPS (requests per second) graphs.

Dashboards for individual services enable you to get information about requests to individual resources, errors, and status codes for each service. Note that the example dashboards in this document display artificially generated data for a selected time interval, not real load data.

The following software is required for Grafana dashboards utilization:

  • InfluxDB 2.0 (the currently used version is 2.0.8-alpine)
  • Grafana (the currently used version is 8.0.6)

InfluxDB and Grafana are already included in the distribution package. Alternatively, you can use your own Grafana installation or install Grafana manually.

For more information on manual installation, see the "InfluxDB" and "Grafana" sections in the installation manual.

Grafana plugin installation#

Installing the plugin is required only for manual installation of Grafana.

In addition to built-in Grafana plugins, the dashboards also use a piechart plugin. Use the grafana-cli tool to install the piechart panel from the command line:

grafana-cli plugins install grafana-piechart-panel

A restart is required to apply the plugin:

sudo service grafana-server restart
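
If you run Grafana in a Docker container instead (for example, the container named "grafana" launched in the next section), restart the container rather than the system service:

docker restart grafana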

Grafana dashboards installation#

Dashboards can be launched manually or using a special Grafana container with dashboards called luna-dashboards:

  • Launching using a special Grafana container with dashboards is described in the installation manual.

  • Manual launch is described further in this document.

Install Grafana. An example of the command is given below:

docker run \
--restart=always \
--detach=true \
--network=host \
--name=grafana \
-v /etc/localtime:/etc/localtime:ro \
-e "GF_INSTALL_PLUGINS=grafana-piechart-panel" \
dockerhub.visionlabs.ru/luna/grafana:8.0.6

The scripts for Grafana plugins installation can be found in "/var/lib/luna/current/extras/utils/".

Install Python version 3.7 or later before launching the following script. The required Python packages are not provided in the distribution package, and their installation is not described in this manual.

Go to the luna dashboards directory

cd /var/lib/luna/current/extras/utils/luna-dashboards_linux_rel_v.*

Create a virtual environment

python3.7 -m venv venv

Activate the virtual environment

source venv/bin/activate

Install luna dashboards file

pip install luna_dashboards-*-py3-none-any.whl

Go to the following directory

cd luna_dashboards

The "luna_dashboards" folder contains the configuration file "config.conf", which includes the settings for Grafana, Influx and monitoring periods. By default, the file already includes the default settings, but you can change the settings use "vi config.conf".

Run the following script to create dashboards

python create_dashboards.py

Deactivate virtual environment

deactivate

Use "http://IP_ADDRESS:3000" to go to the Grafana web interface when the Grafana and Influx containers are running.

In the upper left corner, select the "General" button, then expand the "platform_5" folder and select the necessary dashboard.

Databases information#

Advanced PostgreSQL setting#

PostgreSQL can be configured to interact effectively with LUNA PLATFORM 5. To do this, you need to set certain values for the PostgreSQL settings in the postgresql.conf file.

This section does not provide a complete list of all settings with a detailed description. See the official PostgreSQL website for a complete list of settings and their descriptions.

Useful tips for calculating PostgreSQL configuration are described here: https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server.

It is possible to calculate configuration for PostgreSQL based on the maximum performance for a given hardware configuration (see https://pgtune.leopard.in.ua/).

Note. The following settings should be changed with caution as manually changing PostgreSQL settings requires experience.

The recommended values of the settings and their description are given below.

max_connections = 200 - determines the maximum number of concurrent connections to the database server. The default value is 100.

The default value may be enough for test demonstrations of LUNA PLATFORM, but for production use it may be insufficient and should be calculated for your workload.

In the Configurator service, you can set the number of DB connections using the "connection_pool_size" setting located in the "LUNA_<SERVICE_NAME>_DB" sections, where <SERVICE_NAME> is the name of the service that has a database. The actual number of connections may be greater than the value of this setting by 1.
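
For example, if five LP services each have "connection_pool_size" = 40, PostgreSQL may need to serve up to 5 * (40 + 1) = 205 concurrent connections, which already exceeds the recommended max_connections = 200; in such a setup the value would have to be raised.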

If there are too many connections, but not enough active ones, you can use third-party load balancing services, for example, haproxy or pgbouncer. When using balancing services, it is necessary to take into account some nuances described here: https://magicstack.github.io/asyncpg/current/faq.html#why-am-i-getting-prepared-statement-errors.

maintenance_work_mem = 2GB - specifies the maximum amount of memory to be used by maintenance operations.

shared_buffers = 0.25…0.5 * RAM (MB) - determines how much memory PostgreSQL will allocate for caching data. The optimal value depends on how often matching by database is performed, which indexes are used, and so on.

effective_io_concurrency = 100 - sets the number of concurrent disk I/O operations that PostgreSQL expects can be executed simultaneously. Raising this value will increase the number of I/O operations that any individual PostgreSQL session attempts to initiate in parallel.

max_worker_processes = CPU_COUNT - sets the maximum number of worker processes that the system can support.

max_parallel_maintenance_workers = 4 - sets the maximum number of parallel worker processes performing the index creation command (CREATE INDEX).

max_parallel_workers_per_gather = 4 - sets the maximum number of workers that a query or subquery can be parallelized to.

max_parallel_workers = CPU_COUNT - sets the maximum number of workers that the system can support for parallel operations.

Note. The following values of settings are related to the function of matching by database for large tables.

enable_bitmapscan = off - enables or disables the query planner's use of bitmap-scan plan types. Disabling it may be necessary when PostgreSQL erroneously decides that a bitmap scan is better than an index scan. It is recommended to change this setting only when a query is expected to use an index but for unknown reasons does not.

seq_page_cost = 1 - sets the planner's estimate of the cost of a disk page fetch that is part of a series of sequential fetches.

random_page_cost = 1.5 - sets the planner's estimate of the cost of a non-sequentially-fetched disk page.

parallel_tuple_cost = 0.1 - sets the approximate cost of transferring one tuple (row) from a parallel worker to another worker.

parallel_setup_cost = 5000.0 - sets the approximate cost of running parallel workers.

max_parallel_workers_per_gather = CPU_COUNT/2 - sets the maximum number of workers that a query or subquery can be parallelized to.

min_parallel_table_scan_size = 1MB - sets the minimum amount of table data that should be scanned in order for a parallel scan to be considered.

min_parallel_index_scan_size = 8kB - sets the minimum amount of index data for parallel scanning.
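
Collected together, the recommendations above correspond to the following postgresql.conf fragment. Here CPU_COUNT is replaced with 16 as an example, shared_buffers assumes about 32 GB of RAM, and the settings for matching on large tables are shown commented out:

max_connections = 200
maintenance_work_mem = 2GB
shared_buffers = 8GB
effective_io_concurrency = 100
max_worker_processes = 16
max_parallel_maintenance_workers = 4
max_parallel_workers_per_gather = 4
max_parallel_workers = 16
# Settings for matching by database on large tables:
# enable_bitmapscan = off
# seq_page_cost = 1
# random_page_cost = 1.5
# parallel_tuple_cost = 0.1
# parallel_setup_cost = 5000.0
# max_parallel_workers_per_gather = 8
# min_parallel_table_scan_size = 1MB
# min_parallel_index_scan_size = 8kB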

InfluxDB OSS 2#

Starting with version 5.5.0, LUNA PLATFORM provides a possibility to use InfluxDB of version 2.

For InfluxDB OSS 2 usage, you should:

  • Install the DB. See the "InfluxDB OSS 2" in the installation manual.
  • Register in the DB. InfluxDB has a user interface where you can register. You should visit <server_ip>:<influx_port>.
  • Configure the display of monitoring information in the GUI. It is not described in this documentation.

Migration from version 1#

InfluxDB provides built-in tools for migration from version 1 to version 2. See documentation:

https://docs.influxdata.com/influxdb/v2.0/upgrade/v1-to-v2/docker/

InfluxDB configuration#

The settings for InfluxDB are described below.

InfluxDB settings

Setting name Type Description
send_data_for_monitoring integer Enables monitoring for the service.
use_ssl integer Enables HTTPS protocol usage for connection to InfluxDB (0 – do not use, 1 – use).
flushing_period integer The frequency of sending monitoring data to InfluxDB.
port integer InfluxDB port.
host string InfluxDB host.
organization string The organization name specified during registration.
token string Token received after registration.
bucket string Bucket name.
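
As a sketch, the corresponding monitoring settings section might look as follows. All values are illustrative placeholders: 8086 is the standard InfluxDB port, and "luna_monitoring" is the bucket mentioned in the statistics section.

{
    "send_data_for_monitoring": 1,
    "use_ssl": 0,
    "flushing_period": 1,
    "port": 8086,
    "host": "127.0.0.1",
    "organization": "<your_organization>",
    "token": "<your_token>",
    "bucket": "luna_monitoring"
}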

Manual creation of services databases#

This section describes the commands required for configuring an external PostgreSQL for working with LP services. External means that you already have a working DB and MQ and want to use them with LP.

You need to specify your external DB and MQ in the configurations of LP services.

The Faces and Events services require the VLMatch additional function to be added to the utilized database. For detailed information on creating this function for the Faces service, see the "Create VLMatch function for Faces DB" section; for the Events database, see the "Create VLMatch function for Events DB" section.

PostgreSQL user creation#

Go to the directory.

cd /var/

Create a database user

runuser -u postgres -- psql -c 'create role luna;'

Assign a password to the user

runuser -u postgres -- psql -c "ALTER USER luna WITH PASSWORD 'luna';"

Configurator DB creation#

Create the database for the Configurator service. It is assumed that the DB user is already created.

Go to the directory.

cd /var/

  • Create the database

  • Grant privileges to the database user

  • Allow user to authorize in the DB

runuser -u postgres -- psql -c 'CREATE DATABASE luna_configurator;'
runuser -u postgres -- psql -c 'GRANT ALL PRIVILEGES ON DATABASE luna_configurator TO luna;'
runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'

Handlers DB creation#

Create the database for the Handlers service. It is assumed that the DB user is already created.

Go to the directory.

cd /var/

The sequence of actions corresponds to the commands below:

  • Create a database
  • Grant privileges to the database and the user
  • Enable the user to log in to the DB

runuser -u postgres -- psql -c 'CREATE DATABASE luna_handlers;'
runuser -u postgres -- psql -c 'GRANT ALL PRIVILEGES ON DATABASE luna_handlers TO luna;'
runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'

Backport 3 DB creation#

Create the database for the Backport 3 service. It is assumed that the DB user is already created.

Go to the directory.

cd /var/

The sequence of actions corresponds to the commands below:

  • Create a database
  • Grant privileges to the database and the user
  • Enable the user to log in to the DB

runuser -u postgres -- psql -c 'CREATE DATABASE luna_backport3;'
runuser -u postgres -- psql -c 'GRANT ALL PRIVILEGES ON DATABASE luna_backport3 TO luna;'
runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'

Faces DB creation#

Create the database for the Faces service. It is assumed that the DB user is already created.

Go to the directory.

cd /var/

  • Create a database

  • Grant privileges to the database and the user

  • Enable the user to log in to the DB

runuser -u postgres -- psql -c 'CREATE DATABASE luna_faces;'
runuser -u postgres -- psql -c 'GRANT ALL PRIVILEGES ON DATABASE luna_faces TO luna;'
runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'

Events DB creation#

Create the database for the Events service. It is assumed that the DB user is already created.

Go to the directory.

cd /var/

  • Create the database

  • Grant privileges to database and the user

  • Enable the user to log in to the DB

runuser -u postgres -- psql -c 'CREATE DATABASE luna_events;'
runuser -u postgres -- psql -c 'GRANT ALL PRIVILEGES ON DATABASE luna_events TO luna;'
runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'

Admin DB creation#

Create the database for the Admin service. It is assumed that the DB user is already created.

Go to the directory.

cd /var/

  • Create the database

  • Grant privileges to the database user

  • Allow user to authorize in the DB

runuser -u postgres -- psql -c 'CREATE DATABASE luna_admin;'
runuser -u postgres -- psql -c 'GRANT ALL PRIVILEGES ON DATABASE luna_admin TO luna;'
runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'

Tasks DB creation#

Create the database for the Tasks service. It is assumed that the DB user is already created.

Go to the directory.

cd /var/

  • Create the database

  • Grant privileges to the database user

  • Allow user to authorize in the DB

runuser -u postgres -- psql -c 'CREATE DATABASE luna_tasks;'
runuser -u postgres -- psql -c 'GRANT ALL PRIVILEGES ON DATABASE luna_tasks TO luna;'
runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'
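
If all of the above databases are created on the same PostgreSQL host, the per-service commands can be collapsed into a single shell loop. A minimal sketch using only the commands already shown in this section:

for db in luna_configurator luna_handlers luna_backport3 luna_faces luna_events luna_admin luna_tasks; do
    # Create each service database and grant privileges to the "luna" user.
    runuser -u postgres -- psql -c "CREATE DATABASE $db;"
    runuser -u postgres -- psql -c "GRANT ALL PRIVILEGES ON DATABASE $db TO luna;"
done
# Granting LOGIN to the "luna" role is only needed once.
runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'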

Create VLMatch function for Faces DB#

The Faces service requires the VLMatch additional function to be added to the utilized database. LUNA PLATFORM cannot perform matching calculations without this function. The VLMatch function can be added to the PostgreSQL or Oracle database.

The VLMatch library is compiled for your particular database version.

Note! Do not use the library built for another version of the DB. For example, the library built for PostgreSQL version 12 cannot be used for PostgreSQL version 9.6.

This section describes the function creation for PostgreSQL.

The instruction for the Oracle DB is given in the "VLMatch for Oracle" section.

Build VLMatch for PostgreSQL#

You can find all the required files for the VLMatch user-defined extension (UDx) compilation in the following directory:

/var/lib/luna/current/extras/VLMatch/postgres/

The following instruction describes installation for PostgreSQL 12.

For VLMatch UDx function compilation one needs to:

  • Make sure, that PostgreSQL of the required version is installed and launched.

  • Install the required PostgreSQL development environment. You can find more information on the official web site.

  • The llvm-toolset-7-clang is required for postgresql12-devel. Install it from the centos-release-scl-rh repository.

yum -y install centos-release-scl-rh
yum -y --enablerepo=centos-sclo-rh-testing install llvm-toolset-7-clang
  • Install epel-release for access to extended package repository
yum -y install epel-release
  • Install the development environment.
yum -y install postgresql12 postgresql12-server postgresql12-devel 
  • Install the gcc-c++ package. The package version 4.8 or higher is required.
yum -y install gcc-c++.x86_64 
  • Install CMake. Version 3.5 or higher is required.

  • Open the make.sh script using a text editor. It includes paths to the currently used PostgreSQL version. Change the following values (if necessary):

SDK_HOME specifies the path to PostgreSQL home directory. The default value is /usr/pgsql-12/include/server;

LIB_ROOT specifies the path to PostgreSQL library root directory. The default value is /usr/pgsql-12/lib.
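
For PostgreSQL 12, the two variables therefore keep their defaults, shown here in shell assignment syntax:

# Default paths for PostgreSQL 12, as described above.
SDK_HOME=/usr/pgsql-12/include/server
LIB_ROOT=/usr/pgsql-12/lib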

Go to the make.sh script directory and run it:

cd /var/lib/luna/current/extras/VLMatch/postgres/ 
chmod +x make.sh
./make.sh

Add VLMatch function to Faces database#

The VLMatch function should be applied to the PostgreSQL DB.

  • Define the function inside the Faces database:
sudo -u postgres -h 127.0.0.1 -- psql -d luna_faces -c "CREATE FUNCTION VLMatch(bytea, bytea, int) RETURNS float8 AS 'VLMatchSource.so', 'VLMatch' LANGUAGE C PARALLEL SAFE;"
  • Test the function by sending the following request to the service database:
sudo -u postgres -h 127.0.0.1 -- psql -d luna_faces -c "SELECT VLMatch('\x1234567890123456789012345678901234567890123456789012345678901234'::bytea, '\x0123456789012345678901234567890123456789012345678901234567890123'::bytea, 32);"

The result returned by the database must be "0.4765625".
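
If the test query fails, you can verify that the function was registered using psql's "\df" meta-command (a sanity check, not part of the official procedure):

sudo -u postgres -h 127.0.0.1 -- psql -d luna_faces -c "\df vlmatch"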

Build VLMatch for Oracle#

This section describes building the VLMatch library and applying the new function to the Oracle database.

For VLMatch UDx function compilation one needs to:

  • Install the required environment; gcc/g++ version 4.8 or higher is required:

yum -y install gcc-c++.x86_64

  • In the makefile, change the SDK_HOME variable to the Oracle SDK root (the default is $ORACLE_HOME/bin; check that the $ORACLE_HOME environment variable is set).

  • Go to the directory and run the "make.sh" file:

cd /var/lib/luna/current/extras/VLMatch/oracle/
chmod +x make.sh
./make.sh

  • Define the library and the function inside the database (from database console):

CREATE OR REPLACE LIBRARY VLMatchSource AS '$ORACLE_HOME/bin/VLMatchSource.so';
CREATE OR REPLACE FUNCTION VLMatch(descriptorFst IN RAW, descriptorSnd IN RAW, length IN BINARY_INTEGER)
  RETURN BINARY_FLOAT 
AS
  LANGUAGE C
  LIBRARY VLMatchSource
  NAME "VLMatch"
  PARAMETERS (descriptorFst BY REFERENCE, descriptorSnd BY REFERENCE, length UNSIGNED SHORT, RETURN FLOAT);

Test the function with a call (from the database console):

SELECT VLMatch(HEXTORAW('1234567890123456789012345678901234567890123456789012345678901234'), HEXTORAW('0123456789012345678901234567890123456789012345678901234567890123'), 32) FROM DUAL;

The result should be equal to "0.4765625".

Create VLMatch function for Events DB#

The Events service requires the VLMatch additional function to be added to the utilized database. LUNA PLATFORM cannot perform matching calculations without this function. The VLMatch function can be added to the PostgreSQL database.

The VLMatch library is compiled for your particular database version.

Note! Do not use the library built for another version of the DB. For example, the library built for PostgreSQL version 12 cannot be used for PostgreSQL version 9.6.

This section describes the function creation for PostgreSQL. If you use the PostgreSQL database, you have already created and moved the library during the Faces service launch. See the "Build VLMatch for PostgreSQL" section.

Add VLMatch function to Events database#

The VLMatch function should be applied to the PostgreSQL DB.

Define the function inside the Events database:

sudo -u postgres -h 127.0.0.1 -- psql -d luna_events -c "CREATE FUNCTION VLMatch(bytea, bytea, int) RETURNS float8 AS 'VLMatchSource.so', 'VLMatch' LANGUAGE C PARALLEL SAFE;"

Test the function with a call:

sudo -u postgres -h 127.0.0.1 -- psql -d luna_events -c "SELECT VLMatch('\x1234567890123456789012345678901234567890123456789012345678901234'::bytea, '\x0123456789012345678901234567890123456789012345678901234567890123'::bytea, 32);"

The result returned by the database must be "0.4765625".