Additional Information#
Liveness description#
The liveness algorithm enables LUNA PLATFORM to detect presentation attacks. A presentation attack is an attempt by an imposter to use a video or photos of another person to circumvent the recognition system and gain access to that person's private data.
There are the following general types of presentation attacks:
- Video Replay Attack. A video of another person is used.
- Printed Photo Attack. One or several photos of another person are used.
- Printed Mask Attack. An imposter cuts out a face from a photo and covers their face with it.
- 3D Mask Attack. A silicone or plastic mask is used.
Switch Liveness type#
There are two Liveness mechanisms available: Liveness V1 and Liveness V2. You can utilize only one Liveness at a time.
The Liveness mechanism used is specified in the license. The following values can be set in the license for the Liveness feature:
- 0 - Liveness feature is not used.
- 1 - Liveness v1 is used.
- 2 - Liveness v2 is used.
Liveness v1 is launched as a separate service, whereas Liveness v2 is a part of the Handlers service. As Liveness v1 is a separate service, it should be enabled using the "liveness" option of the "ADDITIONAL_SERVICES_USAGE" section in the Configurator service.
The tables below show the system behavior when different license values are set.
Relations between set options and Liveness used for the "/liveness" resource
License value | "Liveness" option | Used liveness/error |
---|---|---|
0 | true | Error 403 is returned |
0 | false | Error 403 is returned |
1 | true | Liveness V1 is used |
1 | false | Error 403 is returned |
2 | true | Error 403 is returned |
2 | false | Liveness V2 is used |
To use Liveness V1 for the "/liveness" resource, the license value must be set to "1" and the "liveness" option must be set to "true".
To use Liveness V2 for the "/liveness" resource, the license value must be set to "2" and the "liveness" option must be set to "false".
All the other combinations lead to the 403 error when requesting the "/liveness" resource.
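The decision table above can be summarized in a short sketch. The helper function and its names are illustrative only and are not part of the LUNA PLATFORM API:

```python
# Illustrative mapping of license value + "liveness" option to the
# behavior of the "/liveness" resource; not part of the actual API.
def liveness_backend(license_value: int, liveness_option: bool):
    """Return the Liveness version used, or None when the 403 error is returned."""
    if license_value == 1 and liveness_option:
        return "Liveness V1"
    if license_value == 2 and not liveness_option:
        return "Liveness V2"
    return None  # all other combinations lead to the 403 error

print(liveness_backend(1, True))   # Liveness V1
print(liveness_backend(2, False))  # Liveness V2
```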
Relations between license value and Liveness used for "/sdk" resource
License value | Used liveness/error |
---|---|
0 | Error 403 is returned |
1 | Error 403 is returned |
2 | Liveness V2 is used |
When estimate_liveness=1 is set for the "/sdk" resource, Liveness V2 must be enabled and the "liveness" option of the "ADDITIONAL_SERVICES_USAGE" section must be disabled. In all the other cases, the 403 error is returned.
Liveness check results#
The liveness algorithm uses a single image for processing and returns the following data:
- Liveness probability [0..1]. Here 1 means a real person, 0 means a spoof. The parameter shows the probability that a live person is present in the image, i.e. it is not a presentation attack. In general, the estimated probability must exceed the theoretical threshold of 50%. The value may be increased according to your business rules.
- Image quality [0..1]. Here 1 means good quality, 0 means bad quality. The parameter describes the integral value of image, facial, and environmental characteristics. In general, the estimated quality must exceed the theoretical threshold of 50%. The threshold may be increased according to the photo shooting conditions.
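A minimal client-side sketch of applying both thresholds follows. The function and variable names are illustrative; the actual response field names should be taken from the API specification:

```python
# Both values must exceed their thresholds for the check to pass.
LIVENESS_THRESHOLD = 0.5  # may be raised according to your business rules
QUALITY_THRESHOLD = 0.5   # may be raised according to shooting conditions

def liveness_check_passed(probability: float, quality: float) -> bool:
    """probability: liveness probability [0..1]; quality: image quality [0..1]."""
    return probability > LIVENESS_THRESHOLD and quality > QUALITY_THRESHOLD

print(liveness_check_passed(0.92, 0.81))  # True: live person, good image
print(liveness_check_passed(0.30, 0.81))  # False: likely a presentation attack
```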
Liveness V1#
Liveness V1 is used in the "/liveness" resource only. If this liveness is enabled and you use other resources with Liveness estimation (e. g., "/sdk"), the 403 error is returned.
Additional request parameters#
Liveness V1 provides additional request parameters.
You can specify the device OS type in the "OS" field of the "meta" object in the request:
- IOS
- ANDROID
- DESKTOP
- UNKNOWN
The parameter can decrease the overall error rate.
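For illustration, a request "meta" object carrying the OS type could look as follows. This is a sketch only; check the exact request schema against the "/liveness" section of the API specification:

```python
import json

# "OS" must be one of: IOS, ANDROID, DESKTOP, UNKNOWN
meta = {"OS": "ANDROID"}
payload = json.dumps({"meta": meta})
print(payload)  # {"meta": {"OS": "ANDROID"}}
```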
Liveness v1 requirements#
There are certain requirements for image quality and face alignment that must be met to get correct results.
Face requirements:
- A face should be fully open without any occlusions. The more face area is occluded, the lower the liveness estimation accuracy.
- A face should be fully visible within a frame and should have padding around (the distance between the face and the image boundaries). The default minimum value of padding is 25 pixels. Cropping is not allowed.
- Yaw and pitch angles are no more than 20 degrees in either direction.
- The roll angle should be no more than 30 degrees in either direction.
- The minimal distance between the eyes should be ~90 pixels (it is forbidden to set the value lower than 80 pixels).
- Single face in the image. It is recommended to avoid several faces being present in the image.
- No sunglasses.
Capture requirements:
- No blur (increases BPCER).
- No texture filtering (increases APCER).
- No spotlights on the face and close surroundings (increases BPCER).
- No colored light (increases BPCER).
- The face in the image must not be too light or too dark (increases BPCER).
- No fish-eye lenses.
APCER (Attack Presentation Classification Error Rate) — the rate of undetected attacks where algorithms identified the attack as a real person.
BPCER (Bona Fide Presentation Classification Error Rate) — the rate of incorrectly identified people where algorithms identified real people as spoofs.
Image requirements:
- Horizontal and vertical oriented images of 720p and 1080p.
- Minimal image height: 480 pixels.
- No or minimal image compression. Compression highly influences the liveness algorithms.
Liveness V2#
Liveness V2 is a part of the Handlers service.
Liveness V2 requirements#
The requirements for the processed image and the face in the image are listed above.
This estimator supports images taken on mobile devices or webcams (PC or laptop).
Image resolution minimum requirements:
- Mobile devices - 720 × 960 px
- Webcam (PC or laptop) - 1280 x 720 px
There should be only one face in the image. An error occurs when there are two or more faces in the image.
The minimum face detection size must be 200 pixels.
Yaw, pitch, and roll angles should be no more than 25 degrees in either direction.
The minimum indent between the face and the image borders should be 10 pixels.
General information about services#
Worker processes#
There is a possibility to set the number of worker processes to use additional central processing units for request handling. A service will automatically spin up multiple processes and route traffic between them.
Note the number of available cores on your server when utilizing this feature.
Worker processes utilization is an alternative way of linear service scaling. It is recommended to use additional worker processes instead of increasing the number of service instances on the same server.
It is not recommended to use additional worker processes for the Handlers service when it utilizes GPU. Problems may occur if there is not enough GPU memory, and the workers will interfere with each other.
You can change the number of workers in Docker containers of services using the WORKER_COUNT parameter during the service container launch.
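For example, a service container could be started with several workers as sketched below. The image name and tag are placeholders; WORKER_COUNT is the documented parameter:

```shell
# Sketch: launch a service container with 4 worker processes.
# Make sure the server has at least 4 available CPU cores.
docker run \
    --env=WORKER_COUNT=4 \
    --detach \
    --network=host \
    dockerhub.visionlabs.ru/luna/luna-handlers:<version>
```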
Automatic configurations reload#
LP services now support the auto-reload of configurations. When a setting is changed, it is automatically updated for all the instances of the corresponding services. When this feature is enabled, no manual restart of services is required.
This feature is available for all the settings provided for each Python service. You should enable the feature manually upon each service launching. See the "Enable automatic configuration reload" section.
Starting with version 5.5.0, the configuration reload for the Faces and Python Matcher services is mostly done by restarting the appropriate processes.
Restrictions#
The service can work incorrectly while new settings are being applied. It is strongly recommended not to send requests to the service when you change important settings (DB settings, the list of plugins in use, and others).
Applying new settings may lead to a service restart and cache resetting (e. g., the Python Matcher service cache). For example, changing the default descriptor version will lead to the LP restart. Changing the logging level does not cause a service restart (if a valid setting value was provided).
Enable automatic configuration reload#
You can enable this feature by specifying the --config-reload option in the command line. In Docker containers, the feature is enabled using the "RELOAD_CONFIG" option.
You can specify the configurations check period in the --pulling-time command line argument. The value is set to 10 seconds by default. In Docker containers, the period is set using the "RELOAD_CONFIG_INTERVAL" option.
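For example (a sketch: the service run command and image name are placeholders, while the option and variable names are the documented ones):

```shell
# Directly, via the command line options of a service:
#   --config-reload enables the feature,
#   --pulling-time sets the check period in seconds (default 10).
<service_run_command> --config-reload --pulling-time 10

# In Docker containers, the same is achieved with environment variables:
docker run \
    --env=RELOAD_CONFIG=1 \
    --env=RELOAD_CONFIG_INTERVAL=10 \
    <service_image>
```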
Configurations update process#
LP services periodically receive settings from the Configurator service or from configuration files, depending on how a particular service receives its configurations.
Each service compares its existing settings with the received settings:
- If the service settings were changed, they will be pulled and applied.
- If the configurations pulling has failed, the service will continue working without applying any changes to the existing configurations.
- If connection checks with the new settings have failed, the service will retry the configurations pulling after 5 seconds. The service will shut down after 5 failed attempts.
- If the current settings and the newly pulled settings are the same, the Configurator service will not perform any actions.
Database migration execution#
You should execute migration scripts to update your database structure when upgrading to new LP builds. By default, migrations are automatically applied when running the db_create script.
This method may be useful when you need to roll back to the previous LUNA PLATFORM build or upgrade the database structure without changing the stored data. In any case, it is recommended to create a backup of your database before applying any changes.
You can run migrations from a container or use a single command.
Single command#
The example is given for the Tasks service.
docker run \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/tasks:/srv/logs \
--rm \
--network=host \
dockerhub.visionlabs.ru/luna/luna-tasks:v.3.0.27 \
alembic -x luna-config=http://127.0.0.1:5070/1 upgrade head
Running from container#
To run migrations from a container, follow these steps (the example is given for the Configurator service):
- Go to the service docker container. See the "Enter container" section.
- Run the migrations.
For most of the services, the configuration parameters should be received from the Configurator service, and the command is the following:
alembic -x luna-config=http://127.0.0.1:5070/1 upgrade head
The "-x luna-config=http://127.0.0.1:5070/1" argument specifies that the configuration parameters for migrations should be received from the Configurator service.
For the Configurator service itself, the parameters are received from the "srv/luna_configurator/configs/config.conf" file. You should use the following command for the Configurator service:
alembic upgrade head
- Exit the container. The container will be removed after you exit.
exit
Neural networks information#
Switch to 46 or 52 neural network#
This section describes switching to the 46 and 52 versions of neural networks. Switching is required when one of these versions was utilized in the previous LP version and you do not want to upgrade to a new neural network version.
The neural networks are not included in the distribution package. They are provided separately upon request to VisionLabs. There are two separate archives: for CPU with AVX2 and GPU. You should download the required archive.
Each archive includes two neural networks (*.plan) and their configuration files (*.conf).
After you have downloaded the archive with neural networks, you should perform the following actions:
- unzip the archive
- copy the neural networks to the launched Handlers container
- follow the steps described in the "Switch neural network version" section
Unzip neural networks#
Go to the directory with the archive and unzip it.
unzip fsdk_plans_*.zip
Copy neural networks to the handlers container#
Go to the directory with neural networks.
cd fsdk_plans_*/
Then copy the required neural network and its configuration file to the Handlers container using one of the following commands.
For the 46 neural network:
docker cp cnn46*.plan luna-handlers:/srv/fsdk/data/
docker cp cnndescriptor_46*.conf luna-handlers:/srv/fsdk/data/
docker cp cnndescriptor_46*.conf luna-python-matcher:/usr/lib/python3.7/site-packages/pymatcherlib/matcher/data/
luna-handlers - the name of the launched Handlers container. This name may differ in your installation.
For the 52 neural network:
docker cp cnn52*.plan luna-handlers:/srv/fsdk/data/
docker cp cnndescriptor_52*.conf luna-handlers:/srv/fsdk/data/
docker cp cnndescriptor_52*.conf luna-python-matcher:/usr/lib/python3.7/site-packages/pymatcherlib/matcher/data/
Check that the required model for the required device (CPU or GPU) was successfully loaded:
docker exec -t luna-handlers ls /srv/fsdk/data/
Switch neural network version#
When changing the neural network version used, one should:
- set a new neural network version in LP configurations
- perform the re-extraction task so that the already existing descriptors are re-extracted using the new neural network
Set neural network version in LP configurations#
It is highly recommended not to perform any requests changing the states of databases during the descriptor version updates. It can lead to data loss.
You should set the new version of the neural network in the configurations of services. Use the Configurator GUI for this purpose:
- go to http://<configurator_server_ip>:5070
- set the required neural network in the "DEFAULT_FACE_DESCRIPTOR_VERSION" parameter
- save changes using the "Save" button
- wait until the setting is applied to all the LP services
Launch re-extraction task#
The re-extraction task performs the extraction of descriptors using the new neural network version. It should be launched using the Admin service to be applied to all the descriptors created.
Samples are required for the re-extraction of descriptors using a new neural network. Descriptors of a new version will not be extracted for the faces and events that do not have samples.
Create backups of LP databases and the Image Store storage before launching the re-extraction task.
The re-extraction task can be launched using one of the following ways:
- using the request to the Admin API. See the "/additional_extract" resource for details
- using the Admin GUI
Re-extraction using Admin GUI:
- Go to the Admin GUI: http://<admin_server_ip>:5010/tasks.
- Run the re-extract task using the corresponding button.
- Set the new neural network version in the appeared window and press "Run".
You can see the information about the task processing using the "View details" button.
You can download the log with all the processed samples and occurred errors using the "download" button in the "Result" column.
General information about requests creation#
All information about LP API can be found in the following directory:
"./docs/ReferenceManuals/"
API specifications are provided in two formats: HTML and YML.
The documents in HTML format provide a visual representation of API specifications and may be incomplete.
The documents in YML format provide a valid specification for LUNA PLATFORM. You can import the file to an external application for requests creation (e. g., Postman) or visualize using special tools (e. g., https://editor.swagger.io/).
The HTML and YML documents include:
- Required resources and methods for requests sending.
- Request parameters description.
- Response description.
- Examples of the requests and responses.
HTML and YML documents corresponding to the same service API have the same names.
When performing a request that changes the database, it is required to specify a "Luna-Account-Id". The created data will be related to the specified account ID.
You should use the account ID when requesting information from LP to receive the information related to the account.
The account ID is created according to the UUID format. There are plenty of UUID generators available on the Internet.
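You can also generate an account ID locally with the Python standard library instead of using an online generator:

```python
import uuid

# uuid4() produces a random UUID suitable for use as an account ID
account_id = str(uuid.uuid4())
print(account_id)  # e.g. "6d071cca-fda5-4a03-84d5-5bea65904480"
```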
For testing purposes, the account ID from the API request examples can be used.
The HTML document includes the following elements:
- Requests, divided into groups.
- Request method and URL example. You should use it with your protocol, IP address, and port to create a request. Example: POST http://<IP>:<PORT>/<Version>/matcher.
- Description of request path parameters, query parameters, header parameters, and body schema.
- Example of the request body.
- Description of responses.
- Examples of responses.
General requests to LP are sent via API service, using its URL:
http://<API server IP-address>:<API port>/<API Version>/
You can send requests via CURL or Postman to test LP.
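For example, a matching request could be sent with cURL as sketched below. The address, port, API version, and account ID are illustrative, and the request body is deliberately elided; take the exact body schema from the API specification:

```shell
curl -X POST "http://127.0.0.1:5000/6/matcher" \
    -H "Content-Type: application/json" \
    -H "Luna-Account-Id: 6d071cca-fda5-4a03-84d5-5bea65904480" \
    -d '{...}'
```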
You can expand descriptions for request body parameters or response parameters using the corresponding icon.

You can select the required example for request body or response in corresponding windows.


When specifying filters for requests you must use a full value, unless otherwise noted. The possibility of using part of the value is indicated in the description.
Upload images from folder#
The "folder_uploader.py" script uploads images from the specified folder and processes uploaded images according to the preassigned parameters.
General information about the script#
The "folder_uploader.py" script can only be utilized for uploading images via the API service.
You cannot specify the Backport 4 address and port when utilizing this script. However, you can use the data uploaded to the LP 5 API in Backport 4 requests.
You cannot use the "folder_uploader.py" script to upload data to the Backport 3 service, as the objects created for Backport 3 differ (e. g., the "person" object is not created by the script).
Script usage#
Script pipeline:
- Search images of the allowed type ('.jpg', '.jpeg', '.png', '.bmp', '.ppm', '.tif', '.tiff') in the specified folder (source).
- Start asynchronous image processing according to the specified parameters (see section "Script launching").
Image processing pipeline:
- Detect faces and create samples.
- Extract attributes.
- Create faces and link them to a list.
- Add record to the log file.
If an image was loaded successfully, a record is added to the "_success_log.txt" success log file. The record has the following structure:
{
"image name": ...,
"face id": [...]
}
If errors occur at any step of the script processing, the image processing routine is terminated and a record is added to the "_error_log.txt" error log file. The record has the following structure:
{
"image name": ...,
"error": ...
}
Install dependencies#
Before launching the script, you must install all the required dependencies.
It is strongly recommended to create a virtual environment for Python dependencies installation.
Install the following Python packages (version 3.7 or later) before launching the installation. The packages are not provided in the distribution package, and their installation is not described in this manual:
- python3.7
- python3.7-devel
Install gcc.
yum -y install gcc
Go to the directory with the script
cd /var/lib/luna/current/extras/utils
Create a virtual environment
python3.7 -m venv venv
Activate the virtual environment
source venv/bin/activate
Install the tqdm library.
pip3.7 install tqdm
Install luna3 libraries.
pip3.7 install ./luna3*.whl
Deactivate virtual environment
deactivate
Script launching#
Use the following command to run the script (the virtual environment must be activated):
python3.7 folder_uploader.py --account_id 6d071cca-fda5-4a03-84d5-5bea65904480 --source "Images/" --warped 0 --descriptor 1 --origin http://127.0.0.1:5000 --api 6 --avatar 1 --list_id 0dde5158-e643-45a6-8a4d-ad42448a913b --name_as_userdata 1
Make sure that the --descriptor parameter is set to 1 so that descriptors are created.
- --source "Images/" - "Images/" is the folder with images located near the "folder_uploader.py" script. You can also specify the full path to the directory.
- --list_id 0dde5158-e643-45a6-8a4d-ad42448a913b - specify your existing list here.
- --account_id 6d071cca-fda5-4a03-84d5-5bea65904480 - specify the required account ID.
- --origin http://127.0.0.1:5000 - specify your current API service address and port here.
- --api 6 - specify the API version. You can find it in the /var/lib/luna/current/docs/ReferenceManuals/APIReferenceManual.html document.
See help for more information about available script arguments:
python3.7 folder_uploader.py --help
Command line arguments:
- account_id: an account ID used in requests to LUNA PLATFORM (required)
- source: a directory with images to load (required)
- warped: whether the images are warped (0,1) (required)
- descriptor: whether to extract descriptors (0,1); default - 0
- origin: origin; default - "http://127.0.0.1:5000"
- api: API version of the API service; default - 5
- avatar: whether to set the sample as an avatar (0,1); default - 0
- list_id: list ID to link faces with (a new LUNA list will be created if list_id is not set and list_linked=1); default - None
- list_linked: whether to link faces with the list (0,1); default - 1
- list_userdata: user data for the list to link faces with (for a newly created list); default - None
- pitch_threshold: maximum deviation pitch angle [0..180]
- roll_threshold: maximum deviation roll angle [0..180]
- yaw_threshold: maximum deviation yaw angle [0..180]
- multi_face_allowed: whether to allow detection of several faces from a single image (0,1); default - 0
- get_major_detection: whether to choose the major face detection by sample Manhattan distance from a single image (0,1); default - 0
- basic_attr: whether to extract basic attributes (0,1); default - 1
- score_threshold: descriptor quality score threshold (0..1); default - 0
- name_as_userdata: whether to use the image name as user data (0,1); default - 0
- concurrency: parallel processing image count; default - 10
Client library#
General information#
The archive with the client library for LUNA PLATFORM 5 is provided in the distribution package:
/var/lib/luna/current/extras/utils/luna3-*.whl
This Python library is an HTTP client for all LUNA PLATFORM services.
You can find the examples of the library utilization in the /var/lib/luna/current/docs/ReferenceManuals/APIReferenceManual.html document.
The example below shows the request for faces matching. The luna3 library is utilized for the request creation. See "matcher" > "matching faces" in "APIReferenceManual.html":
# This example is written using luna3 library
from luna3.common.http_objs import BinaryImage
from luna3.lunavl.httpclient import LunaHttpClient
from luna3.python_matcher.match_objects import FaceFilters
from luna3.python_matcher.match_objects import Reference
from luna3.python_matcher.match_objects import Candidates

luna3client = LunaHttpClient(
    accountId="8b8b5937-2e9c-4e8b-a7a7-5caf86621b5a",
    origin="http://127.0.0.1:5000",
)

# create sample
sampleId = luna3client.saveSample(
    image=BinaryImage("image.jpg"),
    raiseError=True,
).json["sample_id"]

# extract attributes
attributeId = luna3client.extractAttrFromSample(
    sampleIds=[
        sampleId,
    ],
    raiseError=True,
).json[0]["attribute_id"]

# create face
faceId = luna3client.createFace(
    attributeId=attributeId,
    raiseError=True,
).json["face_id"]

# match
candidates = Candidates(
    FaceFilters(
        faceIds=[
            faceId,
        ]
    ),
    limit=3,
    threshold=0.5,
)
reference = Reference("face", faceId)
response = luna3client.matchFaces(
    candidates=[candidates], references=[reference],
    raiseError=True,
)
print(response.statusCode)
print(response.json)
Library installation example#
In this example a virtual environment is created for luna3 installation.
You can use this Python library on Windows, Linux, or macOS.
Install the following Python packages (version 3.7 or later) before launching the installation. The packages are not provided in the distribution package, and their installation is not described in this manual:
- python3.7
- python3.7-devel
Install gcc.
yum -y install gcc
Go to the directory with the script
cd /var/lib/luna/current/extras/utils
Create a virtual environment
python3.7 -m venv venv
Activate the virtual environment
source venv/bin/activate
Install luna3 libraries.
pip3.7 install ./luna3*.whl
Deactivate virtual environment
deactivate
Databases information#
InfluxDB OSS 2#
Starting with version 5.5.0, LUNA PLATFORM provides the possibility to use InfluxDB version 2.
For InfluxDB OSS 2 usage, you should:
- Install the DB. Installation is not described in this documentation. See the InfluxDB documentation for installation details: https://docs.influxdata.com/influxdb/v2.0/install/?t=Linux.
- Register in the DB. InfluxDB has a user interface where you can register. You should visit <server_ip>:<influx_port>.
- Configure LUNA PLATFORM to use InfluxDB version 2.
- Configure the display of monitoring information in the GUI. It is not described in this documentation.
Migration from version 1#
InfluxDB provides built-in tools for migration from version 1 to version 2. See documentation:
https://docs.influxdata.com/influxdb/v2.0/upgrade/v1-to-v2/
InfluxDB OSS 2 configuration#
InfluxDB OSS 2 configurations differ from InfluxDB OSS 1 configurations. One should change the settings set in the "INFLUX_MONITORING" section.
The following settings are the same for both InfluxDB versions:
- "send_data_for_monitoring"
- "use_ssl"
- "flushing_period"
- "host"
- "port"
The unique settings for InfluxDB OSS 2 are described below.
InfluxDB OSS 2 unique settings
Setting name | Type | Description |
---|---|---|
organization | String | The organization name specified during registration. |
token | String | Token received after registration. |
bucket | String | Bucket name. |
version | Integer | Version of the DB used. You should set "1" or "2" according to the version used. You can leave the field empty; then the required version will be set automatically. |
The resulting settings for InfluxDB OSS 2:
"send_data_for_monitoring": 1,
"use_ssl": 0,
"flushing_period": 1,
"host": "127.0.0.1",
"port": 8086,
"organization": "<ORGANIZATION_NAME>",
"token": "<TOKEN>",
"bucket": "<BUCKET_NAME>",
"version": <DB_VERSION>
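After filling in the settings, you can quickly verify that the configured InfluxDB OSS 2 instance is reachable. The host and port below match the example settings; the "/health" endpoint is part of the InfluxDB 2.x HTTP API:

```shell
# A healthy instance responds with a JSON document whose "status" is "pass".
curl "http://127.0.0.1:8086/health"
```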
You can update InfluxDB settings in the Configurator service by following these steps:
- Open /var/lib/luna/current/extras/conf/influx2.json.
- Set required data in "organization", "token", "bucket", "version" fields.
- Save changes.
- Copy the file to the Configurator container:
docker cp /var/lib/luna/current/extras/conf/influx2.json example-docker_configurator_1:/srv/
Note that the Configurator container name differs for manual (luna-configurator) and Compose (example-docker_configurator_1) installations.
- Update settings in the Configurator.
docker exec -it example-docker_configurator_1 python3.7 ./base_scripts/db_create.py --dump-file /srv/influx2.json
You can also manually update settings in the Configurator service user interface.