Additional Information#
Liveness description#
The liveness algorithm enables LUNA PLATFORM to detect presentation attacks. A presentation attack is a situation when an imposter tries to use a video or photos of another person to circumvent the recognition system and gain access to the person's private data.
There are the following general types of presentation attacks:
- Printed Photo Attack. One or several photos of another person are used.
- Video Replay Attack. A video of another person is used.
- Printed Mask Attack. An imposter cuts out a face from a photo and covers their face with it.
Switch Liveness type#
There are two Liveness mechanisms available: Liveness V1 and Liveness V2. You can utilize only one Liveness at a time.
The Liveness mechanism used is specified in the license. The following values can be set in the license for the Liveness feature:
- 0 - Liveness feature is not used.
- 1 - Liveness v1 is used.
- 2 - Liveness v2 is used.
Liveness v1 is launched as a separate service, whereas Liveness v2 is a part of the Handlers service. As Liveness v1 is a separate service, it should be enabled using the "liveness" option of the "ADDITIONAL_SERVICES_USAGE" section in the Configurator service.
The tables below show the system behavior when different license values are set.
Relations between set options and Liveness used for the "/liveness" resource
License value | "Liveness" option | Used liveness/error |
---|---|---|
0 | true | Error 403 is returned |
0 | false | Error 403 is returned |
1 | true | Liveness V1 is used |
1 | false | Error 403 is returned |
2 | true | Error 403 is returned |
2 | false | Liveness V2 is used |
To use Liveness v1 with the "/liveness" resource, the license value must be set to "1" and the "liveness" option must be set to "true".
To use Liveness v2 with the "/liveness" resource, the license value must be set to "2" and the "liveness" option must be set to "false".
All other combinations lead to the 403 error when requesting the "/liveness" resource.
Relations between license value and Liveness used for "/sdk" and "/handlers" resources
License value | Used liveness/error |
---|---|
0 | Error 403 is returned |
1 | Error 403 is returned |
2 | Liveness V2 is used |
When estimate_liveness=1 is set for these resources, Liveness V2 must be enabled in the license, and the "liveness" option of the "ADDITIONAL_SERVICES_USAGE" section must be disabled. In all other cases, the 403 error is returned.
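Below is a hedged request sketch for the "/sdk" resource with the liveness estimation enabled. The host, port, API version, and the placement of the parameter as a query parameter are assumptions; check the OpenAPI specification for the exact request schema:
curl -X POST "http://127.0.0.1:5000/6/sdk?estimate_liveness=1" \
  -H "Content-Type: image/jpeg" \
  --data-binary @image.jpg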
Liveness check results#
The liveness algorithm uses a single image for processing and returns the following data:
- Liveness probability [0..1], where 1 means a real person and 0 means a spoof. The parameter shows the probability that a live person is present in the image, i.e. that it is not a presentation attack. In general, the estimated probability must exceed the theoretical threshold of 50%. The value may be increased according to your business rules.
- Image quality [0..1], where 1 means good quality and 0 means bad quality. The parameter describes the integral value of image, facial, and environmental characteristics. In general, the estimated quality must exceed the theoretical threshold of 50%. The threshold may be increased according to the photo shooting conditions.
Liveness V2#
Liveness V2 is a part of the Handlers service and is used in the "/liveness", "/sdk", "/handlers" and "/verifiers" resources.
You can filter events by liveness in the "/handlers/{handler_id}/events" and "/verifiers/{verifier_id}/verifications" resources, i.e. you can exclude "spoof", "real" or "unknown" results from image processing.
Filtering by liveness is available for the following scenarios:
- when performing matching operations;
- when performing tasks;
- when sending data via web sockets.
You can also specify the liveness estimation parameter when manually creating and saving events in the "/handlers/{handler_id}/events/raw" resource.
For multiple uploaded images, you can aggregate the liveness results to obtain more accurate data.
Liveness V2 requirements#
Liveness estimation is not supported for samples (warped images) as they do not meet the requirements for incoming images.
The following requirements are related to Liveness V2 only.
This estimator supports images taken on mobile devices or webcams (PC or laptop).
Image resolution minimum requirements:
- Mobile devices - 720 × 960 px
- Webcam (PC or laptop) - 1280 × 720 px
There should be only one face in the image. An error occurs when there are two or more faces in the image.
The minimum face detection size must be 200 pixels.
Yaw, pitch, and roll angles should be no more than 25 degrees in either direction.
The minimum indent between the face and the image borders should be 10 pixels.
Liveness V1#
Liveness V1 is used in the "/liveness" resource only. If this liveness is enabled and you use other resources with Liveness estimation (e. g., "/sdk"), the 403 error is returned.
Additional request parameters#
Liveness V1 provides additional request parameters.
You can specify the device OS type in the "OS" field of the "meta" object in the request:
- IOS
- ANDROID
- DESKTOP
- UNKNOWN
The parameter can decrease the overall error rate.
Liveness V1 requirements#
Liveness estimation is not supported for samples (warped images) as they do not meet the requirements for incoming images.
There are certain requirements for image quality and face alignment that must be met to get correct results.
Face requirements:
- A face should be fully open without any occlusions. The more of the face area is occluded, the lower the liveness estimation accuracy.
- A face should be fully visible within a frame and should have padding around it (the distance between the face and the image boundaries). The default minimum value of padding is 25 pixels. Cropping is not allowed.
- Yaw and pitch angles should be no more than 20 degrees in either direction.
- The roll angle should be no more than 30 degrees in either direction.
- The minimal distance between the eyes should be ~90 pixels (it is forbidden to set the value lower than 80 pixels).
- There should be a single face in the image. It is recommended to avoid several faces being present in the image.
- No sunglasses.
Capture requirements:
- No blur (increases BPCER).
- No texture filtering (increases APCER).
- No spotlights on the face and close surroundings (increases BPCER).
- No colored light (increases BPCER).
- The face in the image must not be too light or too dark (increases BPCER).
- No fish-eye lenses.
APCER (Attack Presentation Classification Error Rate) — the rate of undetected attacks where algorithms identified the attack as a real person.
BPCER (Bona Fide Presentation Classification Error Rate) — the rate of incorrectly identified people where algorithms identified real people as spoofs.
Image requirements:
- Horizontal and vertical oriented images of 720p and 1080p.
- Minimal image height: 480 px.
- No or minimal image compression. Compression highly influences the liveness algorithms.
General information about services#
Worker processes#
You can set the number of worker processes to use additional CPU cores for request handling. A service will automatically spin up multiple processes and route traffic between them.
Take the number of available cores on your server into account when utilizing this feature.
Worker processes utilization is an alternative way of linear service scaling. It is recommended to use additional worker processes instead of increasing the number of service instances on the same server.
It is not recommended to use additional worker processes for the Handlers service when it utilizes GPU. Problems may occur if there is not enough GPU memory, and the workers will interfere with each other.
You can change the number of workers in Docker containers of services using the WORKER_COUNT parameter during the service container launch.
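For example, below is a hedged fragment of the Handlers container launch with two workers. The image tag is a placeholder, and the volumes and other environment variables from your installation manual should be added:
docker run \
  -e WORKER_COUNT=2 \
  --network=host \
  --rm \
  dockerhub.visionlabs.ru/luna/luna-handlers:<version>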
Automatic configurations reload#
LP services now support the auto-reload of configurations. When a setting is changed, it is automatically updated for all the instances of the corresponding services. When this feature is enabled, no manual restart of services is required.
This feature is available for all the settings provided for each Python service. You should enable the feature manually upon each service launch. See the "Enable automatic configuration reload" section.
Starting with version 5.5.0, the configuration reload for the Faces and Python Matcher services is performed mostly by restarting the appropriate processes.
Restrictions#
A service can work incorrectly while new settings are being applied. It is strongly recommended not to send requests to the service when you change important settings (DB settings, the list of active plugins, and others).
Applying new settings may lead to a service restart and cache resetting (e.g., the Python Matcher service cache). For example, changing the default descriptor version will lead to an LP restart. Changing the logging level does not cause a service restart (if a valid setting value was provided).
Enable automatic configuration reload#
You can enable this feature by specifying the --config-reload option in the command line. In Docker containers, the feature is enabled using the "RELOAD_CONFIG" option.
You can specify the configurations check period using the --pulling-time command line argument. The value is set to 10 seconds by default. In Docker containers, this period is set using the "RELOAD_CONFIG_INTERVAL" option.
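For example, below is a hedged container launch fragment with auto-reload enabled and a 10-second check period. The exact values accepted by these variables should be checked against your installation manual:
docker run \
  -e RELOAD_CONFIG=1 \
  -e RELOAD_CONFIG_INTERVAL=10 \
  --network=host \
  --rm \
  dockerhub.visionlabs.ru/luna/luna-handlers:<version>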
Configurations update process#
LP services periodically receive settings from the Configurator service or from configuration files, depending on the way a particular service receives its configuration.
Each service compares its existing settings with the received settings:
- If the service settings were changed, they will be pulled and applied.
- If the configurations pulling has failed, the service will continue working without applying any changes to the existing configurations.
- If connection checks with the new settings have failed, the service will retry pulling the new configurations after 5 seconds. The service will shut down after 5 failed attempts.
- If the current settings and the newly pulled settings are the same, the Configurator service will not perform any actions.
Database migration execution#
You should execute migration scripts to update your database structure when upgrading to a new LP build. By default, migrations are automatically applied when running the db_create script.
Running migrations manually may be useful when you need to roll back to the previous LUNA PLATFORM build or to upgrade the database structure without changing the stored data. In any case, it is recommended to create a backup of your database before applying any changes.
You can run migrations from a container or use a single command.
Single command#
The example is given for the Tasks service.
docker run \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/tasks:/srv/logs \
--rm \
--network=host \
dockerhub.visionlabs.ru/luna/luna-tasks:v.3.4.6 \
alembic -x luna-config=http://127.0.0.1:5070/1 upgrade head
Running from container#
To run migrations from a container, follow these steps (the example is given for the Configurator service):
- Go to the service docker container. See the "Enter container" section.
- Run the migrations.
For most of the services, the configuration parameters should be received from the Configurator service, and the command is the following:
alembic -x luna-config=http://127.0.0.1:5070/1 upgrade head
The -x luna-config=http://127.0.0.1:5070/1 argument specifies that the configuration parameters for migrations should be received from the Configurator service.
For the Configurator service, the parameters are received from the "/srv/luna_configurator/configs/config.conf" file. You should use the following command:
alembic upgrade head
- Exit the container. The container will be removed after you exit.
exit
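Alembic also supports stepping the database schema back, which may be useful for the rollback scenario mentioned above. A hedged sketch, assuming the same configuration source and a rollback of a single revision (check the migration history of your build before running it):
alembic -x luna-config=http://127.0.0.1:5070/1 downgrade -1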
Nuances of working with services#
When working with different services, it is necessary to take into account some nuances, which are described in this section.
Auto-orientation of rotated image#
It is not recommended to send rotated images to LP as they are not processed properly and should be auto-oriented first. There are two methods to auto-orient a rotated image: based on EXIF image data (query parameters) and using LP algorithms (Configurator setting). Both methods for automatic image orientation can be used together.
If auto-orientation is not used, the sample creation mechanism may produce a sample with an arbitrary rotation angle.
Auto-orientation based on EXIF data#
This method of image orientation is controlled by the "use_exif_info" query parameter, which enables or disables auto-orientation of the image based on its EXIF data.
This parameter is available and enabled by default in the following resources:
The "use_exif_info" parameter cannot be used with samples. When the "warped_image" or "image_type" query parameter is set to the appropriate value, the parameter is ignored.
Auto-orientation based on Configurator setting#
This method of image orientation is performed in the Configurator using the "LUNA_HANDLERS_USE_AUTO_ROTATION" setting. If this setting is enabled and the input image is rotated by 90, 180 or 270 degrees, then LP rotates the image to the correct angle. If this setting is enabled, but the input image is not rotated, then LP does not rotate the image.
Performing auto-orientation consumes a significant amount of server resources, so it is disabled by default.
The "LUNA_HANDLERS_USE_AUTO_ROTATION" setting cannot be used with samples. If the "warped_image" or "image_type" query parameter is set to the appropriate value and the input image is a sample and rotated, then the "LUNA_HANDLERS_USE_AUTO_ROTATION" setting will be ignored.
Saving source images#
The URL to the source image can be saved in the "image_origin" field of the created events when processing the "/handlers/{handler_id}/events" request.
To do this, you should specify the "store_image" parameter in the "image_origin_policy" when creating the handler.
Then you should set an image for processing in the "generate events" request.
If "use_external_references"=0
and the URL to an external image was transferred in the "generate events" request, then this image will be saved to the Image Store storage, and the ID of the saved image will be added in the "image_origin" field of the generated event.
The "use_external_references" parameter enables you to save an external link instead of saving the image itself:
-
If
"use_external_references"=1
and the URL to an external image was transferred in the "generate events" request, then that URL will be added in the "image_origin" field. The image itself will not be saved to the Image Store. -
If
"use_external_references"=1
, the sample was provided in the "generate events" request and "face_sample_policy > store_sample" is enabled, the URL to the sample in the Image Store will be saved in the "image_origin" field. The duplication of the sample image will be avoided. If an external URL is too long (more than 256 symbols), the service will store the image to the Image Store.
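For reference, below is a sketch of how the corresponding fragment of the handler creation request body might look. The exact nesting of "image_origin_policy" within the handler schema should be verified against the OpenAPI specification; the values are illustrative:
"image_origin_policy": {
    "store_image": 1,
    "use_external_references": 1
}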
You can also provide the URL to the source image directly using the "/handlers/{handler_id}/events" resource. To do this, you should use the "application/json" or "multipart/form-data" body schema in the request. The URL should be specified in the "image_origin" field of the request.
If the "image_origin" is not empty, the provided URL will be used in the created event regardless of the "image_origin_policy" policy.
The image provided in the "image_origin" field will not be processed in the request. It is used as a source image only.
Neural networks information#
Switch to 46 or 52 neural network#
This section describes switching to the 46 and 52 versions of neural networks. This is required when you utilized one of these versions in the previous LP version and do not want to upgrade to a new neural network version.
The neural networks are not included in the distribution package. They are provided separately upon request to VisionLabs. There are two separate archives: for CPU with AVX2 and GPU. You should download the required archive.
Each archive includes two neural networks (*.plan) and their configuration files (*.conf).
After you have downloaded the archive with neural networks, you should perform the following actions:
- unzip the archive
- copy the neural networks to the launched Handlers container
- follow the steps described in the "Switch neural network version" section
Unzip neural networks#
Go to the directory with the archive and unzip it.
unzip fsdk_plans_*.zip
Copy neural networks to the handlers container#
Go to the directory with neural networks.
cd fsdk_plans_*/
Then copy the required neural network and its configuration file to the Handlers container using one of the following commands.
For the 46 neural network:
docker cp cnn46*.plan luna-handlers:/srv/fsdk/data/
docker cp cnndescriptor_46*.conf luna-handlers:/srv/fsdk/data/
docker cp cnndescriptor_46*.conf luna-python-matcher:/usr/lib/python3.9/site-packages/pymatcherlib/matcher/data/
luna-handlers - the name of the launched Handlers container. This name may differ in your installation.
For the 52 neural network:
docker cp cnn52*.plan luna-handlers:/srv/fsdk/data/
docker cp cnndescriptor_52*.conf luna-handlers:/srv/fsdk/data/
docker cp cnndescriptor_52*.conf luna-python-matcher:/usr/lib/python3.9/site-packages/pymatcherlib/matcher/data/
Check that the required model for the required device (CPU or GPU) was successfully loaded:
docker exec -t luna-handlers ls /srv/fsdk/data/
Switch neural network version#
When changing the neural network version used, one should:
- perform the re-extraction task so that the already existing descriptors can be re-extracted using the new neural network. You should not change the default neural network version before finishing the re-extraction task.
- set a new neural network version in LP configurations (see "Change neural network version").
Launch re-extraction task#
The re-extraction task performs the extraction of descriptors using the new neural network version. It should be launched using the Admin service to be applied to all the descriptors created.
Re-extraction can be performed for faces and events objects. Basic attributes, face descriptors, and body descriptors (for events) can be re-extracted. You should specify the corresponding objects in the re-extraction request.
It is highly recommended not to perform any requests changing the state of databases during the descriptor version updates. It can lead to data loss.
The default descriptor version in the "DEFAULT_FACE_DESCRIPTOR_VERSION" parameter (for faces) or the "DEFAULT_HUMAN_DESCRIPTOR_VERSION" (for bodies) in the Configurator service should be set to the current neural network version used for the descriptors extraction, not to the new NN version. New neural network version should be set after the re-extraction was successfully finished.
Samples are required for the re-extraction of descriptors using a new neural network. Descriptors of a new version will not be extracted for the faces and events that do not have samples.
Create backups of LP databases and the Image Store storage before launching the re-extraction task.
The re-extraction task can be launched using one of the following ways:
- using the request to the Admin API. See the "/additional_extract" resource for details
- using the Admin GUI
Re-extraction using Admin GUI:
- Go to the Admin GUI: http://<admin_server_ip>:5010/tasks.
- Run the re-extract task using the corresponding button.
- Set the object type (Faces or Events), descriptor type (Face or Body), and the new neural network version in the appeared window and press "Run".
You can see the information about the task processing using the "View details" button.
You can download the log with all the processed samples and occurred errors using the "download" button in the "Result" column.
Change neural network version#
You should set the new version of the neural network in the configurations of services. Use the Configurator GUI for this purpose:
- Go to the Configurator GUI: http://<configurator_server_ip>:5070.
- Set the required neural network version in the "DEFAULT_FACE_DESCRIPTOR_VERSION" parameter (for faces) or the "DEFAULT_HUMAN_DESCRIPTOR_VERSION" parameter (for bodies).
- Save changes using the "Save" button.
- Wait until the setting is applied to all the LP services.
General information about requests creation#
All information about LP API can be found in the following directory:
"./docs/ReferenceManuals/"
API specifications are provided in two formats: HTML and YML.
OpenAPI specification is the only valid document providing up-to-date information about the service API.
The specification can be used:
- By documentation generation tools to visualize the API (e. g., https://editor.swagger.io/).
- By code generation tools. You can import the file to an external application for requests creation (e. g., Postman).
All the documents and code generated using this specification can include inaccuracies and should be carefully checked.
OpenAPI specification can be received in one of the following ways:
- using the "/docs/spec" resource. The "Accept" header should be set to "application/x-yaml" (see the example after this list).
- in the "ReferenceManuals" directory of the LUNA PLATFORM distribution package. The document is in YAML format.
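A hedged sketch of downloading the specification via the "/docs/spec" resource is given below. The host and port are placeholders, and whether the resource is prefixed with the API version should be checked against your installation:
curl -H "Accept: application/x-yaml" "http://127.0.0.1:5000/docs/spec" -o api_spec.yml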
The documents in HTML format provide a visual representation of API specifications and may be incomplete.
Specification includes:
- Required resources and methods for requests sending.
- Request parameters description.
- Response description.
- Examples of the requests and responses.
HTML and YML documents corresponding to the same service API have the same names.
When performing a request that changes the database, it is required to specify a "Luna-Account-Id". The created data will be related to the specified account ID.
You should use the account ID when requesting information from LP to receive the information related to the account.
The account ID is created according to the UUID format. There are plenty of UUID generators available on the Internet.
For testing purposes, the account ID from the API requests examples can be used.
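For example, you can generate a new account ID locally; uuidgen ships with most Linux distributions, and the Python one-liner is an equivalent alternative:
uuidgen
python3 -c "import uuid; print(uuid.uuid4())"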

The HTML document includes the following elements:
- Requests, divided into groups.
- Request method and URL example. You should use it with your protocol, IP address, and port to create a request. Example: POST http://<IP>:<PORT>/<Version>/matcher.
- Description of request path parameters, query parameters, header parameters, and body schema.
- Example of the request body.
- Description of responses.
- Examples of responses.
General requests to LP are sent via API service, using its URL:
http://<API server IP-address>:<API port>/<API Version>/
You can send requests via CURL or Postman to test LP.
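For example, below is a minimal request sketch that lists faces for an account. The resource path and API version are assumptions based on the examples above; check the OpenAPI specification for the exact schema:
curl -H "Luna-Account-Id: 6d071cca-fda5-4a03-84d5-5bea65904480" \
  "http://127.0.0.1:5000/6/faces"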
You can expand descriptions for request body parameters or response parameters using the corresponding icon.

You can select the required example for request body or response in corresponding windows.


When specifying filters for requests you must use a full value, unless otherwise noted. The possibility of using part of the value is indicated in the description.
Upload images from folder#
The "folder_uploader.py" script uploads images from the specified folder and processes uploaded images according to the preassigned parameters.
General information about the script#
The "folder_uploader.py" script can be utilized for downloading images using the API service only.
You cannot specify the Backport 4 address and port for utilizing this script. You can use the data downloaded to the LP 5 API in Backport 4 requests.
You cannot use the "folder_uploader.py" script to download data to Backport 3 service as the created objects for Backport 3 differs (e. g. "person" object is not created by the script).
Script usage#
Script pipeline:
- Search images of the allowed type ('.jpg', '.jpeg', '.png', '.bmp', '.ppm', '.tif', '.tiff') in the specified folder (source).
- Start asynchronous image processing according to the specified parameters (see section "Script launching").
Image processing pipeline:
- Detect faces and create samples.
- Extract attributes.
- Create faces and link them to a list.
- Add record to the log file.
If an image was loaded successfully, a record is added to the success log file (_success_log.txt). The record has the following structure:
{
"image name": ...,
"face id": [...]
}
If errors occur at any step of the script processing, the image processing routine is terminated and a record is added to the error log file (_error_log.txt). The record has the following structure:
{
"image name": ...,
"error": ...
}
Install dependencies#
Before launching the script, you must install all the required dependencies.
It is strongly recommended to create a virtual environment for python dependencies installation.
Install Python packages (version 3.7 and later) before launching installation. The packages are not provided in the distribution package and their installation is not described in this manual:
- python3.7
- python3.7-devel
Install gcc.
yum -y install gcc
Go to the directory with the script
cd /var/lib/luna/current/extras/utils
Create a virtual environment
python3.7 -m venv venv
Activate the virtual environment
source venv/bin/activate
Install the tqdm library.
pip3.7 install tqdm
Install luna3 libraries.
pip3.7 install ./luna3*.whl
Deactivate virtual environment
deactivate
Script launching#
Use the command to run the script (the virtual environment must be activated):
python3.7 folder_uploader.py --account_id 6d071cca-fda5-4a03-84d5-5bea65904480 --source "Images/" --warped 0 --descriptor 1 --origin http://127.0.0.1:5000 --api 6 --avatar 1 --list_id 0dde5158-e643-45a6-8a4d-ad42448a913b --name_as_userdata 1
Make sure that the --descriptor parameter is set to 1 so that descriptors are created.
--source "Images/" - "Images/" is the folder with images located near the "folder_uploader.py" script. Or you can specify the full path to the directory.
--list_id 0dde5158-e643-45a6-8a4d-ad42448a913b - specify your existing list here.
--account_id 6d071cca-fda5-4a03-84d5-5bea65904480 - specify the required account ID.
--origin http://127.0.0.1:5000 - specify your current API service address and port here.
--api 6 - specify the API version. You can find it in the /var/lib/luna/current/docs/ReferenceManuals/APIReferenceManual.html document.
See help for more information about available script arguments:
python3.7 folder_uploader.py --help
Command line arguments:
- account_id: an account ID used in requests to LUNA PLATFORM (required)
- source: a directory with images to load (required)
- warped: whether the images are warped (0,1) (required)
- descriptor: whether to extract descriptors (0,1); default - 0
- origin: origin; default - "http://127.0.0.1:5000"
- api: API version of the API service; default - 5
- avatar: whether to set the sample as an avatar (0,1); default - 0
- list_id: list ID to link faces with (a new LUNA list will be created if list_id is not set and list_linked=1); default - None
- list_linked: whether to link faces with the list (0,1); default - 1
- list_userdata: userdata for the list to link faces with (for a newly created list); default - None
- pitch_threshold: maximum deviation pitch angle [0..180]
- roll_threshold: maximum deviation roll angle [0..180]
- yaw_threshold: maximum deviation yaw angle [0..180]
- multi_face_allowed: whether to allow detection of several faces in a single image (0,1); default - 0
- get_major_detection: whether to choose the major face detection by sample Manhattan distance from a single image (0,1); default - 0
- basic_attr: whether to extract basic attributes (0,1); default - 1
- score_threshold: descriptor quality score threshold (0..1); default - 0
- name_as_userdata: whether to use the image name as user data (0,1); default - 0
- concurrency: parallel processing image count; default - 10
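For example, below is another hedged launch sketch for a folder of raw photos with stricter head-pose filtering; all argument values are illustrative and should be replaced with your own:
python3.7 folder_uploader.py --account_id 6d071cca-fda5-4a03-84d5-5bea65904480 --source "Images/" --warped 0 --descriptor 1 --pitch_threshold 20 --roll_threshold 20 --yaw_threshold 20 --name_as_userdata 1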
Client library#
General information#
The archive with the client library for LUNA PLATFORM 5 is provided in the distribution package:
/var/lib/luna/current/extras/utils/luna3-*.whl
This Python library is an HTTP client for all LUNA PLATFORM services.
You can find the examples of the library utilization in the /var/lib/luna/current/docs/ReferenceManuals/APIReferenceManual.html document.
The example shows the request for faces matching. The luna3 library is utilized for the request creation. See "matcher" > "matching faces" in "APIReferenceManual.html":
# This example is written using luna3 library
from luna3.common.http_objs import BinaryImage
from luna3.lunavl.httpclient import LunaHttpClient
from luna3.python_matcher.match_objects import FaceFilters
from luna3.python_matcher.match_objects import Reference
from luna3.python_matcher.match_objects import Candidates

luna3client = LunaHttpClient(
    accountId="8b8b5937-2e9c-4e8b-a7a7-5caf86621b5a",
    origin="http://127.0.0.1:5000",
)

# create sample
sampleId = luna3client.saveSample(
    image=BinaryImage("image.jpg"),
    raiseError=True,
).json["sample_id"]

attributeId = luna3client.extractAttrFromSample(
    sampleIds=[
        sampleId,
    ],
    raiseError=True,
).json[0]["attribute_id"]

# create face
faceId = luna3client.createFace(
    attributeId=attributeId,
    raiseError=True,
).json["face_id"]

# match
candidates = Candidates(
    FaceFilters(
        faceIds=[
            faceId,
        ]
    ),
    limit=3,
    threshold=0.5,
)
reference = Reference("face", faceId)
response = luna3client.matchFaces(
    candidates=[candidates], references=[reference],
    raiseError=True,
)
print(response.statusCode)
print(response.json)
Library installation example#
In this example, a virtual environment is created for the luna3 installation.
You can use this Python library on Windows, Linux, and macOS.
Install Python packages (version 3.7 and later) before launching installation. The packages are not provided in the distribution package and their installation is not described in this manual:
- python3.7
- python3.7-devel
Install gcc.
yum -y install gcc
Go to the directory with the script
cd /var/lib/luna/current/extras/utils
Create a virtual environment
python3.7 -m venv venv
Activate the virtual environment
source venv/bin/activate
Install luna3 libraries.
pip3.7 install ./luna3*.whl
Deactivate virtual environment
deactivate
Plugins#
Plugins are used to perform secondary actions for the user's needs. For example, you can create your own resource based on the abstract class, or you can describe what needs to be done in some resource in addition to the standard functionality.
Files with base abstract classes are located in the ./plugins/plugins_meta folder of the specific service.
Plugins should be written in the Python programming language.
There are two sorts of plugins:
- Event plugin
- Background plugin
Event plugin#
The first sort is triggered when an event occurs. The plugin should implement a callback function. This function is called on each event of the corresponding type. The set of event types is defined by the service developers. There are two types of event plugins available for the Handlers service:
- Monitoring event
- Sending event
For other services, only the monitoring event type is available.
Monitoring event plugin example#
Below is an example of monitoring event plugin for all services:
"""
Module request monitoring plugin example
"""
from abc import abstractmethod
from typing import List, Optional
from aiohttp import ClientSession
from crutches_on_wheels.monitoring.points import BaseRequestMonitoringPoint
from crutches_on_wheels.plugins.plugins_meta.base_plugins import BaseEventPlugin
from crutches_on_wheels.web.application import LunaApplication
from crutches_on_wheels.utils.log import Logger
class BaseRequestMonitoringPlugin(BaseEventPlugin):
"""
Base class for requests monitoring.
"""
# event name for triggering callback
eventName = "monitoring_event"
@abstractmethod
async def flushPointToMonitoring(self, points: List[BaseRequestMonitoringPoint], logger: Logger) -> None:
"""
All plugins must realize this method.
This function call after end of request
Args:
points: points for monitoring which corresponding the request
logger: logger
"""
async def handleEvent( # pylint: disable-msg=C0204,W0221
self, points: List[BaseRequestMonitoringPoint], logger: Logger
):
await self.flushPointToMonitoring(points=points, logger=logger)
class RequestMonitoringPlugin(BaseRequestMonitoringPlugin):
"""
Example plugin sends a request data for monitoring to third-party source.
Only one instance of this class exist during the program execution.
"""
def __init__(self, app: LunaApplication):
super().__init__(app)
self.url = "http://127.0.0.1:5020/1/buckets"
self.session: Optional[ClientSession] = None
self.bucket = "plugin_test_bucket"
async def close(self):
"""
Stop plugin.
Close all open connections and ect
"""
if self.session:
await self.session.close()
async def initialize(self):
"""
Initialize plugin.
Close all open connections and ect
"""
self.session = ClientSession()
async with self.session.post(f"{self.url}?bucket={self.bucket}") as resp:
if resp.status not in (201, 409):
response = await resp.json()
raise RuntimeError(f"failed create bucket, {self.bucket}, response: ")
async def flushPointToMonitoring(self, points: List[BaseRequestMonitoringPoint], logger: Logger) -> None:
"""
Callback for sending a request monitoring data.
Args:
points: point for monitoring which corresponding the request
logger: logger
"""
if not points:
return
point = points[0]
logger.debug(
f"Plugin 'flushPointToMonitoring' get point, request_id: {point.requestId}, "
f"start time: {point.eventTime}"
)
msg = {"tags": point.tags, "fields": point.fields}
async with self.session.post(f"{self.url}/{self.bucket}/objects", json=msg) as resp:
logger.info(resp.status)
logger.info(await resp.text())
This plugin demonstrates sending request monitoring data to another service. All monitoring plugins must implement the BaseRequestMonitoringPlugin abstract class.
Sending plugin example#
Below is an example of sending plugin for all services:
from abc import abstractmethod
from typing import List, Optional, Union
from aiohttp import ClientSession
from classes.event import Event, eventAsDict
from classes.schemas.event_raw import RawEvent
from crutches_on_wheels.plugins.plugins_meta.base_plugins import BaseEventPlugin
from crutches_on_wheels.utils.log import Logger
from crutches_on_wheels.web.application import LunaApplication
class BaseEventSendingPlugin(BaseEventPlugin):
""" Base class for event sending."""
# event name for triggering callback
eventName = "sending_event"
@abstractmethod
async def sendEvents(
self,
events: Union[List[Event], List[RawEvent]],
handlerId: str,
accountId: str,
requestId: str,
eventsTime: str,
logger: Logger,
) -> None:
"""
Callback that is triggered on every success request to handlers.
Args:
events: event list
handlerId: handler id
accountId: account id
requestId: request id
eventsTime: value of the header "Luna-Event-Time" if specified, otherwise current time.
logger: logger
"""
pass
async def handleEvent(
self,
events: Union[List[Event], List[RawEvent]],
handlerId: str,
accountId: str,
requestId: str,
eventsTime: str,
logger: Logger,
):
"""
Handle events.
Args:
events: event list
handlerId: handler id
accountId: account id
requestId: request id
eventsTime: value of the header "Luna-Event-Time" if specified, otherwise current time.
logger: logger
"""
await self.sendEvents(events, handlerId, accountId, requestId, eventsTime, logger)
class EventSendingPlugin(BaseEventSendingPlugin):
""" Sends events to the third-party source. Only one instance of this class exist during the program execution."""
def __init__(self, app: LunaApplication):
super().__init__(app)
self.url = "http://127.0.0.1:5020/1/buckets"
self.session: Optional[ClientSession] = None
self.bucket = "plugin_test_bucket"
async def close(self):
"""Stop plugin. Close all open connections and etc."""
if self.session:
await self.session.close()
async def initialize(self):
""" Initialize plugin."""
self.session = ClientSession()
async with self.session.post(f"{self.url}?bucket={self.bucket}") as resp:
if resp.status not in (201, 409):
response = await resp.json()
raise RuntimeError(f"failed create bucket, {self.bucket}, response: ")
async def sendEvents(
self,
events: Union[List[Event], List[RawEvent]],
handlerId: str,
accountId: str,
requestId: str,
eventsTime: str,
logger: Logger,
) -> None:
logger.debug(
f"Plugin 'EventsOnFinishExampleClass' get events, request_id: , " f"event_time: "
)
prepareEvents = []
for event in events:
if isinstance(event, Event):
serializationEvent = eventAsDict(event)
else:
serializationEvent = event.asHandlerEventDict()
prepareEvents.append(serializationEvent)
msg = {
"handler_id": handlerId,
"account_id": accountId,
"Luna-Request-Id": requestId,
"events": prepareEvents,
"events_time": eventsTime,
}
async with ClientSession() as session:
async with session.post(f"{self.url}/{self.bucket}/objects", json=msg) as resp:
logger.debug(resp.status)
logger.debug(await resp.text())
This plugin demonstrates sending event data generated by a handler to a third-party source. All event sending plugins should implement the BaseEventSendingPlugin abstract class.
Background plugin#
The second sort of plugin is intended for background work. The background plugin can implement:
- custom request for a specific resource (route),
- background monitoring of service resources,
- collaboration of an event plugin and a background plugin (batching monitoring points),
- connection to other data sources (Redis, RabbitMQ) and their data processing.
Background plugin example#
Below is an example of background plugin for all services:
"""
Module realizes background plugin example
"""
import asyncio
from asyncio import Task
from typing import Optional
from sanic.response import HTTPResponse
from crutches_on_wheels.plugins.plugins_meta.base_plugins import BaseBackgroundPlugin
from crutches_on_wheels.web.application import LunaApplication
from crutches_on_wheels.web.handlers import BaseHandler
class HandlerExample(BaseHandler):
"""
Handler example
"""
async def get(self) -> HTTPResponse:
"""
Method get example.
Returns:
response
"""
return self.success(body="I am teapot", contentType="text/plain", statusCode=418)
@property
def app(self):
"""
Get app
Abstract method of `BaseHandler`
Returns:
app
"""
return self.request.app
@property
def config(self):
"""
Get app config
Abstract method of `BaseHandler`
Returns:
app config
"""
return self.request.app.ctx.serviceConfig
class BackgroundPluginExample(BaseBackgroundPlugin):
"""
Background plugin example.
Create background task and add a route.
"""
def __init__(self, app: LunaApplication):
super().__init__(app)
app.addRoutes([("/teapot", HandlerExample)])
self.task: Optional[Task] = None
self.temperature = 0
async def initialize(self):
"""
Initialize plugin
"""
async def close(self):
"""
Stop background process
Returns:
"""
if self.task:
self.task.cancel()
async def usefulJob(self):
"""
Some useful async work
"""
while True:
await asyncio.sleep(1)
self.temperature = min(100, self.temperature + 1)
if self.temperature < 100:
self.app.ctx.logger.info(f"I boil water, temperature: {self.temperature}")
else:
self.app.ctx.logger.info("boiling water is ready, would you care for a cup of tea?")
async def start(self):
"""
Run background process
.. warning::
The function suppose that the process is handle in this coroutine. The coroutine must start
the process only without awaiting a end of the process
"""
self.task = asyncio.create_task(self.usefulJob())
This plugin demonstrates background work and implements a route. All background plugins should implement the BaseBackgroundPlugin abstract class.
Plugins usage#
Adding plugins to the directory manually#
This approach can be used when the plugin does not require any additional dependencies that are not provided in the service Docker container.
There are two steps required to use a plugin with the service in Docker container:
- Add the plugin file to the container.
- Specify the plugin usage in the container configurations.
When starting the container, you need to forward the plugin file to the folder with plugins of the specific service. For example, for the Handlers service it is the /srv/luna_handlers/plugins folder.
This can be done in any convenient way. For example, you can mount the folder with plugins to the required service directory during service launching (see service launching commands in the installation manual):
You should add the following volume if all the required plugins for the service are stored in the "/var/lib/luna/handlers/plugins" directory:
-v /var/lib/luna/handlers/plugins:/srv/luna_handlers/plugins/ \
The command is given for the case of manual service launching.
Next, you should add the filename to the "LUNA_HANDLERS_ACTIVE_PLUGINS" setting in the Configurator service:
LUNA_HANDLERS_ACTIVE_PLUGINS = ["luna_handlers_plugin"]
The list should contain filenames without the extension (.py).
After completing these steps, LP will automatically use the plugin.
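To check that the plugin file is actually visible to the service, you can list the plugins directory inside the container (a sketch assuming the container is named luna-handlers, as in the examples above):
docker exec -t luna-handlers ls /srv/luna_handlers/plugins/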
More information about plugins for a specific service can be found in the "API Development Manual" documentation in "../ServiceManuals/".
Building new Docker container with plugin#
This approach can be used when additional dependencies are required for the plugin utilization or when the container with the plugin is required for production usage.
You should create your docker container based on the basic service container.
Add "Dockerfile" with the following structure to your CI:
FROM dockerhub.visionlabs.ru/luna/luna-handlers:v.1.7.2
USER root
...
USER luna
The FROM instruction should include the address of the basic service container that will be used.
USER root - changes the privileges to the root user to perform the following actions.
Then the commands for adding the plugin and its dependencies should be listed. They are not given in this manual. Check the Docker documentation.
USER luna - after all the commands are executed, the user should be changed back to "luna".
Add the plugin filename to the "LUNA_HANDLERS_ACTIVE_PLUGINS" configuration in the Configurator service.
You can:
-
Update settings manually in the Configurator service as described above.
-
Create a dump file with LP plugin settings and add them to the Configurator service after its launch.
An example of the dump file with the Handlers plugin settings is given below.
{
    "settings": [
        {
            "value": ["luna_handlers_plugin"],
            "description": "list active plugins",
            "name": "LUNA_HANDLERS_ACTIVE_PLUGINS",
            "tags": []
        }
    ]
}
Then the file is applied using the following command. In this example, the file is stored in "/var/lib/luna/" and the dump filename is "luna_handlers_plugin.json".
docker run \
-v /var/lib/luna/luna_handlers_plugin.json:/srv/luna_configurator/used_limitations/luna_handlers_plugin.json \
--network=host \
--rm \
--entrypoint=python3.9 \
dockerhub.visionlabs.ru/luna/luna-configurator:v.1.3.7 \
./base_scripts/db_create.py --dump-file /srv/luna_configurator/used_limitations/luna_handlers_plugin.json
Monitoring#
Monitoring is implemented as sending data to InfluxDB.
There are two types of events that are monitored: request (all requests) and error (failed requests only).
Every event is a point in the time series. The point is represented using the following data:
- series name (requests or errors)
- timestamp of the request start
- tags
- fields
A tag is indexed data in the storage. It is represented as a dictionary, where:
- keys are string tag names,
- values are strings, integers, or floats.
A field is non-indexed data in the storage. It is represented as a dictionary, where:
- keys are string field names,
- values are strings, integers, or floats.
Below is an example of tags and fields for the Luna API service. These tags are unique for each service. You can find information about monitoring a specific service in the relevant documentation:
- "Luna API"
- "Luna Licenses"
- "Luna Configurator"
- "Luna Image Store"
- "Luna Faces"
- "Luna Admin"
- "Luna Backport 3"
- "Luna Events"
- "Luna Handlers"
- "Luna Python Matcher"
- "Luna Sender"
- "Luna Tasks"
Saving data for requests is triggered on every request. Each point contains data about the corresponding request (execution time, etc.).
- tags
tag name | description |
---|---|
service | always "luna-api" |
account_id | account id or none |
route | concatenation of a request method and a request resource (POST:/extractor) |
status_code | http status code of response |
- fields
fields | description |
---|---|
request_id | request_id |
execution_time | request execution time |
Saving data for errors is triggered when a request fails. Each point contains the error_code of the LUNA error.
- tags
tag name | description |
---|---|
service | always "luna-api" |
account_id | account id or none |
route | concatenation of a request method and a request resource (POST:/extractor) |
status_code | http status code of response |
error_code | LUNA PLATFORM error code |
- fields
fields | description |
---|---|
request_id | request_id |
Every handler can add additional tags or fields. For example, the handler of the "/handlers/{handler_id}/events" resource adds the handler_id tag.
If you are using InfluxDB version 2, you can visualize monitoring data using the Luna Dashboards tool.
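For a quick look at the raw monitoring data in InfluxDB 2, you can query the "requests" measurement with the influx CLI. The bucket name and time range below are placeholders; the sketch assumes the CLI is already authorized against your instance:
influx query 'from(bucket: "<BUCKET_NAME>") |> range(start: -1h) |> filter(fn: (r) => r._measurement == "requests")'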
Luna Dashboards#
The Luna Dashboards tool is intended to visualize monitoring data. Luna Dashboards is based on the Grafana web application and creates a set of dashboards for analyzing the state of individual services, as well as two summary dashboards that can be used to evaluate the state of the system as a whole.
A data source is configured in Grafana so that it can communicate with InfluxDB, which receives monitoring data during the operation of all LP services.
Luna Dashboards can be useful:
- To monitor the state of the system;
- For error analysis;
- To get statistics on errors;
- To analyze the load on individual services and on the platform as a whole, load by days of the week, by time of day, etc.;
- To analyze statistics of request execution, i.e. what resources account for what proportion of requests for the entire platform;
- To analyze the dynamics of request execution time;
- To evaluate the average value of the execution time of requests for a specific resource;
- To analyze changes in the indicator over time.
After installing the dashboards (see below), the "platform_5" directory becomes available in the Grafana web interface, which contains the following dashboards:
- Luna Platform Heatmap,
- Luna Platform Summary,
- Dashboards for individual services.
Luna Platform Heatmap enables you to evaluate the load on the system as a whole, without reference to a specific resource, and to see at what times the system is most active.
Luna Platform Summary enables you to get statistics on requests for all services in one place, as well as evaluate RPS (Requests Per Second) graphs.
Dashboards for individual services enable you to get information about requests to individual resources, errors, and status codes for each service. Such a dashboard displays not live load data but data aggregated over the selected time interval.
The following SW is required for Grafana dashboard utilization:
- InfluxDB 2.0 (the currently used version is 2.0.7-alpine)
- Grafana (the currently used version is 8.0.6)
InfluxDB and Grafana are already included in the package. You can use your own Grafana installation or install it manually.
For more information on manual installation, see the "InfluxDB" and "Grafana" sections in the installation manual.
To run dashboards, an additional installation is required, described below.
Grafana plugin installation#
Installing the plugin is required only for manual installation of Grafana.
In addition to built-in Grafana plugins, the dashboards also use a piechart plugin. Use the grafana-cli tool to install the piechart panel from the command line:
grafana-cli plugins install grafana-piechart-panel
A restart is required to apply the plugin:
sudo service grafana-server restart
Grafana dashboards installation#
The scripts for Grafana plugins installation can be found in "/var/lib/luna/current/extras/utils/".
Install Python version 3.7 or later before launching the following script. The packages are not provided in the distribution package and their installation is not described in this manual.
Go to the luna dashboards directory
cd /var/lib/luna/current/extras/utils/luna-dashboards_linux_rel_v.*
Create a virtual environment
python3.7 -m venv venv
Activate the virtual environment
source venv/bin/activate
Install luna dashboards file
pip install luna_dashboards-*-py3-none-any.whl
Go to the following directory
cd luna_dashboards
The "luna_dashboards" folder contains the configuration file "config.conf", which includes the settings for Grafana, Influx and monitoring periods. By default, the file already includes the default settings, but you can change the settings use "vi config.conf".
Run the following script to create dashboards
python create_dashboards.py
Deactivate virtual environment
deactivate
If the Grafana and Influx containers are running, then use "http://IP_ADDRESS:3000" to go to the Grafana web interface. In the upper left corner, select the "General" button, then expand the "platform_5" folder and select the necessary dashboard.
Databases information#
You can use InfluxDB version 1 or version 2.
It is recommended to use InfluxDB version 2, as it can be used to visualize monitoring data using Luna Dashboards.
Using and configuring both versions are described below.
InfluxDB OSS 2#
Starting with version 5.5.0, LUNA PLATFORM provides the ability to use InfluxDB version 2.
For InfluxDB OSS 2 usage, you should:
- Install the DB. See the "InfluxDB OSS 2" section in the installation manual.
- Register in the DB. InfluxDB has a user interface where you can register. You should visit <server_ip>:<influx_port>.
- Configure LUNA PLATFORM to use InfluxDB version 2.
- Configure the display of monitoring information in the GUI. It is not described in this documentation.
Migration from version 1#
InfluxDB provides built-in tools for migration from version 1 to version 2. See documentation:
https://docs.influxdata.com/influxdb/v2.0/upgrade/v1-to-v2/docker/
If necessary, you can use InfluxDB version 1, but in this case you will not be able to use the Luna Dashboards tool.
InfluxDB OSS 2 configuration#
InfluxDB OSS 2 configurations differ from the InfluxDB OSS 1 configurations. You should change the settings in the "INFLUX_MONITORING" section.
The following sections are similar for both InfluxDB versions:
- "send_data_for_monitoring"
- "use_ssl"
- "flushing_period"
- "host"
- "port"
The unique settings for InfluxDB OSS 2 are described below.
InfluxDB OSS 2 unique settings
Setting name | Type | Description |
---|---|---|
organization | String | The organization name specified during registration. |
token | String | Token received after registration. |
bucket | String | Bucket name. |
version | Integer | Version of the DB used. You should set "1" or "2" according to the version used. You can leave the field empty; then the required version will be set automatically. |
The resulting settings for InfluxDB OSS 2:
"send_data_for_monitoring": 1,
"use_ssl": 0,
"flushing_period": 1,
"host": "127.0.0.1",
"port": 8086,
"organization": "<ORGANIZATION_NAME>",
"token": "<TOKEN>",
"bucket": "<BUCKET_NAME>",
"version": <DB_VERSION>
You can update the InfluxDB settings in the Configurator service by following these steps:
- Open the /var/lib/luna/current/extras/conf/influx2.json file.
- Set the required data in the "organization", "token", "bucket", and "version" fields.
- Save the changes.
- Copy the file to the Configurator container:
docker cp /var/lib/luna/current/extras/conf/influx2.json example-docker_configurator_1:/srv/
Note that the Configurator container name differs for manual (luna-configurator) and Compose (example-docker_configurator_1) installations.
- Update the settings in the Configurator:
docker exec -it example-docker_configurator_1 python3.9 ./base_scripts/db_create.py --dump-file /srv/influx2.json
You can also manually update settings in the Configurator service user interface.
InfluxDB OSS 1#
For InfluxDB OSS 1 usage, you should:
- Install the DB. See the "InfluxDB OSS 1" section in the installation manual.
- Register in the DB. InfluxDB has a user interface where you can register. You should visit <server_ip>:<influx_port>.
- Configure LUNA PLATFORM to use InfluxDB version 1.
- Configure the display of monitoring information in the GUI. It is not described in this documentation.
InfluxDB OSS 1 configuration#
To configure InfluxDB version 1, you need to set the settings in the "INFLUX_MONITORING" section.
The following sections are similar for both InfluxDB versions:
- "send_data_for_monitoring"
- "use_ssl"
- "flushing_period"
- "host"
- "port"
The unique settings for InfluxDB OSS 1 are described below.
InfluxDB OSS 1 unique settings
Setting name | Type | Description |
---|---|---|
database_name | String | Database name. |
user_name | String | User name. |
password | String | Password. |
The resulting settings for InfluxDB OSS 1:
"send_data_for_monitoring": 1,
"use_ssl": 0,
"flushing_period": 1,
"host": "127.0.0.1",
"port": 8086,
"database_name": "<DATABASE_NAME>",
"user_name": "<USER_NAME>",
"password": "<PASSWORD>"
Manual creation of services databases#
This section describes the commands required for configuring an external PostgreSQL for working with LP services. External means that you already have a working DB and MQ and want to use them with LP.
You need to specify your external DB and MQ in the configurations of LP services.
The Faces and Events services require the additional VLMatch function to be added to the utilized database. For detailed information on creating this function for the Faces service, see the "Create VLMatch function for Faces DB" section; for the Events database, see the "Create VLMatch function for Events DB" section.
PostgreSQL user creation#
Go to the directory.
cd /var/
Create a database user
runuser -u postgres -- psql -c 'create role luna;'
Assign a password to the user
runuser -u postgres -- psql -c "ALTER USER luna WITH PASSWORD 'luna';"
Configurator DB creation#
Create the database for the Configurator service. It is assumed that the DB user is already created.
Go to the directory.
cd /var/
- Create the database.
- Grant privileges to the database user.
- Allow the user to authorize in the DB.
runuser -u postgres -- psql -c 'CREATE DATABASE luna_configurator;'
runuser -u postgres -- psql -c 'GRANT ALL PRIVILEGES ON DATABASE luna_configurator TO luna;'
runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'
Handlers DB creation#
Create the database for the Handlers service. It is assumed that the DB user is already created.
Go to the directory.
cd /var/
The sequence of actions corresponds to the commands below:
- Create a database
- Grant privileges to the database and the user
- Enable the user to login into DB
runuser -u postgres -- psql -c 'CREATE DATABASE luna_handlers;'
runuser -u postgres -- psql -c 'GRANT ALL PRIVILEGES ON DATABASE luna_handlers TO luna;'
runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'
Backport 3 DB creation#
Create the database for the Backport 3 service. It is assumed that the DB user is already created.
Go to the directory.
cd /var/
The sequence of actions corresponds to the commands below:
- Create a database
- Grant privileges to the database and the user
- Enable the user to login into DB
runuser -u postgres -- psql -c 'CREATE DATABASE luna_backport3;'
runuser -u postgres -- psql -c 'GRANT ALL PRIVILEGES ON DATABASE luna_backport3 TO luna;'
runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'
Faces DB creation#
Create the database for the Faces service. It is assumed that the DB user is already created.
Go to the directory.
cd /var/
- Create a database.
- Grant privileges to the database and the user.
- Enable the user to log in to the DB.
runuser -u postgres -- psql -c 'CREATE DATABASE luna_faces;'
runuser -u postgres -- psql -c 'GRANT ALL PRIVILEGES ON DATABASE luna_faces TO luna;'
runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'
Events DB creation#
Create the database for the Events service. It is assumed that the DB user is already created.
Go to the directory.
cd /var/
- Create the database.
- Grant privileges to the database and the user.
- Enable the user to log in to the DB.
runuser -u postgres -- psql -c 'CREATE DATABASE luna_events;'
runuser -u postgres -- psql -c 'GRANT ALL PRIVILEGES ON DATABASE luna_events TO luna;'
runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'
Admin DB creation#
Create the database for the Admin service. It is assumed that the DB user is already created.
Go to the directory:
cd /var/
The sequence of actions corresponds to the commands below:
- Create the database
- Grant privileges on the database to the user
- Allow the user to log in to the DB
runuser -u postgres -- psql -c 'CREATE DATABASE luna_admin;'
runuser -u postgres -- psql -c 'GRANT ALL PRIVILEGES ON DATABASE luna_admin TO luna;'
runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'
Tasks DB creation#
Create the database for the Tasks service. It is assumed that the DB user is already created.
Go to the directory:
cd /var/
The sequence of actions corresponds to the commands below:
- Create the database
- Grant privileges on the database to the user
- Allow the user to log in to the DB
runuser -u postgres -- psql -c 'CREATE DATABASE luna_tasks;'
runuser -u postgres -- psql -c 'GRANT ALL PRIVILEGES ON DATABASE luna_tasks TO luna;'
runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'
Index Manager DB creation#
Create the database for the Index Manager service. It is assumed that the DB user is already created.
Go to the directory:
cd /var/
The sequence of actions corresponds to the commands below:
- Create the database
- Grant privileges on the database to the user
- Allow the user to log in to the DB
runuser -u postgres -- psql -c 'CREATE DATABASE luna_index_manager;'
runuser -u postgres -- psql -c 'GRANT ALL PRIVILEGES ON DATABASE luna_index_manager TO luna;'
runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'
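The commands in the sections above differ only in the database name. As an optional shorthand, the following sketch creates all the listed databases in one pass; it assumes the "luna" role already exists and repeats exactly the commands shown above:
# Create every LP service database and grant the "luna" user access to each.
for db in luna_configurator luna_handlers luna_backport3 luna_faces luna_events luna_admin luna_tasks luna_index_manager; do
    runuser -u postgres -- psql -c "CREATE DATABASE ${db};"
    runuser -u postgres -- psql -c "GRANT ALL PRIVILEGES ON DATABASE ${db} TO luna;"
done
# Allow the user to log in (needed once).
runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'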
Create VLMatch function for Faces DB#
The Faces service requires the VLMatch additional function to be added to the utilized database. LUNA PLATFORM cannot perform matching calculations without this function. The VLMatch function can be added to the PostgreSQL or Oracle database.
The VLMatch library must be compiled for your particular database version.
Note! Do not use a library built for another DB version. For example, a library built for PostgreSQL version 12 cannot be used with PostgreSQL version 9.6.
This section describes the function creation for PostgreSQL.
The instruction for the Oracle DB is given in the "VLMatch for Oracle" section.
Build VLMatch for PostgreSQL#
You can find all the required files for the VLMatch user-defined extension (UDx) compilation in the following directory:
/var/lib/luna/current/extras/VLMatch/postgres/
The following instruction describes installation for PostgreSQL 12.
To compile the VLMatch UDx function, do the following:
- Make sure that PostgreSQL of the required version is installed and launched.
- Install the required PostgreSQL development environment. You can find more information on the official website.
- Install llvm-toolset-7-clang, which is required for postgresql12-devel, from the centos-release-scl-rh repository:
yum -y install centos-release-scl-rh
yum -y --enablerepo=centos-sclo-rh-testing install llvm-toolset-7-clang
- Install epel-release for access to the extended package repository:
yum -y install epel-release
- Install the development environment:
yum -y install postgresql12 postgresql12-server postgresql12-devel
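To confirm that the installed development packages match the running server version, you can optionally query pg_config; the path below assumes the standard PGDG package layout for PostgreSQL 12:
/usr/pgsql-12/bin/pg_config --version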
- Install the gcc-c++ package. The package version 4.8 or higher is required:
yum -y install gcc-c++.x86_64
- Install CMake. Version 3.5 or higher is required.
- Open the make.sh script using a text editor. It includes paths to the currently used PostgreSQL version. Change the following values, if necessary:
  - SDK_HOME specifies the path to the PostgreSQL home directory. The default value is /usr/pgsql-12/include/server;
  - LIB_ROOT specifies the path to the PostgreSQL library root directory. The default value is /usr/pgsql-12/lib.
Go to the make.sh script directory and run it:
cd /var/lib/luna/current/extras/VLMatch/postgres/
chmod +x make.sh
./make.sh
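PostgreSQL resolves a bare library name such as 'VLMatchSource.so' against its library directory. Assuming make.sh leaves the compiled library in the current directory, copy it to the LIB_ROOT path mentioned above so that the CREATE FUNCTION statement in the next section can find it:
cp VLMatchSource.so /usr/pgsql-12/lib/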
Add VLMatch function to Faces database#
The VLMatch function should be applied to the PostgreSQL DB.
- Define the function inside the Faces database:
sudo -u postgres -- psql -h 127.0.0.1 -d luna_faces -c "CREATE FUNCTION VLMatch(bytea, bytea, int) RETURNS float8 AS 'VLMatchSource.so', 'VLMatch' LANGUAGE C PARALLEL SAFE;"
- Test the function by sending the following request to the service database:
sudo -u postgres -- psql -h 127.0.0.1 -d luna_faces -c "SELECT VLMatch('\x1234567890123456789012345678901234567890123456789012345678901234'::bytea, '\x0123456789012345678901234567890123456789012345678901234567890123'::bytea, 32);"
The result returned by the database must be "0.4765625".
Build VLMatch for Oracle#
This section describes building the VLMatch library and applying the new function to the Oracle database.
To compile the VLMatch UDx function, do the following:
- Install the required environment (see the requirements).
- Install gcc/g++ 4.8 or higher:
yum -y install gcc-c++.x86_64
- Change the SDK_HOME variable (the Oracle SDK root, default is $ORACLE_HOME/bin; check that the $ORACLE_HOME environment variable is set) in the makefile.
- Go to the directory and run the "make.sh" file:
cd /var/lib/luna/current/extras/VLMatch/oracle/
chmod +x make.sh
./make.sh
- Define the library and the function inside the database (from the database console):
CREATE OR REPLACE LIBRARY VLMatchSource AS '$ORACLE_HOME/bin/VLMatchSource.so';
CREATE OR REPLACE FUNCTION VLMatch(descriptorFst IN RAW, descriptorSnd IN RAW, length IN BINARY_INTEGER)
RETURN BINARY_FLOAT
AS
LANGUAGE C
LIBRARY VLMatchSource
NAME "VLMatch"
PARAMETERS (descriptorFst BY REFERENCE, descriptorSnd BY REFERENCE, length UNSIGNED SHORT, RETURN FLOAT);
Test the function with a call (from the database console):
SELECT VLMatch(HEXTORAW('1234567890123456789012345678901234567890123456789012345678901234'), HEXTORAW('0123456789012345678901234567890123456789012345678901234567890123'), 32) FROM DUAL;
The result should be equal to "0.4765625".
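If you prefer to apply the statements above non-interactively, one possible approach is to save them to a script file and run it through sqlplus; the file name, user, and connection string below are hypothetical and must be adjusted to your instance:
# Hypothetical example: vlmatch.sql contains the CREATE LIBRARY/FUNCTION statements above.
sqlplus system/<PASSWORD>@//127.0.0.1:1521/<SERVICE_NAME> @vlmatch.sql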
Create VLMatch function for Events DB#
The Events service requires the VLMatch additional function to be added to the utilized database. LUNA PLATFORM cannot perform matching calculations without this function. The VLMatch function can be added to the PostgreSQL or Vertica database.
The VLMatch library must be compiled for your particular database version.
Note! Do not use a library built for another DB version. For example, a library built for PostgreSQL version 12 cannot be used with PostgreSQL version 9.6.
This section describes the function creation for PostgreSQL. If you use the PostgreSQL database, you have already built and moved the library during the Faces service setup. See the "Build VLMatch for PostgreSQL" section.
The instruction for the Vertica DB is given in the "VLMatch for Vertica" section.
Add VLMatch function to Events database#
The VLMatch function should be applied to the PostgreSQL DB.
Define the function inside the Events database:
sudo -u postgres -- psql -h 127.0.0.1 -d luna_events -c "CREATE FUNCTION VLMatch(bytea, bytea, int) RETURNS float8 AS 'VLMatchSource.so', 'VLMatch' LANGUAGE C PARALLEL SAFE;"
Test the function with a call:
sudo -u postgres -- psql -h 127.0.0.1 -d luna_events -c "SELECT VLMatch('\x1234567890123456789012345678901234567890123456789012345678901234'::bytea, '\x0123456789012345678901234567890123456789012345678901234567890123'::bytea, 32);"
The result returned by the database must be "0.4765625".
Build VLMatch for Vertica#
This section describes building the VLMatch library and applying the new function to the Vertica database.
You can find all the required files for the VLMatch user-defined extension (UDx) compilation in the following directory:
/var/lib/luna/current/luna-events/base_scripts/database_matching/vertica
To compile the VLMatch UDx function, do the following:
- Install the required environment. You can find more information in the official Vertica documentation.
- Install the gcc-c++ package. The package version 4.8 or higher is required:
yum -y install gcc-c++.x86_64
- Install CMake. Version 3.5 or higher is required.
- Go to the script directory:
cd /var/lib/luna/current/extras/VLMatch/vertica/
- Change the SDK_HOME variable (the Vertica SDK root) in the makefile. The default value is /opt/vertica/sdk.
- Run make from the /var/lib/luna/current/extras/VLMatch/vertica/ directory:
make
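The CREATE OR REPLACE LIBRARY statement below expects the library at /opt/vertica/. Assuming make leaves VLMatchSource.so in the current directory, copy it there first:
cp VLMatchSource.so /opt/vertica/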
- Run the database console:
vsql -U luna -h 127.0.0.1 -d luna_events
- Define the function inside the service database:
CREATE OR REPLACE LIBRARY VLMatchSource AS '/opt/vertica/VLMatchSource.so';
CREATE OR REPLACE FUNCTION VLMatch AS LANGUAGE 'C++' NAME 'VLMatchFactory' LIBRARY VLMatchSource NOT FENCED;
- Test the function with a call inside the service database:
SELECT VLMatch(HEX_TO_BINARY('1234567890123456789012345678901234567890123456789012345678901234'), HEX_TO_BINARY('0123456789012345678901234567890123456789012345678901234567890123') USING PARAMETERS descriptorLength=32);
The result should be equal to "0.4765625".
- Check that the USE_DB_MATCH_FUNCTION parameter is enabled in the settings of the Events service.