Additional Information#

OneShotLiveness description#

The Liveness technology enables the detection of presentation attacks. A presentation attack is a situation in which an imposter uses a video or photos of another person to circumvent the recognition system and gain access to that person's private data.

To estimate Liveness in the LUNA PLATFORM, the LUNA SDK OneShotLiveness estimator is used.

There are the following general types of presentation attacks:

  • Printed Photo Attack. One or several photos of another person are used.
  • Video Replay Attack. A video of another person is used.
  • Printed Mask Attack. An imposter cuts out a face from a photo and covers his face with it.
  • 3D Mask Attack. An imposter puts on a 3D mask depicting the face of another person.

The number of performed Liveness transactions can be licensed. You can choose between an unlimited license and a license with a limited number of transactions (a maximum of 16 777 215 transactions can be specified in the key). After the transaction limit is exhausted, the Liveness estimation can no longer be used in requests. Requests that do not use Liveness, or in which the Liveness estimation is disabled, are not affected by the exhaustion of the limit and continue to work as usual.

Liveness check results#

The Liveness algorithm uses a single image for processing and returns the following data:

  • Liveness probability [0..1], where 1 means a real person and 0 means a spoof. The parameter shows the probability that a live person is present in the image, i.e. that it is not a presentation attack. In general, the estimated probability must exceed the Liveness threshold.

  • Prediction. Based on the probability above, LUNA PLATFORM issues one of the following predictions:

    • 0 (spoof). The check revealed that the person is not real.

    • 1 (real). The check revealed that the person is real.

Requests for estimating Liveness#

Liveness is used in the following resources:

You can filter events by Liveness in the "/handlers/{handler_id}/events" and "/verifiers/{verifier_id}/verifications" resources, i.e. you can exclude "spoof" or "real" results from image processing.

Filtering by Liveness is available for the following scenarios:

You can also specify the Liveness estimation parameter when manually creating and saving events in the "/handlers/{handler_id}/events/raw" resource.

For multiple uploaded images, you can aggregate the Liveness results to obtain more accurate data.

Liveness requirements#

The requirements for the processed image and the face in the image are listed below.

| Parameter | Requirement |
|---|---|
| Minimum resolution for mobile devices | 720x960 pixels |
| Maximum resolution for mobile devices | 1080x1920 pixels |
| Minimum resolution for webcams | 1280x720 pixels |
| Maximum resolution for webcams | 1920x1080 pixels |
| Compression | No |
| Image warping | No |
| Image cropping | No |
| Effects overlay | No |
| Mask | No |
| Number of faces in the frame | 1 |
| Face detection bounding box width | More than 200 pixels |
| Frame edges offset | More than 10 pixels |
| Head pose | -20 to +20 degrees for head pitch, yaw, and roll |
| Image quality | The face in the frame should not be overexposed, underexposed, or blurred. |
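Before sending an image to the Liveness estimation, some of these requirements can be pre-checked on the client side. The sketch below is illustrative; the function name and device profiles are not part of the LUNA PLATFORM API:

```python
# Illustrative client-side pre-check of the Liveness image requirements from
# the table above. The function and the device profiles are hypothetical,
# not part of the LUNA PLATFORM API.

RESOLUTION_LIMITS = {
    # device: ((min_width, min_height), (max_width, max_height))
    "mobile": ((720, 960), (1080, 1920)),
    "webcam": ((1280, 720), (1920, 1080)),
}

def meets_liveness_requirements(width, height, bbox_width, head_pose, device="webcam"):
    """Return a list of violated requirements; an empty list means the image passes."""
    (min_w, min_h), (max_w, max_h) = RESOLUTION_LIMITS[device]
    problems = []
    if not (min_w <= width <= max_w and min_h <= height <= max_h):
        problems.append("resolution out of range")
    if bbox_width <= 200:
        problems.append("face bounding box width must exceed 200 pixels")
    if any(abs(angle) > 20 for angle in head_pose):  # (pitch, yaw, roll)
        problems.append("head pose outside -20..+20 degrees")
    return problems

print(meets_liveness_requirements(1280, 720, 250, (5, -3, 0)))  # -> []
```

Such a check cannot replace the server-side estimation (compression, warping, and image quality still have to be verified by LP itself), but it filters out obviously unsuitable images early.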

Liveness threshold#

The Liveness threshold determines how the Liveness probability is interpreted.

The Liveness threshold is the value below which the system considers the result a presentation attack ("spoof"). The threshold is set in the "real_threshold" setting in the "LUNA_REMOTE_SDK_LIVENESS_ESTIMATOR_SETTINGS" section of the Configurator service. The default threshold value is 0.5 (50%).

For images received from mobile devices, it is recommended to set a threshold of 0.5. For images obtained from a webcam, it is recommended to set a threshold of 0.364. At these threshold values, the algorithm's accuracy is approximately the same for both image sources.
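Taken together, the probability, the threshold, and the prediction relate as in the following sketch (illustrative; whether a probability exactly equal to the threshold counts as "real" is an assumption made here):

```python
# Illustrative mapping of the Liveness probability to a prediction: below the
# threshold the result is treated as a presentation attack ("spoof").
# Whether equality counts as "real" is an assumption made for this sketch.

RECOMMENDED_THRESHOLDS = {"mobile": 0.5, "webcam": 0.364}

def liveness_prediction(probability, threshold=0.5):
    return 1 if probability >= threshold else 0  # 1 = real, 0 = spoof

print(liveness_prediction(0.42, RECOMMENDED_THRESHOLDS["webcam"]))  # -> 1
print(liveness_prediction(0.42, RECOMMENDED_THRESHOLDS["mobile"]))  # -> 0
```

As the example shows, the same probability can yield different predictions depending on the configured threshold, which is why the source of the images (mobile device or webcam) matters.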

Changing threshold on "/handlers" and "/verifiers" resources#

The "/handlers" and "/verifiers" resources have an additional parameter, "liveness_threshold", which sets the Liveness threshold.

Setting this parameter overrides the value of the "real_threshold" setting from the "LUNA_REMOTE_SDK_LIVENESS_ESTIMATOR_SETTINGS" section of the Configurator service.

Changing threshold on "/liveness" and "/sdk" resources#

There are no additional parameters for overriding the threshold in the "/liveness" and "/sdk" resources. The threshold is set in the "real_threshold" setting from the "LUNA_REMOTE_SDK_LIVENESS_ESTIMATOR_SETTINGS" section of the Configurator service.

Video analytics#

LUNA PLATFORM offers the ability to perform video analytics in two ways:

The main difference between the two methods is the data storage format. The "videosdk" resource processes a video file and returns the processing result in the response body; the obtained information is available only in the response body and is not saved anywhere. The Video Agent service, on the other hand, processes a video stream or video file and sends the processing result to a third-party system.

Moreover, since the two methods pursue different goals, they may have their own sets of specific parameters and values. For example, the video analytics services can set the moment of event generation (at the beginning of the track, at the end of the track, or periodically), while the "videosdk" resource has no such option. The availability of particular parameters is described in the relevant sections below.

Important: Currently, the video analytics functionality is in beta testing. Input and output schemas may change in future releases without backward compatibility support.

Video analytics is a set of functions that process video frame by frame and extract useful data.

A key concept in video analytics is the track. A track represents a long continuous observation in which several events can be generated at certain intervals. A track is needed to organize and aggregate data about groups of people who can appear and move in the video.

If necessary, the video or video stream can be rotated before processing by 90, 180, or 270 degrees.

During video analytics execution, events are generated according to certain rules, where each event has a beginning and an end. Events contain information specific to the particular video analytics. The response also contains basic meta-information about the video (number of frames, frame rate, video duration).

Important: An event in video analytics is not related to events generated by handlers.

Video processing settings, such as the number of decoder processes, the type of processor for video decoding, the temporary directory for storing videos, etc., can be set in the "LUNA_REMOTE_SDK_VIDEO_SETTINGS" group of settings for the Remote SDK service when using the "videosdk" request and "LUNA_VIDEO_AGENT_VIDEO_SETTINGS" when using video analytics services.

People counting video analytics#

A track starts when the specified number of people (the "people_count_threshold" parameter) appears on each of the specified number of consecutive frames (the "probe_count" parameter). For example, if the minimum number of people is 10 and the number of consecutive frames is 3, the track will start from the 3rd frame once there are at least 10 people in all 3 frames. If there are 10 people in 2 frames and 9 people in the 3rd frame, the track will not start.
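The track-start rule above can be sketched in a few lines (the helper below is illustrative, not part of the LUNA PLATFORM API):

```python
# Illustrative check of the people counting track-start rule: a track starts
# only when each of the last "probe_count" consecutive frames contains at
# least "people_count_threshold" people.

def track_starts(people_per_frame, people_count_threshold=10, probe_count=3):
    if len(people_per_frame) < probe_count:
        return False
    return all(count >= people_count_threshold
               for count in people_per_frame[-probe_count:])

print(track_starts([10, 10, 10]))  # -> True  (10 people in 3 consecutive frames)
print(track_starts([10, 10, 9]))   # -> False (only 9 people in the 3rd frame)
```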

By default, every 10th frame is processed. If necessary, the frame processing frequency can be adjusted using the "rate" parameter (e.g., process a frame every 3 seconds or every 3rd frame).

The logic of generating video analytics events differs for "videosdk" requests and video analytics services.

When performing analytics using the "videosdk" resource, the number of events will equal the number of tracks.

When performing video analytics using the Video Agent service, video analytics events can be generated in the following cases ("event_policy"):

  • When the track starts ("trigger" > "start").
  • When the track ends ("trigger" > "end").
  • Periodically while the track exists ("trigger" > "period" and "interval").

For example, if 100 people are constantly in the frame for an hour, the service can generate events every 5 minutes to track changes or confirm the constant number of people.

In the request body, you can set the "targets" parameter, which accepts the following values:

  • "coordinates" — Calculation of people's coordinates.
  • "overview" — Obtaining an image with the maximum number of people and enabling image saving. If necessary, you can fine-tune the quality, format, and maximum size of the saved image using the "image_retain_policy" section.

You can set both values or none. If no values are set, a basic analysis will be performed, containing only information about the video recording, the number of people, and the frames associated with them.
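As an illustration, a request-body fragment with both "targets" values enabled might look as follows. The field names inside "image_retain_policy" are assumptions made for this sketch; consult the OpenAPI specification for the real schema:

```python
import json

# Hypothetical fragment of a people counting analytics request body with both
# "targets" values enabled. The "image_retain_policy" field names (mimetype,
# quality, max_size) are assumptions for illustration only.
analytics_params = {
    "targets": ["coordinates", "overview"],
    "image_retain_policy": {
        "mimetype": "JPEG",   # format of the saved overview image
        "quality": 85,        # compression quality
        "max_size": 640,      # maximum size of the saved image
    },
}

print(json.dumps(analytics_params, indent=4))
```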

For such video analytics, it is also possible to configure a region of interest (ROI) or a DROI region on the frame (coordinates "x", "y", width, height; units of measurement are percentages or pixel coordinates). The principle of the ROI is similar to that of FaceStream.

Each event in people count video analytics contains the following information:

  • "event_id" — Event identifier.
  • "track_id" — Track identifier linking events generated within one continuous observation.
  • "people_count" — Maximum number of people detected in the processed video segment.
  • "video_segment" — Information about the video segment (event start time, event end time, first frame number, last frame number).
  • "frames_estimations" (only "videosdk" resource) — Array with information about estimations on each frame, containing the following information:
    • People coordinates.
    • Time corresponding to the frame.
    • Number of people.
  • "overview" — Information about the frame in the video where the most people were found:
    • Image.
    • Time corresponding to the frame.
    • People coordinates.

Human tracking video analytics#

For human tracking video analytics, a track is an object containing information about the position of one person in a sequence of frames. A track is created when a detection of a face, body, or body with a face (depending on the detector type set in the "detector_type" parameter) appears on each of the specified number of consecutive frames (the "probe_count" parameter). There can be as many tracks as there are people detected in the frame.

When performing video analytics using the "videosdk" request, there can only be one event for one track.

For example, if "probe_count" = "3" and there are detections on two consecutive frames and no detections on the third frame, the track and event will not start. If at any moment of tracking a person there were detections on two consecutive frames and no detections on the third frame, the track and event will end.

When performing video analytics using the Video Agent service, video analytics events can be generated in the following cases ("event_policy"):

  • When the track starts ("trigger" > "start").
  • When the track ends ("trigger" > "end").
  • Periodically while the track exists ("trigger" > "period" and "interval").

During human tracking, it is possible to estimate attributes of faces, bodies, images, and also to obtain an image with the best detection. The list of estimations is specified in the "targets" parameter and differs depending on the analytics method ("videosdk" or Video Agent service). Estimation can be performed based on one best image from the track or on several (parameter "face_samples"/"body_samples" > "count"). If necessary, you can filter the best images by the "score", "head_pitch", "head_roll", "head_yaw" parameters.

If necessary, you can enable Re-identification of the body (ReID). The ReID feature is used to improve tracking accuracy by combining different tracks of the same person.

For this type of video analytics, you can also configure ROI or DROI regions on the frame (coordinates "x", "y", width, height; units are percentages or pixel coordinates). The principle of the ROI is similar to that of FaceStream.

Each human tracking video analytics event contains the following information:

  • "event_id" — Event ID.
  • "track_id" — Track ID.
  • "video_segment" — Information about the video segment (start time of the event, end time of the event, first frame number, last frame number).
  • "frames_estimations" — Array with information about estimates on each frame, containing the following information:
    • Frame number.
    • Time corresponding to the frame.
    • Bbox coordinates and score.
  • "aggregated_estimations" — Result of the score specified in the "targets" parameter in the request body.
  • "track_count" — Number of tracks.

When using the Video Agent service, each event also displays the stream ID and location specified in the request to create the stream.

roi#

ROI (Region of Interest) specifies the region of interest in which the estimation is performed.

The specified rectangular area is cut out from the frame and LUNA PLATFORM performs further processing using this image.

Correct use of the "roi" parameter significantly improves the performance of video analysis.

ROI on the source frame is specified by the "x", "y", "width", "height" and "mode" parameters, where:

  • "x", "y" — Coordinates of the upper-left corner of the ROI.
  • "width" and "height" — Width and height of the processed area of the frame.
  • "mode" – Mode for specifying "x", "y", "width" and "height". Two modes are available:

    • "abs" — Parameters "x", "y", "width" and "height" are set in pixels.
    • "percent" — Parameters "x", "y", "width" and "height" are set as percentages of the current frame size.

    If the "mode" field is not specified in the request body, then the value "abs" will be used.

With width and height values of "0", the entire frame is considered the region of interest.

The coordinate system on the image is set similarly to the figure below.

ROI coordinate system

Below is an example of calculating ROI as a percentage:

Example of calculating ROI as a percentage
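The same percent-mode calculation can be reproduced with a short helper (illustrative, not part of the API):

```python
# Convert an ROI specified in "percent" mode into absolute pixel coordinates
# ("abs" mode) for a given frame size. The helper is illustrative.

def roi_percent_to_abs(roi, frame_width, frame_height):
    return {
        "x": round(roi["x"] * frame_width / 100),
        "y": round(roi["y"] * frame_height / 100),
        "width": round(roi["width"] * frame_width / 100),
        "height": round(roi["height"] * frame_height / 100),
        "mode": "abs",
    }

# A region covering half the frame, starting at a quarter of each dimension:
roi = {"x": 25, "y": 25, "width": 50, "height": 50, "mode": "percent"}
print(roi_percent_to_abs(roi, 1920, 1080))
# -> {'x': 480, 'y': 270, 'width': 960, 'height': 540, 'mode': 'abs'}
```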

droi#

Note: DROI is only available for people counting video analytics.

DROI specifies a region of interest relative to the source frame. If ROI is intended to optimize the estimation of the corresponding features, then DROI works as a filter after the estimation has been completed and is intended to implement business logic. You can use DROI both together with ROI and separately. For example, if after processing by the ROI area the number of people in the frame was equal to N, then after additional filtering by the DROI area the number of people in the frame can be reduced to M.

The DROI on the source frame is specified by the following parameters:

  • "area" — Geometry of the region of interest. The parameter is represented as an array of polygons. Each polygon is represented by an array of objects, where each object contains the x and y coordinates of the polygon's vertex. For example, you can define a triangular area of interest.
  • "mode" — Mode for specifying the polygon vertex coordinates. Two modes are available:
    • "abs" — The "x" and "y" coordinates are set in pixels.
    • "percent" — The "x" and "y" coordinates are set as percentages of the current frame size.
  • "form" — Format of the region of interest. In the current implementation there is only one possible value, "common". Future implementations will support other formats in addition to polygons.
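Since DROI filters results after estimation, its effect can be modeled as keeping only the detections whose coordinates fall inside at least one polygon of "area". The sketch below uses the standard ray-casting point-in-polygon test and is purely illustrative, not the actual LUNA PLATFORM implementation:

```python
# Keep only the detections whose coordinates fall inside at least one DROI
# polygon. Uses the standard ray-casting point-in-polygon test; the helpers
# are illustrative, not the actual LUNA PLATFORM implementation.

def point_in_polygon(x, y, polygon):
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]["x"], polygon[i]["y"]
        xj, yj = polygon[j]["x"], polygon[j]["y"]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def filter_by_droi(points, area):
    return [p for p in points
            if any(point_in_polygon(p[0], p[1], poly) for poly in area)]

# Triangular area of interest, as in the "area" example above:
triangle = [[{"x": 0, "y": 0}, {"x": 100, "y": 0}, {"x": 0, "y": 100}]]
print(filter_by_droi([(10, 10), (90, 90)], triangle))  # -> [(10, 10)]
```

This mirrors the N-to-M reduction described above: of the two detected people, only the one inside the triangle survives the DROI filter.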

Filters#

The LUNA PLATFORM has the ability to filter objects according to certain rules. For example, you can match faces only from a certain list or get events for a certain period of time.

As a rule, filters are located in the "filters" field of various requests.

Filters by comparison operators#

For some filters, it is possible to specify comparison operators using an "__operator" suffix. The operator is applied both to time filters (from a certain date, for a certain period) and to ordinary filters (a person's age, a range of IDs in lexicographic order).

__lt

The "less than" filter is used to search for values less than the specified one. For example, when using the filter "create_time__lt": "2022-08-11T09:11:41.674Z" in the "get events" request, events with a creation time preceding the specified date and time will be returned, i.e. events created before August 11, 2022, 09:11:41.674 UTC.

__lte

The "less than or equal to" filter is used to search for values less than or equal to the specified one. For example, when using the filter "face_id__lte": "2046d39b-410d-481e-b9de-ede6a0c7367f" in the filters of the "matching faces" request for candidates, only those candidates whose IDs are equal to "2046d39b-410d-481e-b9de-ede6a0c7367f" or precede it in lexicographic order will be returned.

__gte

The "greater than or equal to" filter is used to search for values greater than or equal to the specified one. For example, when using the filter "similarity__gte": "0.9" in the "storage_policy" > "face_sample_policy" > "filters" > "match" of the "create handler" request, you can configure a rule for saving a face sample in the Faces database only if the "matching_policy" has determined that the similarity of the candidate with the reference is greater than or equal to "0.9".
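The effect of these suffixes can be illustrated by applying such filters to plain dictionaries. The helper is illustrative, not LUNA PLATFORM code; note that ISO 8601 timestamps in a uniform format compare chronologically even as plain strings:

```python
import operator

# Map the documented suffixes to Python comparison operators. The helper is
# illustrative, not LUNA PLATFORM code.
OPERATORS = {"__lt": operator.lt, "__lte": operator.le, "__gte": operator.ge}

def matches(obj, filters):
    for name, expected in filters.items():
        for suffix, op in OPERATORS.items():
            if name.endswith(suffix):
                if not op(obj[name[:-len(suffix)]], expected):
                    return False
                break
        else:  # no comparison suffix: exact match
            if obj[name] != expected:
                return False
    return True

events = [{"create_time": "2022-08-10T00:00:00Z"},
          {"create_time": "2022-08-12T00:00:00Z"}]
flt = {"create_time__lt": "2022-08-11T09:11:41.674Z"}
print([e for e in events if matches(e, flt)])
# -> [{'create_time': '2022-08-10T00:00:00Z'}]
```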

Now-time filters#

For the time filters "create_time", "insert_time", and "end_time", the time can be set relative to the current time (the "now-time" format). For example, when creating a schedule for a Garbage collection task, you can specify the filter "create_time__lt": "now-30d", which will delete all objects except those created in the last 30 days.

It must be remembered that:

  • When creating a schedule, the current time is counted not from the creation of the schedule, but from the moment the schedule creates the task in accordance with the cron expression.
  • When an event is generated by a handler that specifies the "now-time" filter (for example, in the "matching_policy" > "candidates" policy), the current time will be counted from the moment the event is generated.

To use the time relative to event generation, you can set policies directly in the request for event generation using the "multipart/form-data" scheme or using the Estimator task with the creation of a new handler.

In this format, the time is set according to the pattern now-(\d+)[smhdwMy], where "(\d+)" is a number and "[smhdwMy]" is the required period: s (seconds), m (minutes), h (hours), d (days), w (weeks), M (months), y (years).
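The pattern can be parsed with a short function (illustrative; months and years are approximated with 30 and 365 days, since the document does not specify how calendar units are resolved):

```python
import re
from datetime import datetime, timedelta, timezone

# Approximate second counts for the "now-time" units. Months and years are
# calendar-dependent, so 30 and 365 days are simplifying assumptions here.
UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400,
         "w": 7 * 86400, "M": 30 * 86400, "y": 365 * 86400}

def parse_now_time(value, now=None):
    """Resolve a "now-30d"-style value into an absolute UTC datetime."""
    match = re.fullmatch(r"now-(\d+)([smhdwMy])", value)
    if match is None:
        raise ValueError(f"not a now-time value: {value!r}")
    amount, unit = int(match.group(1)), match.group(2)
    now = now or datetime.now(timezone.utc)
    return now - timedelta(seconds=amount * UNITS[unit])

print(parse_now_time("now-30d", datetime(2024, 1, 31, tzinfo=timezone.utc)))
# -> 2024-01-01 00:00:00+00:00
```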

User interfaces#

LUNA CLEMENTINE 2.0 Web Service#

The LUNA CLEMENTINE 2.0 web service can be used to perform basic LP requests to create objects and perform matching operations.

This service is not supplied with LUNA PLATFORM 5, it must be downloaded and configured separately.

Backport 3 and Backport 4 service interfaces#

Interfaces for interacting with the LP 4 and LP 3 APIs are available in LUNA PLATFORM 5. The interfaces are intended only for users who chose not to upgrade to the new LUNA PLATFORM 5 API. They do not include most of the features available in LP 5 and never will.

| Service | Description | Default login data | Default port |
|---|---|---|---|
| Backport 4 | Interface for working with LP API 4 | - | 4200 |
| Backport 3 | Interface for working with LP API 3 | To log in to the interface, you need to create an account on the Sign Up tab. | 4100 |

Other interfaces#

These interfaces are needed to simplify working with LUNA PLATFORM services.

| Service | Description | Default login data | Default port |
|---|---|---|---|
| Configurator | Interface for working with LUNA PLATFORM settings. It enables you to configure the parameters of all services in one place. It is especially convenient if auto-reloading of services is enabled after updating the settings. | - | 5070 |
| Admin | Interface for performing administrative tasks of the LUNA PLATFORM: creating accounts using the GUI, launching and monitoring the execution of long tasks (Garbage collection task, Additional extraction task). | root@visionlabs.ai/root | 5010 |
| InfluxDB | The interface allows you to view the monitoring data of LUNA PLATFORM services. | luna/password | 8086 |
| Grafana (LUNA Dashboards) | Interface for visualizing LUNA PLATFORM monitoring data stored in InfluxDB. Dashboards have been written for it to visualize and filter information. You can also use the Grafana Loki log aggregation system. | admin/admin | 3000 |

Disableable services#

Some unused general services can be disabled. When a service is disabled, its main functions will not work. For example, if the Tasks service is disabled, attempting to execute a task will result in an error. Similarly, if the Sender service is disabled and an event handler with the "notification_policy" is used, it will also result in an error.

As a result, all resources that require a disabled service will return errors like "<Service_name> service is disabled" with a 403 response code.

If any service fails during the operation of the LUNA PLATFORM, or if it is manually stopped, dependent services will stop working correctly. For example, when the Licenses service is stopped, the following error will be displayed in the Faces service logs:

[0000001 2023-08-11 13:00:51.042000] ERROR: luna-faces 1690887528,00000000-0000-4000-a000-000000000001: Check connection to Luna Licenses : GET:http://127.0.0.1:5120/version:Cannot connect to host 127.0.0.1:5120 ssl:default [Connection refused]

The corresponding error will be returned when trying to make a request that depends on the problematic service.

Then the work of services dependent on the Faces service will also be terminated with a similar error.

You can check the dependence of services on each other using the interaction diagram or you can find a setting like "service_name_ADDRESS" in the settings of the service for which you want to define dependent services. For example, in the settings of the Faces service there is a setting "LUNA_LICENSES_ADDRESS" responsible for connecting to the Licenses service, and in the settings of the Admin service there is a setting "LUNA_FACES_ADDRESS" responsible for connecting to the Faces service.

Disabling services process#

  • Open the Configurator user interface http://<configurator_server_ip>:5070.
  • Enter the name of the setting "ADDITIONAL_SERVICES_USAGE" in the "Setting name" field and click "Apply Filters".
  • Set the value for the required service to "false".
  • Save the changes by clicking the "Save" button.

Image Store service disabling consequences#

When the Image Store service is disabled, there are some specific features to note:

  • Objects of the image, object, and sample types, as well as sample save policies in handlers/verifiers, will be unavailable.

  • All tasks, except Garbage Collection, Linker, and Estimator, will become unavailable. However, there are some limitations for these tasks:

    • Garbage Collection, Estimator, Linker: task/subtask results will not be saved.
    • Garbage Collection, Estimator, Linker: after the subtask completes, the task status will be updated to Done, and the task result ID will be None.
    • Garbage Collection: deleting samples will become unavailable.

    If the Image Store service is disabled after events with the image_origin_policy are generated, when using the Garbage Collection task and the remove_image_origins parameter, the Tasks service will still attempt to delete the source images with an external URL.

Image Store service disabling consequences for Backport 3#

In the Backport 3 service, the "BACKPORT3_ENABLE_PORTRAITS" setting is available, which enables you to disable the ability to use portraits, but leave the ability to use the rest of the functionality of the Image Store service. If the use of the Image Store service is disabled in the "ADDITIONAL_SERVICES_USAGE" setting, then the above setting must also be disabled.

When the Image Store service is disabled, samples and portraits, as well as the "get portrait" and "get portrait thumbnail" resources, become unavailable.

Handlers service disabling consequences#

When disabling the Handlers service:

  • The API service will be unable to process the following requests:
  • The Tasks service will be unable to perform the "Additional extraction" and "Estimator" tasks.
  • The Admin service will be unable to perform the "Additional extraction" task.

Events service disabling consequences#

When disabling the Events service:

  • It will be possible to create a handler with an event saving policy, but it will not be possible to generate the events themselves.
  • All requests to the "/events" resource will not work.
  • Requests to the "/handlers/{handler_id}/events" and "/handlers/{handler_id}/events/raw" resources will not work.
  • The "event_id" parameter in the "create face" request will not work.
  • Matching by candidate events will not be available.
  • Filters by events in tasks will not be available.
  • Using the "ws handshake" request will not make sense.

Tasks service disabling consequences#

When disabling the Tasks service:

Tasks performed in the Admin service user interface are also unavailable when the Tasks service is disabled.

Sender service disabling consequences#

When disabling the Sender service:

  • The "ws handshake" request will not work.
  • The "notification_policy" policy in the handler will not work.

Disabling additional services#

Additional services can also be disabled if they were previously enabled.

If the Python Matcher Proxy service was previously used, disabling it will make it impossible to use matching plugins or the LIM module.

If the Lambda service was previously used, then disabling it will make it impossible to send requests to the "/lambdas" resources and work with the created lambda.

If Backport 3 or Backport 4 services were previously used, then disabling them will make it impossible to process LUNA PLATFORM 3/4 requests using LUNA PLATFORM 5.

Enabling the Backport 3, Backport 4, User Interface 3, and User Interface 4 services is controlled only by launching the corresponding containers. There are no parameters in the "ADDITIONAL_SERVICES_USAGE" setting that enable these services.

Descriptor encryption#

To enhance security and prevent unauthorized use of descriptors, they can be obtained in encrypted form. This helps protect descriptors from theft and subsequent use in other systems.

Note: Encryption adds additional computational overhead, which results in slower processes such as cross-base matching, LUNA Index Module matching, and Cached Matcher cache synchronization.

As a result of encryption, descriptors are stored in encrypted form in the Events, Faces or Attributes databases and transmitted in the same form to external systems. Accordingly, if encryption is enabled, then requests that return a descriptor will receive exactly the encrypted descriptor.

Important: LUNA PLATFORM is not designed to support the simultaneous use of encrypted and unencrypted descriptors. If both types of descriptors are detected, the Python Matcher service will not start, displaying the error "Check is failed, encryption hash is not unique". Therefore, when enabling encryption, it is necessary to stop the services, thereby limiting all requests, translate all descriptors to the new version of encryption and only then restart the services. It is highly recommended to perform a database backup before manipulating encryption.

Note: Matching plugins also need to be updated according to the encryption logic.

Format of encrypted descriptors#

Encrypted descriptors have the following format: <encrypted_descriptor><tag><nonce><hash>.

  • encrypted_descriptor — Encrypted descriptor.
  • tag — Data used for message authentication.
  • nonce — Encryption initialization vector.
  • hash — Hash sum of the encryption key and algorithm.
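The layout can be illustrated by packing and unpacking such a blob. Note that the field sizes below are assumptions (a 16-byte tag and a 12-byte nonce are typical for AES-256-GCM), and the trailing hash is assumed to be SHA-256 of the key and algorithm name; the actual sizes and hash algorithm are not specified in this document:

```python
import hashlib

# Illustrative packing and unpacking of the documented layout
# <encrypted_descriptor><tag><nonce><hash>. The field sizes are assumptions:
# a 16-byte tag and 12-byte nonce are typical for AES-256-GCM, and the hash
# is assumed here to be SHA-256 of the key and algorithm name.
TAG_LEN, NONCE_LEN, HASH_LEN = 16, 12, 32

def pack(encrypted_descriptor, tag, nonce, key, algorithm="aes256-gcm"):
    key_hash = hashlib.sha256(key + algorithm.encode()).digest()
    return encrypted_descriptor + tag + nonce + key_hash

def unpack(blob):
    body, key_hash = blob[:-HASH_LEN], blob[-HASH_LEN:]
    body, nonce = body[:-NONCE_LEN], body[-NONCE_LEN:]
    encrypted_descriptor, tag = body[:-TAG_LEN], body[-TAG_LEN:]
    return encrypted_descriptor, tag, nonce, key_hash

blob = pack(b"\x01" * 512, b"\x02" * TAG_LEN, b"\x03" * NONCE_LEN, b"secret")
print(len(blob))  # -> 572 (512 + 16 + 12 + 32)
```

The trailing hash is what lets the Python Matcher service detect a mix of descriptors encrypted with different keys, as described below.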

Enable and configure encryption#

Encryption can be enabled and configured using the DESCRIPTOR_ENCRYPTION section.

To enable encryption, set the enabled parameter to true. This activates descriptor encryption in the system. The algorithm parameter specifies the encryption algorithm (default is aes256-gcm).

Encryption parameters are specified under params. Here, you need to specify the encryption key source using the source parameter. Two types of sources are supported: raw and vaultKV.

  • With raw, the encryption key is specified directly in the key parameter.
  • With vaultKV, the encryption key needs to be retrieved from Hashicorp Vault. In this case, the key parameter should contain the URL to fetch the key and an authentication token. Example configuration for vaultKV:
{
    "enabled": true,
    "algorithm": "aes256-gcm",
    "params": {
        "source": "vaultKV",
        "key": {
            "url": "https://vault.example.com/v1/secret/data/encryption_key",
            "token": "s.XYZ12345"
        }
    }
}

Hashicorp Vault is a tool for managing secrets and protecting sensitive data such as encryption keys.

The contents of the vaultKV key/value storage are expected in the following format:

{
    "key": "...",
    "algorithm": "..."
}

Manage descriptor encryption#

If necessary, you can add encryption to existing descriptors, replace existing encryption, or decrypt descriptors in the Faces DB, Attributes DB, or Events DB.

To do this, you need to migrate the descriptors using the descriptors_encryption.py script located in the Faces and Events service containers, passing data about the Faces/Attributes/Events database as script arguments, and passing the corresponding key and algorithm as environment variables as follows:

docker run \
--env=OLD_ENCRYPTION_KEY=<your_old_encryption_key> \
--env=NEW_ENCRYPTION_KEY=<your_new_encryption_key> \
--env=ENCRYPTION_ALGORITHM=aes256-gcm \
--rm \
...
dockerhub.visionlabs.ru/luna/luna-faces:v.4.12.21 \
python3 ./base_scripts/descriptors_encryption.py --luna-config=http://127.0.0.1:5070/1 --LUNA_FACES_DB=<LUNA_FACES_DB_TAG> --DATABASE_NUMBER=<DATABASE_NUMBER_TAG> --LUNA_ATTRIBUTES_DB=<ATTRIBUTES_DB_TAG>

Important: The script must be executed when the service is stopped.

Depending on the scenario (add/update/decrypt), certain environment variables may not be used. For example, when adding encryption to descriptors without encryption, the OLD_ENCRYPTION_KEY variable will not be used.

You must run the migration script after migrating the database and before starting the service.

See the "Manage descriptor encryption" section in the upgrade manuals in the "Additional information" section for details on the descriptor migration script.

Nuances of working with services#

When working with different services, it is necessary to take into account some nuances that will be described in this section.

Auto-orientation of rotated image#

It is not recommended to send rotated images to LP, as they are not processed properly and should be rotated to the correct orientation beforehand. There are two methods to auto-orient a rotated image: based on EXIF image data (query parameters) and using LP algorithms (a Configurator setting). Both methods can be used together.

If auto-orientation is not used, the sample creation mechanism may produce a sample with an arbitrary rotation angle.

Auto-orientation based on EXIF data#

This method of image orientation is performed in the query parameters using the "use_exif_info" parameter. This parameter can enable or disable auto-orientation of the image based on EXIF data.

This parameter is available and enabled by default in the following resources:

The "use_exif_info" parameter cannot be used with samples. When the "warped_image" or "image_type" query parameter is set to the appropriate value, the parameter is ignored.

Auto-orientation based on Configurator setting#

This method of image orientation is performed in the Configurator using the "LUNA_REMOTE_SDK_USE_AUTO_ROTATION" setting. If this setting is enabled and the input image is rotated by 90, 180 or 270 degrees, then LP rotates the image to the correct angle. If this setting is enabled, but the input image is not rotated, then LP does not rotate the image.

Performing auto-orientation consumes a significant amount of server resources.

The "LUNA_REMOTE_SDK_USE_AUTO_ROTATION" setting cannot be used with samples. If the "warped_image" or "image_type" query parameter is set to the appropriate value and the input image is a rotated sample, the setting will be ignored.

Saving source images#

The URL to the source image can be saved in the "image_origin" field of the created events when processing the "/handlers/{handler_id}/events" request.

To do this, you should specify the "store_image" parameter in the "image_origin_policy" when creating the handler.

Then you should set an image for processing in the "generate events" request.

If "use_external_references=0" and the URL to an external image was transferred in the "generate events" request, then this image will be saved to the Image Store storage, and the ID of the saved image will be added in the "image_origin" field of the generated event.

The "use_external_references" parameter enables you to save an external link instead of saving the image itself:

  • If "use_external_references" = "1" and the URL to an external image was transferred in the "generate events" request, then that URL will be added in the "image_origin" field. The image itself will not be saved to the Image Store.
  • If "use_external_references" = "1", the sample was provided in the "generate events" request and "face_sample_policy > store_sample" is enabled, the URL to the sample in the Image Store will be saved in the "image_origin" field. The duplication of the sample image will be avoided.

If an external URL is too long (more than 256 characters), the service stores the image in the Image Store.
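The rules above can be condensed into a small decision function. The sketch below is purely illustrative: the helper name and the `store` callable are hypothetical, only the decision logic follows the description.

```python
MAX_EXTERNAL_URL_LENGTH = 256  # URLs longer than this are stored in the Image Store

def resolve_image_origin(use_external_references, external_url, store):
    """Decide what ends up in the event's "image_origin" field (sketch).

    `store` is a hypothetical callable that saves the image to the
    Image Store and returns the ID of the saved image.
    """
    if (
        use_external_references
        and external_url
        and len(external_url) <= MAX_EXTERNAL_URL_LENGTH
    ):
        return external_url  # keep the external link, do not duplicate the image
    return store()           # store the image and reference its ID
```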

You can also provide the URL to the source image directly using the "/handlers/{handler_id}/events" resource. To do this, you should use the "application/json" or "multipart/form-data" body schema in the request. The URL should be specified in the "image_origin" field of the request.

If the "image_origin" is not empty, the provided URL will be used in the created event regardless of the "image_origin_policy" policy.

The image provided in the "image_origin" field will not be processed in the request. It is used as a source image only.

Neural networks#

Using neural networks, the parameters of faces or bodies are estimated and their descriptors are extracted.

There are two types of neural networks — for performing estimation and for extracting descriptors.

Neural networks for performing detection and estimation are located in the Remote SDK service container in the format <estimation_name>_<architecture>.plan, where <estimation_name> is the name of the estimation and <architecture> is the architecture used (cpu-avx2 or gpu). Such neural networks are called detectors and estimators, respectively.

Neural networks for extracting descriptors are located in the Remote SDK service container in the format cnn<model>_<architecture>.plan, where <model> is the neural network model and <architecture> is the architecture used (cpu-avx2 or gpu).

To work with a neural network, a configuration file of the cnndescriptor_<model>.conf format is used, where <model> is a neural network model. Configuration files for all neural networks are also located in the Remote SDK service container.
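The naming schemes above are regular enough to validate programmatically. A minimal sketch (the helper name is illustrative; the file name patterns are those described above):

```python
import re

# File name patterns described above: cnn<model>_<architecture>.plan for
# descriptor extraction networks and cnndescriptor_<model>.conf for their configs.
PLAN_RE = re.compile(r"^cnn(?P<model>\d+)_(?P<arch>cpu-avx2|gpu)\.plan$")
CONF_RE = re.compile(r"^cnndescriptor_(?P<model>\d+)\.conf$")

def parse_plan_name(filename: str):
    """Return (model, architecture) for a descriptor network file, or None."""
    m = PLAN_RE.match(filename)
    return (int(m.group("model")), m.group("arch")) if m else None
```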

The information below applies only to the neural networks for extracting descriptors.

It is possible to remove any neural network from the Remote SDK service container if use of some estimators or detectors is disabled.

The current build of LUNA PLATFORM supports the following neural network models for extracting descriptors:

| Object from which descriptor is extracted | Neural network models | Default model |
|-------------------------------------------|-----------------------|---------------|
| Face                                      | 59, 60, 62            | 62            |
| Body                                      | 116                   | 116           |

The sizes of all neural network models for extracting face descriptors are given below:

| Model | Data size in Raw format (bytes) | Metadata size (bytes) | Data size in SDK format (Raw + metadata) |
|-------|---------------------------------|-----------------------|------------------------------------------|
| 54    | 512                             | 8                     | 520                                      |
| 56    | 512                             | 8                     | 520                                      |
| 57    | 512                             | 8                     | 520                                      |
| 58    | 512                             | 8                     | 520                                      |
| 59    | 512                             | 8                     | 520                                      |
| 60    | 512                             | 8                     | 520                                      |
| 62    | 512                             | 8                     | 520                                      |

The sizes of all neural network models for extracting body descriptors are given below:

| Model | Data size in Raw format (bytes) | Metadata size (bytes) | Data size in SDK format (Raw + metadata) |
|-------|---------------------------------|-----------------------|------------------------------------------|
| 102   | 2048                            | 8                     | 2056                                     |
| 103   | 2048                            | 8                     | 2056                                     |
| 104   | 2048                            | 8                     | 2056                                     |
| 105   | 512                             | 8                     | 520                                      |
| 106   | 512                             | 8                     | 520                                      |
| 107   | 512                             | 8                     | 520                                      |

See the detailed information about descriptor formats in the section "Descriptor formats".
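As the tables show, the SDK format is simply the raw descriptor data plus 8 bytes of metadata:

```python
METADATA_SIZE = 8  # bytes of metadata in SDK format, per the tables above

def sdk_descriptor_size(raw_size: int) -> int:
    """Size of a descriptor in SDK format: raw data plus metadata."""
    return raw_size + METADATA_SIZE
```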

Neural network 105 is a newer version of neural network 102.

Descriptors received using different models of the neural networks are not comparable with each other. That is why you should re-extract all the descriptors from the existing samples if you are going to use a new NN model (see "Change neural network used").

You can store several descriptors extracted from the same sample with different NN models, all linked to a single face.

See the "DEFAULT_FACE_DESCRIPTOR_VERSION" and "DEFAULT_HUMAN_DESCRIPTOR_VERSION" parameters in the Configurator service to check the current extraction neural network.

Before any actions with neural networks, you should consult with VisionLabs specialists.

Change neural network used#

Changing the used neural network model is required when it is necessary to improve the quality of face/body recognition or when old models are declared obsolete and removed from the Remote SDK service container.

When changing the neural network model used, one should:

You should not change the default neural network before the additional extraction task has finished.

You can both upgrade and downgrade the neural network model. To downgrade the model, perform the same steps as for upgrading.

Launch Additional extraction task#

The additional extraction task can be launched using one of the following ways:

  • Using the "/additional_extract" request to the Admin API.
  • Using the Admin service user interface, by default located at http://<admin_server_ip>:5010.

Depending on the way, you need to specify the following information:

  • Object type: faces or events.
  • Extraction target: face descriptor, body descriptor or basic attributes.
  • New neural network version (not applicable for basic attributes).

For more information, see "Additional extraction task" section.

Change neural network model in settings#

You should set the new neural network model in the configurations of services. Use the Configurator GUI for this purpose:

  • Go to the http://<configurator_server_ip>:5070.
  • Set the required neural network in the "DEFAULT_FACE_DESCRIPTOR_VERSION" parameter (for faces) or the "DEFAULT_HUMAN_DESCRIPTOR_VERSION" (for bodies).
  • Save changes using the "Save" button.
  • Wait until the setting is applied to all the LP services.

Use non-delivery neural network model#

This section describes the process of moving a neural network that is not included in the delivery into the Remote SDK container. This is necessary if the user wants to keep using an old neural network from a previous version of LP when upgrading to a new version of LP.

It is necessary to request an archive with neural network files from a VisionLabs representative. The archive contains the following files:

  • Neural network file(s) cnn<model>_<architecture>.plan, where <model> — neural network model, <architecture> architecture used (cpu-avx2 and/or gpu).
  • Configuration file cnndescriptor_<model>.conf, where <model> — neural network model.

After downloading the archive with the neural network and the archive with its configuration, you should perform the following steps:

  • Unzip the archive.
  • Assign rights to neural networks.
  • Copy the neural networks and their configuration files to the launched Remote SDK container.
  • Make sure that the required model of the neural network is used in the service configurations (see "Change neural network model in settings" section).

Below is an example of commands for transferring neural networks to the Remote SDK container.

Unzip neural networks#

Go to the directory with the archives and unzip them.

unzip <archive_name>.zip

Assign rights to neural networks#

chown -R 1001:0 <archive_name>/cnn<model>_<architecture>.plan

Copy neural network and configuration file to Remote SDK container#

Copy the neural network and its configuration file to the Remote SDK container using the following commands.

docker cp <archive_name>/cnn<model>_<architecture>.plan luna-remote-sdk:/srv/fsdk/data/
docker cp <archive_name>/cnndescriptor_<model>.conf luna-remote-sdk:/srv/fsdk/data/

luna-remote-sdk — Name of the launched Remote SDK container. This name may differ in your installation.

Check that the required model for the required device (CPU and/or GPU) was successfully loaded:

docker exec -t luna-remote-sdk ls /srv/fsdk/data/

Logging#

There are two ways to output logs in LUNA PLATFORM:

  • Standard log output (stdout).
  • Log output to a file.

Log output settings are set in the settings of each service in the <SERVICE_NAME>_LOGGER section.

By default, logs are output only to standard output.

You can view the service logs via standard output using the docker logs <service_name> command.

It is recommended to set up an external system for collecting and storing logs. This manual does not provide an example of configuring such an external system.

If necessary, you can use both methods of displaying logs.

When logging to a file is enabled, the default path in each container is /srv/logs. Logs are saved in the format <service_name>_<type_of_logs>.txt, where:

  • <service_name> — Name of the service, for example, luna-faces.
  • <type_of_logs> — Type of the output logs: "ERROR", "INFO", "WARNING" or "DEBUG".

It is possible to create up to six files of each type. For each type of logs output in the Configurator service settings, you can set the maximum size (the "max_log_file_size" parameter in the <SERVICE_NAME>_LOGGER section). So, for example, six 1024 MB files can be created for the "INFO" type. For other types, the principle of operation is similar. Thus, if the maximum value of "max_log_file_size" is 1024 MB, then the total amount of memory occupied by logs in the container cannot exceed 6*4*1024=24576 MB. After the remaining space in the last file is used up, the first file will be overwritten. If it is necessary to reduce the amount of memory occupied by logs, then it is necessary to reduce the value of the parameter "max_log_file_size".
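The worst-case disk usage described above can be computed directly (up to six files per type, four log types):

```python
FILES_PER_TYPE = 6   # up to six files of each log type
LOG_TYPES = 4        # ERROR, INFO, WARNING, DEBUG

def max_log_storage_mb(max_log_file_size_mb: int) -> int:
    """Upper bound, in MB, on disk space occupied by log files in one container."""
    return FILES_PER_TYPE * LOG_TYPES * max_log_file_size_mb
```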

You can synchronize the service logs folder with a local folder on the server by adding the -v /tmp/logs/<service_name>:/srv/logs option when starting the container. To do this, you first need to create the corresponding directory on the server and assign it the necessary rights.

Without assigning rights to the mounted folder, the service will issue the corresponding error.

When you enable saving logs to a file, you should remember that logs occupy a certain place in the storage (see above), and the process of logging to a file negatively affects system performance.

Image check#

LUNA PLATFORM enables you to perform various checks of frontal face images. A check can be done either with thresholds conforming to ISO/IEC 19794-5:2011 or by manually entering thresholds and selecting the necessary checks.

The results of the checks are not stored in the database, they are returned only in the response.

ISO/IEC 19794-5:2011 checks are performed using the "/iso" resource (see detailed description in the "Image check according to ISO/IEC 19794-5:2011" section below).

Checks with manually specified thresholds are performed using the "face_quality" group of checks of the "detect_policy" policy of the "/handlers" and "/verifiers" resources (see the detailed description in the "Image check according to specified conditions" section below).

The possibility of performing such checks is regulated by a special parameter in the license file.

The result of all checks is determined by the "status" parameter, where:

  • "0" — At least one of the checks was not passed.
  • "1" — All checks have been passed.

The result of each check is also displayed next to it (the "result" parameter).
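The aggregation rule for "status" can be stated as a one-liner; a minimal sketch:

```python
def overall_status(check_results) -> int:
    """The "status" value is 1 only when every individual check "result" is 1."""
    return 1 if all(r == 1 for r in check_results) else 0
```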

You can enable checking for multiple faces in a photo using the "multiface_policy" parameter. Checks are performed for each face detection found in the photo. Check results are not aggregated.

For some checks, certain requirements should be met. For example, to get correct results for the eyebrow state, the head angles must be within a certain range and the face width must be at least 80 pixels. The requirements for the checks are listed in the "Estimated data" section. If any requirement is not met when checking a certain parameter, the results of that check may be incorrect.

A sample cannot be used as the source image for the following checks:

The set of checks for "face_quality" and "/iso" resource is different (see the difference between checks in "Comparison table of available checks" section).

Image check according to ISO/IEC 19794-5:2011#

This check is performed using the "/iso" resource.

By default, images with one face present are checked. For each of the found faces, the estimates and coordinates of the found face will be returned. It should be noted that many ISO checks assume the presence of one face in the frame, so not all checks for multiple faces will be performed successfully.

The order of the returned responses after processing corresponds to the order of the transferred images.

You can additionally enable the extraction of EXIF data of the image in the request.

For each check, thresholds are set that comply with ISO requirements. The value of the thresholds for each check is given in the sample response to the "/iso" request in the OpenAPI documentation.

Some checks are united under one ISO requirement. For example, to successfully pass the eye status check, the statuses of the left and right eyes should take the value "open".

The following information is returned in the response body:

  • Verdict on passing checks, which is 1 if all checks are successful.

  • Results of each of the checks. This enables you to determine which particular check was not passed. The following values are returned:

    • The name of the check.
    • The value obtained after performing the check.
    • The default threshold. The thresholds are set in the system by the requirements of the ISO/IEC 19794-5:2011 standard and cannot be changed by the user.
    • The result of this check. When passing the thresholds, 1 is returned.
  • The coordinates of the face.

If an error occurs for one of the processed images, for example, if the image is damaged, an error will be displayed in the response. Processing of the rest of the images will continue as usual.

In addition to the "/iso" resource, the ability to check for compliance with ISO/IEC 19794-5:2011 and ICAO standards is available in the "/detector" (parameter "estimate_face_quality") resource. The principle of performing checks is similar to the one described above, however, additional checks from the "face_quality" checks group are available in this resource.

Image check according to specified conditions#

The principle of operation is similar to the check according to the ISO standard, but the user decides which checks to perform and which thresholds to set.

To enable checks, you should specify the value "1" in the "estimate" field for "face_quality". Image check is disabled by default. To disable a certain check, you need to set "0" in the "estimate" field for this check. By default, all checks are enabled and will be performed when "face_quality" is enabled.

Depending on the type of check, you can specify the minimum and maximum values of the threshold, or allowable values for this check. For this, the "threshold" field is used. If the minimum or maximum threshold is not set, the minimum or maximum allowable value will be automatically selected as the unset threshold. If the maximum value is unlimited (for example, ">=0"), then the value "null" will be returned in the "max" field in the event response body. If both thresholds are not specified, the check will be performed using the standard thresholds set in the "FACE_QUALITY_SETTINGS" section in the Configurator service or in the "face_quality" request body. If the threshold is specified in the "face_quality" request body, then this overrides the standard thresholds specified in the "FACE_QUALITY_SETTINGS" section.
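A sketch of the threshold logic described above, assuming a "threshold" object with optional "min"/"max" keys (a missing or null bound is unlimited); the helper name is illustrative:

```python
def check_result(value, threshold: dict) -> int:
    """Return 1 if `value` passes the threshold, 0 otherwise (sketch).

    A missing or None "min"/"max" bound is treated as unlimited,
    matching the "null" reported for an unlimited "max" in responses.
    """
    lo = threshold.get("min")
    hi = threshold.get("max")
    passed = (lo is None or value >= lo) and (hi is None or value <= hi)
    return 1 if passed else 0
```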

Default thresholds are selected by VisionLabs specialists to obtain optimal results. These thresholds may vary depending on shooting conditions, equipment, etc.

When setting thresholds for the checks "Image quality" and "Head pose", it is recommended to take into account the standard thresholds preset in the system settings. For example, to check the image for blurriness (the "blurriness_quality" parameter), it is recommended to set the threshold in the range [0.57...0.65]. When setting a threshold outside this range, the results may be unpredictable. When choosing the angles of the head position, you need to pay attention to the recommended maximum thresholds for estimation in cooperative and non-cooperative modes. Information about these thresholds is provided in the relevant sections of the administrator manual.

It is recommended to consider the results of the mouth state checks ("mouth_smiling", "mouth_occluded", "mouth_open", "smile_properties") together. For example, if the check revealed that the mouth is occluded by something, the rest of the mouth check results will not be useful.

It is possible to enable filtering based on check results ("filter" parameter). If one of the "face_quality" checks for the detection fails, then the results and the reason for filtering will be returned. No further policies will be performed for this detection.

In addition, the "face_quality" group contains some checks that are not available in the image check according to the standard (see below).

Comparison table of available checks#

The following checks are available for "/iso" resource and "face_quality" group of checks of the "/handlers" and "/verifiers" resources:

| Checks description | Checks name | "/iso" resource | "face_quality" group |
|--------------------|-------------|-----------------|----------------------|
| Image quality check | illumination_quality, specularity_quality, blurriness_quality, dark_quality, light_quality | + | + |
| Background check | background_lightness, background_uniformity | + | + |
| Illumination uniformity check according to ICAO standard | illumination_uniformity | - | + |
| Head pose check | head_yaw, head_pitch, head_roll | + | + |
| Gaze check | gaze_yaw, gaze_pitch | + | + |
| Mouth attributes check | mouth_smiling, mouth_occluded, mouth_open | + | + |
| Smile state check | smile_properties (none, smile_lips, smile_teeth) | + | +* |
| Glasses state check | glasses | + | + |
| Eyes attributes check | left_eye (open, occluded, closed), right_eye (open, occluded, closed) | + | + |
| Distance between eyes check | eye_distance | + | + |
| Natural light check | natural_light (0, 1) | + | + |
| Radial distortion check (fisheye effect) | radial_distortion (0, 1) | + | + |
| Red eyes effect check | red_eyes (0, 1) | + | + |
| Eyebrows state check | eyebrows_state (neutral, raised, squinting, frowning) | + | +* |
| Headwear type check | headwear_type (none, baseball_cap, beanie, peaked_cap, shawl, hat_with_ear_flaps, helmet, hood, hat, other) | + | +* |
| Vertical/horizontal face position checks | head_horizontal_center, head_vertical_center | + | + |
| Vertical/horizontal head size checks | head_width, head_height | + | + |
| Image format check | image_format | + | + |
| Face color type check | face_color_type (color, grayscale, infrared) | + | + |
| Shoulders position check | shoulders_position | + | + |
| Image size check | image_size | - | + |
| Indents from image edges checks | indent_upper, indent_lower, indent_right, indent_left | - | + |
| Image width/height checks | image_width, image_height | - | + |
| Image aspect ratio check | aspect_ratio | - | + |
| Face width/height check | face_width, face_height | - | + |
| Dynamic range check | dynamic_range | - | + |
| Face occlusion check | face_occlusion, lower_face_occlusion, forehead_occlusion, nose_occlusion | - | + |

* Several parameters can be specified for these checks.

Services health checks#

The "/healthcheck" resource is intended for service health checks. The resource can be used to actively check the health of a service, namely, whether the service can perform its functions in full or not. The check verifies that the service can connect to the LP services and databases on which it depends.

It is possible to set up a periodic resource check using HAProxy, NGINX or another system. This enables you to detect that the service is unavailable and decide whether to remove the service from load balancing or restart it.

Using the "include_luna_services" option, you can enable or disable health checks for the LUNA PLATFORM services on which this service depends. If this option is enabled, additional requests are sent to the "/healthcheck" resources of those services. Keep the option disabled to avoid recursive checks of the same services: for example, several services that depend on the Faces service would otherwise each send it a request at once and increase the load on it.

If the health check is successful, only the connection execution time in the "execution_time" field is returned.

If one or more services are unavailable, an error code 502 "Unhealthy" is returned. The response body lists the components, check statuses, and errors that have occurred. The error code 500 in the response body does not necessarily mean a problem with the service. A long request may fail due to exceeded timeouts, increased server load, network problems or other reasons.

When performing a request to the "/healthcheck" resource, it is recommended to set a timeout of several seconds. If the request does not have time to be processed, this is a sign that problems have arisen during the operation of the system.
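A monitoring system could classify the reply roughly as follows. The body layout used here is a simplified assumption; check the OpenAPI documentation for the exact schema.

```python
def interpret_healthcheck(status_code: int, body: dict):
    """Classify a "/healthcheck" reply (sketch; the body layout is an assumption)."""
    if status_code == 200:
        # healthy: only the connection execution time is returned
        return "healthy", body.get("execution_time")
    if status_code == 502:
        # unhealthy: the body lists components, check statuses and errors
        return "unhealthy", body
    return "unknown", None
```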

To check the health of services, a "get health (redirect)" request is also available, which enables you to not specify the API version in the request.

Upload images from folder#

The "folder_uploader.py" script uploads images from the specified folder and processes uploaded images according to the preassigned parameters.

General information about the script#

The "folder_uploader.py" script can be utilized for uploading images using the API service only.

You cannot specify the Backport 4 address and port when utilizing this script. However, you can use the data uploaded to the LP 5 API in Backport 4 requests.

You cannot use the "folder_uploader.py" script to upload data to the Backport 3 service, as the objects created for Backport 3 differ (e.g. a "person" object is not created by the script).

Script usage#

Script pipeline:

  1. Search for images of the allowed types (formats: '.png', '.jpg', '.jpeg', '.bmp', '.ppm', '.tif', '.tiff'; color models: RGB, CMYK) in the specified folder (source).
  2. Start asynchronous image processing according to the specified parameters (see section "Script launching").

Image processing pipeline:

  1. Detect faces and create samples.
  2. Extract attributes.
  3. Create faces and link them to a list.
  4. Add record to the log file.

If an image was loaded successfully, a record is added to the success log file {start_upload_time}_success_log.txt. The record has the following structure:

    {
    "image name": ...,
    "face id": [...]
    }

If errors occur at any step of the script processing, the image processing routine is terminated and a record is added to the error log file {start_upload_time}_error_log.txt. The record has the following structure:

    {
    "image name": ..., 
    "error": ...
    }
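Appending one JSON record per image, as the log files above do, can be sketched like this (the helper name and file handling are illustrative):

```python
import json
import os
import tempfile

def append_log_record(path: str, record: dict) -> None:
    """Append a single JSON record to a log file, one record per line."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Illustrative usage with the success-log structure shown above
fd, log_path = tempfile.mkstemp(suffix="_success_log.txt")
os.close(fd)
append_log_record(log_path, {"image name": "image.jpg", "face id": ["f-1"]})
```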

Install dependencies#

Before launching the script, you must install all the required dependencies.

It is strongly recommended to create a virtual environment for python dependencies installation.

Install the Python packages (version 3.7 or higher) listed below before launching the installation. The packages are not provided in the distribution package and their installation is not described in this manual:

  • python3.7
  • python3.7-devel

Install gcc.

yum -y install gcc

Go to the directory with the script.

cd /var/lib/luna/current/extras/utils/folder_uploader

Create a virtual environment.

python3.7 -m venv venv

Activate the virtual environment.

source venv/bin/activate

Install the tqdm library.

pip3.7 install tqdm 

Install luna3 libraries.

pip3.7 install ../luna3*.whl

Deactivate the virtual environment.

deactivate

Script launching#

Use the command to run the script (the virtual environment must be activated):

python3.7 folder_uploader.py --account_id 6d071cca-fda5-4a03-84d5-5bea65904480 --source "Images/" --warped 0 --descriptor 1 --origin http://127.0.0.1:5000 --avatar 1  --list_id 0dde5158-e643-45a6-8a4d-ad42448a913b --name_as_userdata 1  

Make sure that the --descriptor parameter is set to 1 so descriptors are created.

The API version of the service is set to 6 by default in the script, and it cannot be changed using arguments.

--source "Images/" — "Images/" is the folder with images located near the "folder_uploader.py" script. Or you can specify the full path to the directory.

--list_id 0dde5158-e643-45a6-8a4d-ad42448a913b — Specify your existing list here.

--account_id 6d071cca-fda5-4a03-84d5-5bea65904480 — Specify the required account ID.

--origin http://127.0.0.1:5000 — Specify your current API service address and port here.

See help for more information about available script arguments:

python3.7 folder_uploader.py --help

Command line arguments:

  • "account_id" — Account ID used in requests to LUNA PLATFORM (required).

  • "source" — Directory with images to load (required).

  • "warped" — Whether the images are warped or not (0,1) (required).

  • "descriptor" — Whether to extract descriptor (0,1). Default — 0.

  • "origin" — Origin. Default — "http://127.0.0.1:5000".

  • "avatar" — Whether to set sample as avatar (0,1). Default — 0.

  • "list_id" — List ID to link faces with (a new LUNA list will be created if list_id is not set and list_linked=1). Default — None.

  • "list_linked" — Whether to link faces with list (0,1). Default — 1.

  • "list_userdata" — User data for list to link faces with (for newly created list). Default — None.

  • "pitch_threshold" — Maximum deviation pitch angle [0..180].

  • "roll_threshold" — Maximum deviation roll angle [0..180].

  • "yaw_threshold" — Maximum deviation yaw angle [0..180].

  • "multi_face_allowed" — Whether to allow several face detection from single image (0,1). Default — 0.

  • "get_major_detection" — Whether to choose major face detection by sample Manhattan distance from single image (0,1). Default — 0.

  • "basic_attr" — Whether to extract basic attributes (0,1). Default — 1.

  • "score_threshold" — Descriptor quality score threshold (0..1). Default — 0.

  • "name_as_userdata" — Whether to use image name as user data (0,1). Default — 0.

  • "concurrency" — Parallel processing image count. Default — 10.
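An illustrative subset of the options above expressed with argparse; this is a hypothetical re-creation for clarity, the real script defines more options and may differ in details.

```python
import argparse

# Hypothetical re-creation of a few folder_uploader.py options for illustration.
parser = argparse.ArgumentParser(description="Upload images from a folder")
parser.add_argument("--account_id", required=True)
parser.add_argument("--source", required=True)
parser.add_argument("--warped", type=int, choices=(0, 1), required=True)
parser.add_argument("--descriptor", type=int, choices=(0, 1), default=0)
parser.add_argument("--origin", default="http://127.0.0.1:5000")
parser.add_argument("--concurrency", type=int, default=10)

args = parser.parse_args(
    ["--account_id", "6d071cca-fda5-4a03-84d5-5bea65904480",
     "--source", "Images/", "--warped", "0", "--descriptor", "1"]
)
```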

Client library#

General information#

The archive with the client library for LUNA PLATFORM 5 is provided in the distribution package: /var/lib/luna/current/extras/utils/luna3-*.whl

This Python library is an HTTP client for all LUNA PLATFORM services.

You can find the examples of the library utilization in the /var/lib/luna/current/docs/ReferenceManuals/APIReferenceManual.html document.

Luna3 library usage example

The example shows the request for faces matching. The luna3 library is utilized for the request creation. See "matcher" > "matching faces" in "APIReferenceManual.html":

# This example is written using luna3 library

from luna3.common.http_objs import BinaryImage
from luna3.lunavl.httpclient import LunaHttpClient
from luna3.python_matcher.match_objects import FaceFilters
from luna3.python_matcher.match_objects import Reference
from luna3.python_matcher.match_objects import Candidates

luna3client = LunaHttpClient(
    accountId="8b8b5937-2e9c-4e8b-a7a7-5caf86621b5a",
    origin="http://127.0.0.1:5000",
)

# create sample
sampleId = luna3client.saveSample(
    image=BinaryImage("image.jpg"),
    raiseError=True,
).json["sample_id"]

attributeId = luna3client.extractAttrFromSample(
    sampleIds=[
        sampleId,
    ],
    raiseError=True,
).json[0]["attribute_id"]

# create face
faceId = luna3client.createFace(
    attributeId=attributeId,
    raiseError=True,
).json["face_id"]

# match
candidates = Candidates(
    FaceFilters(
        faceIds=[
            faceId,
        ]
    ),
    limit=3,
    threshold=0.5,
)
reference = Reference("face", faceId)

response = luna3client.matchFaces(
    candidates=[candidates], references=[reference],
    raiseError=True,
)

print(response.statusCode)
print(response.json)

Library installation example#

In this example a virtual environment is created for luna3 installation.

You can use this Python library on Windows, Linux, MacOS.

Install the Python packages (version 3.7 or later) listed below before launching the installation. The packages are not provided in the distribution package and their installation is not described in this manual:

  • python3.7
  • python3.7-devel

Install gcc.

yum -y install gcc

Go to the directory with any script, for example, folder_uploader.py

cd /var/lib/luna/current/extras/utils/folder_uploader

Create a virtual environment.

python3.7 -m venv venv

Activate the virtual environment.

source venv/bin/activate

Install luna3 libraries.

pip3.7 install ../luna3*.whl

Deactivate the virtual environment.

deactivate

Plugins#

Plugins are used to perform secondary actions for the user's needs. For example, you can create your own resource based on the abstract class, or you can describe what needs to be done in some resource in addition to the standard functionality.

Files with base abstract classes are located in the .plugins/plugins_meta folder of the specific service.

Plugins should be written in the Python programming language.

There are three sorts of plugins:

  • Event plugin
  • Background plugin
  • Matching plugin

Event plugins#

The first sort is triggered when an event occurs. The plugin should implement a callback function. This function is called on each event of the corresponding type. The set of event types is defined by the service developers. There are two types of event plugins available for the Remote SDK service:

  • Monitoring event
  • Sending event

For other services, only monitoring event type is available.

For examples of monitoring and sending plugins, see the development manual.

Background plugins#

The second sort of plugin is intended for background work. The background plugin can implement:

  • Custom request for a specific resource (route).
  • Background monitoring of service resources.
  • Collaboration of an event plugin and a background plugin (batching monitoring points).
  • Connection to other data sources (Redis, RabbitMQ) and their data processing.

For examples of background plugins, see the development manual.

Matching plugins#

The third type of plugins enables you to significantly speed up the processing of matching requests.

Plugins are not provided as a ready-made solution for matching. It is required to implement the logic required for solving particular business tasks.

Matching plugins are enabled in Python Matcher Proxy service. This service is not installed by default. You should run it to work with plugins. See the Python Matcher Proxy launching command in the "Use Python Matcher with Python Matcher Proxy" section of the LUNA PLATFORM installation manual.

By default, when the Python Matcher Proxy is used, all matching requests are processed by the Python Matcher Proxy service, which redirects them to the Python Matcher service. Matching request processing may be slower than required for several reasons, including:

  • A large amount of data and the inability to speed up requests through database configuration changes, e.g. by creating an index that speeds up the request.

  • The way data is stored: the descriptor and the entity ID (face_id/event_id) are kept in different database tables. Filters specified in a matching request can also reside in a separate database table, which slows down request processing.

  • Internal database-specific restrictions.

It is possible to single out some groups of requests and improve their processing speed using matching plugins, for example by transferring data to another storage organized in a way that makes matching faster than with the default storage (see "Plugin data source").

Examples:

  • Matching requests where all candidates are faces linked to one list, and no other filters are specified in the request.

    In this case, it is possible to duplicate the data for these matching candidates to a storage other than the default one and create a matching plugin that matches the specified references only with these candidates, and not with any other entities.

    Matching request processing will be faster compared to the default way because the plugin does not spend time separating the faces linked to the list from all faces stored in the database.

  • Matching requests where all candidates are events with only one filter specified, "event_ids", and matching is required only by bodies.

    In this case, it is possible to duplicate all event_id values and their body descriptors to a storage other than the default one and create a matching plugin that matches the specified references only with these candidates, and not with any other entities.

    Matching request processing will be faster compared to the default way because the plugin does not spend time separating events with bodies from all events and does not need to handle filters.

You can use built-in matching plugins or create user matching plugins.

There are three examples of built-in matching plugins available:

  • The "Thin event" plugin, which is used for fast matching the faces with the simplified events.
  • The "Thin face" plugin, which is used for fast matching the faces with the simplified faces.
  • The "Cached Matcher" plugin, which is used for fast matching the faces from large lists.

Thin event

The plugin uses its own table in the "luna_events" database.

Below are a few features that speed up the matching using the plugin compared to the default method:

  • The "Thin event" database contains fewer data fields.
  • The "Thin event" database stores "event_id" and face "descriptor" in one table.
  • The "Thin event" database stores "age", "gender" and some other filters in one table.

Thin face

The plugin uses its own table in the database "luna_faces" with three mandatory columns ("face_id", "descriptor", "descriptor_version") and a number of additional ones that can be configured: "account_id", "lists", "create_time", "external_id", "user_data", "event_id", "avatar".

Cached Matcher

Matching faces by list is a time-consuming process, so the following methods are implemented using the plugin to improve performance:

  • Separate service "LUNA-CACHED-MATCHER" is used to match candidates and references.
  • Data for the candidate ("list_id", "face_id", "descriptor") is stored in the memory cache. This provides quick access to data.
  • Data is divided horizontally into segments (the "LUNA-CACHED-MATCHER" service is used as a segment), which provides a quick search for matching results.

See the detailed description of the built-in plugins and instructions for writing user plugins in the developer manual.

General plugin processing pipeline#

Each matching request is decomposed into all possible combinations of candidates and references; each such combination is then processed as a separate sub-request as follows (below, sub-request means a combination of a reference and candidates):

  • Get the sub-request matching cost (see "Matching cost").

  • Choose the way to process the sub-request using the lowest estimated matching cost: a matching plugin or the Python Matcher service.

    • If the Python Matcher service was selected in the previous step, it processes the sub-request and returns the response to the Python Matcher Proxy service.

    • If a matching plugin was selected in the previous step, it processes the sub-request. If the sub-request was processed successfully, the response is returned to the Python Matcher Proxy service. Otherwise, the Python Matcher service tries to process it.

  • If the sub-request was successfully processed by a matching plugin that does not have access to all matching targets specified in the sub-request, the Python Matcher Proxy service enriches the data before the next step, see matching targets for details.

  • The Python Matcher Proxy service collects the results of all sub-requests, sorts them in the right order, and replies to the user.
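The decomposition step above can be sketched in Python (a simplified illustration, not the actual service code):

```python
from itertools import product


def split_into_subrequests(references, candidate_sets):
    """Decompose a matching request into sub-requests, one per
    (reference, candidate set) combination."""
    return [
        {"reference": ref, "candidates": cands}
        for ref, cands in product(references, candidate_sets)
    ]
```

Each resulting sub-request is then routed independently to a plugin or to the Python Matcher service, and the proxy merges the answers.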

Matching cost#

Matching cost is a float value expressing the complexity of processing a matching request with a plugin. Matching cost is necessary to choose the best way to process a matching request: the Python Matcher service or one of the plugins.

The matching cost value for the Python Matcher service is 100. If there are several plugins, the matching cost value is calculated for each plugin. The matching plugin with the lowest matching cost is used if its matching cost is lower than the Python Matcher matching cost. All requests whose plugin matching costs are greater than 100 are processed by the Python Matcher service. If there are no plugins, Python Matcher is used for request processing.
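The selection rule can be expressed as a small function (an illustrative sketch; the actual cost calculation is plugin-specific):

```python
PYTHON_MATCHER_COST = 100.0  # fixed cost of the default Python Matcher service


def choose_processor(plugin_costs):
    """Pick the plugin with the lowest matching cost; fall back to
    Python Matcher when no plugin beats its fixed cost of 100."""
    if plugin_costs:
        name, cost = min(plugin_costs.items(), key=lambda kv: kv[1])
        if cost < PYTHON_MATCHER_COST:
            return name
    return "python-matcher"
```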

Matching targets#

The Python Matcher service has access to all data of the matched entities, so it can process matching requests with all targets. Matching plugins may not have access to the data specified in the request targets. In this case, the Python Matcher Proxy service enriches the plugin response with the missing target data, e.g.:

  • The matching request specifies the targets face_id, user_data and similarity, and the chosen matching plugin does not have access to the user_data field:

    • The matching plugin matches the reference with the specified face_ids and returns to the Python Matcher Proxy a response that contains only pairs of face_id and similarity.

    • For every match candidate in the result, the Python Matcher Proxy service gets user_data from the main database by face_id and merges face_id and similarity with user_data.

    • A prepared response with the specified targets and face_id as a target is returned to the user. This mechanism requires that the plugin supports the corresponding entity ID as a target. If the plugin does not support the entity ID as a target, such a request is not sent to this plugin.

  • The matching request specifies the targets age and gender (all candidates are events' faces), and the chosen matching plugin has access only to the event_id, descriptor, and age fields:

    • The matching plugin matches the reference and returns to the Python Matcher Proxy a response that contains only event_id, age and similarity.

    • For every match candidate in the result, the Python Matcher Proxy service gets gender from the main database by event_id and merges it with the result, after which it drops the non-required event_id and similarity from the response.

    • A prepared response with the specified targets and event_id as a target is returned to the user. This mechanism requires that the plugin supports the corresponding entity ID as a target. If the plugin does not support the entity ID as a target, such a request is not sent to this plugin.
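The enrichment step can be sketched as follows (an illustration only; `lookup` stands in for the main database):

```python
def enrich_matches(matches, lookup, id_field, missing_targets):
    """Merge plugin match results (e.g. pairs of face_id and similarity)
    with target fields the plugin has no access to, fetched by entity ID."""
    return [
        {**m, **{t: lookup[m[id_field]][t] for t in missing_targets}}
        for m in matches
    ]
```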

The workflow of matching plugins is shown below:

Matching plugin workflow
Matching plugin workflow

Plugin data source#

To speed up request processing, each matching plugin may use a separated data source instead of the default one (Events DB, Faces DB, or Attributes DB (see the "Database description" section)), for example, use a separate database, a new table in the existing database, in-memory cache, etc.

For more information about matching plugins, see the developer manual.

Plugins usage#

Adding plugins to the directory manually#

This method can be used when the plugin does not require additional dependencies that are not provided in the service Docker container.

There are two steps required to use a plugin with the service in Docker container:

  • Add the plugin file to the container.
  • Specify the plugin usage in the container configurations.

When starting the container, you need to forward the plugin file to the plugins folder of the specific service. For example, for the Remote SDK service it is the /srv/luna_remote_sdk/plugins folder.

This can be done in any convenient way. For example, you can mount the folder with plugins to the required service directory during service launching (see service launching commands in the installation manual):

You should add the following volume if all the required plugins for the service are stored in the "/var/lib/luna/remote_sdk/plugins" directory:

-v /var/lib/luna/remote_sdk/plugins:/srv/luna_remote_sdk/plugins/ \

The command is given for the case of manual service launching.

Next, you should add the filename(s) to the "LUNA_REMOTE_SDK_ACTIVE_PLUGINS" configuration in the Configurator service.

[   
   "plugin_1",
   "plugin_2",
   "plugin_3"   
]

The list should contain filenames without extension (.py).
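For illustration, the configuration values can be derived from the plugin filenames by stripping the extension:

```python
from pathlib import Path


def plugin_names(filenames):
    """Turn plugin filenames into the module names expected by the
    LUNA_REMOTE_SDK_ACTIVE_PLUGINS list (extension stripped)."""
    return [Path(name).stem for name in filenames]
```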

After completing these steps, LP will automatically use the plugin(s).

More information about plugins for a specific service can be found in the API development manual.

Building new Docker container with plugin#

This method is used when additional dependencies are required for the plugin or when a container with the plugin is required for production usage.

You should create your own Docker container based on the base service container.

Add "Dockerfile" with the following structure to your CI:

FROM dockerhub.visionlabs.ru/luna/luna-remote-sdk:v.0.14.0
USER root
...
USER luna

  • FROM specifies the address of the base service container to be used.
  • USER root switches privileges to the root user to perform the following actions. Then the commands for adding the plugin and its dependencies should be listed. They are not given in this manual; check the Docker documentation.
  • USER luna switches the user back to "luna" after all the commands are executed.

Add the plugin filename to the "LUNA_REMOTE_SDK_ACTIVE_PLUGINS" configuration in the Configurator service.

You can:

  • Update settings manually in the Configurator service as described above.

  • Create a dump file with LP plugin settings and add them to the Configurator service after its launch.

An example of the dump file with the Remote SDK plugin settings is given below.

{
    "settings":[
        {
            "value": [   
                        "plugin_1",
                        "plugin_2",
                        "plugin_3"   
                     ],
            "description": "list active plugins",
            "name": "LUNA_REMOTE_SDK_ACTIVE_PLUGINS",
            "tags": []
        },
    ]
}

Then the file is applied using the following command. In this example, the file is stored in "/var/lib/luna/" and the dump filename is "luna_remote_sdk_plugin.json".

docker run \
-v /var/lib/luna/luna_remote_sdk_plugin.json:/srv/luna_configurator/used_limitations/luna_remote_sdk_plugin.json \
--network=host \
--rm \
--entrypoint=python3 \
dockerhub.visionlabs.ru/luna/luna-configurator:v.2.2.69 \
./base_scripts/db_create.py --dump-file /srv/luna_configurator/used_limitations/luna_remote_sdk_plugin.json

Monitoring#

There are several monitoring methods in the LUNA PLATFORM:

Send data to InfluxDB#

Starting with version 5.5.0, LUNA PLATFORM provides a possibility to use InfluxDB of version 2.

If necessary, you can migrate from version 1 to version 2 using the built-in tools. See the Influx documentation.

To work with InfluxDB, you need to register with a username and password and specify the bucket name, organization name and token. All this data is set when starting the InfluxDB container using environment variables.

To use monitoring, the bucket, organization and token fields in the settings of each service must contain exactly the same values that were specified when launching the InfluxDB container. For example, if the following settings were used when starting the InfluxDB container...:

-e DOCKER_INFLUXDB_INIT_BUCKET=luna_monitoring \
-e DOCKER_INFLUXDB_INIT_USERNAME=luna \
-e DOCKER_INFLUXDB_INIT_PASSWORD=password \
-e DOCKER_INFLUXDB_INIT_ORG=luna \
-e DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=kofqt4Pfqjn6o \

... then the following parameters should be specified in the settings of each service:

"influxdb": {
    "organization": "luna",
    "token": "kofqt4Pfqjn6o",
    "bucket": "luna_monitoring",

By default, the values for the container environment variables and the values of the service settings are the same. This means that if the LUNA PLATFORM is launched using the installation manual or the Docker Compose script, monitoring will be automatically enabled.

Login and password are used to access the InfluxDB user interface.

It is also possible to launch InfluxDB on a separate server. The address of the server with InfluxDB must be specified in the host and port parameters in the service settings.

See other settings for InfluxDB on the example of the API service in the section "LUNA_MONITORING".

Data being sent#

There are two types of events that are monitored: "request" (all requests) and "error" (failed requests only).

Every event is a point in the time series. For the API service, the point is represented using the following data:

  • Series name (requests or errors)
  • Timestamp of the request start
  • Tags
  • Fields

For other services, the set of event types may differ. For example, the Remote SDK service also collects data on SDK usage, estimations, and licensing.

A tag is indexed data in the storage. It is represented as a dictionary, where:

  • Keys — String tag names.
  • Values — String, integer or float.

A field is non-indexed data in the storage. It is represented as a dictionary, where:

  • Keys — String field names.
  • Values — String, integer or float.

Below is an example of tags and fields for the API service. These tags are unique for each service. You can find information about monitoring a specific service in the relevant documentation:

Saving data for requests is triggered on every request. Each point contains data about the corresponding request (execution time, etc.).

  • Tags
Tag name Description
service Always "luna-api".
account_id Account ID or none.
route Concatenation of a request method and a request resource (POST:/extractor).
status_code HTTP status code of the response.
  • Fields
Fields Description
request_id Request ID.
execution_time Request execution time.

Saving data for errors is triggered when a request fails. Each point contains "error_code" of LUNA error.

  • Tags
Tag name Description
service Always "luna-api".
account_id Account ID or none.
route Concatenation of a request method and a request resource (POST:/extractor).
status_code HTTP status code of the response.
error_code LUNA PLATFORM error code.
  • Fields
Fields Description
request_id Request ID.

Every handler can add additional tags or fields. For example, the handler of the "/handlers/{handler_id}/events" resource adds the handler_id tag.
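For illustration, a point like the ones described above can be serialized to InfluxDB line protocol (a simplified sketch that skips the escaping rules for special characters):

```python
def to_line_protocol(series, tags, fields, ts_ns):
    """Format one monitoring point in InfluxDB line protocol:
    measurement,tag=value,... field=value,... timestamp"""
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))

    def fmt(v):
        # String field values are quoted; numbers are written as-is.
        return f'"{v}"' if isinstance(v, str) else str(v)

    field_part = ",".join(f"{k}={fmt(v)}" for k, v in sorted(fields.items()))
    return f"{series},{tag_part} {field_part} {ts_ns}"
```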

View monitoring data#

You can use the InfluxDB GUI to view monitoring data.

  • Go to the InfluxDB GUI <server_ip>:<influx_port>. The default port is 8086. The default login data is luna/password.

  • Select the "Explore" tab.

  • Select a way to display information in the drop-down list (graph, histogram, table, etc.).

  • Select a bucket at the bottom of the page. By default — luna_monitoring.

  • Filter the necessary data.

  • Click "Submit".

InfluxDB version 2 also enables you to visualize monitoring data using the "LUNA Dashboards (Grafana)" tool.

Requests and estimations statistics gathering#

LUNA PLATFORM gathers the number of completed requests and estimations per month based on monitoring data if statistics gathering is enabled. Statistics gathering works only when monitoring is enabled and InfluxDB version 2.0.8 or later is installed.

To get statistics, use the "/luna_sys_info" resource or go to the Admin service GUI to the "help" tab and click "Get LUNA PLATFORM system info". The necessary information is contained in the "stats" section.

This section contains two fields — "estimators_stats" and "routes_stats".

The first field contains a list of performed estimations. Three fields are displayed for each estimation:

  • "name" — Name of estimation performed (for example, estimate_emotions).
  • "count" — Total number of completed estimations of the same type.
  • "month" — Month for which gathering was made (for example, 2021-09).

The second field contains a list of services to which requests were made. Five fields are displayed for each service:

  • "service" — Name of service (for example, luna-api).
  • "route" — Method and request (for example, GET:/version).
  • "month" — Month for which gathering was made.
  • "errors" — Number of requests performed with a specific error (for example, [ { "count": 1, "error_code": "12012" } ]).
  • "request_stats" — Number of successful requests (for example, [ { "count": 1, "status_code": "200" } ]).

The information is impersonal and contains only quantitative data.
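The roll-up behind "routes_stats" can be approximated as follows (an illustrative sketch, not the actual InfluxDB task):

```python
from collections import Counter


def aggregate_requests(points):
    """Roll monitoring points up into per-route monthly counts,
    similar in spirit to the "routes_stats" section."""
    counts = Counter(
        (p["service"], p["route"], p["month"], p["status_code"]) for p in points
    )
    return [
        {
            "service": s,
            "route": r,
            "month": m,
            "request_stats": [{"count": c, "status_code": code}],
        }
        for (s, r, m, code), c in sorted(counts.items())
    ]
```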

Statistics are gathered in InfluxDB based on data from the "luna_monitoring" bucket and stored in the "luna_monitoring_aggregated" bucket. Both buckets are created in InfluxDB. Do not delete data from the "luna_monitoring_aggregated" bucket, otherwise it will be impossible to get statistics.

Statistics are gathered once a day, so they are not displayed immediately after the LP is launched.

Tasks for gathering statistics can be found in the InfluxDB GUI on the "Tasks" tab. There you can manually start their performing.

To enable this functionality, run the python3 ./base_scripts/influx2_cli.py create_usage_task --luna-config http://127.0.0.1:5070/1 command after starting the Admin service (see the installation manual). The command automatically creates the necessary bucket "luna_monitoring_aggregated". If this command is not performed, the response "/luna_sys_info" will not display statistics.

If necessary, you can disable statistics gathering by deleting or disabling the corresponding tasks on the "Tasks" tab in the InfluxDB GUI.

Export metrics in Prometheus format#

Each LUNA PLATFORM service can collect and save metrics in Prometheus format in the form of time series data that can be used to track the behavior of the service. Metrics can be integrated into the Prometheus monitoring system to track performance. See Prometheus official documentation for more information.

By default, the collection of metrics is disabled. The collection of metrics is enabled in the "LUNA_SERVICE_METRICS" section.

Note that all metric data is reset when the service is shut down.

Type of metrics#

Two types of metrics are available:

  • Counters, which increase with each event.
  • Cumulative histograms, which are used to measure the distribution of duration or size of events.

A cumulative histogram is a mapping that counts the cumulative number of observations in all of the bins up to the specified bin. See description in Wikipedia.

The following metrics of type counters are available:

  • request_count_total — Total number of requests
  • request_errors_total — Total number of errors

Each of them has at least two labels for sorting:

  • status_code (or error_code for error metrics)
  • path — Path consisting of a request method and an endpoint route.

Labels are key-value pairs that are assigned to metrics.

If necessary, you can add custom label types by specifying the pair tag_name=tag_value in the "extra_labels" parameter.

Note that the pair tag_name=tag_value will be added to each metric of the LUNA PLATFORM service.

A special manager distributes all requests passing through the service among the counters using these labels. This ensures that two successful requests sent to different endpoints, or to the same endpoint but with different status codes, are delivered to different metrics.

Unsuccessful requests are counted in both the request_count_total and request_errors_total metrics.
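The distribution by label values can be modeled with a counter keyed by (path, status_code) pairs (an illustrative sketch, not the service implementation):

```python
from collections import Counter

# Each distinct (path, status_code) label combination gets its own counter,
# so requests to different endpoints, or with different status codes,
# increment different metrics.
request_count_total = Counter()


def observe(method, route, status_code):
    request_count_total[(f"{method}:{route}", str(status_code))] += 1
```

Replaying the example from the next section (one /healthcheck request, three /docs/spec requests with one 301 redirect) yields the same counter values shown there.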

The requests metric of the cumulative histogram type tracks the duration of requests to the service. The following intervals (buckets) are defined for the histogram, into which the measurements fall:

  • 0.0001
  • 0.00025
  • 0.0005
  • 0.001
  • 0.0025
  • 0.005
  • 0.01
  • 0.025
  • 0.05
  • 0.075
  • 0.1
  • 0.25
  • 0.5
  • 0.75
  • 1.0
  • 2.5
  • 5.0
  • 7.5
  • 10.0
  • Inf

In this way the range of request times is broken down into several intervals, ranging from very fast requests (0.0001 seconds) to very long requests (Inf, infinity). Histograms also have labels to categorize the data, such as status_code for the status of a request or route to indicate the request route.
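The cumulative bucket semantics can be reproduced in a few lines (illustration only): for each bucket bound `le`, the bucket counts all observations at or below that bound.

```python
BUCKETS = [0.0001, 0.00025, 0.0005, 0.001, 0.0025, 0.005, 0.01, 0.025,
           0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1.0, 2.5, 5.0, 7.5, 10.0,
           float("inf")]


def cumulative_counts(durations):
    """For each bucket bound, count how many observed durations fall
    at or below it -- the cumulative semantics of a Prometheus histogram."""
    return {le: sum(1 for d in durations if d <= le) for le in BUCKETS}
```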

Examples

If you send one request to the /healthcheck resource, followed by three requests to the /docs/spec resource, one of which will be redirected (response status code 301), then when executing the request to the /metrics resource, the following result will be displayed in the response body:

# HELP request_count_total Counter of requests
# TYPE request_count_total counter
request_count_total{path="GET:/docs/spec",status_code="200"} 2.0
request_count_total{path="GET:/docs/spec",status_code="301"} 1.0
request_count_total{path="GET:/healthcheck",status_code="200"} 1.0

If you send one invalid POST request to the /handlers resource, then when executing the request to the /metrics resource, the following result will be displayed in the response body:

# HELP request_count_total Counter of requests
# TYPE request_count_total counter
request_count_total{path="POST:/handlers",status_code="401"} 1.0
# HELP request_errors_total Counter of request errors
# TYPE request_errors_total counter
request_errors_total{error_code="12010",path="POST:/handlers"} 1.0
# HELP requests Histogram of request time metrics
# TYPE requests histogram
requests_sum{route="GET:/docs/spec",status_code="200"} 0.003174567842297907
requests_bucket{le="0.0001",route="GET:/docs/spec",status_code="200"} 0.0
requests_bucket{le="0.00025",route="GET:/docs/spec",status_code="200"} 0.0
requests_bucket{le="0.0005",route="GET:/docs/spec",status_code="200"} 0.0
requests_bucket{le="0.001",route="GET:/docs/spec",status_code="200"} 1.0
...
requests_count{route="GET:/docs/spec",status_code="200"} 2.0
requests_sum{route="GET:/docs/spec",status_code="301"} 0.002381476051209132

Configuring metrics collection for Prometheus#

Prometheus must be configured to collect LUNA PLATFORM metrics.

Example Prometheus configuration for collecting LP service metrics:

  - job_name: "luna-api"
    static_configs:
      - targets: ["127.0.0.1:5000"]
  ...

  - job_name: "luna-configurator"
    static_configs:
      - targets: ["127.0.0.1:5070"]

See the official documentation for an example of running Prometheus.

Prometheus dashboards have already been created in the LUNA Dashboards service.

LUNA Dashboards#

The LUNA Dashboards tool is intended to visualize monitoring data. LUNA Dashboards is based on the Grafana web application and creates a set of dashboards for analyzing the state of individual services, as well as two summary dashboards that can be used to evaluate the state of the system.

Use "http://IP_ADDRESS:3000" to go to the Grafana web interface.

API service dashboard when starting testing
API service dashboard when starting testing

A data source is configured in Grafana; through it, Grafana communicates with the InfluxDB instance that receives monitoring data during the operation of all LP services.

LUNA Dashboards workflow
LUNA Dashboards workflow

LUNA Dashboards can be useful:

  • To monitor the state of the system.
  • For error analysis.
  • To get statistics on errors.
  • To analyze the load on individual services and on the platform as a whole, the load by days of the week, by time of day, etc.
  • To analyze statistics of request execution, i.e. what resources account for what proportion of requests for the entire platform.
  • To analyze the dynamics of request execution time.
  • To evaluate the average value of the execution time of requests for a specific resource.
  • To analyze Prometheus metrics.
  • To analyze changes in the indicator over time.

After installing the dashboards (see below), the "luna_platform_5" directory becomes available in the Grafana web interface, which contains the following dashboards:

  • Luna Platform Heatmap.
  • Luna Platform Summary.
  • Dashboards for individual services.
LUNA Dashboards structure
LUNA Dashboards structure

Luna Platform Heatmap enables you to evaluate the load on the system without reference to a specific resource. The statistics show system activity at a certain time.

Luna Platform Summary enables you to get statistics on requests for all services in one place, as well as evaluate graphs of RPS (Requests Per Second).

Dashboards for individual services enable you to get information about requests to individual resources, errors and status codes for each service. Such a dashboard displays not load data but artificially generated data for a selected time interval.

The following software is required for Grafana dashboard utilization:

  • InfluxDB 2.0 (the currently used version is 2.0.8-alpine)
  • Grafana (the currently used version is 8.5.20)

InfluxDB and Grafana are already included in the package. You can use your own Grafana installation or install it manually.

Manual installation of LUNA dashboards#

Note: Below are the steps to manually install Grafana dashboards. Dashboards can also be automatically launched using a special command in the section "Monitoring and logs visualization using Grafana" of the installation manual.

Plugin installation

In addition to built-in Grafana plugins, dashboards also use a piechart plugin. Use the grafana-cli tool to install the piechart-panel plugin from the command line:

grafana-cli plugins install grafana-piechart-panel

If necessary, you can use the archive "grafana-piechart-panel.zip" in "/var/lib/luna/current/extras/utils/".

A restart is required to apply the plugin:

sudo service grafana-server restart

Launch dashboards

Dashboards can be launched manually or using a special Grafana container with dashboards called luna-dashboards:

  • Launching using a special Grafana container with dashboards is described in the installation manual.

  • Manual launch is described further in this document.

Install Grafana. An example of the command is given below:

docker run \
--restart=always \
--detach=true \
--network=host \
--name=grafana \
-v /etc/localtime:/etc/localtime:ro \
-e "GF_INSTALL_PLUGINS=grafana-piechart-panel" \
dockerhub.visionlabs.ru/luna/grafana:8.5.20

If necessary, in the environment variable "GF_INSTALL_PLUGINS" you can specify the path to the archive "/var/lib/luna/current/extras/utils/grafana-piechart-panel.zip".

The scripts for Grafana plugins installation can be found in "/var/lib/luna/current/extras/utils/".

Install Python version 3.7 or later before launching the following script. The packages are not provided in the distribution package and their installation is not described in this manual.

Go to the luna dashboards directory.

cd /var/lib/luna/current/extras/utils/luna-dashboards_linux_rel_v.*

Create a virtual environment.

python3.7 -m venv venv

Activate the virtual environment.

source venv/bin/activate

Install luna dashboards file.

pip install luna_dashboards-*-py3-none-any.whl

Go to the following directory.

cd luna_dashboards

The "luna_dashboards" folder contains the configuration file "config.conf", which includes the settings for Grafana, InfluxDB and monitoring periods. By default, the file already includes the default settings, but you can change the settings use "vi config.conf".

Run the following script to create dashboards.

python create_dashboards.py

Deactivate virtual environment.

deactivate

Use "http://IP_ADDRESS:3000" to go to the Grafana web interface when the Grafana and InfluxDB containers are running.

In the upper left corner, select the "General" button, then expand the "luna_platform_5" folder and select the necessary dashboard.

Grafana Loki#

Grafana Loki is a log aggregation system that enables you to flexibly work with LUNA PLATFORM logs in Grafana.

With Grafana Loki, you can perform the following tasks:

  • Collecting LUNA PLATFORM logs.
  • Search by LUNA PLATFORM logs.
  • Visualization of LUNA PLATFORM logs.
  • Extraction of numeric metrics from LUNA PLATFORM logs.
  • Other tasks.

See the detailed information about the capabilities of Grafana Loki in the official documentation: https://grafana.com/oss/loki/.

Grafana Loki is included in the LUNA PLATFORM distribution.

To launch Grafana Loki, the following software is required:

  • Launched Grafana with configured Loki data source in Grafana (see "LUNA Dashboards" section).
  • Launched Promtail log delivery agent (see "Promtail" section below).

Thus, the following chain of actions is performed in Grafana to work with LUNA PLATFORM logs:

Grafana Loki workflow
Grafana Loki workflow

Commands for launching Grafana and Promtail are given in the installation manual. The Grafana container from the LUNA PLATFORM distribution already has a Loki data source configured.

You can also launch Grafana and Promtail by executing an additional Docker Compose script after the main Docker Compose script (see "Deployment using Docker Compose" document).

Promtail#

A specially configured Promtail agent is used to deliver LUNA PLATFORM logs to Grafana Loki. Like Grafana Loki, the Promtail agent is included in the LUNA PLATFORM distribution.

A configured Promtail settings file is available in the LUNA PLATFORM distribution, which provides the ability to filter by the following tags:

  • LP logging level
  • LP services
  • URI
  • LP status codes

Some additional labels (for example, LP service version) can also be specified using the client.external-labels argument in the Promtail launch command.

A derived field is also configured in Grafana Loki to search for a specific request ID in the logs.

Databases#

Manual creation of services databases in PostgreSQL#

This section describes the commands required to configure an external PostgreSQL instance for working with LP services. External means that you already have a working DB and want to use it with LP.

Commands for Oracle are not listed in this documentation.

It is necessary to specify an external database in LP service configurations.

For Faces and Events services it is necessary to add additional VLMatch functions to the used database. The VLMatch library must first be compiled and migrated to PostgreSQL, and then the VLMatch function must be added to the Events and Faces databases. See "Build VLMatch for PostgreSQL", "Add VLMatch function for Faces DB in PostgreSQL" and "Add VLMatch function for Events DB in PostgreSQL" for details.

PostgreSQL user creation#

Create a database user.

runuser -u postgres -- psql -c 'create role luna;'

Assign a password to the user.

runuser -u postgres -- psql -c "ALTER USER luna WITH PASSWORD 'luna';"

Configurator DB creation#

It is assumed that the DB user is already created.

Create the database for the Configurator service.

runuser -u postgres -- psql -c 'CREATE DATABASE luna_configurator;'

Grant privileges to the database user.

runuser -u postgres -- psql -c 'GRANT ALL PRIVILEGES ON DATABASE luna_configurator TO luna;'

Allow user to authorize in the DB.

runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'

Accounts DB creation#

It is assumed that the DB user is already created.

Create the database for the Accounts service.

runuser -u postgres -- psql -c 'CREATE DATABASE luna_accounts;'

Grant privileges to the database user.

runuser -u postgres -- psql -c 'GRANT ALL PRIVILEGES ON DATABASE luna_accounts TO luna;'

Allow user to authorize in the DB.

runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'

Handlers DB creation#

It is assumed that the DB user is already created.

Create the database for the Handlers service.

runuser -u postgres -- psql -c 'CREATE DATABASE luna_handlers;'

Grant privileges to the database user.

runuser -u postgres -- psql -c 'GRANT ALL PRIVILEGES ON DATABASE luna_handlers TO luna;'

Allow user to authorize in the DB.

runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'

Backport 3 DB creation#

It is assumed that the DB user is already created.

Create the database for the Backport 3 service.

runuser -u postgres -- psql -c 'CREATE DATABASE luna_backport3;'

Grant privileges to the database user.

runuser -u postgres -- psql -c 'GRANT ALL PRIVILEGES ON DATABASE luna_backport3 TO luna;'

Allow user to authorize in the DB.

runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'

Faces DB creation#

It is assumed that the DB user is already created.

Create the database for the Faces service.

runuser -u postgres -- psql -c 'CREATE DATABASE luna_faces;'

Grant privileges to the database user.

runuser -u postgres -- psql -c 'GRANT ALL PRIVILEGES ON DATABASE luna_faces TO luna;'

Allow user to authorize in the DB.

runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'

Events DB creation#

It is assumed that the DB user is already created.

Create the database for the Events service.

runuser -u postgres -- psql -c 'CREATE DATABASE luna_events;'

Grant privileges to the database user.

runuser -u postgres -- psql -c 'GRANT ALL PRIVILEGES ON DATABASE luna_events TO luna;'

Allow user to authorize in the DB.

runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'

Tasks DB creation#

It is assumed that the DB user is already created.

Create the database for the Tasks service.

runuser -u postgres -- psql -c 'CREATE DATABASE luna_tasks;'

Grant privileges to the database user.

runuser -u postgres -- psql -c 'GRANT ALL PRIVILEGES ON DATABASE luna_tasks TO luna;'

Allow user to authorize in the DB.

runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'

Lambda DB creation#

It is assumed that the DB user is already created.

Create the database for the Lambda service.

runuser -u postgres -- psql -c 'CREATE DATABASE luna_lambda;'

Grant privileges to the database user.

runuser -u postgres -- psql -c 'GRANT ALL PRIVILEGES ON DATABASE luna_lambda TO luna;'

Allow user to authorize in the DB.

runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'

Video Manager DB creation#

It is assumed that the DB user is already created.

Create the database for the Video Manager service.

runuser -u postgres -- psql -c 'CREATE DATABASE luna_video_manager;'

Grant privileges to the database user.

runuser -u postgres -- psql -c 'GRANT ALL PRIVILEGES ON DATABASE luna_video_manager TO luna;'

Allow user to authorize in the DB.

runuser -u postgres -- psql -c 'ALTER ROLE luna WITH LOGIN;'
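The per-service steps above differ only in the database name, so they can be wrapped in a small shell helper. The following is a sketch only (the create_luna_dbs and run_psql functions are not part of the distribution); it assumes the luna role has already been created as described in "PostgreSQL user creation".

```shell
# Sketch: create all LUNA PLATFORM service databases in one pass.
# Assumes the "luna" role already exists (see "PostgreSQL user creation").
# Set DRY_RUN=1 to print the psql commands instead of executing them.
create_luna_dbs() {
    dbs="luna_configurator luna_accounts luna_handlers luna_backport3 \
luna_faces luna_events luna_tasks luna_lambda luna_video_manager"
    for db in $dbs; do
        run_psql "CREATE DATABASE $db;"
        run_psql "GRANT ALL PRIVILEGES ON DATABASE $db TO luna;"
    done
    # The LOGIN attribute only needs to be granted once.
    run_psql "ALTER ROLE luna WITH LOGIN;"
}

run_psql() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "runuser -u postgres -- psql -c \"$1\""
    else
        runuser -u postgres -- psql -c "$1"
    fi
}
```

Run DRY_RUN=1 create_luna_dbs first to review the generated commands, then run create_luna_dbs to execute them. Remove any databases from the list that your deployment does not use.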

Build VLMatch for PostgreSQL#

Note: The following instruction describes installation for PostgreSQL 16.

You can find all the required files for the VLMatch user-defined extension (UDx) compilation in the following directory:

/var/lib/luna/current/extras/VLMatch/postgres/

To compile the VLMatch UDx function, do the following:

  • Install RPM repository:
dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm
  • Install PostgreSQL:
dnf install postgresql16-server
  • Install the development environment:
dnf install postgresql16-devel
  • Install the gcc package:
dnf install gcc-c++
  • Install CMake. Version 3.5 or higher is required.

  • Open the make.sh script using a text editor. It includes paths to the currently used PostgreSQL version. Change the following values (if necessary):

SDK_HOME specifies the path to PostgreSQL home directory. Default value: /usr/include/postgresql/16/server;

LIB_ROOT specifies the path to PostgreSQL library root directory. Default value: /usr/lib/postgresql/16/lib.

  • Go to the make.sh script directory and run it:
cd /var/lib/luna/current/extras/VLMatch/postgres/ 
chmod +x make.sh
./make.sh

Move the generated VLMatchSource.so file to any convenient location (if PostgreSQL is running outside a container) or to the /srv directory in the PostgreSQL container.

The path to the library is specified when creating the function in the database (see below).

Add VLMatch function for Faces DB in PostgreSQL#

The Faces service requires an additional VLMatch function to be added to the database used. LUNA PLATFORM cannot perform descriptor matching calculations without this function.

The VLMatch library is compiled for a specific version of the database.

Do not use a library created for a different version of the database. For example, a library created for PostgreSQL version 12 cannot be used for PostgreSQL version 16.

This section describes how to create a function for PostgreSQL. The VLMatch library must be compiled and ported to PostgreSQL. See "Build VLMatch for PostgreSQL".

Add VLMatch function to Faces database#

The VLMatch function should be applied to the PostgreSQL DB.

  • Define the function inside the Faces database.
sudo -u postgres -h 127.0.0.1 -- psql -d luna_faces -c "CREATE FUNCTION VLMatch(bytea, bytea, int) RETURNS float8 AS '/srv/VLMatchSource.so', 'VLMatch' LANGUAGE C PARALLEL SAFE;"

Important: Here /srv/VLMatchSource.so is the full path to the compiled library. You must replace the path with the actual one.

  • Test the function by sending the following request to the service database.
sudo -u postgres -h 127.0.0.1 -- psql -d luna_faces -c "SELECT VLMatch('\x1234567890123456789012345678901234567890123456789012345678901234'::bytea, '\x0123456789012345678901234567890123456789012345678901234567890123'::bytea, 32);"

The result returned by the database must be "0.4765625".

Add VLMatch function for Events DB in PostgreSQL#

The Events service requires an additional VLMatch function to be added to the database used. LUNA PLATFORM cannot perform descriptor matching calculations without this function.

The VLMatch library is compiled for a specific version of the database.

Do not use a library created for a different version of the database. For example, a library created for PostgreSQL version 12 cannot be used for PostgreSQL version 16.

This section describes how to create a function for PostgreSQL. The VLMatch library must be compiled and ported to PostgreSQL. See "Build VLMatch for PostgreSQL".

Add VLMatch function to Events database#

The VLMatch function should be applied to the PostgreSQL DB.

Define the function inside the Events database.

sudo -u postgres -h 127.0.0.1 -- psql -d luna_events -c "CREATE FUNCTION VLMatch(bytea, bytea, int) RETURNS float8 AS 'VLMatchSource.so', 'VLMatch' LANGUAGE C PARALLEL SAFE;"

Important: Here VLMatchSource.so must be replaced with the full path to the compiled library (for example, /srv/VLMatchSource.so).

Test the function by sending the following request to the service database.

sudo -u postgres -h 127.0.0.1 -- psql -d luna_events -c "SELECT VLMatch('\x1234567890123456789012345678901234567890123456789012345678901234'::bytea, '\x0123456789012345678901234567890123456789012345678901234567890123'::bytea, 32);"

The result returned by the database must be "0.4765625".

VLMatch for Oracle#

Note: The following instruction describes installation for Oracle 21c.

You can find all the required files for the VLMatch user-defined extension (UDx) compilation in the following directory:

/var/lib/luna/current/extras/VLMatch/oracle

To compile the VLMatch UDx function, do the following:

  • Install the gcc package:
sudo yum install gcc gcc-c++
  • Change the SDK_HOME variable (the Oracle SDK root) in the make.sh script. The default value is $ORACLE_HOME/bin, so check that the $ORACLE_HOME environment variable is set:
vi /var/lib/luna/current/extras/VLMatch/oracle/make.sh
  • Go to the make.sh script directory and run it:
cd /var/lib/luna/current/extras/VLMatch/oracle
chmod +x make.sh
./make.sh
  • Define the library and the function inside the database (from database console):
CREATE OR REPLACE LIBRARY VLMatchSource AS '$ORACLE_HOME/bin/VLMatchSource.so';
CREATE OR REPLACE FUNCTION VLMatch(descriptorFst IN RAW, descriptorSnd IN RAW, length IN BINARY_INTEGER)
   RETURN BINARY_FLOAT 
AS
   LANGUAGE C
   LIBRARY VLMatchSource
   NAME "VLMatch"
   PARAMETERS (descriptorFst BY REFERENCE, descriptorSnd BY REFERENCE, length UNSIGNED SHORT, RETURN FLOAT);
  • Test function within call (from database console):
SELECT VLMatch(HEXTORAW('1234567890123456789012345678901234567890123456789012345678901234'), HEXTORAW('0123456789012345678901234567890123456789012345678901234567890123'), 32) FROM DUAL;

The result returned by the database must be "0.4765625".

Move the generated VLMatchSource.so file to any convenient location (if Oracle is running outside a container) or to the /srv directory in the Oracle container.

The path to the library is specified when creating the function in the database (see above).

Collect information for technical support#

For efficient and prompt problem solving, VisionLabs technical support needs LUNA PLATFORM service logs and additional information about the status of third-party services, license status, LUNA PLATFORM settings, and so on.

Collect the data described below and send it to VisionLabs specialists.

Collect services logs#

There are two ways to output logs in LUNA PLATFORM:

  • Standard log output (stdout).
  • Log output to a file.

The log output settings are specified in the settings of each service in the <SERVICE_NAME>_LOGGER section.

By default, logs are output only to standard output.

For more information about the LUNA PLATFORM logging system, see "Logging information" in the administrator manual.

Collect logs for all services. For example, you can collect logs for the last 10 minutes for all services using the commands below.

docker logs --since 10m luna-licenses > luna-licenses_log.txt
docker logs --since 10m luna-faces > luna-faces_log.txt
docker logs --since 10m luna-image-store > luna-image-store_log.txt
docker logs --since 10m luna-accounts > luna-accounts_log.txt
docker logs --since 10m luna-tasks > luna-tasks_log.txt
docker logs --since 10m luna-events > luna-events_log.txt
docker logs --since 10m luna-sender > luna-sender_log.txt
docker logs --since 10m luna-admin > luna-admin_log.txt
docker logs --since 10m luna-remote-sdk > luna-remote-sdk_log.txt
docker logs --since 10m luna-handlers > luna-handlers_log.txt
docker logs --since 10m luna-lambda > luna-lambda_log.txt
docker logs --since 10m luna-python-matcher > luna-python-matcher_log.txt
docker logs --since 10m luna-backport3 > luna-backport3_log.txt
docker logs --since 10m luna-backport4 > luna-backport4_log.txt
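Instead of listing every container by hand, the commands above can be wrapped in a small helper. This is a sketch (the collect_lp_logs function is not part of the distribution), and the container names passed to it must match your deployment.

```shell
# Sketch: collect "docker logs --since 10m" output for the given
# containers into one directory, ready to archive for technical support.
# DOCKER can be overridden for testing; it defaults to "docker".
collect_lp_logs() {
    outdir="$1"; shift
    mkdir -p "$outdir"
    for name in "$@"; do
        ${DOCKER:-docker} logs --since 10m "$name" \
            > "$outdir/${name}_log.txt" 2>&1
    done
}
```

For example: collect_lp_logs lp_logs luna-licenses luna-faces luna-events, then archive the directory with tar -czf lp_logs.tar.gz lp_logs before sending it to technical support.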

Collect additional information#

The following additional information must be collected:

  • LUNA PLATFORM version.

    The LUNA PLATFORM version can be found in the name of the archive with the delivery set. You can also find out the current version by going to the http://your_server_ip_address:5000/version page in your browser.

  • License status depending on the vendor:

    • HASP: information from the http://your_server_ip_address:1947/_int_/features.html and http://your_server_ip_address:1947/_int_/devices.html pages.
    • Guardant: information from the http://your_server_ip_address:3189/#/dongles/list and http://your_server_ip_address:3189/#/sessions pages.
  • Actual settings of LUNA PLATFORM.

    Up-to-date settings can be obtained by going to the page http://your_server_ip_address:5010/4/luna_sys_info and specifying the account login and password. The default login and password are root@visionlabs.ai / root. After the password is entered, a file in JSON format will be downloaded; submit this file to technical support.

  • Status of third-party services:

    • Docker: systemctl status docker
    • aksusbd: systemctl status aksusbd
    • grdcontrol: systemctl status grdcontrol
  • Status of LUNA PLATFORM containers.

    You can get a list of all containers using the docker ps -a command.

  • List of open ports.

    You can get a list of open ports using the ss -ltn command.

  • A list of registries in the Docker configuration that can be connected to without using a secure connection.

    You can get a list of registries using the cat /etc/docker/* command.

  • Firewall rules.

    Firewall rules can be obtained using the iptables-save command.

  • General system information:

  • CPU information: lscpu
  • Memory usage: free -h; lsmem
  • Disk space usage: df -h

  • Environment and server type.

    Specify the environment in which the system is running (test, production) and whether the server is virtual or physical.
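The system checks listed above can be gathered into a single file with a short script. This is a sketch (the collect_sysinfo helper is hypothetical); some commands may be missing or require root on a given host, in which case their error message is simply recorded instead of aborting the collection.

```shell
# Sketch: dump the general system information requested by technical
# support into one file. Errors (missing commands, insufficient
# permissions) are captured into the file rather than stopping the run.
collect_sysinfo() {
    out="${1:-lp_sysinfo.txt}"
    {
        echo "== CPU (lscpu) ==";              lscpu 2>&1
        echo "== Memory (free -h; lsmem) ==";  free -h 2>&1; lsmem 2>&1
        echo "== Disk (df -h) ==";             df -h 2>&1
        echo "== Open ports (ss -ltn) ==";     ss -ltn 2>&1
        echo "== Docker registries ==";        cat /etc/docker/* 2>&1
        echo "== Firewall (iptables-save) =="; iptables-save 2>&1
    } > "$out"
}
```

Run collect_sysinfo lp_sysinfo.txt and attach the resulting file, together with the service logs, to your request to technical support.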