General concepts#

The following sections explain general LP concepts, describe the existing services, and introduce the data types they create. They do not cover all requests, database structures, and other technical details.

Services#

LP consists of several services. All the services communicate via HTTP requests: a service receives a request and always returns a response.

You can find information about LUNA PLATFORM 5 architecture in the "Interaction of LP Services" section.

All the services can be divided into general and additional. The general services ensure the core operation of LP. All general services are enabled by default in the API service settings. If necessary, some of them can be disabled (see the "Optional services usage" section in the installation manual).

Most of the services have their own database or file storage.

General services#

| Service | Description | Database | Disableable |
| --- | --- | --- | --- |
| API | The main gateway to LP. Receives requests and distributes tasks to other LP services | - | No |
| Accounts | Manages accounts | PostgreSQL/Oracle | No |
| Handlers | Detects faces in images, extracts face properties, and creates samples. Extracts descriptors from samples. Extracts basic attributes from images. Creates and stores handlers | PostgreSQL/Oracle | No |
| Python Matcher | Performs matching tasks | - | No |
| Faces | Creates faces, lists, and attributes. Stores these objects in the database. Allows other services to receive the required data from the database | PostgreSQL/Oracle, Redis | No |
| Image Store | Stores samples, long task execution reports, created clusters, and additional metadata | Local storage/Amazon S3 | No |
| Configurator | Stores the configurations of all the services in a single place | PostgreSQL/Oracle | No |
| Licenses | Checks your license conditions and returns information about them | - | No |
| Events | Stores data on the generated events | PostgreSQL | Yes |
| Admin | Enables general administrative routines | PostgreSQL/Oracle | Yes |
| Tasks | Performs long tasks, such as garbage collection, extraction of descriptors with a new neural network version, and clustering | PostgreSQL/Oracle | Yes |
| Tasks Worker | Performs the internal work of the Tasks service | PostgreSQL/Oracle | Yes |
| Sender | Sends notifications about created events via web sockets | Redis | Yes |

Additional services#

Additional services extend the capabilities of the system. Launching additional services is optional.

| Service | Description | Database |
| --- | --- | --- |
| Backport 3 | Processes LUNA PLATFORM 3 requests using LUNA PLATFORM 5 | PostgreSQL/Oracle |
| Backport 4 | Processes LUNA PLATFORM 4 requests using LUNA PLATFORM 5 | - |
| User Interface 3 | Visually represents the features provided by the Backport 3 service. It does not include all the functionality available in LP 3 | - |
| User Interface 4 | Visually represents the features provided by the Backport 4 service. It does not include all the functionality available in LP 4 | - |
| Python Matcher Proxy | Manages matching requests and routes them to Python Matcher or matching plugins | - |

Below is a diagram of the interaction of the general and additional services.

Simplified interaction diagram of general and additional services

This diagram does not describe the communication lines between the services. For a full description of the interaction of services, see the "Interaction of LP services" section.

Third-party services#

There are several third-party services that are usually used with LP.

| Function | Description | Supported services |
| --- | --- | --- |
| Balancer | Balances requests between LP services when several instances of the same service are launched. For example, balancers can distribute requests between API services or between two LP contours upon scaling | NGINX |
| Monitoring system | Monitors and controls the number of processes launched for LP | Supervisor |
| Monitoring database | Stores monitoring data | InfluxDB |
| Monitoring visualization | A set of dashboards for monitoring visualization. You can evaluate the total load on the system or the load on individual services | Grafana, Grafana Loki |
| Log rotation service | All LP services write logs, and their size may grow dramatically. A log rotation service deletes outdated log files and frees disk space | Logrotate, etc. |

These services are not described in this document. See the corresponding documentation provided by the service vendors.

Authorization system#

Most requests to the LUNA PLATFORM require an account; the only exceptions are requests that do not involve authorization.

Accounts

The account is required to delimit the visibility areas of objects for a particular user. Each created account has its own unique "account_id".

The account can be created using the "create account" POST request to the API service, the "register account" request, or the Admin user interface. When creating the account, you must specify the following data: login (email), password, and account type.

The account type determines what data is available to the user.

  • user - an account type with which you can create objects and use only your own account data.
  • advanced_user - an account type with the same rights as "user", plus access to the data of all accounts. Access to the data of other accounts means the ability to receive data (GET requests), check its availability (HEAD requests), and perform matching requests using data from other accounts.
  • admin - an account type with the same rights as "advanced_user", plus access to the Admin service (see "Admin service" below).

Using the "Luna-Account-Id" header in the "create account" request, you can set the desired account ID.

Tokens

A token is linked to an existing account of any type and enables you to impose extended restrictions on the requests being made. For example, when creating a token, you can give the user permission only to create and modify lists and faces, or you can prevent the use of certain handlers by specifying their IDs.

The token and all its permissions are stored in the database and linked to the account by the "account_id" parameter.

When creating the token, you can set the following parameters:

  • expiration_time – expiration time of the token in RFC 3339 format. You can specify an infinite token expiration time using the value "null"
  • permissions – permissions that are available to the user
  • visibility_area – token visibility of data from other accounts
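
A hedged sketch of a "create token" request body with these three parameters is shown below. The resource path ("/tokens") and the permission values are assumptions for illustration; see the API reference manual for the exact schema.

```python
import requests

BASE_URL = "http://127.0.0.1:5000/6"  # assumed address and version prefix

# Illustrative payload; the exact permission schema is defined in the API reference.
token_payload = {
    "expiration_time": None,  # null means an infinite token lifetime
    "permissions": {
        "face": ["creation", "modification"],  # hypothetical permission values
        "list": ["creation", "modification"],
    },
    "visibility_area": "account",  # restrict the token to its own account data
}

response = requests.post(
    f"{BASE_URL}/tokens",  # assumed resource path
    json=token_payload,
    auth=("user@example.com", "strong-password"),  # BasicAuth of the linked account
)
print(response.json())
```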

Authorization types for accessing resources

There are three types of authorization available in LUNA PLATFORM:

  • BasicAuth. Authorization by login and password (set during account creation).
  • BearerAuth. Authorization by JWT token (issued after the token is created).
  • LunaAccountIdAuth. Authorization by "Luna-Account-Id" header, which specifies the "account_id" generated after creating the account (this method was adopted as the main one before version 5.30.0).

LunaAccountIdAuth authorization has the lowest priority compared to other methods and can be disabled using the "ALLOW_LUNA_ACCOUNT_AUTH_HEADER" setting in the "OTHER" section of the API service settings in the Configurator (enabled by default). In the OpenAPI specification the "Luna-Account-Id" header is marked with the word Deprecated.
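
The sketch below shows how the same request can be authorized with each of the three types; the resource path and address are placeholders.

```python
import requests

BASE_URL = "http://127.0.0.1:5000/6"  # assumed address and version prefix

# BasicAuth: login and password set during account creation.
r1 = requests.get(f"{BASE_URL}/faces",
                  auth=("user@example.com", "strong-password"))

# BearerAuth: JWT token issued after the "create token" request.
r2 = requests.get(f"{BASE_URL}/faces",
                  headers={"Authorization": "Bearer <jwt_token>"})

# LunaAccountIdAuth: deprecated header with the account ID (lowest priority).
r3 = requests.get(f"{BASE_URL}/faces",
                  headers={"Luna-Account-Id": "11111111-1111-4a11-8111-111111111111"})
```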

See detailed information about the LUNA PLATFORM 5 authorization system in the "Accounts, tokens and authorization types" section.

Approaches at work#

There are three main operations in LUNA PLATFORM:

  1. Detection is the process of finding a face or body in an image and normalizing the image (creating a sample) for further work. At the detection stage, face or body parameters are also estimated, e.g., emotions, gaze direction, upper clothing, etc.
  2. Extraction is the process of extracting gender and age from a face image and extracting a set of unique parameters of a face or body (a descriptor) from a sample for further matching.
  3. Matching is the process of comparing descriptors.

These operations are performed strictly one after the other. It is impossible to match face descriptors without first extracting them.

Basic LUNA PLATFORM operations

There are two main approaches when performing the operations described above.

Parallel performing of requests#

The first and main approach is to set the rules of detection, estimation, extraction, matching, etc. in a single handler object. After that, you create an event object, which produces a result based on all the rules specified in the handler. This approach is the most optimal from the point of view of business logic.

With this approach, the following actions are performed:

  • using the "create handler" request, a handler is created containing information about image processing rules;
  • in the "generate events" request, the received handler ID is specified, the processed image is attached and an event is generated containing information obtained by processing handler rules.

Thus, when an event is generated, detection, estimation, extraction, matching, saving to the database, and so on are performed simultaneously.
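
For illustration, a minimal sketch of the two requests of this approach follows. The policy fields are heavily simplified and the response field names are assumptions; the exact handler schema is given in the API reference manual.

```python
import requests

BASE_URL = "http://127.0.0.1:5000/6"            # assumed address and version prefix
AUTH = {"Authorization": "Bearer <jwt_token>"}  # any supported authorization type

# 1. "create handler": store the image processing rules in a handler object.
handler = {
    "detect_policy": {"detect_face": 1},
    "extract_policy": {"extract_face_descriptor": 1, "extract_basic_attributes": 1},
    "storage_policy": {"face_sample_policy": {"store_sample": 1}},
}
created = requests.post(f"{BASE_URL}/handlers", json=handler, headers=AUTH).json()
handler_id = created["handler_id"]  # assumed response field name

# 2. "generate events": the image is processed according to all handler rules at once.
with open("photo.jpg", "rb") as f:
    event = requests.post(
        f"{BASE_URL}/handlers/{handler_id}/events",
        data=f.read(),
        headers={**AUTH, "Content-Type": "image/jpeg"},
    )
print(event.json())
```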

"Parallel performing of requests" approach

Examples of common scenarios for using handlers:

  • registration of a reference descriptor with saving to the list;
  • biometric identification of faces on the list without saving;
  • saving faces identified in the list in the database;
  • saving events only for unique faces for later counting;
  • batch identification;
  • batch import.

For some scenarios, it may be necessary to pre-create certain objects in the system using separate requests. For example, to identify a set of faces by a list, you first need to create a list object, link the previously created face objects to it, and then use handler and event.

Advantages of the approach:

  • all the rules in one place;
  • flexible management of saving objects to databases, setting up save filters;
  • the ability to work with batches of images located in a ZIP archive, on an FTP server, in S3-like storage, etc.;
  • ability to send notifications via web sockets;
  • collecting statistics.

The classic way to use this approach is paired with the FaceStream module. FaceStream analyzes the video stream and sends the best shots from the video stream to LUNA PLATFORM for further processing. Using the specified handler, LUNA PLATFORM processes images according to the specified rules and saves objects to the specified databases.

Sequential performing of requests#

The second approach is sequential performing of requests: in one request you perform face detection and get its result, then use this result in an extraction request, and so on.

With this approach, the following actions are performed:

  • the "detect faces" request performs face detection and returns the detection results, including the sample ID;
  • the "extract attributes" request extracts the descriptor and basic attributes from the sample;
  • the "create face" request creates a face object and links the extracted attributes to it;
  • the "matching faces" request matches the resulting descriptors.

As a rule, this approach is used when working with faces, but if necessary, you can also make separate requests for bodies, for example, to match body descriptors using the "matching bodies" request.

"Sequential performing of requests" approach

This approach is recommended for use:

  • during the initial acquaintance with the system;
  • if you need to measure the speed of each stage of image processing separately;
  • if you need to implement a processing scenario that is difficult to implement using handlers and events.
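
To illustrate the sequential approach, the sketch below strings the requests together. Resource names follow the ones referenced in this document ("/detector", "/extractor", "/faces", "/matcher/faces"); the request and response schemas are simplified placeholders, so check the API reference manual for the exact formats.

```python
import requests

BASE_URL = "http://127.0.0.1:5000/6"            # assumed address and version prefix
AUTH = {"Authorization": "Bearer <jwt_token>"}

# 1. Detection: a sample is created and stored in the Image Store.
with open("photo.jpg", "rb") as f:
    detection = requests.post(f"{BASE_URL}/detector", data=f.read(),
                              headers={**AUTH, "Content-Type": "image/jpeg"}).json()
sample_id = "<face sample_id from the detection response>"

# 2. Extraction: temporary attributes (descriptor + basic attributes) from the sample.
extraction = requests.post(
    f"{BASE_URL}/extractor",
    params={"extract_descriptor": 1, "extract_basic_attributes": 1},
    json=[sample_id],  # body schema simplified for illustration
    headers=AUTH,
).json()
attribute_id = "<attribute_id from the extraction response>"

# 3. Create a face and link the temporary attributes to it before the TTL expires.
face = requests.post(f"{BASE_URL}/faces",
                     json={"attribute_id": attribute_id}, headers=AUTH).json()

# 4. Matching: compare the new face with the faces linked to a list
#    (the references/candidates schema is illustrative).
match = requests.post(
    f"{BASE_URL}/matcher/faces",
    json={
        "references": [{"type": "face", "id": "<face_id from step 3>"}],
        "candidates": [{"origin": "faces", "filters": {"list_id": "<list_id>"}}],
    },
    headers=AUTH,
)
print(match.json())
```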

More detailed information about these approaches will be provided below.

Detection#

LP searches for all faces and/or bodies in each incoming image.

Detection is performed using neural networks located in the Handlers container.

Detailed information about neural networks is described in the "Neural networks" section.

Either the source image or a special-format image obtained using VisionLabs software (a sample) can be submitted as input.

Image format requirements#

Images should be sent only in allowed formats. The image format is specified in the "Content-Type" header.

The following image formats are supported: JPG, PNG, BMP, PORTABLE PIXMAP, TIFF. Each format has its own advantages and is intended for specific tasks.

The most commonly used formats are PNG and JPG. Below is a table with their advantages and disadvantages:

| Format | Compression | Advantages | Disadvantages |
| --- | --- | --- | --- |
| PNG | Lossless | Better image processing quality | Larger image size |
| JPG | Lossy | Smaller image size | Worse image processing quality |

Thus, if you do not intend to save the source images in the Image Store, or if the Image Store has sufficient capacity, it is recommended to use the PNG format to get the best image processing results.

It is not recommended to send images that are too compressed, as the quality of face and body estimations and matching will be reduced.

LUNA PLATFORM 5 supports many color models (for example, RGB, RGBA, CMYK, HSV, etc.), however, when processing an image, all of them are converted to the RGB color model.

The image can also be encoded in Base64 format.

For more information about the source images, see "Source images".

Face detection and estimation process#

Below is the order of face detection in the image:

Face image processing order

Basic steps of face detection in images:

  • Face detection. The bounding box of the face (height, width, "X" and "Y" coordinates) and five landmarks of the face are determined.
  • Normalization. The image is centered in a certain way and cropped to the size of 250x250 pixels required for further work. Thus, all processed images look the same. For example, the left eye is always in a frame defined by some coordinates. Such a normalized image is called a sample. The sample is stored in the Image Store. After creating the sample, it is assigned a special identifier "sample_id", which is used for further image processing.

All objects created in LUNA PLATFORM are assigned identifiers. Sample - "sample_id", face - "face_id", event - "event_id", etc. Identifiers are the main way to transfer data between requests and services.

For more information about samples, see "Sample object".

  • Estimation. The following face parameters are estimated:
    • gaze direction (rotation angles for both eyes);
    • head pose (yaw, roll and pitch angles);
    • sixty-eight landmarks. The number of landmarks depends on which face parameters are to be estimated;
    • image quality (lightness, darkness, blurriness, illumination and specularity);
    • mouth attributes (overlap, presence of smile);
    • mask presence (medical or tissue mask on the face, mask is missing, mouth is occluded);
    • emotions (anger, disgust, fear, happiness, neutrality, sadness, surprise);
    • others (see the "Face parameters" section).

If necessary, you can configure filtering by head pose angles.

Body detection and estimation process#

Below is the order of body image detection:

Body image processing order

Basic steps of body detection in images:

  • Body detection. The bounding box of the body (height, width, "X" and "Y" coordinates) is determined.
  • Normalization. The image is centered in a certain way and cropped to the size of 128x250 pixels required for further work (otherwise, the principle of operation is similar to the process of normalization of the face).
  • Estimation. The following body parameters are estimated:
    • upper body (presence and color of headwear, color of upper clothing);
    • sleeve length (long sleeves, short sleeves, unknown);
    • lower body (type of lower garment - trousers, shorts, skirt, unknown; color of lower garment, presence and color of shoes);
    • accessories;
    • others (see the "Body parameters" section).

Services interaction during detection#

Below is a diagram of the interaction of services when performing detection.

Services interaction during detection

Requests to perform detection#

Below are the main requests where you can perform face or body detection.

Requests "create handler" and "generate events"#

These requests are used in the approach "Parallel performing of requests".

For the "create handler" request:

| Parameter name in request body | Response body | Saving to DB |
| --- | --- | --- |
| "detect_face" or "detect_body" of the "detect_policy" policy | Handler ID containing the detection rules | Handler information is stored in the Handlers database |

For the "generate events" request:

| Request body | Response body | Saving to DB |
| --- | --- | --- |
| Image | Detection result | The sample is stored in the Image Store; you can disable saving using the "store_sample" parameter. If the "store_event" parameter is enabled in the handler, the detection results are stored in the "face_detect_result" or "body_detect_result" table of the Events database |

It is possible to use the "multipart/form-data" schema in the request. Using this schema enables you to send multiple images at the same time, specify additional information for an event, and more. See "Use "multipart/form-data" schema when generating event" for more information.

Request "detect faces"#

The "detect faces" request is used in the approach "Sequential performing of requests". This request cannot be used for the body image.

| Request body | Response body | Saving to DB |
| --- | --- | --- |
| Image | Detection result | The sample is stored in the Image Store; it is impossible to disable saving |

In the request, you can send an image or specify the URL to the image. You can use a multipart/form-data schema to send several images in the request. Each of the sent images must have a unique name. The content-disposition header must contain the actual filename.

You can also specify bounding box parameters using the multipart/form-data schema. This enables you to specify a certain zone with a face in the image. You can specify the following properties for the bounding box (a request sketch follows the list below):

  • top left corner "x" and "y" coordinates,
  • height,
  • width.
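
A hedged sketch of such a multipart request follows; the part names are arbitrary but must be unique, and the filenames are passed in the content-disposition headers. A bounding box part could be added in the same way, but its exact field layout should be taken from the OpenAPI specification.

```python
import requests

BASE_URL = "http://127.0.0.1:5000/6"            # assumed address and version prefix
AUTH = {"Authorization": "Bearer <jwt_token>"}

# Each image part has a unique name; the filename goes into content-disposition.
files = [
    ("image1", ("photo1.jpg", open("photo1.jpg", "rb"), "image/jpeg")),
    ("image2", ("photo2.png", open("photo2.png", "rb"), "image/png")),
]

response = requests.post(f"{BASE_URL}/detector", files=files, headers=AUTH)
print(response.json())
```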

Request "sdk"#

The "sdk" request also allows face or body detection on the source image. Unlike other requests, its data cannot be used in the future. For more information, see "Sdk resource".

Requests to perform estimation#

The estimation of the parameters of faces and bodies is performed in conjunction with detection. Most often, the parameters for estimation have a name like estimate_<estimation_name>. For example, estimate_glasses.

The "Estimated data" section contains a list of all possible parameters that can be estimated in LUNA PLATFORM 5.

Detection results#

The following information is returned in responses to detection requests:

  • address to the stored face/body sample
  • id of the face/body sample
  • coordinates of the bounding box of the face/body
  • five landmarks of the face

All this data is used for further extraction and matching.

This information is supplemented depending on the enabled face/body estimation parameters.

You can read more about the detection of faces, bodies and the Handlers service in the "Handlers service" section.

Extraction#

In LUNA PLATFORM, the word "extraction" means:

  • extraction of descriptors and basic attributes (gender, age, ethnicity) of faces;
  • extraction of descriptors of bodies.

It is also possible to estimate a person's gender and age from the body image, but this method is less accurate and occurs at the estimation stage. In addition, it is recommended to estimate the basic attributes of the body together with the extraction of the basic attributes of the face.

Descriptors are sets of characteristics extracted from faces or bodies in images. A descriptor requires much less memory than the source image.

It is impossible to restore the source image of a face or body from the descriptor due to personal data security reasons.

Descriptors are used to match faces to faces and bodies to bodies. It is impossible to match two faces or bodies in the absence of extracted descriptors. For example, if you want to match a face from a database with an input face image, you need to extract the descriptor for that image.

Descriptor is extracted using a neural network located in the Handlers container.

Detailed information about neural networks is described in the "Neural networks" section.

See "Descriptor" section for more information on descriptors.

Extraction process#

Data extraction requires the following information obtained during detection:

  • sample of the face or body;
  • coordinates of the bounding box for the face or body;
  • five landmarks of the face.

The following is the order in which the extraction is performed:

Extraction order

* In most extraction requests, the descriptor is saved to the database without being returned in the response body (see "Requests to perform extraction").

Specifics of extracting data from faces#

The descriptor and basic attributes of the face derived from the same image are stored in the database as an attribute object. This object may include both descriptor and basic attributes, or only one of these entities.

It is possible to extract the descriptor and basic attributes using several samples of the same face at the same time. In this way, an aggregated descriptor and aggregated basic attributes can be obtained. Matching and basic attribute extraction using an aggregated descriptor are more accurate. Aggregation should be used when working with images received from webcams.

An aggregated descriptor can be obtained from images, but not from already created descriptors.

Detailed information about aggregation is described in the "Aggregation" section.

Temporary attributes#

After extracting the attributes using the appropriate requests (see "Requests to perform extraction"), the extracted data is stored in the Redis database as temporary attributes.

Temporary attributes have a TTL (time to live) and are removed from the database after a specified period. In the request, you can specify a period from 1 to 86400 seconds. The default TTL is 5 minutes.
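
As a sketch, the TTL can be set directly in the "extract attributes" request via the "ttl" query parameter (see "Attributes time to live" below); the body schema here is a simplified placeholder, and the address and version prefix are assumptions.

```python
import requests

BASE_URL = "http://127.0.0.1:5000/6"            # assumed address and version prefix
AUTH = {"Authorization": "Bearer <jwt_token>"}

# Keep the temporary attributes for one hour instead of the default 5 minutes.
response = requests.post(
    f"{BASE_URL}/extractor",
    params={"extract_descriptor": 1, "extract_basic_attributes": 1, "ttl": 3600},
    json=["<sample_id>"],  # body schema simplified for illustration
    headers=AUTH,
)
print(response.json())
```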

You can get information about a temporary attribute by its identifier until its TTL expires.

In order to save the attributes to the database, you need to create a face object and link the attributes to it. The face is created either by a separate request after the attributes are retrieved (the face is stored in the Faces DB) or during the generation of the event (the face is saved in the Faces DB or in the Events DB). You can create multiple faces and combine them into a list that can be used for matching.

See "Face object" and "List object" for details.

You can store attributes in an external database and use them in LP only when required. See "Create objects using external data".

For details on attributes, see "Attribute object".

Specifics of extracting data from bodies#

There are no attributes for bodies. All body information can only be stored as an event object. Accordingly, the extracted data is not stored in the Redis database. Information about bodies can only be saved to the database of the Events service when the corresponding option is enabled.

Just like faces, events can be aggregated. Detailed information about aggregation is described in the "Aggregation" section.

Services interaction during extraction#

Below is a diagram of the interaction of services when performing extraction.

Services interaction during extraction

Requests to perform extraction#

Below are the main requests where you can extract data from images of faces or bodies.

Request "create handler" and "generate events"#

These requests are used in the approach "Parallel performing of requests".

For the "create handler" request:

| Parameter name in request body | Response body | Saving to DB |
| --- | --- | --- |
| Face descriptor: "extract_face_descriptor" of "extract_policy" | Handler ID containing the extraction rules | Handler information is stored in the Handlers database |
| Body descriptor: "extract_body_descriptor" of "extract_policy" | Similar to the above | Similar to the above |
| Face basic attributes: "extract_basic_attributes" of "extract_policy" | Similar to the above | Similar to the above |

For the "generate events" request:

| Request body | Response body | Saving to DB |
| --- | --- | --- |
| Image | Extraction results | If the "store_event" parameter is enabled in the handler, the face/body descriptors are stored in the "face_descriptor" or "body_descriptor" tables of the Events database, and the basic attributes are stored in the "event" table. If the "store_face" parameter is enabled in the handler, a face is created and the corresponding face descriptor is stored in the "descriptor" table of the Faces database, and the basic attributes in the "attributes" field. If the "store_attribute" parameter is enabled in the handler, the face descriptor and basic attributes are stored in the Redis database for the TTL period |

It is possible to use the "multipart/form-data" schema in the request. Using this schema enables you to send multiple images at the same time, specify additional information for an event, and more. See "Use "multipart/form-data" schema when generating event" for more information.

Request "extract attributes"#

The "extract attributes" request is used in the "Sequential performing of requests" approach. This request cannot be used for body sample.

| Query parameters | Request body | Response body | Saving to DB |
| --- | --- | --- | --- |
| "extract_descriptor" and "extract_basic_attributes" | Image | Extraction results | Temporary attributes are stored in the Redis database and removed after the TTL period expires. When a face is created, the descriptor is stored in the "descriptor" table of the Faces database, and the basic attributes are stored in the "attributes" field |

To save information in the Faces database, you need to link attributes to a face using the "create face" request. Otherwise, after the TTL expires, the face's temporary attributes will be removed from the Redis database. It is not possible to save a face to the Events database using the "extract attributes" + "create face" request combination. To do this, you need to use the "Parallel performing of requests" approach (see above).

Request "sdk"#

The "sdk" request also enables you to extract basic attributes or descriptor. This request returns face or body descriptor in a specific format that can be used when matching "raw" descriptors. See "Sdk resource" for details.

Extraction results#

The following information is returned in responses to extraction requests:

  • identifier of the temporary face attribute (not returned if the "store_attribute" parameter of the "attribute_policy" is disabled in the handler)
  • address to the temporary face attribute stored in the Redis database (not returned if the "store_attribute" parameter of the "attribute_policy" is disabled in the handler)
  • face basic attributes
  • face/body descriptor quality
  • set of identifiers of samples for which the extraction was performed

Matching#

Given the presence of descriptors, LP enables you to search for similar faces or bodies by comparing a given descriptor with the descriptors stored in the Redis, Faces, and Events databases, or with descriptors passed directly.

The matching is done using the Python Matcher service. The matching response returns a degree of similarity in the range from 0 to 1. A high degree of similarity means that two descriptors belong to the same person.

There are two types of descriptors - face descriptor and body descriptor. Body descriptor matching is not as accurate as face descriptor matching. With a large number of candidate events, the probability of false determinations of the best matches is higher than with a large number of candidate faces.

The sources of matched objects are represented as references (the objects you want to compare) and candidates (the set of objects that you want to compare with). Each reference will be matched with each of the specified candidates.

Matching cannot be performed if there is no descriptor for any of the compared faces.

You can select the following objects as references and candidates.

References:

  • Attributes
  • Faces
  • External IDs of faces and events
  • Events (face or body)
  • Event track IDs
  • Descriptors

Candidates:

  • Faces
  • Events (face or body)
  • Attributes
  • Descriptors

The references are specified using IDs of the corresponding objects. If a non-existent reference is set (for example, a non-existent ID is set in the "event_id" field or the "face_id" field), the corresponding error is returned.

The candidates are specified using filters. Matching results are returned for the candidates that correspond to the specified filters. If none of the candidates corresponds to the filters (for example, a non-existent ID is set in the "event_ids" field or the "face_ids" field), there will be no matching result and no error returned. The result field will be empty.

Matching process#

Below is the process of performing matching using faces as candidates and references:

Matching process

* Descriptors can also be passed directly using "raw matching" request.

Service interaction during matching#

Below is a diagram of the interaction of services when performing matching.

Service interaction during matching

See "Matching diagrams" for more service interaction scenarios when performing a matching.

Filtering of matching results#

The results of the matching can be filtered. To filter, you need to specify the appropriate request parameters.

Several examples for events and faces filters are given below.

To filter events, you can:

  • Specify camera ("source" field) and a period ("create_time__gte" and "create_time__lt" or "end_time__gte" and "end_time__lt" fields);
  • Specify tags ("tags" field) for events as filters and perform matching for events with these tags only;
  • Specify geographical area and perform matching with the events created in this area only. You can specify direct location (city, area, district, street, house) or a zone specified using geographical coordinates.

To filter faces, you can:

  • Specify a list of external face IDs to perform matching with them;
  • Specify a list ID to perform matching by list.

See the "matcher" > "matching faces" and "matcher" > "human body matching" sections in the API service reference manual for details about all the available filters.

Requests to perform matching#

Below are the main requests where descriptors can be matched.

Requests "create handler" and "generate events"#

These requests are used in the approach "Parallel performing of requests".

For the "create handler" request:

| Policy name in request body | Response body | Saving to DB |
| --- | --- | --- |
| "match_policy" | Handler ID containing the matching rules | Handler information is stored in the Handlers database |

For the "generate events" request:

| Request body | Response body | Saving to DB |
| --- | --- | --- |
| Image | Matching results | If the "store_event" parameter is enabled in the handler, the match results are stored in the "face_match_result" or "event_match_result" tables of the Events database |

It is possible to use the "multipart/form-data" schema in the request. Using this schema enables you to send multiple images at the same time, specify additional information for an event, and more. See "Use "multipart/form-data" schema when generating event" for more information.

Requests "matching faces" and "matching bodies"#

The "matching faces" and "matching bodies" requests are used in the "Sequential performing of requests".

| Request body | Response body | Saving to DB |
| --- | --- | --- |
| Candidates, references, and filters are specified in the "candidates" and "references" parameters | Matching results | No |
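
Below is a hedged sketch of a "matching faces" request body with one face as a reference and candidates filtered by a list; the exact schema of "references" and "candidates" is defined in the API reference manual, and the field names here are illustrative.

```python
import requests

BASE_URL = "http://127.0.0.1:5000/6"            # assumed address and version prefix
AUTH = {"Authorization": "Bearer <jwt_token>"}

# Illustrative body: one reference face, candidates restricted to one list.
body = {
    "references": [{"type": "face", "id": "<face_id>"}],
    "candidates": [{
        "origin": "faces",
        "filters": {"list_id": "<list_id>"},  # match only faces linked to this list
        "limit": 3,                           # top-3 most similar candidates
    }],
}

response = requests.post(f"{BASE_URL}/matcher/faces", json=body, headers=AUTH)
print(response.json())  # similarity values lie in the range from 0 to 1
```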

Request "raw matching"#

You can set descriptors as references and candidates in SDK, Raw and XPK files using the "raw matching" request.

All the descriptor data is provided with the request, hence you can use this request when you need to match descriptors that are not stored in LUNA PLATFORM databases.

In LUNA PLATFORM, it is possible to extract descriptor in SDK format (see the "Descriptor formats" section).

Matching results#

The following information is returned in responses to match requests:

  • information about the reference
  • information about the candidate
  • the degree of similarity of each candidate with the reference
  • filtered candidates

For more information about descriptor matching services, see the "Matching services" section.

Verification#

You can use the "/verifiers" resource to create a special handler for verification. It includes several policies for the incoming images processing. See the "Verifiers description" section for details on the verifier.

Generally this request is used to match one given object with the incoming object. Use other matching requests for identification tasks.
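
A minimal sketch of the verification flow follows, assuming default verifier policies; the query parameter name and response field names are simplified assumptions and should be checked against the API reference manual.

```python
import requests

BASE_URL = "http://127.0.0.1:5000/6"            # assumed address and version prefix
AUTH = {"Authorization": "Bearer <jwt_token>"}

# 1. "create verifier": policies omitted here, so deployment defaults apply.
verifier = requests.post(f"{BASE_URL}/verifiers", json={}, headers=AUTH).json()
verifier_id = verifier["verifier_id"]  # assumed response field name

# 2. Verify an incoming image against a stored face.
with open("photo.jpg", "rb") as f:
    result = requests.post(
        f"{BASE_URL}/verifiers/{verifier_id}/verifications",
        params={"face_ids": "<face_id>"},  # assumed query parameter name
        data=f.read(),
        headers={**AUTH, "Content-Type": "image/jpeg"},
    )
print(result.json())
```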

Matching a large set of descriptors#

Sometimes it is necessary to match a very large set of descriptors (candidates). With classical brute-force matching of a large set of descriptors, it is impossible to achieve low latency at a high number of requests per second. Therefore, the approximation methods implemented in LIM (LUNA Index Module) are required; they trade some accuracy for high speed. These methods speed up the matching by building an index containing preprocessed data.

The module works as follows. The index is created using a task containing a "list_id" with linked faces, by which the matching will be performed. Alternatively, you can omit "list_id" and configure automatic indexing of all lists whose number of faces exceeds the value specified in a dedicated setting. After the index is created, the user sends a request to the API service, which redirects it to the Python Matcher Proxy service. The Python Matcher Proxy service determines where the matching will be performed: in the Python Matcher service or in the Indexed Matcher service. After matching, the response is returned to the user.

LUNA Index Module is licensed separately and is available on request to VisionLabs. The module delivery contains all the necessary documentation and the Docker Compose script.

Stored data and LP objects#

This section describes data stored by LUNA PLATFORM 5.

This information can be useful when utilizing LUNA PLATFORM 5 according to the European Union legal system and GDPR.

This section does not describe the legal aspects of personal data utilization. You should consider which stored data can be interpreted as personal data according to your local laws and regulations.

Note that combinations of LUNA PLATFORM data may be interpreted by law as personal data, even if data fields separately do not include personal data. For example, a Face object including "user_data" and descriptor can be considered personal data.

Objects and data are stored in LP upon performing certain requests. Hence, you should disable unnecessary data storage in the corresponding requests if it is not needed.

It is recommended to read and understand this section before making a decision on which data should be received and stored in LUNA PLATFORM.

This document considers usage of the resources listed in the APIReferenceManual document and creation of LUNA PLATFORM 5 objects only. The data created using Backport 3 and Backport 4 services is not considered in this section.

Source images#

Photo images are general data sources for LP. They are required for samples creation and Liveness check.

You can provide images themselves or URLs to images.

After normalization of the source images, samples are saved in JPG format by default, even if the source image is sent in PNG format. If necessary, you can change the format of the stored samples using the "default_image_extension" setting of the Image Store service.

It is not recommended to send rotated images to LP, as they are not processed properly and must be rotated first. You can rotate an image using LP in two ways:

  • by enabling the "use_exif_info" auto-orientation parameter by EXIF ​​data in the query parameters;
  • by enabling the "LUNA_HANDLERS_USE_AUTO_ROTATION" auto-orientation setting in the Configurator settings.

More information about working with rotated images can be found in the "Nuances of working with services" section.

See the detailed information about the requirements for the source images in the "Image format requirements" section.

Source images usage#

Source images can be specified for processing when performing POST requests on the following resources:

  • "/detector"
  • "/handlers/{handler_id}/events"
  • "/verifiers/{verifier_id}/verifications"
  • "/sdk"

Source images saving#

Generally, it is not required to store source images after they are processed. They can be optionally stored for system testing purposes or business cases, for example, when the source image should be displayed in a GUI.

Source images can be stored in LP:

  • using the POST request on "/images" resource
  • during the POST request on the "/handlers/{handler_id}/events" resource. Source images are stored if the "store_image" option is enabled for the "image_origin_policy" of the "storage_policy" during the handler creation using the "/handlers" resource.

Samples are stored in buckets of the Image Store for an unlimited time.

Samples are stored in buckets:

  • "visionlabs-samples" is the name of the bucket for faces samples.
  • "visionlabs-bodies-samples" is the name of the bucket for human bodies samples.

Paths to the buckets are specified in the "bucket" parameters of "LUNA_IMAGE_STORE_FACES_SAMPLES_ADDRESS" and "LUNA_IMAGE_STORE_BODIES_SAMPLES_ADDRESS" sections in the Configurator service.

Saving metadata with an image

In the resource "/images", user metadata can be saved along with the image using the header X-Luna-Meta-<user_defined_key> with the value <user_defined_value>. In the Image Store bucket, metadata is saved in a separate file <image_id>.meta.json, which is located next to the source image.

In the response to the "get image" request, you need to specify the header with_meta=1 to get the image metadata in the response header.

To store metadata values for multiple keys, multiple headers must be set.
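
A sketch of both operations follows; the address, version prefix, metadata keys, and the "image_id" response field name are assumptions.

```python
import requests

BASE_URL = "http://127.0.0.1:5000/6"            # assumed address and version prefix
AUTH = {"Authorization": "Bearer <jwt_token>"}

# Save an image together with user metadata: one X-Luna-Meta-<key> header per key.
with open("photo.jpg", "rb") as f:
    created = requests.post(
        f"{BASE_URL}/images",
        data=f.read(),
        headers={**AUTH, "Content-Type": "image/jpeg",
                 "X-Luna-Meta-camera": "entrance-1",     # hypothetical metadata key
                 "X-Luna-Meta-operator": "shift-2"},     # hypothetical metadata key
    ).json()

# Read the image back; with_meta=1 returns the metadata in the response headers.
image = requests.get(f"{BASE_URL}/images/{created['image_id']}",
                     params={"with_meta": 1}, headers=AUTH)
print(image.headers)
```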

Source images deletion#

Source images are stored in the bucket for an unlimited period.

You can delete source images from the bucket:

  • using the DELETE request on the "/images/{image_id}" resource.
  • manually, by deleting the corresponding files from the bucket.

Sample object#

Samples usage#

Separate samples are created for face and body.

Samples are required:

  • for basic attributes estimation.
  • for face, body and image parameters estimation.
  • for face and body descriptors extraction.
  • when changing the descriptors NN version.

When the neural network is changed, you cannot use descriptors of the previous version. A descriptor of a new version can be extracted only if the source sample has been preserved.

Samples can also be used as avatars for faces and events. For example, when it is required to display the avatar in GUI.

Samples are stored in buckets of the Image Store.

Samples should be stored until all the required requests for face, body parameters estimation, basic attributes estimation, and descriptors extraction are finished.

Samples creation and saving#

Samples are created upon the face and/or human body detection in the image. Samples are created during the execution of the following requests:

  • POST on the "/detector" resource. Samples are created and stored implicitly. The user does not affect their creation.
  • POST on the "/handlers/{handler_id}/events" resource. Samples are stored if the "store_sample" option is enabled for "face_sample_policy" and "body_sample_policy" of the "storage_policy" during the handler creation using the "/handlers" resource.
  • POST on the "/verifiers/{verifier_id}/verifications" resource. Samples are stored if the "store_sample" option is enabled for "face_sample_policy" of the "storage_policy" during the verifier creation using the "/verifiers" resource. Human body samples are not stored using this resource.

External samples saving

Samples can be provided to LP directly from external VisionLabs software (e.g., FaceStream). The software itself creates the sample from the source image.

You can manually store external samples in LP (an external sample should be provided in the requests):

  • Using the POST request on the "/samples/{sample_type}" resource.
  • When the "warped_image" option is set for the POST request on the "/detector" resource.
  • When the "image_type" option is set to "1" ("face sample) or "2" (human body sample) in the query parameters for the POST request on the "/handlers/{handler_id}/events" resource. The "store_sample" option should be enabled for "face_sample_policy" and "body_sample_policy" of the "storage_policy".

Samples saving disabling#

There is no possibility to disable samples saving for the request on the "/detector" resource. You can delete the created samples manually after the request execution.

The "/handlers" resource provides "storage_policy" that allows you to disable saving following objects:

  • Face samples. Set "face_sample_policy" > "store_sample" to "0".
  • Human body samples. Set "body_sample_policy" > "store_sample" to "0".

The "/verifiers" resource provides "storage_policy" that allows you to disable saving the following objects:

  • Face samples. Set "face_sample_policy" > "store_sample" to "0".

Samples deletion#

You can use the following ways to delete face or body samples:

  • Perform the DELETE request to the "/samples/faces/{sample_id}" resource to delete a face sample by its ID.
  • Perform the DELETE request to the "/samples/bodies/{sample_id}" resource to delete a body sample by its ID.
  • Manually delete the required face or body samples from their bucket.
  • Use the "remove_samples" parameter in the "gc task" when deleting events.

Getting information about samples#

You can get a face or body sample by its ID.

If a sample was deleted, then the system will return an error when making a GET request.

Descriptor#

The basic description and usage of descriptors are given in the "Extraction" section above.

See the additional information in the "Descriptor formats" section.

Attribute object#

This object is intended to work with faces only.

It is recommended to familiarize yourself with the "Extraction" section before reading.

Attributes are temporary objects that include basic attributes and face descriptors. This data is received after the sample processing.

Basic attributes include the following personal data:

  • age. The estimated age is returned in years.

  • gender. The estimated gender: 0 - female, 1 - male.

  • ethnicity. The estimated ethnicity.

You can disable basic attributes extraction to avoid this personal data storage.

A descriptor cannot be considered personal data. You cannot restore the source face using a descriptor.

Attributes usage#

Attribute objects can be used in the following cases.

As attributes have a TTL (by default, it is set to 5 minutes), it is convenient to use them for verification or identification purposes. They are deleted soon after you receive the result.

Attributes are not the only way to save descriptors and basic attributes to a face. You can also use the "/handlers/{handler_id}/events" resource to create a face and add the extracted descriptor and basic attributes to it.

Basic attributes saved in faces or events objects can be used for filtration of the corresponding objects during requests execution.

Descriptors are required for matching operations. You cannot compare two faces or bodies without their descriptors.

Attributes creation and saving#

Attributes can be created when sending requests on the following resources:

  • "/extractor". The "extract_basic_attributes" and "extract_descriptor" query parameters should be enabled for extraction of the corresponding data. Attributes are implicitly stored after the request execution.

  • "/handlers". The "extract_basic_attributes", "extract_face_descriptor", and "extract_body_descriptor" parameters should be enabled in the handler for extraction of the corresponding data. The "storage_policy" > "attribute_policy" > "store_attribute" option should be enabled in the handler for attributes storage.

  • "/verifiers". The "extract_basic_attributes" parameter should be enabled in the verifier for extraction of the corresponding data. The "storage_policy" > "attribute_policy" > "store_attribute" option should be enabled in the verifier for attributes storage.

Attributes can also be created using external descriptors and external basic attributes using the "/attributes" resource.

Attributes time to live#

Attributes have a TTL. After the TTL expiration, attributes are automatically deleted. Hence, it is not required to delete attributes manually.

The default TTL value can be set in the "default_ttl" parameter in the Configurator service. The maximum TTL value can be set in the "max_ttl" parameter in the Configurator service.

TTL can be directly specified in the requests on the following resources:

  • "/extractor" in the "ttl" query parameter.
  • "/handlers" in the "storage_policy" > "attribute_storage" in the "ttl" parameter.

Attributes extraction disabling#

You can disable basic attributes extraction by setting the "extract_basic_attributes" parameter to "0" in the "/extractor", "/handlers", and "/verifiers" resources.

You can disable descriptor extraction by setting the "extract_descriptor" parameter to "0" in the "/extractor" resource (the "extract_face_descriptor" and "extract_body_descriptor" parameters in the "/handlers" resource).

Attributes saving disabling#

You can disable the "storage_policy" > "attribute_policy" > "store_attribute" parameter in the "/handlers" resource to disable attributes storage. When this handler is used for the "/handlers/{handler_id}/events" resource, attributes are not saved even for the specified TTL period.

You can disable the "storage_policy" > "attribute_policy" > "store_attribute" parameter in the "/verifiers" resource to disable attributes storage. When this verifier is used for the "/verifiers/{verifier_id}/verifications" resource, attributes are not saved even for the specified TTL period.

Attributes deletion#

Attributes are automatically deleted after the TTL expiration.

Perform the DELETE request to "/attributes/{attribute_id}" resource to delete an attribute by its ID.

Getting information about attributes#

You can receive information about existing temporary attributes until their TTL has expired.

  • Perform the GET request to "/attributes/{attribute_id}" resource to receive information about a temporary attribute by its ID.
  • Perform the GET request to "/attributes" resource to receive information about previously created attributes by their IDs.
  • Perform the GET request to "/attributes/{attribute_id}/samples" resource to receive information about all the temporary attribute samples by the attribute ID.

If the attribute TTL has not expired, the attribute data is returned. Otherwise, no data is returned for this attribute in the response.

Face object#

Faces are changeable objects that include information about a single person.

The following general data is stored in the face object:

  • Descriptor ("descriptor")
  • Basic attributes ("basic_attributes")
  • Avatar ("avatar")
  • User data ("user_data")
  • External ID ("external_id")
  • Event ID ("event_id")
  • Sample ID ("sample_id")
  • List ID ("list_id")
  • Account ID ("account_id")

See the "Faces database description" section for additional information about the face object and the data stored in it.

Attributes data can be stored in a face. Basic attributes data, descriptor data, and information about samples are saved to the Faces database and linked with the face object.

When you delete a face, the linked attributes data is also deleted.

  • Descriptor. You should specify a descriptor for the face if you are going to use the face for comparison operations.

You cannot link a face with more than one descriptor.

  • Basic attributes. Basic attributes can be used for displaying information in a GUI.

The face object can also include IDs of samples used for the creation of attributes.

General face fields description:

  • "user_data". This field can include any information about the person.

  • "avatar". Avatar is a visual representation of the face that can be used in the user interface.

This field can include a URL to an external image or a sample that is used as an avatar for the face.

  • "external_id". The external ID of the face.

You can use the external ID to work with external systems.

You can use the external ID to specify that several faces belong to the same person. You should set the same external ID to these faces upon their creation.

You can get all the faces that have the same external ID using the "faces" > "get faces" request.

  • "event_id". This field can include an ID of the event that gave birth to this face.

  • "list_id". This field can include an ID of the list to which the face is linked.

A face can be linked to one or several lists.

  • "account_id" - the account ID to which the face belongs.

  • "sample_id" - One or several samples can be linked to a face. It should be the samples used for attributes extraction. All the samples must belong to the same person. If no samples are save for a face, you cannot update its descriptor to a new NN version.

IDs do not usually include personal information. However, they can be related to the object with personal data.

Faces usage#

Faces usually include information about persons registered in the system. Hence, they are usually required for verification (the incoming descriptor is compared with the face descriptor) and identification (the incoming descriptor is compared with several face descriptors from the specified list) purposes.

If there are no faces stored in your system, you cannot perform the following operations with these objects:

  • Matching by faces and lists, when faces are set as candidates or references in the request to the "/matcher/faces" resource.
  • Matching by faces and lists, when faces are set as candidates in the matching policy of the request to the "/handlers" resource.
  • Cross-matching tasks, when faces are set as candidates or references in the request to the "/tasks/cross_match" resource.
  • Clustering tasks, when faces are set as filters for clustering in the request to the "/tasks/clustering" resource.
  • Verification requests, when faces IDs are set in query parameters in the request to the "/verifiers/{verifier_id}/verifications" resource.
  • ROC curves creation tasks ("/tasks/roc" resource).

Faces creation and saving#

Faces can be created using the following requests:

  • "create face". You can specify attributes for the face using one of several ways:

- by specifying the attribute ID of the temporary attribute;
- by specifying descriptors and basic attributes (with or without samples);
- by specifying descriptors (with or without samples);
- by specifying basic attributes (with or without samples).

The last three ways are used when you need to create a face using data stored in external storage.

  • "generate events". The "storage_policy" > "face_policy" > "store_face" parameter should be enabled in the utilized handler.

Requests for working with faces#

Below are the requests for working with faces:

| Request | Description |
| --- | --- |
| "create face" | Create a face. |
| "get faces" | Get information about faces according to filters. You can specify the list ID as a filter to get information about all the faces attached to the list. |
| "delete faces" | Delete several faces by their IDs. |
| "get face count" | Get the number of previously created faces according to filters. |
| "get count of faces with attributes" | Get the number of previously created faces with attached attributes. |
| "get face" | Get information about a face by its ID. |
| "patch face" | Update the "user_data", "external_id", "event_id", "avatar" fields of an already created face. |
| "remove face" | Delete a face by its ID. |
| "check if face exists" | Check if a face with the specified ID exists. |

List object#

A list is an object that can include faces that correspond to a similar category, for example, customers or employees.

Lists include the "description" field that can store any required data to distinguish lists from each other.

Lists usage#

You can add faces to lists for dividing the faces into groups.

Only faces can be added to lists.

Lists IDs are usually specified for matching operations as a filter for faces.

Requests for working with lists#

Below are the requests for working with lists:

| Request | Description |
| --- | --- |
| "create list" | Create a list. |
| "get lists" | Get information about all previously created lists according to filters. |
| "delete lists" | Delete several lists by their IDs. |
| "get list count" | Get the number of previously created lists according to filters. |
| "get list" | Get information about a list by its ID. |
| "check if list exists" | Check if a list with the specified ID exists. |
| "update list" | Update the "user_data" field of an already created list. |
| "delete list" | Delete a list by its ID. |
| "attach/detach faces to the list" | Attach or detach specific face IDs to/from the list. |

Event object#

Events are used in the "Parallel performing of requests" approach.

Events are immutable objects that include information about a single face and/or body. They are received after images processing using handlers or created manually.

Unlike faces, events cannot be changed after creation. The only exception is the event descriptor. It can be updated to the new neural network version.

Generally an event is created for each face/body detected in the image. If the detected face and body belong to the same person they are saved to the same event.

LP also provides the possibility to create new events manually, without processing using handlers. This is used when the logic for filling in event fields should differ from the handler logic, for example, when you want to extract descriptors only for some of the detections rather than for all of them.

An event can be linked with a descriptor stored in the Events database.

Since there can be millions of events in the Events database, it is recommended to use a high-performance columnar database. You can also use a relational database, but this will require additional configuration.

The following general data is stored in the event object:

  • "source". This field can include the source from which the face or human body was received. The value is specified during the "generate events" request execution.
  • "location". This group of parameters includes information about the location where the event occurred. The values are specified during the "generate events" request execution. The following fields are included in this group:

    • "city"
    • "area"
    • "district"
    • "street"
    • "house_number"
    • "geo_position" - latitude and longitude.
  • "tag". This field includes a tag (or several tags) applied during the "conditional_tags_policy" execution. The tag(s) can be specified during the "create handler" request execution or the "generate events" request execution.

  • "emotion". This field includes the predominant emotion estimated for the face. If necessary, you can disable the "estimate_emotions" parameter in the "/handlers" resource or "create verifier" request.
  • "insert_time". This field includes the time when the face appeared in the video stream. This data is usually provided by external systems.
  • "top_similar_object_id". This field includes the top similar object ID.
  • "top_similar_external_id". This field includes the external ID of the top similar candidate (event or face) with which the face is matched.
  • "top_matching_candidates_label". This field includes a label applied during the "match_policy" execution. The labels are specified in this policy in the "/handlers" resource.
  • "face_id". Events include the ID of the face to which the event gave birth.
  • "list_id". Events include the ID of the list to which the created face was attached.
  • "external_id". This external ID will be specified for the face created during the event request processing. The value is specified during the "generate events" request execution.
  • "user_data". This field can include any information. User data will be specified for the face created during the event request processing. The value is specified during the "generate events" request execution.
  • "age", "gender", and "ethnic_group" can be stored in the event. The basic attributes extraction is specified in the "extract_policy" of the "/handlers" resource.
  • face and body descriptors. Events can be used for matching operations as they include descriptors.
  • information about faces, bodies and events that were used for matching with the events.
  • information about face and human body detection results, including:
    • the IDs of created samples;
    • the information about bounding boxes of detected faces/bodies;
    • "detect_time" - the time of face/body event detection. The time is returned from an external system;
    • "image_origin" - the URL of the source image where the face/body was detected.

IDs do not usually include personal information. However, they can be related to the object with personal data.

See the "Events database description" section for additional information about the event object and the data stored in it.

Attributes aggregation for events

It is possible to enable attributes aggregation for the incoming images when creating an event.

According to the specified settings, faces and bodies are detected in the incoming images. If faces are detected and aggregated descriptor creation is specified, a single descriptor is created using all the found faces. The same logic is used for the aggregated body descriptor creation. In addition, the basic attributes are aggregated, as well as the values obtained for Liveness, masks, emotions, upper and lower body, and body accessories.

All the information about the detected face/body and estimated properties is returned in the response separately for each image. The aggregated results are stored when an event is saved in the DB. The information about face/body detection is added to the Events database as a separate record for each image from the request.

When performing aggregation, each image in the request to the "/handlers/{handler_id}/events" resource should include only one face/body, and all the faces/bodies should belong to the same person.
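Below is a minimal request sketch, assuming a local API instance on port 5000, API version 6, the "Luna-Account-Id" header, and an "aggregate_attributes" query parameter; check the "generate events" request in the API reference for the exact names.

```python
# Hedged sketch: send two frames of the same person and request
# aggregated attribute/descriptor estimation over the whole set.
import requests

API = "http://127.0.0.1:5000/6"  # assumed address and version prefix
HANDLER_ID = "your-handler-id"

files = [  # each part gets a unique name and a real filename
    ("frame_001.jpg", ("frame_001.jpg", open("frame_001.jpg", "rb"), "image/jpeg")),
    ("frame_002.jpg", ("frame_002.jpg", open("frame_002.jpg", "rb"), "image/jpeg")),
]
resp = requests.post(
    f"{API}/handlers/{HANDLER_ID}/events",
    params={"aggregate_attributes": 1},  # assumed aggregation switch
    headers={"Luna-Account-Id": "your-account-id"},
    files=files,
)
print(resp.json())
```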

Events usage#

Events are required to store information about persons' occurrences in a video stream for further analysis. An external system processes a video stream and sends frames or samples with detected faces and bodies.

LP processes these frames or samples and creates events. As events include data about face detection time, location, basic attributes, etc., they can be used to collect statistics about a person of interest or to gather general statistics using all the saved events.

Events provide the following features:

  • Matching by events

As events store descriptors, they can be used for matching. You can match the descriptor of a person with the existing events to receive information about the person's movements.

To perform matching, you should set events as references or candidates for matching operations (see section "Descriptors matching").
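As a rough illustration, a matching request with events as candidates might look like the sketch below; the "/matcher/faces" resource name and the body schema are assumptions here, so consult the "Descriptors matching" section for the actual format.

```python
# Hedged sketch: match a face reference against stored events
# filtered by event source.
import requests

API = "http://127.0.0.1:5000/6"  # assumed address and version prefix
body = {
    "references": [{"type": "face", "id": "your-face-id"}],  # known face
    "candidates": [{
        "origin": "events",                            # search among events
        "filters": {"sources": ["entrance-camera-1"]},
        "limit": 10,                                   # top similar events
    }],
}
resp = requests.post(f"{API}/matcher/faces", json=body,
                     headers={"Luna-Account-Id": "your-account-id"})
print(resp.json())
```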

  • Notifications via web sockets

You can receive notifications about events using web sockets. Notifications are sent according to the specified filters. For example, when an employee is recognized, a notification can be sent to the turnstile application and the turnstile will be opened.

See the "Sending notifications via Sender" section.

  • Statistics gathering

You can gather statistics on the existing events using a special request. It can be used to:

    • Group events by frequency or time intervals.
    • Filter events according to the values of their properties.
    • Count the number of created events according to the filters.
    • Find the minimum, maximum and average values of properties for the existing events.
    • Group existing events according to the specified values of properties.

You should save generated events to the database if you need to collect statistics after the event is created.

See the "events" > "get statistics on events" request in "APIReferenceManual.html" for details.

  • User defined data

You can add custom information to events, which can later be used to filter events. The information is passed in JSON format and written to the Events database.

See "Events meta-information" for details.

Events creation and saving#

Events are created during the execution of requests to the "/handlers/{handler_id}/events" resource. To save events to the database, the "event_policy" > "store_event" parameter should be set to "1" when creating a handler using the "/handlers" resource.

For manual event creation and saving, use the "/handlers/{handler_id}/events/raw" resource.

The format of the generated event is similar to the format returned by the "/handlers/{handler_id}/events" resource. The "event_id" and "url" fields are not specified in the request; they are returned in the response after the event is created.

Notifications using web sockets are sent when events are created using this resource.
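A manual event creation request might look like the sketch below; the envelope and field names are assumptions that mirror the generated event format, so check the resource schema in the API reference.

```python
# Hedged sketch: save an externally produced event without handler processing.
import requests

API = "http://127.0.0.1:5000/6"  # assumed address and version prefix
HANDLER_ID = "your-handler-id"
body = {
    "events": [{                      # assumed envelope field name
        "source": "external-system",
        "user_data": "manually created event",
        "tags": ["imported"],
    }]
}
resp = requests.post(f"{API}/handlers/{HANDLER_ID}/events/raw", json=body,
                     headers={"Luna-Account-Id": "your-account-id"})
print(resp.json())  # "event_id" and "url" are returned here
```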

Events deletion#

Events are deleted only via the garbage collection task. You should specify filters; all the events corresponding to the filters are deleted.

To delete events, execute a POST request to the "/tasks/gc" resource.

You can also manually delete the required events from the database.
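A garbage collection request might look like the sketch below; the body layout is an assumption, so check the "/tasks/gc" schema in the API reference before use.

```python
# Hedged sketch: delete all events older than a given date.
import requests

API = "http://127.0.0.1:5000/6"  # assumed address and version prefix
body = {
    "content": {                                     # assumed envelope
        "target": "events",
        "filters": {"create_time__lt": "2023-01-01T00:00:00Z"},
    }
}
resp = requests.post(f"{API}/tasks/gc", json=body,
                     headers={"Luna-Account-Id": "your-account-id"})
print(resp.json())  # the response contains the created task ID
```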

Getting information about events#

You can receive information about existing events.

If the event was deleted, then the system will return an error when making a GET request.

  • The "events" > "get events" request enables you to receive information about all previously created events according to the specified filters. By default, events are filtered for the last month from the current date. If any of the following filters are specified, then the default filtering will not be used.

    • list of event IDs (event_ids);
    • lower event ID boundary (event_id__gte);
    • upper event ID boundary (event_id__lt);
    • lower create time boundary (create_time__gte);
    • upper create time boundary (create_time__lt).
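A minimal sketch of such a request, assuming a local API instance and the "Luna-Account-Id" header; the filter names match the list above.

```python
# Hedged sketch: fetch events for June 2023; because explicit time
# boundaries are set, the default last-month filtering is not applied.
import requests

API = "http://127.0.0.1:5000/6"  # assumed address and version prefix
resp = requests.get(
    f"{API}/events",
    params={
        "create_time__gte": "2023-06-01T00:00:00Z",
        "create_time__lt": "2023-07-01T00:00:00Z",
    },
    headers={"Luna-Account-Id": "your-account-id"},
)
print(resp.json())
```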

Use "multipart/form-data" schema when generating event#

In the "generate events"request, you can send an image, provide an image URL, or send a raw descriptor. You can use a multipart/form-data schema to send several images in the request. Each of the sent images must have a unique name. The Content-Disposition header must contain the actual filename.

The following parameters can also be specified in the request using the "multipart/form-data" schema:

  • parameters of the bounding box of the face or body, which enable you to specify a certain zone containing the face in the image;
  • image detection time;
  • time stamp of detection relative to the beginning of the video file;
  • source image;
  • user meta-information.

All of the above information will be saved in the event when it is generated.
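The sketch below sends one image together with such additional fields; the form part names ("detect_time", "image_origin", "meta") are assumptions for illustration, so check the "generate events" request schema for the exact names.

```python
# Hedged sketch: one image plus per-image metadata via multipart/form-data.
import requests

API = "http://127.0.0.1:5000/6"  # assumed address and version prefix
HANDLER_ID = "your-handler-id"
resp = requests.post(
    f"{API}/handlers/{HANDLER_ID}/events",
    headers={"Luna-Account-Id": "your-account-id"},
    files={"frame_001.jpg":                       # unique part name
           ("frame_001.jpg", open("frame_001.jpg", "rb"), "image/jpeg")},
    data={
        "detect_time": "2023-06-15T12:00:00Z",           # detection time
        "image_origin": "http://storage/frame_001.jpg",  # source image URL
        "meta": '{"operator_shift": 2}',                 # user meta-information
    },
)
print(resp.json())
```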

Handler object#

Handlers are used in the "Parallel performing of requests" approach.

A handler is an object that stores entry points for image processing. The entry points are called policies. They describe how the image is processed and hence define the LP services used for the processing. A handler is created using the "create handler" request.

The table below describes the handler policies. Each policy corresponds to one of the LP services listed in the "Service" column.

Handler policies

| Policy | Description | Service |
| --- | --- | --- |
| detect_policy | Specifies the face, body, and image parameters to be estimated. | Handlers |
| extract_policy | Specifies the necessity of descriptor and basic attributes (gender, age, ethnicity) extraction. It also determines the threshold for the descriptor quality. | Handlers |
| match_policy | Specifies the array of lists for matching with the current face and additional filters for the matching process for each of the lists. The matching results can be used in create_face_policy and link_to_lists_policy. | Python Matcher |
| storage_policy | Enables data storage in the database for samples ("face_sample_policy"/"body_sample_policy"), origin images ("image_origin_policy"), attributes ("attribute_policy"), faces ("face_policy"), and events ("events_policy"). You can specify filters for saving the objects. You can perform automatic linking of faces to lists and set a TTL for attributes. You can enable and disable notification sending ("notification_policy"). | Image Store, Faces, Events |
| conditional_tags_policy | Specifies filters for assigning tags to events. | Handlers |

You can skip or disable policies if they are not required for your case. Skipped policies are executed according to their default values. For example, you should disable sample storage in the corresponding policy if samples are not required. If you just skip the "storage_policy" in the handler, samples will be saved according to the default settings.

All the available policies are described in the "API service reference manual".
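For illustration, a "create handler" request might look like the sketch below; only a few policy fields are shown, and their exact names and defaults should be verified against the "API service reference manual".

```python
# Hedged sketch: a handler that detects faces, extracts descriptors and
# basic attributes, and stores samples and events.
import requests

API = "http://127.0.0.1:5000/6"  # assumed address and version prefix
handler = {
    "detect_policy": {"detect_face": 1, "estimate_emotions": 1},
    "extract_policy": {"extract_descriptor": 1,
                       "extract_basic_attributes": 1},
    "storage_policy": {
        "face_sample_policy": {"store_sample": 1},
        "event_policy": {"store_event": 1},
    },
}
resp = requests.post(f"{API}/handlers", json=handler,
                     headers={"Luna-Account-Id": "your-account-id"})
print(resp.json())  # contains the created handler ID
```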

Handlers usage

Using a handler, you can:

  • Specify face/body detection parameters and estimated face/body parameters (see "Source image processing and samples creation").
  • Specify human body detection execution.
  • Enable basic attributes and descriptors extraction (see "Descriptor extraction and attribute creation").
  • Perform descriptors comparison (see "Descriptors matching") to receive matching results and use the result as filters for the policies.
  • Configure automatic creation of faces using filters. You can specify filters according to the estimated basic attributes and matching results.
  • Configure automatic attachment of the created faces to the lists using filters. You can specify filters according to the estimated basic attributes and matching results.
  • Specify the objects to be saved after the image processing.
  • Configure automatic tag adding for the event using filters. You can specify filters according to the estimated basic attributes and matching results.

In addition, you can process a batch of images using a handler (see the "Estimator task" section).

Dynamic handlers

Handlers can be static or dynamic.

When a handler is static, you specify its parameters during the handler creation and then you specify the created handler ID during the event creation.

When a handler is dynamic, you can change its parameters during the event creation request. Dynamic handlers enable you to separate technical parameters (thresholds and quality parameters that should be hidden from frontend users) and business logic. Dynamic handler policies are specified using the "multipart/form-data" schema in the event generation request.

Only static verifiers are available.

Dynamic handlers usage example:

You need to separate head pose thresholds from other handler parameters.

You can save the thresholds to your external database and implement the logic of automatic substitution of this data when creating events (e.g., in your frontend application).

The frontend user sends requests for events creation and specifies the required lists and other parameters. The user does not know about the thresholds and cannot change them.
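In code, the frontend-facing request might look like the sketch below; the "policies" form field name and the threshold parameters are assumptions used for illustration.

```python
# Hedged sketch: the backend injects stored thresholds into a dynamic
# handler request, so the frontend user never sees them.
import json
import requests

API = "http://127.0.0.1:5000/6"  # assumed address and version prefix
HANDLER_ID = "your-dynamic-handler-id"
policies = {"detect_policy": {"yaw_threshold": 30,     # assumed names,
                              "pitch_threshold": 30}}  # loaded from your DB
resp = requests.post(
    f"{API}/handlers/{HANDLER_ID}/events",
    headers={"Luna-Account-Id": "your-account-id"},
    files={"frame.jpg": ("frame.jpg", open("frame.jpg", "rb"), "image/jpeg")},
    data={"policies": json.dumps(policies)},  # assumed form field name
)
print(resp.json())
```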

Requests for working with handlers#

Below are the requests for working with handlers:

| Request | Description |
| --- | --- |
| "create handler" | Create handler. |
| "get handlers" | Get information about all previously created handlers according to filters. |
| "get handler count" | Get information about the number of previously created handlers according to filters. |
| "validate handler policies" | Check the handler policies before using them to create or update the handler. |
| "get handler" | Get information about the handler by its ID. |
| "replace handler" | Update the parameters of an already created handler. |
| "check if handler exist" | Check if the handler with the specified ID exists. |
| "remove handler" | Delete the handler by its ID. |

Verifier object#

Verifiers are used in the "Parallel performing of requests" approach.

The Handlers service also processes requests to create verifiers, which are required for the verification process. Verifiers are created using the "create verifier" request.

The verifier contains a limited set of handler policies: detect_policy, extract_policy, and storage_policy.

You can specify the "verification_threshold" in the verifier.

The created verifier should be used when sending requests to:

  • "/verifiers/{verifier_id}/verifications" resource. You can specify IDs of the objects with which the verification should be performed.
  • "/verifiers/{verifier_id}/raw" resource. You can specify raw descriptors as candidates and references for matching. As raw descriptors are processed "verification_threshold" is the general parameter used from the specified verifier.

The response includes the "status" field. It shows whether the verification was successful for each pair of matched objects. The verification is successful if the similarity of the two objects is greater than the specified "verification_threshold".
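A verification request might look like the sketch below; passing the candidate face ID as a query parameter is an assumption, so check the "perform verification" request schema in the API reference.

```python
# Hedged sketch: verify a probe image against a known face ID.
import requests

API = "http://127.0.0.1:5000/6"  # assumed address and version prefix
VERIFIER_ID = "your-verifier-id"
resp = requests.post(
    f"{API}/verifiers/{VERIFIER_ID}/verifications",
    headers={"Luna-Account-Id": "your-account-id"},
    params={"face_ids": "your-face-id"},  # assumed candidate parameter
    files={"probe.jpg": ("probe.jpg", open("probe.jpg", "rb"), "image/jpeg")},
)
# each match carries a "status" flag: similarity > verification_threshold
print(resp.json())
```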

Requests for working with verifiers#

Below are the requests for working with verifiers:

| Request | Description |
| --- | --- |
| "create verifier" | Create verifier. |
| "get verifiers" | Get information about all previously created verifiers according to filters. |
| "raw verification" | Perform verification of raw descriptors. |
| "perform verification" | Perform verification for the specified objects. |
| "count verifiers" | Get information about the number of previously created verifiers according to filters. |
| "get verifier" | Get information about the verifier by its ID. |
| "replace verifier" | Update the parameters of an already created verifier. |
| "check if verifier exists" | Check if the verifier with the specified ID exists. |
| "remove verifier" | Delete the verifier by its ID. |

Sending notifications via Sender#

You can send notifications about created events to third-party applications via web sockets. For example, you can configure LP to send notifications about VIP guests' arrival to your mobile phone.

When LP creates a new event, the event is added to the special DB. Then the event can be sent to the Sender service if the service is enabled.

The third-party application should be subscribed to Sender via web sockets. A filter for the events of interest should be set for each application. Thus you will receive notifications only for the required events.

The notification about a new event is sent by Sender to the required applications.
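A subscription might look like the sketch below; the Sender port, the "/1/ws" path, and the filter query parameter are assumptions, so see the "Sender service" section for the actual subscription format.

```python
# Hedged sketch: listen for event notifications filtered by handler ID.
import asyncio

import websockets  # pip install websockets

async def listen():
    url = "ws://127.0.0.1:5080/1/ws?handler_ids=your-handler-id"  # assumed
    async with websockets.connect(url) as ws:
        async for message in ws:  # one JSON message per matching event
            print(message)

asyncio.run(listen())
```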

Sender workflow

See section "Sender service" for details about the Sender service.

Licenses#

The Licenses service provides information about license terms to LP services.

See section "Licenses service" for details.

Admin#

The Admin service implements tools for administrative routines.

All the requests to the Admin service are described in "AdminReferenceManual.html".

See section "Admin service" for details about the Admin service.

Configurator#

The Configurator service includes the settings of all the LP services, thus providing the possibility to configure all the services in one place.

See section "Configurator service" for details about the Configurator service.

Tasks#

Tasks requests provide additional possibilities for processing large amounts of data.

The larger the processed data array, the longer it takes to finish the task. When a task is created, you receive the task ID as the result of the request.

See the "Tasks service" section for details about tasks processing.

Clustering task

The clustering task provides the possibility to group events and faces that belong to the same person into clusters.

You can specify filters for choosing the objects to be processed.

Using the clustering task you can, for example, receive all the IDs of events that belong to the same person and which occurred during the specified period of time.

See the "task processing" > "clustering task" request for details about the Clustering task request creation.

See "Clustering task" for more information about this task.

Reporter task

The reporter task enables you to receive a report in CSV format with extended information about the objects grouped to clusters.

You can select columns that should be added to the report. You can receive images corresponding to each of the IDs in each cluster in the response.

See the "task processing" > "reporter task" request for details about the reporter task request creation.

See "Reporter task" for more information about this task.

Exporter task

The exporter task enables you to collect event and/or face data and export them from LP to a CSV file.

The file rows represent requested objects and corresponding samples (if they were requested).

See the "task processing" > "exporter task" request for details about the exporter task request creation.

See "Clustering task" for more information about this task.

Cross-matching task

Cross-matching means that a large number of references can be matched with a large number of candidates. Thus every reference is matched with every candidate.

Both references and candidates are specified using filters for faces and events.

See the "task processing" > "cross-matching task" request in the API service reference manual for details about the cross-matching task creation.

See "Cross-matching task" for more information about this task.

Linker task

The linker task enables you to:

  • link the existing faces to lists
  • create faces from the existing events and link the faces to lists

The linked faces are selected according to the specified filters.

See the "task processing" > "linker task" request in the API service reference manual for details about the cross-matching task creation.

See "Linker task" for more information about this task.

Estimator task

The estimator task enables you to perform batch processing of images using the specified policies.

In the request body, you can specify the handler_id of an already existing static or dynamic handler. For a dynamic handler_id, you can set the required policies in the request. Alternatively, you can create a static handler by specifying its policies directly in the request.

The resource can accept five types of sources with images for processing:

  • ZIP archive
  • S3-like storage
  • Network disk
  • FTP server
  • Samba network file system

See the "task processing" > "estimator task" request in the API service reference manual for details about the estimator task creation.

See "Estimator task" for more information about this task.

Backports#

In LP 5, Backport is a mechanism that enables the new platform version to mimic its older versions.

The backports are implemented for LUNA PLATFORM 3 and LUNA PLATFORM 4 by means of the Backport 3 and Backport 4 services respectively.

The backports enable you to send requests that are similar to the requests from LUNA PLATFORM 3 and LUNA PLATFORM 4 and receive responses in the appropriate format.

There are some nuances when using backports.

  • Data migration to Backports is complicated.

Database structures of LUNA PLATFORM 3 and LUNA PLATFORM 4 differ from the database structure of LUNA PLATFORM 5. Therefore, additional steps should be performed for data migration and errors may occur.

  • Backports have many restrictions.

Not all logic from LP 3 and LP 4 can be supported in Backports. See the "Restrictions for working with Backport services" section for more information.

  • Backports have a limited feature set and are no longer developed.

New features and resources of LUNA PLATFORM 5 are not supported when using Backports, and they never will be. Thus, these services should be used only if a full migration to LUNA PLATFORM 5 is not possible and no new features are required.

Use case:

For example, you have a frontend application that sends requests to LUNA PLATFORM 3.

When you use the Backport 3 service, the requests in the LP 3 format are received by the service. They are converted and sent to LUNA PLATFORM 5 according to its API format. LP 5 processes the request and sends the response to Backport 3. The Backport 3 service converts the received results to the LP 3 format and sends the response.

LP3 vs Backport 3 and LP 5

Restrictions for working with Backport services#

As the tables of saved data and the data itself differ between LP versions, there are features of and restrictions on the execution of requests. Information about these features and restrictions is given in the "Backport 3" and "Backport 4" sections of this document.

You can use a Backport service and the API service of LP 5 simultaneously, following these restrictions:

  • When you use Backport services, it is strongly recommended not to use the same account for requests to a Backport service and to the API service of LUNA PLATFORM 5.

  • Perform admin requests that do not use an account ID to LUNA PLATFORM 5 only.

Sdk resource#

The sdk resource allows you to detect faces and/or human bodies and estimate attributes in input images. After the request is performed, the received data is not saved to the database or to Image Store; it is only returned in the response.

The sdk resource combines the capabilities of other resources and also provides additional options. For example, with the "/sdk" request you can:

  • estimate presence of glasses in the image;
  • estimate liveness in the image;
  • estimate face/body parameter(s);
  • aggregate face/body parameter(s);
  • create a face sample and return its descriptor in the SDK format encoded in Base64;
  • create a human body sample and return its descriptor in the SDK format encoded in Base64;
  • extract face and body descriptor(s) and return them in the response;
  • set descriptor quality score threshold to filter out images that are poorly suited for further processing;
  • filter by mask states;
  • and more.

A face or body descriptor in the SDK format encoded in Base64 can be used in requests for matching "raw" descriptors.
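An "/sdk" request might look like the sketch below; the query parameter names are assumptions, so check the "/sdk" resource schema in the API reference for the exact names.

```python
# Hedged sketch: estimate liveness and glasses and extract a face
# descriptor; nothing is stored, the results come back in the response.
import requests

API = "http://127.0.0.1:5000/6"  # assumed address and version prefix
resp = requests.post(
    f"{API}/sdk",
    headers={"Luna-Account-Id": "your-account-id",
             "Content-Type": "image/jpeg"},
    params={"estimate_liveness": 1, "estimate_glasses": 1,  # assumed names
            "extract_face_descriptor": 1},
    data=open("photo.jpg", "rb").read(),
)
print(resp.json())
```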