
General concepts#

The following sections explain general LP concepts and describe the existing services and the data types they create. They do not describe all the requests, database structures, and other technical nuances.

LP consists of several services. All the services communicate via HTTP requests: a service receives a request and always returns a response.

You can find information about LUNA PLATFORM 5 architecture in the "Interaction of LP Services" section.

All the services can be divided into general and additional ones. The general services ensure the optimal operation of LP. All general services are enabled by default in the API service settings. If necessary, some general services can be disabled (see the "Optional services usage" section in the installation manual).

Most of the services have their own database or file storage.

General services

| Service | Description | Database | Disableable |
|---------|-------------|----------|-------------|
| API | The main gateway to LP. Receives requests and distributes tasks to other LP services | - | No |
| Handlers | Detects faces in images, extracts face properties, and creates samples. Extracts descriptors from samples. Extracts basic attributes from images. Creates and stores handlers | PostgreSQL/Oracle | No |
| Python Matcher | Performs matching tasks | - | No |
| Faces | Creates faces, lists, and attributes. Stores these objects in the database. Allows other services to receive the required data from the database | PostgreSQL/Oracle, Redis | No |
| Image Store | Stores samples, reports about long task execution, created clusters, and additional metadata | Local storage/Amazon S3 | No |
| Configurator | Stores all configurations of all the services in a single place | PostgreSQL/Oracle | No |
| Licenses | Checks your license conditions and returns information about them | - | No |
| Events | Stores data on generated events in the database | PostgreSQL | Yes |
| Admin | Enables performing general administrative routines | PostgreSQL/Oracle | Yes |
| Accounts | Manages accounts | PostgreSQL/Oracle | Yes |
| Tasks | Performs long tasks, such as garbage collection, extraction of descriptors with a new neural network version, and clustering | PostgreSQL/Oracle | Yes |
| Tasks Worker | Performs the internal work of the Tasks service | PostgreSQL/Oracle | Yes |
| Sender | Sends notifications about created events via web sockets | Redis | Yes |

Additional services extend the capabilities of the system. Launching additional services is optional.

Additional services

| Service | Description | Database |
|---------|-------------|----------|
| Backport 3 | Processes LUNA PLATFORM 3 requests using LUNA PLATFORM 5 | PostgreSQL/Oracle |
| Backport 4 | Processes LUNA PLATFORM 4 requests using LUNA PLATFORM 5 | - |
| User Interface 3 | Visually represents the features provided with the Backport 3 service. Does not include all the functionality available in LP 3 | - |
| User Interface 4 | Visually represents the features provided with the Backport 4 service. Does not include all the functionality available in LP 4 | - |
| Python Matcher Proxy | Manages matching requests and routes them to Python Matcher or matching plugins | - |

Below is a diagram of the interaction of the general and optional services.

Simplified interaction diagram of general and optional services

This diagram does not describe the communication lines between the services. For a full description of the interaction of services, see the "Interaction of LP services" section.

There are several third-party services that are usually used with LP.

These services are not described in this document. See the corresponding documentation provided by the services vendors.

Third-party services

| Function | Description | Supported services |
|----------|-------------|--------------------|
| Balancer | Balances requests between LP services when several similar services are launched. For example, it can balance requests between API services or between two LP contours upon scaling | NGINX |
| Monitoring system | Monitors and controls the number of processes launched for LP | Supervisor |
| Monitoring database | Stores monitoring data | InfluxDB |
| Monitoring visualization | A set of dashboards for evaluating the total load on the system or the load on individual services | Grafana |
| Log rotation service | All LP services write logs, and their size may grow dramatically. A log rotation service deletes outdated log files to free disk space | Logrotate, etc. |

Accounts, tokens and authorization types#

Account#

An account is required to delimit the visibility areas of objects for a particular user. The account is necessary to perform most of the requests. Each created account has its own unique "account_id". All account data is stored in the Accounts service database under this ID.

The account can be created using the "create account" POST request to the API service or using the Admin service. When creating the account, you must specify the following data: login (email), password, and account type.

Account type#

The account type determines what data is available to the user.

There are three types of accounts:

  • user - an account type that allows you to create objects and use only your own account data.
  • advanced_user - an account type with the same rights as "user" plus access to the data of all accounts. Access to the data of other accounts means the ability to receive it (GET requests), check its availability (HEAD requests), and perform matching requests based on it.
  • admin - an account type with the same rights as "advanced_user" plus access to the Admin service.

In the API service, you can work with all types of accounts, but only "advanced_user" and "user" types of accounts can be created, while in the Admin service you can create all three types.

By default, the system contains an "admin" type account with the default login root@visionlabs.ai and password root.

Using the "Luna-Account-Id" header in the "create account" request, you can set the desired account ID. This is also needed to preserve the ability to work with data created in LP versions prior to 5.30.0: specifying the old "account_id" in the "Luna-Account-Id" header links it to the account being created (see the "Account migration" section in the LUNA PLATFORM upgrade manual for details on migration).

In response to the account creation request, an "account_id" is issued. After creating the account, you can use this ID with the LunaAccountIdAuth authorization type, or use the BasicAuth authorization type (login/password authorization).
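As a hedged sketch (the API address, version prefix, and JSON field names below are assumptions, not taken from this document; check the API service reference manual for the exact schema), a "create account" request could be assembled like this:

```python
# Hypothetical sketch of building a "create account" request.
# API_URL, the version prefix, and the JSON field names are assumptions.
import json

API_URL = "http://127.0.0.1:5000/6"  # assumed API service address

def build_create_account_request(login, password, account_type, luna_account_id=None):
    """Return the parts of a POST "create account" request."""
    headers = {"Content-Type": "application/json"}
    if luna_account_id is not None:
        # Optionally link a pre-5.30.0 "account_id" to the new account.
        headers["Luna-Account-Id"] = luna_account_id
    body = {"login": login, "password": password, "account_type": account_type}
    return "POST", f"{API_URL}/accounts", headers, json.dumps(body)

method, url, headers, payload = build_create_account_request(
    "user@example.com", "secret", "user"
)
```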

Token#

A token is linked to an existing account of any type ("user", "advanced_user", "admin") and enables you to impose extended restrictions on the requests being made. For example, when creating the token, you can give the user permission only to create and modify all lists and faces, or you can prevent the use of certain handlers by specifying their IDs.

The token is created using the "create token" request to the API service.

An unlimited number of tokens can be created for each account. The token and all its restrictions are stored in the database and linked to the account by the "account_id" parameter. One token cannot be linked to several accounts. To create tokens with the same permissions for different accounts, it is recommended to save the request body template of the "create token" request and reuse it.
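To illustrate the recommendation above, the same token request body template can be reused across several accounts (one request per account, since a token is linked to exactly one account). The permissions shown are illustrative only; the real schema is in the API reference manual.

```python
# Sketch: reuse one "create token" body template for several accounts.
import copy

token_template = {
    "expiration_time": None,  # null = unlimited token lifetime
    "permissions": {"face": ["view", "matching"]},  # illustrative permissions
}

def token_bodies_for_accounts(template, account_ids):
    # Deep-copy so later edits to one body do not affect the others.
    return {acc: copy.deepcopy(template) for acc in account_ids}

bodies = token_bodies_for_accounts(token_template, ["acc-1", "acc-2"])
```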

There is no need to use both token-based and account-based authorization. When working with tokens, you can restrict access to the "/accounts" resource using external means.

You can give additional restrictions on the token at any time using the "replace token" request or you can revoke the token using the "delete token" request. In this case, the token will be deleted from the database and can no longer be used for authorization.

When creating the token, you need to set the following parameters:

  • expiration_time – expiration time of the token in the RFC 3339 format. You can specify an infinite token expiration time using the value "null"
  • permissions – actions that the user can perform (see "Permissions set in token")

You can also specify whether the data of other accounts is visible when using the token via the "visibility_area" parameter (see the "Viewing other account data" section).

The response to the request contains a "token_id" and a Base64-encoded JWT token. After creating the token, you can use the received JWT token with the BearerAuth authorization type.
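The received JWT token is then passed in a standard Bearer authorization header; a minimal sketch (the token value below is a placeholder, not a real JWT):

```python
# Build the Authorization header for BearerAuth from a JWT token.
def bearer_headers(jwt_token: str) -> dict:
    return {"Authorization": f"Bearer {jwt_token}"}

headers = bearer_headers("eyJhbGciOi.placeholder.token")  # placeholder value
```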

Permissions set in token#

The following permissions are available for the generated token:

| Permission name | Permission description | Rights |
|-----------------|------------------------|--------|
| face | rights to use the face | creation, view, modification, deletion, matching |
| list | rights to use the list | creation, view, modification, deletion |
| event | rights to use the event | creation (only "save event" request), view, matching |
| attribute | rights to use the attribute | creation, view, modification, deletion, matching |
| handler | rights to use the handler | creation, view, modification, deletion |
| verifier | rights to use the verifier | creation, view, modification, deletion |
| task | rights to use the task | creation, view, modification, deletion |
| face_sample | rights to use the face sample | creation, view, deletion |
| body_sample | rights to use the body sample | creation, view, deletion |
| object | rights to use the object | creation, view, deletion |
| image | rights to use the image | creation, view, deletion |
| token | rights to use the token | view, modification, deletion |
| resource | rights to use resources | "/iso", "/sdk", "/liveness" |
| emit_events | permission to perform "generate events" requests (see below) | - |

Resources "/iso", "/sdk" and "/liveness" do not require any authorization by default.

The value [] means no permissions.

The "emit_events" permission enables you to specify whether requests can be made to the "generate events" resource, as well as to blacklist or whitelist handler IDs. If handler IDs are blacklisted, only their use is prohibited. If handler IDs are whitelisted, only their use is allowed. The maximum number of IDs in each list is 100. When using the "emit_events" permission, the user is not required to have the "creation" and "modification" rights for the handler.
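A sketch of what an "emit_events" permission fragment with a handler white list might look like (the field name "white_list" is an assumption, not confirmed by this document; verify it against the token schema in the API reference manual):

```python
# Hypothetical "emit_events" permission fragment with a handler ID white list.
MAX_IDS = 100  # the document states at most 100 handler IDs per list

def emit_events_permission(handler_ids, mode="white_list"):
    if len(handler_ids) > MAX_IDS:
        raise ValueError("at most 100 handler IDs are allowed per list")
    return {"emit_events": {mode: list(handler_ids)}}

perm = emit_events_permission(["handler-a", "handler-b"])
```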

If the "emit_events" permission is granted, then all the necessary objects will be created during event generation, regardless of the permissions specified in the token. For example, the "face" permission regulates work with faces only in requests to the "faces/*" resources and does not affect the creation of a face during event generation. Thus, when using a handler with the "store_face" parameter enabled, the face will still be created even without the "creation" permission for faces.

  • the "modification" permission means performing PATCH and PUT requests
  • linking/unlinking a face to/from a list falls under the "modification" permission
  • deleting a list with the "with_faces" setting enabled ("delete lists" request) requires the "face" > "deletion" permission
  • linking a face to a list in the "create face" request requires the "list" > "modification" permission
  • when performing matching, you should have the "matching" permission for the corresponding objects (event, face, attribute)
  • requests to the "get statistics on events" resource require the "event" > "view" permission
  • the "event" > "create" permission only grants the right to create an event using the "save event" request. To generate an event, use the "emit_events" permission (see above)
  • the "event" > "create" permission does not apply to verifiers

See the token coverage table for specific resources in the developer manual of the API service.

Viewing other account data#

Other account details can be viewed if:

  • account type is set to "advanced_user" or "admin"
  • when creating the token, the "visibility_area" = "all" parameter was specified.

The "visibility_area" = "all" parameter cannot be set for the "user" account type.

If the account type is set to "advanced_user"/"admin" and the token is created with the "visibility_area" = "account" parameter set, then when logging in using the token (BearerAuth) you will no longer be able to view data from other accounts, but when logging in using login/password (BasicAuth), this option will remain.

If the account type is set to "advanced_user"/"admin" and the token is created with "visibility_area" set to "all" and then the account type is changed to "user" (using the "patch account" request), then an attempt to perform a request to view the data of other accounts using the token will result in an error.

When using the LunaAccountIdAuth authorization, the visibility area is controlled using the "Luna-Account-Id" header.

For verifiers, the visibility area of all accounts is not available. Even if "visibility_area" = "all" is set, only the data of your account will be visible.

Authorization types for accessing resources#

There are three types of authorization available in LUNA PLATFORM:

  • BasicAuth. Authorization by login and password (set during account creation).
  • BearerAuth. Authorization by JWT token (issued after the token is created).
  • LunaAccountIdAuth. Authorization by "Luna-Account-Id" header, which specifies the "account_id" generated after creating the account (this method was adopted as the main one before version 5.30.0).

LunaAccountIdAuth authorization has the lowest priority compared to other methods and can be disabled using the "ALLOW_LUNA_ACCOUNT_AUTH_HEADER" setting in the "OTHER" section of the API service settings in the Configurator (enabled by default).

In OpenAPI specification the "Luna-Account-Id" header is marked with the word Deprecated.

There is no need to use all three types of authorization when making requests. Choose the preferred method depending on your tasks.

If no authorization data is specified in the request, an error with status code 403 is returned.

Credentials verifier#

Using the "verify credentials" resource, you can verify existing credentials by one of the types:

  • login/password verification. If verification is successful, the "account_id" and account type will be returned.
  • token verification. If verification is successful, the account type and all permissions for the token will be returned.
  • account ID verification. If verification is successful, the account type will be returned.

Source image processing and samples creation#

LP is designed to work with photos (either still photographs or individual video frames). LP receives photos via HTTP requests to the API service.

The photo must be in one of the standard formats (PNG, JPG, BMP, PORTABLE PIXMAP, TIFF) with RGB or CMYK color model or encoded into Base64. It is recommended to use PNG format.

The procedure for processing a face image is as follows:

Face image processing
  • Face detection. You can configure Handlers for:
    • processing images with several faces;
    • processing images with a single face;
    • searching for the face of the best quality in the image.
  • Face parameters estimation. The estimation of each group of parameters is specified in the request query parameters.
  • Sample creation. The sample corresponds to the specific format and can be further processed by LP services.

All these steps are performed by the Handlers service.

After creating a sample, it is assigned a special "sample_id", which is used for further processing of the image.

The Handlers service can estimate face parameters for each detected face upon a request. The following parameters can be estimated:

  • Gaze direction (yaw and pitch angles for both eyes);
  • Bounding box (height, width, "X" and "Y" coordinates). Bounding box is an area with certain coordinates where the face was found;
  • Head pose (yaw, pitch and roll angles);
  • Five or sixty-eight landmarks. Landmarks are special characteristic points of a face. They are used to estimate face parameters. The number of required landmarks depends on the parameters you want to estimate;
  • Image quality (light, dark, blurriness, illumination and specularity);
  • Mouth attributes (occlusion, smile probability);
  • Mask estimation (medical or fabric mask is on the face, mask is missing, the mouth is occluded);
  • Emotion probability (anger, disgust, fear, happiness, neutral, sadness, surprise).
  • Other.

The procedure for processing a body image using the Handlers service is similar to the steps described above.

A complete list of the estimated parameters and their description are given in the "Face, body and image parameters" section.

If necessary, you can configure filtering by thresholds for the pitch, yaw and roll angles (4).

These parameters are not stored in the database. You can receive them in the response only.

See the "detect faces" request in the API service reference manual for details about the detection request creation.

Read more about face or body detection and Handlers service in the "Handlers service" section.
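A minimal, hypothetical sketch of assembling a detection request (the address is an assumption, and the resource path is based on the "/detector" resource mentioned later in this document; the authoritative description is the "detect faces" request in the API service reference manual):

```python
# Sketch: POST a raw image to the detection resource, declaring its format
# in the Content-Type header. URL prefix is an assumption.
def build_detect_request(image_bytes: bytes, content_type: str = "image/png"):
    headers = {"Content-Type": content_type}
    url = "http://127.0.0.1:5000/6/detector"  # assumed API service address
    return "POST", url, headers, image_bytes

# Fake PNG-like payload for illustration only.
method, url, headers, body = build_detect_request(b"\x89PNG-fake-image-bytes")
```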

External samples saving#

You can also upload an external sample and use it for further processing. See the "save samples" request in the API service reference manual.

See the "Image Store service" section for details about samples storage.

Descriptor extraction and attribute creation#

Samples are used for descriptors extraction. Descriptors are sets of features extracted from faces or bodies in the images. Descriptors are used to compare faces and bodies.

You cannot compare two faces or bodies if there are no extracted descriptors for them. For example, when you need to compare a face from your database with an incoming face image, you must extract a descriptor from this image.

In addition to descriptors, you can extract the basic attributes of the face: age, gender, ethnicity (see the "Attribute object" section).

Gender and age can also be extracted from a body image (see "Gender and age by body image").

All the extraction operations are performed by the Handlers service.

The descriptor and basic attributes received from the same face image are saved in the database as an "attribute" object. The object can include both the descriptor and basic attributes or only one of them.

You can extract descriptors and basic attributes from several samples of the same face at once. Thus, you receive an aggregated descriptor and aggregated basic attributes.

Matching using the aggregated descriptor is more accurate, and the estimation of basic attributes is more precise. Aggregation is generally useful when working with images from web cameras.

An aggregated descriptor can be created only from images, not from already created descriptors.

See the "Basic attributes and descriptors extraction" section for details.

The "/extract" resource is used to estimate basic attributes and extract descriptors. See the "attributes" > "extract attributes" request in the API service reference manual.

You can store attributes in an external database outside LP and use them in LP when it is required only. See section "Create attribute using external data".

All the created attributes are temporary and are stored in the database for a limited period (see the "Temporary attributes" section). Temporary attributes are deleted after the specified period expires, so it is not required to delete this data manually.

You should create a face using the existing attribute to save the attribute data in the database for permanent storage.

Stored and estimated data#

This section describes data estimated and stored by LUNA PLATFORM 5.

This information can be useful when utilizing LUNA PLATFORM 5 according to the European Union legal system and GDPR.

This section does not describe the legal aspects of personal data utilization. You should consider which stored data can be interpreted as personal data according to your local laws and regulations.

Note that combinations of LUNA PLATFORM data may be interpreted by law as personal data, even if data fields separately do not include personal data. For example, a Face object including "user_data" and descriptor can be considered personal data.

Objects and data are stored in LP upon performing certain requests. Hence it is required to disable unnecessary data storage upon the requests execution.

It is recommended to read and understand this section before making a decision on which data should be received and stored in LUNA PLATFORM.

This document considers usage of the resources listed in the APIReferenceManual document and creation of LUNA PLATFORM 5 objects only. The data created using Backport 3 and Backport 4 services is not considered in this section.

Source images#

Photo images are general data sources for LP. They are required for samples creation and Liveness check.

You can provide images themselves or URLs to images.

Image format requirements#

Images should be sent only in allowed formats. The image format is specified in the "Content-Type" header.

The following image formats are supported: JPG, PNG, BMP, PORTABLE PIXMAP, TIFF. Each format has its own advantages and is intended for specific tasks.

The most commonly used formats are PNG and JPG. Below is a table with their advantages and disadvantages:

| Format | Compression | Advantages | Disadvantages |
|--------|-------------|------------|---------------|
| PNG | Lossless | Better image processing quality | Larger file size |
| JPG | Lossy | Smaller file size | Worse image processing quality |

Thus, if you do not intend to save the source images in the Image Store, or if the Image Store has sufficient capacity, it is recommended to use the PNG format to get the best image processing results.

It is not recommended to send images that are too compressed, as the quality of face and body estimations and matching will be reduced.

Samples are saved in JPG format by default, even if the source image is sent in PNG format. If necessary, you can change the format of the stored samples using the "default_image_extension" setting of the Image Store service.

LUNA PLATFORM 5 supports many color models (for example, RGB, RGBA, CMYK, HSV, etc.), however, when processing an image, all of them are converted to the RGB color model.

The image can also be encoded in Base64 format.
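For example, Base64 encoding an image with the Python standard library is a simple round trip:

```python
# Encode image bytes to a Base64 string and verify the round trip.
import base64

image_bytes = b"\x89PNG\r\n\x1a\n-truncated-fake-image"  # illustrative bytes
encoded = base64.b64encode(image_bytes).decode("ascii")  # string for the request body
decoded = base64.b64decode(encoded)
```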

It is not recommended to send rotated images to LP, as they are not processed properly and should be rotated first. You can rotate the image using LP in two ways:

  • by enabling the "use_exif_info" auto-orientation parameter (based on EXIF data) in the query parameters;
  • by enabling the "LUNA_HANDLERS_USE_AUTO_ROTATION" auto-orientation setting in the Configurator settings.

More information about working with rotated images can be found in the Nuances of working with services section.

Source images usage#

Source images can be specified for processing when performing POST requests on the following resources:

Source images saving#

Generally, it is not required to store source images after they are processed. They can be optionally stored for system testing purposes or business cases, for example, when the source image should be displayed in a GUI.

Source images can be stored in LP:

  • using the POST request on "/images" resource
  • during the POST request on the "/handlers/{handler_id}/events" resource. Source images are stored if the "store_image" option is enabled for "image_origin_policy" of the "storage_policy" during the handler creation using the "/handlers" resource.

Source images are stored in the "visionlabs-image-origin" bucket of the Image Store service.

Saving metadata with an image

In the resource "/images", user metadata can be saved along with the image using the header X-Luna-Meta-<user_defined_key> with the value <user_defined_value>. In the Image Store bucket, metadata is saved in a separate file <image_id>.meta.json, which is located next to the source image.

When performing the "get image" request, you need to specify the with_meta=1 parameter to get the image metadata in the response headers.

To store metadata values for multiple keys, multiple headers must be set.
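A sketch of building one X-Luna-Meta-<user_defined_key> header per metadata key, as described above:

```python
# Build one X-Luna-Meta-<key> header per user metadata key.
def meta_headers(meta: dict) -> dict:
    return {f"X-Luna-Meta-{key}": str(value) for key, value in meta.items()}

# Hypothetical metadata keys for illustration.
headers = meta_headers({"camera": "entrance-1", "shift": "night"})
```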

Source images deletion#

Source images are stored in the bucket for an unlimited period.

You can delete source images from the bucket:

  • using the DELETE request on the "/images/{image_id}" resource.
  • manually, by deleting the corresponding files from the bucket.

Sample object#

Samples usage#

Separate samples are created for face and body.

Samples are required:

  • for basic attributes estimation.
  • for face, body and image parameters estimation.
  • for face and body descriptors extraction.
  • when changing the descriptors NN version.

When the neural network version is changed, you cannot use the descriptors of the previous version. A descriptor of the new version can be extracted only if the source sample was preserved.

Samples can also be used as avatars for faces and events. For example, when it is required to display the avatar in GUI.

Samples are stored in buckets of the Image Store.

Samples should be stored until all the required requests for face, body parameters estimation, basic attributes estimation, and descriptors extraction are finished.

Samples creation and saving#

Samples are created upon the face and/or human body detection in the image. Samples are created during the execution of the following requests:

  • POST on the "/detector" resource. Samples are created and stored implicitly. The user does not affect their creation.
  • POST on the "/handlers/{handler_id}/events" resource. Samples are stored if the "store_sample" option is enabled for "face_sample_policy" and "body_sample_policy" of the "storage_policy" during the handler creation using the "/handlers" resource.
  • POST on the "/verifiers/{verifier_id}/verifications" resource. Samples are stored if the "store_sample" option is enabled for "face_sample_policy" of the "storage_policy" during the verifier creation using the "/verifiers" resource. Human body samples are not stored using this resource.

External samples saving

Samples can be provided to LP directly from external VisionLabs software (e.g., FaceStream). The software itself creates the sample from the source image.

You can manually store external samples in LP (an external sample should be provided in the requests):

  • Using the POST request on the "/samples/{sample_type}" resource.
  • When the "warped_image" option is set for the POST request on the "/detector" resource.
  • When the "image_type" option is set to "1" ("face sample") or "2" ("human body sample") in the query parameters for the POST request on the "/handlers/{handler_id}/events" resource. The "store_sample" option should be enabled for "face_sample_policy" and "body_sample_policy" of the "storage_policy".

Samples are stored in the Image Store storage for an unlimited period.

Samples are stored in buckets:

  • "visionlabs-samples" is the name of the bucket for faces samples.
  • "visionlabs-bodies-samples" is the name of the bucket for human bodies samples.

Paths to the buckets are specified in the "bucket" parameters of "LUNA_IMAGE_STORE_FACES_SAMPLES_ADDRESS" and "LUNA_IMAGE_STORE_BODIES_SAMPLES_ADDRESS" sections in the Configurator service.

Samples saving disabling#

There is no possibility to disable samples saving for the request on the "/detector" resource. You can delete the created samples manually after the request execution.

The "/handlers" resource provides a "storage_policy" that allows you to disable saving the following objects:

  • Face samples. Set "face_sample_policy" > "store_sample" to "0".
  • Human body samples. Set "body_sample_policy" > "store_sample" to "0".

The "/verifiers" resource provides "storage_policy" that allows you to disable saving the following objects:

  • Face samples. Set "face_sample_policy" > "store_sample" to "0".

Samples deletion#

You can use the following ways to delete face or body samples:

  • Perform the DELETE request to the "/samples/faces/{sample_id}" resource to delete a face sample by its ID.
  • Perform the DELETE request to the "/samples/bodies/{sample_id}" resource to delete a body sample by its ID.
  • Manually delete the required face or body samples from their bucket.
  • Use "remove_samples" parameter in the "gc task" when deleting events.
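A sketch of building the deletion URLs described above (the base address is an assumption, and the path shape assumes a slash between the sample type and the ID):

```python
# Build DELETE URLs for face and body samples. BASE is an assumption.
BASE = "http://127.0.0.1:5000/6"

def sample_delete_url(sample_id: str, sample_type: str = "faces") -> str:
    if sample_type not in ("faces", "bodies"):
        raise ValueError("sample_type must be 'faces' or 'bodies'")
    return f"{BASE}/samples/{sample_type}/{sample_id}"

url = sample_delete_url("sample-id-123", "faces")
```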

Getting information about samples#

You can get face or body sample by ID.

If a sample was deleted, then the system will return an error when making a GET request.

Attribute object#

General information about attributes#

Attributes are temporary objects that include basic attributes and face descriptors. This data is received after the sample processing.

Basic attributes include the following personal data:

  • age. The estimated age is returned in years.

  • gender. The estimated gender: 0 - female, 1 - male.

  • ethnicity. The estimated ethnicity.

You can disable basic attributes extraction to avoid this personal data storage.

A descriptor cannot be considered personal data. You cannot restore the source face using a descriptor.

Attributes usage#

Attribute object can be used in the following cases:

  • As attributes have a TTL (by default, it is set to 5 minutes), it is convenient to use them for verification or identification purposes. They are deleted soon after you receive the result.

  • Attributes may be used for ROC-curves creation using the "/tasks/roc" resource.

See the description of ROC-curves in the "ROC-curve calculating task" section.

  • You can save the data of the existing attribute to a face using the "/faces" resource.

It is not the only way to save descriptors and basic attributes to a face. You can use "/handlers/{handler_id}/events" to create a face and add the extracted descriptor and basic attributes to it.

Basic attributes saved in faces or events objects can be used for filtration of the corresponding objects during requests execution.

Descriptors are required for matching operations. You cannot compare two faces or bodies without their descriptors.

Raw descriptors usage

LP provides the possibility to use external descriptors in requests. The descriptor should be received using the VisionLabs software and the required NN version.

In this case the descriptor is provided in the request as a file or Base64 encoded.

An external raw descriptor can be used as reference in the following resources:

An external raw descriptor can be used as a candidate in the following resources:

Attributes creation and saving#

Attributes can be created when sending requests on the following resources:

The "extract_basic_attributes" and "extract_descriptor" query parameters should be enabled for extraction of the corresponding data. Attributes are implicitly stored after the request execution.

The "extract_basic_attributes", "extract_face_descriptor", and "extract_body_descriptor" parameters should be enabled in the handler for extraction of the corresponding data. The "storage_policy" > "attribute_policy" > "store_attribute" option should be enabled in the handler for attributes storage.

The "extract_basic_attributes" parameter should be enabled in the verifier for extraction of the corresponding data. The "storage_policy" > "attribute_policy" > "store_attribute" option should be enabled in the verifier for attributes storage.

Attributes can be created using external descriptors and external basic attributes using the following resource:

Attributes time to live#

Attributes have a TTL. After the TTL expiration, attributes are automatically deleted. Hence, it is not required to delete attributes manually.

The default TTL value can be set in the "default_ttl" parameter in the Configurator service. The maximum TTL value can be set in the "max_ttl" parameter in the Configurator service.

TTL can be directly specified in the requests on the following resources:

  • "/extractor" in the "ttl" query parameter.
  • "/handlers" in the "storage_policy" > "attribute_policy" > "ttl" parameter.
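An illustrative "storage_policy" fragment with an attribute TTL (the field names follow the "attribute_policy" spelling used elsewhere in this document, and the "ttl" unit is assumed to be seconds; verify both against the "/handlers" schema):

```python
# Hypothetical handler storage_policy fragment for attribute storage.
DEFAULT_TTL = 300  # the document's default attribute lifetime is 5 minutes

def attribute_storage_policy(ttl=None):
    return {"attribute_policy": {"store_attribute": 1, "ttl": ttl or DEFAULT_TTL}}

policy = attribute_storage_policy()
```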

Attributes extraction disabling#

You can disable basic attributes extraction by setting the "extract_basic_attributes" parameter to "0" in the following resources:

You can disable descriptor extraction by setting the "extract_descriptor" parameter to "0" in the following resources:

Attributes saving disabling#

You can disable the "storage_policy" > "attribute_policy" > "store_attribute" parameter in the "/handlers" resource to disable attributes storage. When this handler is used for the "/handlers/{handler_id}/events" resource, attributes are not saved even for the specified TTL period.

You can disable the "storage_policy" > "attribute_policy" > "store_attribute" parameter in the "/verifiers" resource to disable attributes storage. When this verifier is used for the "/verifiers/{verifier_id}/verifications" resource, attributes are not saved even for the specified TTL period.

Attributes deletion#

Attributes are automatically deleted after the TTL expiration.

Perform the DELETE request to "/attributes/{attribute_id}" resource to delete an attribute by its ID.

Getting information about attributes#

You can receive information about existing temporary attributes before their TTL expires.

  • Perform the GET request to "/attributes/{attribute_id}" resource to receive information about a temporary attribute by its ID.
  • Perform the GET request to "/attributes" resource to receive information about previously created attributes by their IDs.
  • Perform the GET request to "/attributes/{attribute_id}/samples" resource to receive information about all the temporary attribute samples by the attribute ID.

If the attribute's TTL has not expired, its data is returned. Otherwise, no data is returned for this attribute in the response.
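The attribute resource paths above can be sketched with a small URL builder. The base address and version prefix are hypothetical; substitute your API service address.

```python
BASE_URL = "http://127.0.0.1:5000/6"  # hypothetical API address and version prefix

def attribute_url(attribute_id: str, samples: bool = False) -> str:
    # Resource path for a temporary attribute or, optionally, its samples.
    url = f"{BASE_URL}/attributes/{attribute_id}"
    return url + "/samples" if samples else url

assert attribute_url("a-1").endswith("/attributes/a-1")
assert attribute_url("a-1", samples=True).endswith("/attributes/a-1/samples")
```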

Face object#

General information#

Faces are changeable objects that include information about a single person.

The following general data is stored in the face object:

  • Descriptor ("descriptor")
  • Basic attributes ("basic_attributes")
  • Avatar ("avatar")
  • User data ("user_data")
  • External ID ("external_id")
  • Event ID ("event_id")
  • Sample ID ("sample_id")
  • List ID ("list_id")
  • Account ID ("account_id")

See the "Faces database description" section for additional information about the face object and the data stored in it.

Attributes data can be stored in a face. Basic attributes data, descriptor data, and information about samples are saved to the Faces database and linked with the face object.

When you delete a face, the linked attributes data is also deleted.

  • Descriptor. You should specify a descriptor for the face if you are going to use the face for comparison operations.

You cannot link a face with more than one descriptor.

  • Basic attributes. They can be used for displaying information in a GUI.

The face object can also include IDs of samples used for the creation of attributes.

General face fields description:

  • "user_data". This field can include any information about the person.

  • "avatar". Avatar is a visual representation of the face that can be used in the user interface.

This field can include a URL to an external image or a sample that is used as an avatar for the face.

  • "external_id". The external ID of the face.

You can use the external ID to work with external systems.

You can use the external ID to specify that several faces belong to the same person. You should set the same external ID to these faces upon their creation.

You can get all the faces that have the same external ID using the "faces" > "get faces" request.
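As a sketch, two face-creation bodies for the same person share an external ID. The field names follow the face fields listed above, but the exact body layout is an illustrative assumption.

```python
def face_payload(external_id: str, user_data: str = "") -> dict:
    # Minimal face-creation body; field names follow the face fields above.
    return {"external_id": external_id, "user_data": user_data}

# Two faces of the same person are created with the same external ID,
# so a later "get faces" request filtered by it returns both.
front = face_payload("person-42", "front view")
profile = face_payload("person-42", "profile view")
assert front["external_id"] == profile["external_id"]
```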

  • "event_id". This field can include an ID of the event that gave birth to this face.

  • "list_id". This field can include an ID of the list to which the face is linked.

A face can be linked to one or several lists.

  • "account_id" - the account ID to which the face belongs.

  • "sample_id" - One or several samples can be linked to a face. These should be the samples used for attributes extraction. All the samples must belong to the same person. If no samples are saved for a face, you cannot update its descriptor to a new NN version.

IDs do not usually include personal information. However, they can be related to the object with personal data.

Faces usage#

Faces usually include information about persons registered in the system. Hence, they are usually required for verification (the incoming descriptor is compared with the face descriptor) and identification (the incoming descriptor is compared with several face descriptors from the specified list) purposes.

If there are no faces stored in your system, you cannot perform the following operations with these objects:

  • Matching by faces and lists, when faces are set as candidates or references in the request to the "/matcher/faces" resource.
  • Matching by faces and lists, when faces are set as candidates in the matching policy of the request to the "/handlers" resource.
  • Cross-matching tasks, when faces are set as candidates or references in the request to the "/tasks/cross_match" resource.
  • Clustering tasks, when faces are set as filters for clustering in the request to the "/tasks/clustering" resource.
  • Verification requests, when face IDs are set in query parameters in the request to the "/verifiers/{verifier_id}/verifications" resource.
  • ROC curves creation tasks ("/tasks/roc" resource).

Faces creation and saving#

Faces can be created using the following requests:

You can specify attributes for the face in one of the following ways:

- by specifying the attribute ID of the temporary attribute;
- by specifying descriptors and basic attributes (with or without samples);
- by specifying descriptors (with or without samples);
- by specifying basic attributes (with or without samples).

The last three ways are used when you need to create a face using data stored in external storage.
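The four ways above can be sketched as alternative attribute payloads. The field names (`attribute_id`, `descriptor`, `basic_attributes`) are illustrative assumptions, not the exact API spelling.

```python
def face_attributes(attribute_id=None, descriptor=None, basic_attributes=None):
    # Choose exactly one of the ways listed above; layout is illustrative.
    if attribute_id is not None:
        # Way 1: reference a temporary attribute by its ID.
        return {"attribute_id": attribute_id}
    attrs = {}
    if descriptor is not None:          # ways 2 and 3: external descriptor
        attrs["descriptor"] = descriptor
    if basic_attributes is not None:    # ways 2 and 4: external basic attributes
        attrs["basic_attributes"] = basic_attributes
    return attrs

assert face_attributes(attribute_id="tmp-1") == {"attribute_id": "tmp-1"}
assert set(face_attributes(descriptor="...", basic_attributes={"age": 30})) \
    == {"descriptor", "basic_attributes"}
```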

Getting information about faces#

You can receive information about existing faces and lists.

  • The "faces" > "get face" request enables you to receive information about a face by its ID.
  • The "face attributes" > "get face attributes" request enables you to receive information about attributes linked to a face by the face ID.
  • The "faces" > "get faces" request enables you to receive information about several faces. You can set filters to receive information about the faces of interest. You can also set targets, so only the required information about faces will be returned.
  • The "lists" > "get list" request enables you to receive information about a list by its ID.
  • The "lists" > "get lists" request enables you to receive information about several lists. You can set filters to receive information about the lists of interest.

You should specify a list ID as a filter for "faces" > "get faces" request to receive information about all the faces linked to the list.

Faces information updating#

You can update face information, e.g., face fields or attributes:

List object#

General information#

A list is an object that can include faces that correspond to a similar category, for example, customers or employees.

Lists include the "description" field that can store any required data to distinguish lists from each other.

Lists usage#

You can add faces to lists to divide them into groups.

Only faces can be added to lists.

Lists IDs are usually specified for matching operations as a filter for faces.

Lists creation and saving#

The "lists" > "create list" request enables you to create a new list.

Lists deletion#

The "lists" > "delete list" request enables you to delete a list by its ID.

The "lists" > "delete lists" request enables you to delete several lists by their IDs.

Getting information about lists#

The "lists" > "get list" request enables you to receive information about a list by its ID.

If the list was deleted, then the system will return an error when making a GET request.

The "lists" > "get lists" request enables you to receive information about all previously created lists according to the filters.

The "lists" > "get list count" request enables you to receive information about the number of previously created lists according to the filters.

Lists information updating#

The "lists" > "update list" request enables you to update the "user_data" field of the list.

Event object#

General information about events#

Events are immutable objects that include information about a single face and/or body. They are received after image processing using handlers or created manually.

Unlike faces, events cannot be changed after creation. The only exception is the event descriptor. It can be updated to the new neural network version.

Generally, an event is created for each face/body detected in the image. If the detected face and body belong to the same person, they are saved to the same event.

LP also provides the possibility to create new events manually, without processing by handlers. This is used in cases when the logic for filling in event fields should differ from the handler logic, for example, when you want to extract descriptors only for a part of the detections rather than for all of them.

An event can be linked with a descriptor stored in the Events database.

The following general data is stored in the event object:

  • "source". This field can include the source from which the face or human body was received. The value is specified during the "generate events" request execution.
  • "location". This group of parameters includes information about the location where the event occurred. The values are specified during the "generate events" request execution. The following fields are included in this group:

    • "city"
    • "area"
    • "district"
    • "street"
    • "house_number"
    • "geo_position" - latitude and longitude.
  • "tag". This field includes a tag (or several tags) applied during the "conditional_tags_policy" execution. The tag(s) can be specified during the "create handler" request execution or the "generate events" request execution.

  • "emotion". This field includes the predominant emotion estimated for the face. If necessary, you can disable the "estimate_emotions" parameter in the "/handlers" resource or "create verifier" request.
  • "insert_time". This field includes the time when the face appeared in the video stream. This data is usually provided by external systems.
  • "top_similar_object_id". This field includes the top similar object ID.
  • "top_similar_external_id". This field includes the external ID of the top similar candidate (event or face) with which the face is matched.
  • "top_matching_candidates_label". This field includes a label applied during the "match_policy" execution. The labels are specified in this policy in the "/handlers" resource.
  • "face_id". Events include the ID of the face to which the event gave birth.
  • "list_id". Events include the ID of the list to which the created face was attached.
  • "external_id". This external ID will be specified for the face created during the event request processing. The value is specified during the "generate events" request execution.
  • "user_data". This field can include any information. User data will be specified for the face created during the event request processing. The value is specified during the "generate events" request execution.
  • "age", "gender", and "ethnic_group" can be stored in the event. The basic attributes extraction is specified in the "extract_policy" of the "/handlers" resource.
  • face and body descriptors. Events can be used for matching operations as they include descriptors.
  • information about faces, bodies and events that were used for matching with the events.
  • information about faces and human bodies detection results that include:
  • the IDs of created samples.
  • the information about bounding boxes of detected faces/bodies.
  • "detect_time" - the time of face/body event detection. The time is returned from an external system.
  • "image_origin" - the URL of the source image where the face/body was detected.

IDs do not usually include personal information. However, they can be related to the object with personal data.

See the "Events database description" section for additional information about the event object and the data stored in it.

Attributes aggregation for events

It is possible to enable attributes aggregation for the incoming images when creating an event.

According to the specified settings, faces and bodies are detected in the incoming images. If faces are detected and aggregated descriptor creation is specified, a single descriptor is created using all the found faces. The same logic is used for aggregated body descriptor creation. In addition, the basic attributes are aggregated, as are the values obtained for Liveness, masks, emotions, upper and lower body, and body accessories.

All the information about the detected face/body and estimated properties is returned in the response separately for each image. The aggregated results are stored when an event is saved in the DB. The information about face/body detection is added to the Events database as a separate record for each image from the request.

When performing aggregation, the images in the request to the "/handlers/{handler_id}/events" resource should include only one face/body and the face/body should belong to the same person.
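An aggregated event-generation request can be sketched as a path plus query parameters. The `aggregate_attributes` parameter name is an assumption for illustration; check the API reference for the exact spelling.

```python
def generate_events_request(handler_id: str, aggregate: bool):
    # Build the resource path and query for "/handlers/{handler_id}/events".
    # "aggregate_attributes" is a hypothetical parameter name.
    path = f"/handlers/{handler_id}/events"
    params = {"aggregate_attributes": int(aggregate)}
    return path, params

path, params = generate_events_request("h-1", aggregate=True)
assert path == "/handlers/h-1/events"
assert params["aggregate_attributes"] == 1
```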

Events usage#

Events are required to store information about persons' occurrences in a video stream for further analysis. An external system processes a video stream and sends frames or samples with detected faces and bodies.

LP processes these frames or samples and creates events. As events include statistics about face detection time, location, basic attributes, etc., they can be used to collect statistics about the person of interest or gather general statistics using all the saved events.

Events provide the following features:

  • Matching by events

As events store descriptors, they can be used for matching. You can match the descriptor of a person with the existing events to receive information about the person's movements.

To perform matching, you should set events as references or candidates for matching operations (see section "Descriptors matching").

  • Notifications via web sockets

You can receive notifications about events using web sockets. Notifications are sent according to the specified filters. For example, when an employee is recognized, a notification can be sent to the turnstile application and the turnstile will be opened.

See the "Sending notifications via Sender" section.

  • Statistics gathering

You can gather statistics on the existing events using a special request. It can be used to:

  • Group events by frequency or time intervals.
  • Filter events according to the values of their properties.
  • Count the number of created events according to the filters.
  • Find the minimum, maximum and average values of properties for the existing events.
  • Group existing events according to the specified values of properties.

You should save generated events to the database if you need to collect statistics after the event is created.

See the "events" > "get statistics on events" request in "APIReferenceManual.html" for details.

  • User defined data

You can add custom information to events, which can later be used to filter events. The information is passed in JSON format and written to the Events database.

See "Events meta-information" for details.

Events creation and saving#

Events are created during the execution of requests to the "/handlers/{handler_id}/events" resource. The "event_policy" > "store_event" parameter should be set to "1" when creating a handler using the "/handlers" resource.

For manual event creation and saving, use the "/handlers/{handler_id}/events/raw" resource.

The format of the generated event is similar to the format returned by the "/handlers/{handler_id}/events" resource. The "event_id" and "url" fields are not specified when creating a request. They are returned in the response after the event is created.

Notifications using web sockets are sent when events are created using this resource.

Events deletion#

Events are deleted only using the garbage collection task. You should specify filters; all the events corresponding to the filters are deleted.

To delete events, execute the POST request to the "/tasks/gc" resource.

You can also manually delete the required events from the database.
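The filter part of a garbage collection request can be sketched as follows. The filter names mirror the event filters used elsewhere in this document; the exact "/tasks/gc" body layout is an illustrative assumption.

```python
def gc_task_filters(handler_id: str, create_time_lt: str) -> dict:
    # Delete all events produced by a handler before a cutoff time.
    # Field names are assumptions modeled on the event filters above.
    return {"handler_id": handler_id, "create_time__lt": create_time_lt}

filters = gc_task_filters("h-1", "2023-01-01T00:00:00Z")
assert filters["create_time__lt"] == "2023-01-01T00:00:00Z"
```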

Getting information about events#

You can receive information about existing events.

If the event was deleted, then the system will return an error when making a GET request.

  • The "events" > "get events" request enables you to receive information about all previously created events according to the specified filters. By default, events are filtered for the last month from the current date. If any of the following filters are specified, then the default filtering will not be used.

    • list of event IDs (event_ids);
    • lower event ID boundary (event_id__gte);
    • upper event ID boundary (event_id__lt);
    • lower create time boundary (create_time__gte);
    • upper create time boundary (create_time__lt).
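The default-filter behavior described above can be sketched as: the last-month window applies only when none of the listed filters is present. The placeholder default stands in for the actual timestamp computed by the service.

```python
def events_query(filters: dict) -> dict:
    # Apply the default last-month window only when none of the
    # overriding filters from the list above is set.
    overriding = {"event_ids", "event_id__gte", "event_id__lt",
                  "create_time__gte", "create_time__lt"}
    query = dict(filters)
    if not overriding & query.keys():
        query["create_time__gte"] = "<one month ago>"  # placeholder default
    return query

assert "create_time__gte" in events_query({"source": "cam-1"})
assert "create_time__gte" not in events_query({"event_ids": ["e-1"]})
```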

Handler and verifier objects#

General information about handlers and verifiers#

Handlers are objects that include information about the incoming data processing. Each handler includes a set of policies for receiving the required information.

Handlers are used to generate events according to the specified policies.

Handlers are stored in the Handlers database.

Verifiers are handlers created for incoming images verification purposes. They include only part of standard handler policies and parameters and are stored in the same database. The threshold for successful verification is specified in the verifiers.

See the "Handlers database" description for detailed information about the data stored in the Handlers database.

Handlers and verifiers provide the possibility to process images without saving any objects to the databases. To do this, disable objects saving in the "storage_policy".

Handlers usage provides the following features:

  • Two requests for the basic images processing

You should create a handler using the "create handler" request and then specify it for events creation in the "generate events" request. Thus, only the "generate events" request is required for image processing according to the predefined logic.

You should create a verifier using the "create verifier" request and then specify it for performing verification in the "perform verification" request. Thus, only the "perform verification" request is required for image processing according to the predefined logic.

  • All processing rules in a single place

You can easily edit the existing handler or create a new one for a new task. Hence, there is no need to create and configure several different requests to perform basic operations with images.
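The two-request flow described above can be sketched as a pair of method/path tuples. The first step runs once; the second is repeated for each batch of images.

```python
def basic_processing_flow(handler_id: str):
    # The two requests needed for basic image processing.
    return [
        ("POST", "/handlers"),                       # 1. create the handler once
        ("POST", f"/handlers/{handler_id}/events"),  # 2. generate events with it
    ]

steps = basic_processing_flow("h-1")
assert steps[0] == ("POST", "/handlers")
assert steps[1] == ("POST", "/handlers/h-1/events")
```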

See the detailed description of handlers and verifiers in the "Handlers service" section.

Handlers and verifiers creation and saving#

Handlers are created using the POST request on the "/handlers" resource.

Verifiers are created using the POST request on the "/verifiers" resource.

Handlers and verifiers deletion#

Handlers are deleted using the DELETE request on the "/handlers/{handler_id}" resource.

Verifiers are deleted using the DELETE request on the "/verifiers/{verifier_id}" resource.

Getting information about handlers and verifiers#

You can receive information about existing handlers and verifiers.

If the handler or verifier was deleted, then the system will return an error when making a GET request.

  • The "handlers" > "get handlers" request enables you to receive information about all previously created handlers according to the specified filters.

  • The "verifiers" > "get verifiers" request enables you to receive information about all previously created verifiers according to the specified filters.

  • The "handlers" > "get handler count" request enables you to receive information about the number of previously created handlers according to the specified filters.

  • The "verifiers" > "count verifiers" request enables you to receive information about the number of previously created verifiers according to the specified filters.

Handler and verifier updating#

You cannot update a part of a handler or verifier, so you must specify all the fields for your object.

Sdk resource#

The sdk resource allows you to detect faces and/or human bodies and estimate attributes in input images. After the request is performed, the received data is not saved to the database or Image Store; it is only returned in the response.

The sdk resource combines the capabilities of other resources and also provides additional options. For example, the "/sdk" request enables you to:

  • estimate presence of glasses in the image;
  • estimate liveness in the image;
  • estimate face/body parameter(s);
  • aggregate face/body parameter(s);
  • create a face sample and return it in the Base64 format;
  • create a human body sample and return it in the Base64 format;
  • extract face and body descriptor(s) and return them in the response;
  • set descriptor quality score threshold to filter out images that are poorly suited for further processing;
  • filter by mask states;
  • and more.
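A query for some of the options above can be sketched as follows. The parameter names (`estimate_liveness`, `estimate_glasses`, `score_threshold`) are illustrative assumptions, not the exact API spelling.

```python
def sdk_query(estimate_liveness=False, estimate_glasses=False, score_threshold=None):
    # Build query parameters for an "/sdk" request; names are assumptions.
    params = {}
    if estimate_liveness:
        params["estimate_liveness"] = 1
    if estimate_glasses:
        params["estimate_glasses"] = 1
    if score_threshold is not None:
        # Filters out images poorly suited for further processing.
        params["score_threshold"] = score_threshold
    return params

assert sdk_query(estimate_liveness=True) == {"estimate_liveness": 1}
assert sdk_query(score_threshold=0.9)["score_threshold"] == 0.9
```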

Descriptors matching#

Descriptor comparison operations are called matching. Given a descriptor, LP enables you to search similar-looking faces or bodies in the database by matching the given descriptor with the stored descriptors.

See the "Matching services" section for details about Matching services.

Matcher service enables you to compare descriptors and receive their similarity score. The similarity score value is between 0 and 1. A high similarity score means that the two descriptors belong to the same person.

There are two types of descriptors: face descriptors and body descriptors. Body descriptor matching is not as accurate as face descriptor matching. With a large number of candidate events, the probability of falsely determining the best matches is higher than with a large number of candidate faces.

Matching example

The sources of matched objects are represented as references (the objects you want to compare) and candidates (the set of objects that you want to compare with). Each reference will be matched with each of the specified candidates.

You can select the following objects as references and candidates.

References (specified using the array of IDs):

  • Attributes
  • Faces
  • External IDs of faces and events
  • Events (face or body)
  • Event track IDs
  • Descriptors

Candidates (specified using filters):

  • Faces
  • Events (face or body)
  • Attributes

The references are specified using IDs of the corresponding objects. If a non-existent reference is set (for example, a non-existent ID is set in the "event_id" field or the "face_id" field), the corresponding error is returned.

The candidates are specified using filters. Matching results are returned for the candidates that correspond to the specified filters. If none of the candidates corresponds to the filters (for example, a non-existent ID is set in the "event_ids" field or the "face_ids" field), there will be no matching result and no error returned. The result field will be empty.

Matching cannot be performed if there is no descriptor for any of the compared faces.

Several examples for events and faces filters are given below. See the "matcher" > "matching faces" and "matcher" > "human body matching" sections in the API service reference manual for details about all the available filters.

For events filtration one can:

  • Specify camera ("source" field) and a period ("create_time__gte" and "create_time__lt" or "end_time__gte" and "end_time__lt" fields);
  • Specify tags ("tags" field) for events as filters and perform matching for events with these tags only;
  • Specify geographical area and perform matching with the events created in this area only. You can specify direct location (city, area, district, street, house) or a zone specified using geographical coordinates.

For faces filtration one can:

  • Specify a list of external face IDs to perform matching with them;
  • Specify a list ID to perform matching by list.
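A matching request combining the two sides can be sketched as a body with references given by ID and candidates given by filters. The exact field layout is an illustrative assumption; see the "matcher" sections of the API reference for the real schema.

```python
def match_faces_body(reference_face_ids, candidate_list_id):
    # References go by ID, candidates by filters; layout is illustrative.
    return {
        "references": [{"type": "face", "id": face_id}
                       for face_id in reference_face_ids],
        "candidates": {"filters": {"list_id": candidate_list_id}},
    }

body = match_faces_body(["f-1", "f-2"], "l-1")
assert len(body["references"]) == 2          # each reference is matched with
assert body["candidates"]["filters"]["list_id"] == "l-1"  # every candidate
```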

Raw matching#

You can set raw descriptors as references and candidates using the "raw matching" request.

All the descriptor data is provided with the request; hence, you can use this request when you need to match descriptors that are not stored in LUNA PLATFORM databases.

See the "matcher" > "raw matching" request in the API service reference manual for details.
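A raw matching body can be sketched as below. The field names and Base64 encoding are illustrative assumptions; the descriptor bytes are placeholders for descriptors produced by VisionLabs software.

```python
import base64

def raw_matching_body(reference: bytes, candidates: list) -> dict:
    # All descriptor data travels in the request itself (nothing is
    # read from LP databases); layout is an illustrative assumption.
    encode = lambda d: base64.b64encode(d).decode("ascii")
    return {
        "references": [{"descriptor": encode(reference)}],
        "candidates": [{"descriptor": encode(c)} for c in candidates],
    }

body = raw_matching_body(b"\x01", [b"\x02", b"\x03"])
assert len(body["candidates"]) == 2
```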

Verification#

You can use the "/verifiers" resource to create a special handler for verification. It includes several policies for the incoming images processing. See the "Verifiers description" section for details on the verifier.

Generally, this request is used to match one given object with the incoming object. Use other matching requests for identification tasks.

Matching a large set of descriptors#

Sometimes it is necessary to match a very large set of descriptors (candidates). When matching a large set of descriptors by classical brute-force matching, it is impossible to get a low latency with a high number of requests per second. Therefore, it is required to use approximation methods implemented in LIM that exchange some accuracy for high speed. These methods speed up the matching by building an index containing preprocessing data.

The module works as follows. The index is created by a task containing a "list_id" with linked faces, against which the matching will be performed. Alternatively, you can omit "list_id" and configure automatic indexing of all lists whose number of faces exceeds the value specified in a dedicated setting. After the index is created, the user sends a request to the API service, which redirects it to the Python Matcher Proxy service. The Python Matcher Proxy service determines where the matching will be performed: in the Python Matcher service or in the Indexed Matcher service. After matching, the response is returned to the user.

LUNA Index Module is licensed separately and is available on request to VisionLabs. The module delivery contains all the necessary documentation and the Docker Compose script.

Sending notifications via Sender#

You can send notifications about created events to third-party applications via web sockets. For example, you can configure LP to send notifications about VIP guests' arrival to your mobile phone.

When LP creates a new event, the event is added to the special DB. Then the event can be sent to the Sender service if the service is enabled.

The third-party application should be subscribed to Sender via web sockets. A filter for the events of interest should be set for each application. Thus, you will receive notifications only for the required events.

The notification about a new event is sent by Sender to the required applications.

Sender workflow

See section "Sender service" for details about the Sender service.

Licenses#

The Licenses service provides information about license terms to LP services.

See section "Licenses service" for details.

Admin#

The Admin service implements tools for administrative routines.

All the requests for Admin service are described in AdminReferenceManual.html.

See section "Admin service" for details about the Admin service.

Configurator#

The Configurator service includes the settings of all the LP services. Thus, it provides the possibility to configure all the services in one place.

See section "Configurator service" for details about the Configurator service.

Tasks#

Tasks requests provide additional possibilities for processing large amounts of data.

The larger the processed data array, the longer it takes to finish the task. When a task is created, you receive the task ID as the result of the request.

See the "Tasks service" section for details about tasks processing.

Clustering task#

The clustering task provides the possibility to group events and faces that belong to the same person into clusters.

You can specify filters for choosing the objects to be processed.

Using the clustering task you can, for example, receive all the IDs of events that belong to the same person and which occurred during the specified period of time.

See the "task processing" > "clustering task" request for details about the Clustering task request creation.

Reporter task#

The reporter task enables you to receive a report in CSV format with extended information about the objects grouped to clusters.

You can select columns that should be added to the report. You can receive images corresponding to each of the IDs in each cluster in the response.

See the "task processing" > "reporter task" request for details about the reporter task request creation.

Exporter task#

The exporter task enables you to collect event and/or face data and export them from LP to a CSV file.

The file rows represent requested objects and corresponding samples (if they were requested).

See the "task processing" > "exporter task" request for details about the exporter task request creation.

Cross-matching task#

Cross-matching means that a large number of references can be matched with a large number of candidates. Thus every reference is matched with every candidate.

Both references and candidates are specified using filters for faces and events.

See the "task processing" > "cross-matching task" request in the API service reference manual for details about the cross-matching task creation.

Linker task#

The linker task enables you to:

  • link the existing faces to lists
  • create faces from the existing events and link the faces to lists

The linked faces are selected according to the specified filters.

See the "task processing" > "linker task" request in the API service reference manual for details about the linker task creation.

Estimator task#

The estimator task enables you to perform batch processing of images using the specified policies.

In the request body, you can specify the handler_id of an already existing static or dynamic handler. For a dynamic handler_id, you can set the required policies in the request. In addition, you can create a static handler by specifying policies in the request.

The resource can accept five types of sources with images for processing:

  • ZIP archive
  • S3-like storage
  • Network disk
  • FTP server
  • Samba network file system

See the "task processing" > "estimator task" request in the API service reference manual for details about the estimator task creation.

Backports#

In LP 5, Backport is a mechanism that enables the new platform version to mimic older versions.

The backports are implemented for LUNA PLATFORM 3 and LUNA PLATFORM 4 by means of the Backport 3 and Backport 4 services respectively.

The backports enable you to send requests that are similar to the requests from LUNA PLATFORM 3 and LUNA PLATFORM 4 and receive responses in the appropriate format.

There are some nuances when using backports.

  • Data migration to Backports is complicated.

Database structures of LUNA PLATFORM 3 and LUNA PLATFORM 4 differ from the database structure of LUNA PLATFORM 5. Therefore, additional steps should be performed for data migration and errors may occur.

  • Backports have many restrictions.

Not all logic from LP 3 and LP 4 can be supported in Backports. See the "Restrictions for working with Backport services" section for more information.

  • Backports are limited by available features and are not developed.

New features and resources of LUNA PLATFORM 5 are not supported when using Backports and they will never be supported. Thus, these services should only be used if it is not possible to perform a full migration to LUNA PLATFORM 5 and no new features are required.

Use case:

For example, you have a frontend application that sends requests to LUNA PLATFORM 3.

When you use the Backport 3 service, the requests to LP 3 are received by the service. They are formatted and sent to LUNA PLATFORM 5 according to its API format. LP 5 processes the request and sends the response to Backport 3. The Backport 3 service formats all the received results to the LP 3 format and sends the response.

LP3 vs Backport 3 and LP 5

Restrictions for working with Backport services#

As the tables of the saved data and the data itself differ between LP versions, there are features and restrictions on the execution of requests. The information about these features and restrictions is given in the "Backport 3" and "Backport 4" sections of this document.

You can use a Backport service and the API service of LP 5 simultaneously, following these restrictions:

  • When you use Backport services, it is strongly recommended not to use the same account for requests to Backport service and the API service of LUNA PLATFORM 5.

  • Perform the admin requests that do not use an account ID to LUNA PLATFORM 5 only.