General concepts#
The following sections explain general LP concepts and describe the existing services and data types. They do not describe all the requests, database structures, and other technical nuances.
LP consists of several services. All the services communicate via HTTP requests: a service receives a request and always returns a response.
You can find information about LUNA PLATFORM 5 architecture in the "Interaction of LP Services" section.
All the services can be divided into general and optional. You cannot launch and use LP without the general services, whereas the optional services provide extra features and are not mandatory. Most of the services have their own database or file storage.
General services
Service | Description | Database |
---|---|---|
API | The main gateway to LP. Receives requests, distributes tasks to other LP services | |
Handlers | Detects faces in images, extracts face properties and creates samples. Extracts descriptors from samples. Extracts basic attributes of the images. Creates and stores handlers | PostgreSQL/ Oracle |
Python Matcher | Performs matching tasks | |
Faces | Creates faces, lists, and attributes. Stores these objects in the database. Allows other services to receive the required data from the database | PostgreSQL/ Oracle, Redis |
Image Store | Stores samples, reports about long tasks execution, created clusters and additional metadata | Local storage/ Amazon S3 |
Events | Stores data on the generated events in the database. This service can be disabled, but it is recommended to use it, if you are going to save events | PostgreSQL |
Licenses | Checks your license conditions and returns information about them | |
Admin | Enables you to perform general administrative routines | PostgreSQL/ Oracle |
Configurator | Stores all configurations of all the services in a single place | PostgreSQL/ Oracle |
Tasks | Performs long tasks, such as garbage collection, extraction of descriptors with a new neural network version, clustering | PostgreSQL/ Oracle |
Tasks Worker | Performs the internal work of the Tasks service | PostgreSQL/ Oracle |
Optional services
Service | Description | Database |
---|---|---|
Sender | Sends notifications about created events via web-socket. | Redis |
Liveness | The service is used for Liveness V1 utilization. It enables LP to detect presentation attacks. This service requires an additional license. | |
Backport 3 | The service is used to process LUNA PLATFORM 3 requests using LUNA PLATFORM 5. | PostgreSQL/ Oracle |
Backport 4 | The service is used to process LUNA PLATFORM 4 requests using LUNA PLATFORM 5. | |
User Interface 3 | User Interface is used to visually represent the features provided with the Backport 3 service. It does not include all the functionality available in LP 3. | |
User Interface 4 | User Interface is used to visually represent the features provided with the Backport 4 service. It does not include all the functionality available in LP 4. |
Services for index building and search by index
Service | Description | Database |
---|---|---|
Index Manager | Forms tasks for index building and coordinates the process of index delivery to Indexed Matcher servers. | PostgreSQL |
Indexer | Creates indexes based on a descriptors' list. | |
Indexed Matcher | Searches by indexes. | |
Matcher Daemon | Copies the index from the server, on which Indexer is installed, to the server for search by an index (Indexed Matcher). Restarts Indexed Matcher with a new index generation. Matcher Daemon is installed on the Indexed Matcher server. | |
Message queue | A message queue is used for interaction with the index services. By default, RabbitMQ is used. | |
Python Matcher Proxy | The service manages matching requests and routes them to Python Matcher or Indexed Matcher. |
There are several third-party services that are usually used with LP.
These services are not described in this document. See the corresponding documentation provided by the services vendors.
Third-party services
Function | Description | Supported services |
---|---|---|
Balancer | Balances requests between LP services when several instances of the same service are launched. For example, a balancer can distribute requests between API services or between two LP contours upon scaling. | NGINX |
Monitoring system | The monitoring system is used to monitor and control the number of processes launched for LP. | Supervisor |
Monitoring database | The database is used for monitoring purposes. | Influx |
Monitoring visualization | Monitoring visualization is represented by a set of dashboards. You can evaluate the total load on the system or the load on individual services. | Grafana |
Log rotation service | All LP services write logs and their size may grow dramatically. Log rotation service is required to delete outdated log files and free disk space. | Logrotate, etc. |
Accounting#
All the data stored in LUNA PLATFORM databases belongs to a particular account ID, which is represented by the "Luna-Account-Id" header provided with each request that changes the database. If the "Luna-Account-Id" header is not set in such a request, an error with status code 403 is returned.
Created objects (attributes, faces, lists, handlers, events) have an account mark (account id).
The account header limits all user actions (matching, removing, updating, and others) to data with the corresponding account ID. For example, it is impossible to attach a face with one account ID to a list with another account ID.
"Luna-Account-Id" must be unique for each user.
"Luna-Account-Id" has a UUID format. You can generate "Luna-Account-Id" using any existing tool or even specify it manually according to the corresponding format.
You can use the account ID to build your own authorization system and provide access restriction for different users.
We assume that the question of authorization and security is resolved by an organization that installs LP. "Account ID" just provides a tool to restrict access to the data.
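As a minimal sketch of this convention (the header name comes from this section; the helper function itself is hypothetical), a client could generate and validate an account ID before attaching it to requests:

```python
import uuid

def make_account_headers(account_id=None):
    """Build request headers carrying the "Luna-Account-Id".

    Any value in UUID format is acceptable as an account ID, so a
    new UUID is generated when no ID is supplied. An invalid value
    raises ValueError before the request is ever sent.
    """
    if account_id is None:
        account_id = str(uuid.uuid4())
    uuid.UUID(account_id)  # validate the required UUID format
    return {"Luna-Account-Id": account_id}
```

Requests that change the database would then pass these headers, keeping all created objects bound to that account ID.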
Source image processing and samples creation#
LP is designed to work with photos (either still photographs or individual video frames). LP receives a photo via an HTTP request to the API service.
The photo must be in one of the standard formats (JPG, PNG, BMP, PORTABLE PIXMAP, TIFF) or encoded into BASE64.
The general image processing steps are:
- Face detection. You can configure Handlers for:
- the processing of images with several faces;
- the processing of images with a single face;
- the searching for a face of the best quality in the image.
- Face parameters estimation. The estimation of each group of parameters is specified in the request query parameters.
- Sample creation. The sample corresponds to the specific format and can be further processed by LP services.
All these steps are performed by the Handlers service.
After creating a sample, it is assigned a special "sample_id", which is used for further processing of the image.
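Since a photo may be sent either as a raw file or encoded into Base64, a client-side sketch of preparing the Base64 variant might look like this (the function name is our own, not part of the LP API):

```python
import base64

def encode_image_for_request(image_bytes):
    """Encode raw image bytes (JPG, PNG, BMP, etc.) into a Base64
    string suitable for embedding in a request body."""
    return base64.b64encode(image_bytes).decode("ascii")

# A real request would read the bytes from an actual image file:
# encode_image_for_request(open("photo.jpg", "rb").read())
```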
The Handlers service can estimate face parameters for each detected face upon a request. The following parameters can be estimated:
- Gaze direction (yaw and pitch angles for both eyes);
- Bounding box (height, width, "X" and "Y" coordinates). Bounding box is an area with certain coordinates where the face was found;
- Head pose (yaw, pitch and roll angles);
- Five or sixty-eight landmarks. Landmarks are special characteristic points of a face. They are used to estimate face parameters. The number of required landmarks depends on the parameters you want to estimate;
- Image quality (light, dark, blurriness, illumination and specularity);
- Mouth attributes (occlusion, smile probability);
- Mask estimation (medical or fabric mask is on the face, mask is missing, the mouth is occluded);
- Emotion probability (anger, disgust, fear, happiness, neutral, sadness, surprise).
- Other.
A complete list of the estimated parameters and their description are given in the "Face and image parameters" section.
These parameters are not stored in the database. You can receive them in the response only.
See the "detect faces" request in "../ReferenceManuals/APIReferenceManual.html" for details about the detection request creation.
Read more about face detection and Handlers service in the "Handlers service" section.
External samples saving#
You can also save an external sample in LP and use it for further processing. See the "save samples" request in "../ReferenceManuals/APIReferenceManual.html".
See the "Image Store service" section for details about samples storage.
Descriptor extraction and attribute creation#
Samples are used for descriptors extraction. Descriptors are sets of features extracted from faces in the images. Descriptors are used to compare faces.
You cannot compare two faces if there are no extracted descriptors for them. For example, when you need to compare a face from your database with an incoming face image, you must extract a descriptor from this image.
In addition to descriptors, you can extract the basic attributes of the face: age, gender, ethnicity (see section "Attribute object").
All the extraction operations are performed by the Handlers service.
The descriptor and basic attributes received from the same image are saved in the database as an "attribute" object. The object can include both the descriptor and basic attributes or only one of them.
You can extract descriptors and basic attributes using several samples of the same face at once. In this case you receive an aggregated descriptor and aggregated basic attributes.
The accuracy of comparison using an aggregated descriptor is higher, and the estimation of basic attributes is more precise. Generally, aggregation is useful when working with images from web cameras.
An aggregated descriptor can only be created from images, not from already created descriptors.
See the "Basic attributes and descriptors extraction" section for details.
The "/extract" resource is used to estimate basic attributes and extract descriptors. See the "attributes" > "extract attributes" request in "../ReferenceManuals/APIReferenceManual.html".
You can store attributes in an external database outside LP and use them in LP when it is required only. See section "Create attribute using external data".
All the created attributes are temporary and are stored in the database for a limited period (see section "Temporary attributes"). Temporary attributes are deleted after the specified period expires, so it is not required to delete this data manually.
To keep the attribute data in the database permanently, you should create a face using the existing attribute.
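The extraction options described above are passed as query parameters. A sketch of assembling such a query string (parameter names are taken from this document; the exact resource schema should be verified in the API reference manual):

```python
from urllib.parse import urlencode

def build_extract_query(extract_basic_attributes=True, extract_descriptor=True):
    """Build the query string for an attributes extraction request.

    Flags are sent as "0"/"1" values, mirroring the query
    parameters described in this document.
    """
    params = {
        "extract_basic_attributes": int(extract_basic_attributes),
        "extract_descriptor": int(extract_descriptor),
    }
    return urlencode(params)

# e.g. "/extract?" + build_extract_query(extract_descriptor=False)
```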
Stored and estimated data#
This section describes data estimated and stored by LUNA PLATFORM 5.
This information can be useful when utilizing LUNA PLATFORM 5 according to the European Union legal system and GDPR.
This section does not describe the legal aspects of personal data utilization. You should consider which stored data can be interpreted as personal data according to your local laws and regulations.
Note that combinations of LUNA PLATFORM data may be interpreted by law as personal data, even if data fields separately do not include personal data. For example, a Face object including "user_data" and descriptor can be considered personal data.
Objects and data are stored in LP upon performing certain requests. Hence, you should disable unnecessary data storage when executing these requests.
It is recommended to read and understand this section before making a decision on which data should be received and stored in LUNA PLATFORM.
This document considers usage of the resources listed in the APIReferenceManual document and creation of LUNA PLATFORM 5 objects only. The data created using Backport 3 and Backport 4 services is not considered in this section.
Source images#
General information#
Photo images are general data sources for LP. They are required for samples creation and Liveness check.
You can provide images themselves or URLs to images. The images should be in permissible formats only (JPG, PNG, BMP, PORTABLE PIXMAP, TIFF).
It is not recommended to send rotated images to LP, as they are not processed properly and should be rotated first. You can rotate an image using LP in two ways:
- by enabling the "use_exif_info" auto-orientation query parameter, which orients the image based on its EXIF data;
- by enabling the "LUNA_HANDLERS_USE_AUTO_ROTATION" auto-orientation setting in the Configurator service.
More information about working with rotated images can be found in the Nuances of working with services section.
Source images usage#
Source images can be specified for processing when performing POST requests on the following resources:
- "/detector".
- "/handlers/{handler_id}/events".
- "/verifiers/{verifier_id}/verifications".
- "/liveness".
Source images saving#
Generally, it is not required to store source images after they are processed. They can be optionally stored for system testing purposes or business cases, for example, when the source image should be displayed in a GUI.
Source images can be stored in LP:
- using the POST request on the "/images" resource.
- during the POST request on the "/handlers/{handler_id}/events" resource. Source images are stored if the "store_image" option is enabled for "image_origin_policy" of the "storage_policy" during the handler creation using the "/handlers" resource.
Source images are stored in the "visionlabs-image-origin" bucket of the Image Store service.
Source images deletion#
Source images are stored in the bucket for an unlimited period.
You can delete source images from the bucket:
- using the DELETE request on the "/images/{image_id}" resource.
- manually, by deleting the corresponding files from the bucket.
Sample object#
Samples usage#
Separate samples are created for face and body.
Samples are required:
- for basic attributes estimation.
- for face and image parameters estimation.
- for face and body descriptors extraction.
- when changing the descriptors NN version.
When the neural network version is changed, you cannot use descriptors of the previous version. A descriptor of the new version can be extracted only if the source sample is preserved.
Samples can also be used as avatars for faces and events. For example, when it is required to display the avatar in GUI.
Samples are stored in buckets of the Image Store.
Samples should be stored until all the required requests for face parameters estimation, basic attributes estimation, and descriptors extraction are finished.
Samples creation and saving#
Samples are created upon the face and/or human body detection in the image. Samples are created during the execution of the following requests:
- POST on the "/detector" resource. Samples are created and stored implicitly. The user does not affect their creation.
- POST on the "/handlers/{handler_id}/events" resource. Samples are stored if the "store_sample" option is enabled for "face_sample_policy" and "body_sample_policy" of the "storage_policy" during the handler creation using the "/handlers" resource.
- POST on the "/verifiers/{verifier_id}/verifications" resource. Samples are stored if the "store_sample" option is enabled for "face_sample_policy" of the "storage_policy" during the verifier creation using the "/verifiers" resource. Human body samples are not stored using this resource.
External samples saving
Samples can be provided to LP directly from external VisionLabs software (e. g., FaceStream). The software itself performs sample creation from the source image.
You can manually store external samples in LP (an external sample should be provided in the requests):
- Using the POST request on the "/samples/{sample_type}" resource.
- When the "warped_image" option is set for the POST request on the "/detector" resource.
- When the "image_type" option is set to "1" (face sample) or "2" (human body sample) in the query parameters for the POST request on the "/handlers/{handler_id}/events" resource. The "store_sample" option should be enabled for "face_sample_policy" and "body_sample_policy" of the "storage_policy".
Samples are stored in the Image Store storage for an unlimited period.
Samples are stored in buckets:
- "visionlabs-samples" is the name of the bucket for faces samples.
- "visionlabs-bodies-samples" is the name of the bucket for human bodies samples.
Paths to the buckets are specified in the "bucket" parameters of "LUNA_IMAGE_STORE_FACES_SAMPLES_ADDRESS" and "LUNA_IMAGE_STORE_BODIES_SAMPLES_ADDRESS" sections in the Configurator service.
Samples saving disabling#
You cannot disable sample saving for the request on the "/detector" resource. You can delete the created samples manually after the request execution.
The "/handlers" resource provides "storage_policy" that enables you to disable the saving of the following objects:
- Face samples. Set "face_sample_policy" > "store_sample" to "0".
- Human body samples. Set "body_sample_policy" > "store_sample" to "0".
The "/verifiers" resource provides "storage_policy" that enables you to disable the saving of the following objects:
- Face samples. Set "face_sample_policy" > "store_sample" to "0".
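A hypothetical fragment of a handler creation request body that disables both sample types (key names follow this section; the full "/handlers" schema is in the API reference manual):

```python
import json

# "storage_policy" fragment disabling face and body sample saving.
storage_policy = {
    "face_sample_policy": {"store_sample": 0},
    "body_sample_policy": {"store_sample": 0},
}

handler_body_fragment = json.dumps({"storage_policy": storage_policy})
```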
Samples deletion#
You can use the following ways to delete face or body samples:
- Perform the DELETE request to the "/samples/faces/{sample_id}" resource to delete a face sample by its ID.
- Perform the DELETE request to the "/samples/bodies/{sample_id}" resource to delete a body sample by its ID.
- Manually delete the required face or body samples from their bucket.
- Use "remove_samples" parameter in the "gc task" when deleting events.
Getting information about samples#
You can get face or body sample by ID.
- Perform the GET request to "/samples/faces/{sample_id}" resource to get a face sample by its ID.
- Perform the GET request to "/samples/bodies/{sample_id}" resource to get a body sample by its ID.
If the face or body sample was deleted, then the system will return an error when making a GET request.
Attribute object#
General information about attributes#
Attributes are temporary objects that include basic attributes and descriptors. This data is received after the sample processing.
Basic attributes include the following personal data:
- age. The estimated age is returned in years.
- gender. The estimated gender: 0 - female, 1 - male.
- ethnicity. The estimated ethnicity.
You can disable basic attributes extraction to avoid this personal data storage.
A descriptor cannot be considered personal data. You cannot restore the source face using a descriptor.
Attributes usage#
Attribute object can be used in the following cases:
- Matching using attributes can be performed using:
  - "/matcher/faces" resource.
  - "/tasks/cross_match" resource.
  - "/verifiers/{verifier_id}/verifications" resource.
As attributes have a TTL (by default, 5 minutes), it is convenient to use them for verification or identification purposes. They are deleted soon after you receive the result.
- Attributes may be used for ROC-curves creation using the "/tasks/roc" resource.
See the description of ROC-curves in the "ROC-curve calculating task" section.
- You can save the data of the existing attribute to a face using the "/faces" resource.
It is not the only way to save descriptors and basic attributes to a face. You can use "/handlers/{handler_id}/events" to create a face and add the extracted descriptor and basic attributes to it.
Basic attributes saved in faces or events objects can be used for filtration of the corresponding objects during requests execution.
Descriptors are required for matching operations. You cannot compare two faces or bodies without their descriptors.
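The actual descriptor format and matching algorithm are internal to LP and not documented here. Purely as a conceptual illustration, matching two descriptors can be pictured as computing a similarity score between two feature vectors:

```python
import math

def similarity(a, b):
    """Cosine similarity between two toy "descriptors".

    Illustration only: real LP descriptors are opaque binary
    objects, and matching is performed by the Python Matcher
    service, not by client code.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Identical vectors score 1.0; orthogonal vectors score 0.0.
```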
Raw descriptors usage
LP provides the possibility to use external descriptors in requests. The descriptor should be received using the VisionLabs software and the required NN version.
In this case the descriptor is provided in the request as a file or Base64 encoded.
An external raw descriptor can be used as reference in the following resources:
- "/matcher/faces"
- "/matcher/bodies"
- "/matcher/raw"
- "/handlers/{handler_id}/events" when "multipart/form-data" request body schema is set
- "/verifiers/{verifier_id}/verifications" when "multipart/form-data" request body schema is set
- "/verifiers/{verifier_id}/raw"
An external raw descriptor can be used as a candidate in the following resources:
Attributes creation and saving#
Attributes can be created when sending requests on the following resources:
The "extract_basic_attributes" and "extract_descriptor" query parameters should be enabled for extraction of the corresponding data. Attributes are implicitly stored after the request execution.
The "extract_basic_attributes", "extract_face_descriptor", and "extract_body_descriptor" parameters should be enabled in the handler for extraction of the corresponding data. The "storage_policy" > "attribute_policy" > "store_attribute" option should be enabled in the handler for attributes storage.
The "extract_basic_attributes" parameter should be enabled in the verifier for extraction of the corresponding data. The "storage_policy" > "attribute_policy" > "store_attribute" option should be enabled in the verifier for attributes storage.
Attributes can be created using external descriptors and external basic attributes using the following resource:
Attributes time to live#
Attributes have a TTL. After the TTL expiration, attributes are automatically deleted. Hence, it is not required to delete attributes manually.
The default TTL value can be set in the "default_ttl" parameter in the Configurator service. The maximum TTL value can be set in the "max_ttl" parameter in the Configurator service.
TTL can be directly specified in the requests on the following resources:
- "/extractor" in the "ttl" query parameter.
- "/handlers" in the "storage_policy" > "attribute_storage" in the "ttl" parameter.
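As an illustration of the handler variant above (key names come from this list; the TTL unit should be checked in the API reference manual, the value 300 assumes seconds and mirrors the 5-minute default mentioned earlier):

```python
import json

# Hypothetical handler body fragment overriding the attribute TTL via
# "storage_policy" > "attribute_storage" > "ttl".
handler_fragment = json.dumps(
    {"storage_policy": {"attribute_storage": {"ttl": 300}}}
)
```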
Attributes extraction disabling#
You can disable basic attributes extraction by setting the "extract_basic_attributes" parameter to "0" in the following resources:
You can disable descriptor extraction by setting the "extract_descriptor" parameter to "0" in the following resources:
Attributes saving disabling#
You can disable the "storage_policy" > "attribute_policy" > "store_attribute" parameter in the "/handlers" resource to disable attributes storage. When this handler is used for the "/handlers/{handler_id}/events" resource, attributes are not saved even for the specified TTL period.
You can disable the "storage_policy" > "attribute_policy" > "store_attribute" parameter in the "/verifiers" resource to disable attributes storage. When this verifier is used for the "/verifiers/{verifier_id}/verifications" resource, attributes are not saved even for the specified TTL period.
Attributes deletion#
Attributes are automatically deleted after the TTL expiration.
Perform the DELETE request to "/attributes/{attribute_id}" resource to delete an attribute by its ID.
Getting information about attributes#
You can receive information about existing temporary attributes before their TTL expires.
- Perform the GET request to "/attributes/{attribute_id}" resource to receive information about a temporary attribute by its ID.
- Perform the GET request to "/attributes" resource to receive information about previously created attributes by their IDs.
- Perform the GET request to "/attributes/{attribute_id}/samples" resource to receive information about all the temporary attribute samples by the attribute ID.
If an attribute's TTL has not expired, the attribute data is returned. Otherwise, no data is returned for this attribute in the response.
Face object#
General information#
Faces are changeable objects that include information about a single person.
The following general data is stored in the face object:
- Descriptor ("descriptor")
- Basic attributes ("basic_attributes")
- Avatar ("avatar")
- User data ("user_data")
- External ID ("external_id")
- Event ID ("event_id")
- Sample ID ("sample_id")
- List ID ("list_id")
- Account ID ("account_id")
See the "Faces database description" section for additional information about the face object and the data stored in it.
Attributes data can be stored in a face. Basic attributes data, descriptor data, and information about samples are saved to the Faces database and linked with the face object.
When you delete a face, the linked attributes data is also deleted.
- Descriptor. You should specify a descriptor for the face if you are going to use the face for comparison operations.
You cannot link a face with more than one descriptor.
- Basic attributes: Basic attributes can be used for displaying information in GUI.
The face object can also include IDs of samples used for the creation of attributes.
General face fields description:
- "user_data". This field can include any information about the person.
- "avatar". Avatar is a visual representation of the face that can be used in the user interface. This field can include a URL to an external image or a sample that is used as an avatar for the face.
- "external_id". The external ID of the face. You can use the external ID to work with external systems. You can also use it to specify that several faces belong to the same person by setting the same external ID to these faces upon their creation. You can get all the faces that have the same external ID using the "faces" > "get faces" request.
- "event_id". This field can include an ID of the event that gave birth to this face.
- "list_id". This field can include an ID of the list to which the face is linked. A face can be linked to one or several lists.
- "account_id". The account ID to which the face belongs.
- "sample_id". One or several samples can be linked to a face. These should be the samples used for attributes extraction, and all of them must belong to the same person. If no samples are saved for a face, you cannot update its descriptor to a new NN version.
IDs do not usually include personal information. However, they can be related to the object with personal data.
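As a sketch, a "create face" request body could carry the fields above (field names come from this section; the values are placeholders, and the exact body schema, including how attribute data is nested, must be checked against the "/faces" description in the API reference manual):

```python
import json

# Hypothetical "create face" body; all values are placeholders.
face_body = {
    "external_id": "person-0042",          # groups faces of one person
    "user_data": "John Doe, main office",  # any free-form information
    "avatar": "https://example.com/avatars/0042.jpg",
    "event_id": "11111111-2222-3333-4444-555555555555",
}

face_body_json = json.dumps(face_body)
```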
Faces usage#
Faces usually include information about persons registered in the system. Hence, they are usually required for verification (the incoming descriptor is compared with the face descriptor) and identification (the incoming descriptor is compared with several face descriptors from the specified list) purposes.
If there are no faces stored in your system, you cannot perform the following operations with these objects:
- Matching by faces and lists, when faces are set as candidates or references in the request to the "/matcher/faces" resource.
- Matching by faces and lists, when faces are set as candidates in the matching policy of the request to the "/handlers" resource.
- Cross-matching tasks, when faces are set as candidates or references in the request to the "/tasks/cross_match" resource.
- Clustering tasks, when faces are set as filters for clustering in the request to the "/tasks/clustering" resource.
- Verification requests, when faces IDs are set in query parameters in the request to the "/verifiers/{verifier_id}/verifications" resource.
- ROC curves creation tasks ("/tasks/roc" resource).
Faces creation and saving#
Faces can be created using the following requests:
- "/faces". You can specify attributes for the face in one of several ways:
  - by specifying the attribute ID of the temporary attribute;
  - by specifying descriptors and basic attributes (with or without samples);
  - by specifying descriptors (with or without samples);
  - by specifying basic attributes (with or without samples).
  The last three ways are used when you need to create a face using data stored in external storage.
- "/handlers/{handler_id}/events". The "storage_policy" > "face_policy" > "store_face" parameter should be enabled in the utilized handler.
Getting information about faces#
You can receive information about existing faces and lists.
- The "faces" > "get face" request enables you to receive information about a face by its ID.
- The "face attributes" > "get face attributes" request enables you to receive information about attributes linked to a face by the face ID.
- The "faces" > "get faces" request enables you to receive information about several faces. You can set filters to receive information about the faces of interest. You can also set targets, so only the required information about faces will be returned.
- The "lists" > "get list" request enables you to receive information about a list by its ID.
- The "lists" > "get lists" request enables you to receive information about several lists. You can set filters to receive information about the lists of interest.
You should specify a list ID as a filter for "faces" > "get faces" request to receive information about all the faces linked to the list.
Faces information updating#
You can update face information, e. g., face fields or attributes:
- Use the "faces" > "patch face" request to patch the face data.
- Use the "face attributes" > "put face attributes" request to update face attribute data.
List object#
General information#
A list is an object that can include faces that correspond to a similar category, for example, customers or employees.
Lists include the "description" field that can store any required data to distinguish lists from each other.
Lists usage#
You can add faces to lists for dividing the faces into groups.
Only faces can be added to lists.
Lists IDs are usually specified for matching operations as a filter for faces.
Lists creation and saving#
The "lists" > "create list" request enables you to create a new list.
Lists deletion#
The "lists" > "delete list" request enables you to delete a list by its ID.
The "lists" > "delete lists" request enables you to delete several lists by their IDs.
Getting information about lists#
The "lists" > "get list" request enables you to receive information about a list by its ID.
If the list was deleted, then the system will return an error when making a GET request.
The "lists" > "get lists" request enables you to receive information about all previously created lists according to the filters.
The "lists" > "get list count" request enables you to receive information about the number of previously created lists according to the filters.
Lists information updating#
The "lists" > "update list" request enables you to update the "user_data" field of the list.
Event object#
General information about events#
Events are immutable objects that include information about a single face and/or body. They are created as a result of image processing using handlers, or manually.
Unlike faces, events cannot be changed after creation. The only exception is the event descriptor. It can be updated to the new neural network version.
Generally, an event is created for each face/body detected in the image. If the detected face and body belong to the same person, they are saved to the same event.
LP also provides the possibility to create new events manually, without processing by handlers. This is useful when the logic for filling in event fields should differ from the handler logic, for example, when you want to extract descriptors only for a part of the detections rather than for all of them.
An event can be linked with a descriptor stored in the Events database.
The following general data is stored in the event object:
- "source". This field can include the source from which the face or human body was received. The value is specified during the "generate events" request execution.
- "location". This group of parameters includes information about the location where the event occurred. The values are specified during the "generate events" request execution. The following fields are included in this group:
- "city"
- "area"
- "district"
- "street"
- "house_number"
- "geo_position" - latitude and longitude.
- "tag". This field includes a tag (or several tags) applied during the "conditional_tags_policy" execution. The tag(s) can be specified during the "create handler" request execution or the "generate events" request execution.
- "emotion". This field includes the predominant emotion estimated for the face. If necessary, you can disable the estimation using the "estimate_emotions" parameter in the "/handlers" resource or the "create verifier" request.
- "insert_time". This field includes the time when the face appeared in the video stream. This data is usually provided by external systems.
- "top_similar_object_id". This field includes the top similar object ID.
- "top_similar_external_id". This field includes the external ID of the top similar candidate (event or face) with which the face is matched.
- "top_matching_candidates_label". This field includes a label applied during the "match_policy" execution. The labels are specified in this policy in the "/handlers" resource.
- "face_id". This field includes the ID of the face that was created from the event.
- "list_id". This field includes the ID of the list to which the created face was attached.
- "external_id". This field includes the external ID specified for the face created during event processing. The value is specified during the "generate events" request execution.
- "user_data". This field can include any information. User data will be specified for the face created during the event request processing. The value is specified during the "generate events" request execution.
- "age", "gender", and "ethnic_group". These basic attributes can be stored in the event. Their extraction is specified in the "extract_policy" of the "/handlers" resource.
- Face and body descriptors. Events can be used for matching operations as they include descriptors.
- Information about faces and events that were used for matching with the event.
- Face and human body detection results, which include:
- the IDs of created samples.
- the information about bounding boxes of detected faces/bodies.
- "detect_time" - the time of face/body detection. The time is provided by an external system.
- "image_origin" - the URL of the source image where the face/body was detected.
IDs do not usually include personal information. However, they can be related to the object with personal data.
See the "Events database description" section for additional information about the event object and the data stored in it.
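Put together, a stored event might look like the following sketch. The field names are taken from the list above; all values, and the exact nesting, are illustrative assumptions rather than the authoritative schema (see the "Events database description" section for that):

```python
# Hypothetical event object assembled from the fields described above.
# All values are illustrative; the real schema is defined in the
# "Events database description" section.
event = {
    "source": "entrance_camera_1",
    "location": {
        "city": "Moscow",
        "area": "Central",
        "district": "Tverskoy",
        "street": "Tverskaya",
        "house_number": "7",
        "geo_position": {"latitude": 55.7558, "longitude": 37.6173},
    },
    "tags": ["vip_guest"],
    "emotion": "happiness",
    "insert_time": "2023-01-15T10:30:00Z",
    "face_id": "557eb467-0adb-41b4-a05f-6e2ae5b2bd46",
    "external_id": "employee_042",
    "user_data": "any free-form text",
}

# IDs such as "face_id" carry no personal data by themselves, but they
# can refer to objects that do.
assert event["location"]["geo_position"]["latitude"] == 55.7558
```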
Attributes aggregation for events#
It is possible to enable attributes aggregation for the incoming images when creating an event.
According to the specified settings, faces and bodies are detected in the incoming images. If faces are detected and aggregated descriptor creation is specified, a single descriptor is created using all the found faces. The same logic is used for aggregated body descriptor creation. Basic attributes, Liveness, masks, and emotions estimation results are received.
All the information about the detected face/body and estimated properties is returned in the response separately for each image. The aggregated results are stored when an event is saved in the DB. The information about face/body detection is added to the Events database as a separate record for each image from the request.
When performing aggregation, each image in the request to the "/handlers/{handler_id}/events" resource should include only one face/body, and the faces/bodies in all images should belong to the same person.
Events usage#
Events are required to store information about persons' occurrences in a video stream for further analysis. An external system processes a video stream and sends frames or samples with detected faces and bodies.
LP processes these frames or samples and creates events. As events include information such as face detection time, location, and basic attributes, they can be used to collect statistics about a person of interest or to gather general statistics across all the saved events.
Events provide the following features:
- Matching by events
As events store descriptors, they can be used for matching. You can match the descriptor of a person with the existing events to receive information about the person's movements.
To perform matching, you should set events as references or candidates for matching operations (see section "Descriptors matching").
- Notifications via web sockets
You can receive notifications about events using web sockets. Notifications are sent according to the specified filters. For example, when an employee is recognized, a notification can be sent to the turnstile application and the turnstile will be opened.
See the "Sending notifications via Sender" section.
- Statistics gathering
You can gather statistics on the existing events using a special request. It can be used to:
- Group events by frequency or time intervals.
- Filter events according to the values of their properties.
- Count the number of created events according to the filters.
- Find the minimum, maximum and average values of properties for the existing events.
- Group existing events according to the specified values of properties.
You should save generated events to the database if you need to collect statistics after the event is created.
See the "events" > "get statistics on events" request in "APIReferenceManual.html" for details.
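The real aggregation is performed server-side by the "get statistics on events" request; as a purely local illustration of the "group by time interval" idea, a sketch that counts hypothetical events per hour:

```python
from collections import Counter
from datetime import datetime

# Hypothetical event creation times; in LP these would come from the
# "create_time" field of stored events.
create_times = [
    "2023-01-15T10:05:00",
    "2023-01-15T10:40:00",
    "2023-01-15T11:10:00",
]

# Group events by hour, mimicking a "group by time interval" aggregation.
per_hour = Counter(
    datetime.fromisoformat(t).strftime("%Y-%m-%dT%H:00") for t in create_times
)
print(per_hour["2023-01-15T10:00"])  # → 2
```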
Events creation and saving#
Events are created during execution of requests to the "/handlers/{handler_id}/events" resource. For the generated events to be saved, "event_policy" > "store_event" should be set to "1" when creating the handler using the "/handlers" resource.
For manual event creation and saving, use the "/handlers/{handler_id}/events/raw" resource.
The format of the generated event is similar to the format returned by the "/handlers/{handler_id}/events" resource. The "event_id" and "url" fields are not specified in the request. They are returned in the response after the event is created.
Notifications using web sockets are sent when events are created using this resource.
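As a sketch of this flow, the handler body and resource URL could be built as follows. The base URL and API version prefix are hypothetical placeholders for your deployment; only the "/handlers/{handler_id}/events" path and the "event_policy" > "store_event" field come from the text above. No HTTP call is made here:

```python
# Hypothetical base URL and API version prefix; adjust to your deployment.
BASE_URL = "http://127.0.0.1:5000/6"

# Handler body fragment: "store_event" is set to 1 so that generated
# events are saved to the database.
handler_body = {"event_policy": {"store_event": 1}}

def events_url(handler_id: str) -> str:
    """Build the URL of the event generation resource for a handler."""
    return f"{BASE_URL}/handlers/{handler_id}/events"

print(events_url("a1b2c3"))  # → http://127.0.0.1:5000/6/handlers/a1b2c3/events
```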
Events deletion#
Events are deleted only using the garbage collection task. Specify filters to delete all the events corresponding to these filters.
To delete events, execute a POST request to the "/tasks/gc" resource.
You can also manually delete the required events from the database.
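As an illustrative sketch, a GC task body that deletes all events created before a given time might look as follows. "create_time__lt" is a filter name used elsewhere in this document; the overall body structure ("target", "filters") is a hypothetical simplification, and the authoritative schema is in the Tasks service reference:

```python
import json

# Hypothetical GC task body: delete all events created before the given
# time. Field names other than "create_time__lt" are assumptions.
gc_task_body = {
    "target": "events",
    "filters": {"create_time__lt": "2022-01-01T00:00:00Z"},
}

# The body would be POSTed to the "/tasks/gc" resource as JSON.
print(json.dumps(gc_task_body))
```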
Getting information about events#
You can receive information about existing events.
- The "events" > "get event" request enables you to receive information about an event by its ID.
If the event was deleted, then the system will return an error when making a GET request.
- The "events" > "get events" request enables you to receive information about all previously created events according to the specified filters. By default, events are filtered for the last month from the current date. If any of the following filters are specified, the default filtering is not used:
- list of event IDs (event_ids);
- lower event ID boundary (event_id__gte);
- upper event ID boundary (event_id__lt);
- lower create time boundary (create_time__gte);
- upper create time boundary (create_time__lt).
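The default window and its override can be sketched as follows. The parameter names come from the list above; the last month is approximated as 30 days here purely for illustration:

```python
from datetime import datetime, timedelta, timezone

# Default behavior: events are filtered for the last month from the
# current date (approximated here as 30 days for illustration).
now = datetime.now(timezone.utc)
default_params = {
    "create_time__gte": (now - timedelta(days=30)).isoformat(),
    "create_time__lt": now.isoformat(),
}

# Specifying any of the listed filters disables the default window,
# e.g. requesting an explicit list of event IDs instead:
explicit_params = {"event_ids": "id1,id2"}

assert "create_time__gte" in default_params
```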
Handler and verifier objects#
General information about handlers and verifiers#
Handlers are objects that define how incoming data is processed. Each handler includes a set of policies for receiving the required information.
Handlers are used to generate events according to the specified policies.
Handlers are stored in the Handlers database.
Verifiers are handlers created for verification of incoming images. They include only a subset of the standard handler policies and parameters and are stored in the same database. The threshold for successful verification is specified in the verifier.
See the "Handlers database" description for detailed information about the data stored in the Handlers database.
Handlers and verifiers provide the ability to process images without saving any objects to the databases. To do this, disable object saving in the "storage_policy".
Handlers usage provides the following features:
- Two requests for the basic images processing
You create a handler using the "create handler" request and then specify it for event creation in the "generate events" request. Thus, only the "generate events" request is required to process images according to the predefined logic.
You create a verifier using the "create verifier" request and then specify it in the "perform verification" request. Thus, only the "perform verification" request is required to process images according to the predefined logic.
- All processing rules in a single place
You can easily edit an existing handler or create a new one for a new task. Hence, there is no need to create and configure several different requests to perform basic operations with images.
See the detailed description of handlers and verifiers in the "Handlers service" section.
Handlers and verifiers creation and saving#
Handlers are created using the POST request on the "/handlers" resource.
Verifiers are created using the POST request on the "/verifiers" resource.
Handlers and verifiers deletion#
Handlers are deleted using the DELETE request on the "/handlers/{handler_id}" resource.
Verifiers are deleted using the DELETE request on the "/verifiers/{verifier_id}" resource.
Getting information about handlers and verifiers#
You can receive information about existing handlers and verifiers.
- The "handlers" > "get handler" request enables you to receive information about a handler by its ID.
- The "verifiers" > "get verifier" request enables you to receive information about a verifier by its ID.
If the handler or verifier was deleted, then the system will return an error when making a GET request.
- The "handlers" > "get handlers" request enables you to receive information about all previously created handlers according to the specified filters.
- The "verifiers" > "get verifiers" request enables you to receive information about all previously created verifiers according to the specified filters.
- The "handlers" > "get handler count" request enables you to receive information about the number of previously created handlers according to the specified filters.
- The "verifiers" > "get verifier count" request enables you to receive information about the number of previously created verifiers according to the specified filters.
Handler updating#
- The "handlers" > "replace handler" request enables you to update the parameters of an already created handler.
You cannot update a part of a handler, so you must specify all the fields for your handler.
Sdk resource#
The sdk resource allows you to detect faces and/or human bodies and estimate attributes in input images. After the request is performed, the received data is not saved to the database or Image Store; it is only returned in the response.
The sdk resource combines the capabilities of other resources and also provides additional options. For example, the "/sdk" request enables you to:
- estimate presence of glasses in the image;
- estimate liveness in the image;
- create a face sample and return it in the Base64 format;
- create a human body sample and return it in the Base64 format;
- extract face and body descriptor(s) and return them in the response;
- aggregate face/body attribute(s);
- set descriptor quality score threshold to filter out images that are poorly suited for further processing;
- filter by mask states.
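For instance, face and body samples are returned in Base64 and can be decoded back to image bytes on the client. A minimal sketch, assuming a simplified response fragment whose field name and contents are hypothetical:

```python
import base64

# Simplified, hypothetical fragment of an "/sdk" response containing a
# Base64-encoded face sample. The field name and bytes are illustrative.
response_fragment = {"face_sample": base64.b64encode(b"\x89PNG...").decode()}

# Decode the sample back to raw image bytes for further use.
sample_bytes = base64.b64decode(response_fragment["face_sample"])
assert sample_bytes.startswith(b"\x89PNG")
```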
Descriptors matching#
Descriptor comparison operations are called matching. Given a descriptor, LP enables you to search for similar-looking faces in the database by matching the given descriptor with the stored descriptors.
See the "Matching services" section for details about Matching services.
The Matcher service enables you to compare descriptors and receive their similarity score. The similarity score value is between 0 and 1. A high similarity score means that the two descriptors belong to the same person.
The sources of matched objects are represented as references (the objects you want to compare) and candidates (the set of objects that you want to compare with). Each reference will be matched with each of the specified candidates.
You can select the following objects as references and candidates.
References (specified using the array of IDs):
- Attributes
- Faces
- External IDs of faces and events
- Events (face or body)
- Event track IDs
- Descriptors
Candidates (specified using filters):
- Faces
- Events (face or body)
- Attributes
The references are specified using IDs of the corresponding objects. If a non-existent reference is set (for example, a non-existent ID is set in the "event_id" field or the "face_id" field), the corresponding error is returned.
The candidates are specified using filters. Matching results are returned for the candidates that correspond to the specified filters. If none of the candidates corresponds to the filters (for example, a non-existent ID is set in the "event_ids" field or the "face_ids" field), there will be no matching result and no error returned. The result field will be empty.
Matching cannot be performed if there is no descriptor for any of the compared faces.
Several examples for events and faces filters are given below. See the "matcher" > "matching faces" and "matcher" > "human body matching" sections in "../ReferenceManuals/APIReferenceManual.html" for details about all the available filters.
To filter events, you can:
- Specify camera ("source" field) and a period ("create_time__gte" and "create_time__lt" or "end_time__gte" and "end_time__lt" fields);
- Specify tags ("tags" field) for events as filters and perform matching for events with these tags only;
- Specify geographical area and perform matching with the events created in this area only. You can specify direct location (city, area, district, street, house) or a zone specified using geographical coordinates.
To filter faces, you can:
- Specify a list of external face IDs to perform matching with them;
- Specify a list ID to perform matching by list.
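Combining the filters above, a matching request body might be sketched as follows. The filter names ("source", "create_time__gte", "create_time__lt", "tags") come from the examples above; the enclosing structure ("references", "candidates", "origin") is a simplified assumption, and the exact schema is in the API reference manual:

```python
# Simplified matching request body: one face reference matched against
# event candidates filtered by camera, period, and tags. The enclosing
# structure is an assumption; filter names come from the text above.
matching_body = {
    "references": [
        {"type": "face", "id": "557eb467-0adb-41b4-a05f-6e2ae5b2bd46"}
    ],
    "candidates": {
        "origin": "events",
        "filters": {
            "source": "entrance_camera_1",
            "create_time__gte": "2023-01-01T00:00:00Z",
            "create_time__lt": "2023-02-01T00:00:00Z",
            "tags": ["vip_guest"],
        },
    },
}

assert matching_body["candidates"]["origin"] == "events"
```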
Raw matching#
You can set raw descriptors as references and candidates using the "raw matching" request.
All the descriptor data is provided in the request, hence you can use this request when you need to match descriptors that are not stored in the LUNA PLATFORM databases.
See the "matcher" > "raw matching" request in "../ReferenceManuals/APIReferenceManual.html" for details.
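Since raw matching carries the descriptor data in the request itself, the body might be sketched as follows. The descriptor encoding (Base64), the descriptor size, and the field names are all assumptions for illustration:

```python
import base64

# Hypothetical raw descriptor bytes encoded to Base64 for transport;
# the real descriptor format and size depend on the neural network version.
raw_descriptor = base64.b64encode(b"\x00" * 512).decode()

raw_matching_body = {
    "references": [{"id": "ref-1", "descriptor": raw_descriptor}],
    "candidates": [{"id": "cand-1", "descriptor": raw_descriptor}],
}

assert raw_matching_body["references"][0]["id"] == "ref-1"
```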
Verification#
You can use the "/verifiers" resource to create a special handler for verification. It includes several policies for the incoming images processing. See the "Verifiers description" section for details on the verifier.
Generally, this request is used to match one given object with an incoming object. Use other matching requests for identification tasks.
Sending notifications via Sender#
You can send notifications about created events to third-party applications via web sockets. For example, you can configure LP to send notifications about VIP guests' arrival to your mobile phone.
When LP creates a new event, the event is added to the special DB. Then the event can be sent to the Sender service if the service is enabled.
A third-party application should be subscribed to Sender via web sockets. A filter for the events of interest should be set for each application. Thus, you will receive notifications only for the required events.
The notification about a new event is sent by Sender to the required applications.
See section "Sender service" for details about the Sender service.
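A subscription could be sketched as building a web-socket URL with the event filters passed as query parameters. The Sender host, port, path, and filter parameter names below are hypothetical placeholders, not the documented interface:

```python
from urllib.parse import urlencode

# Hypothetical Sender host and subscription path.
SENDER_WS = "ws://127.0.0.1:5008/ws"

def subscription_url(filters: dict) -> str:
    """Append event filters as query parameters to the web-socket URL."""
    return f"{SENDER_WS}?{urlencode(filters)}"

# Subscribe only to events from one camera with a given tag
# (filter names are illustrative).
url = subscription_url({"sources": "turnstile_1", "tags": "employee"})
print(url)  # → ws://127.0.0.1:5008/ws?sources=turnstile_1&tags=employee
```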
Liveness#
The Liveness feature provides the ability to check whether the face in an image belongs to a real person or is a fake.
There are two Liveness versions available. Liveness V1 is a separate service and is used only in the "/liveness" resource, while Liveness V2 is part of the Handlers service and is used in the "/liveness", "/sdk" and "/handlers" resources.
See section "Liveness service" for details.
Licenses#
The Licenses service provides information about license terms to LP services.
See section "Licenses service" for details.
Admin#
The Admin service implements tools for administrative routines.
All the requests for Admin service are described in AdminReferenceManual.html.
See section "Admin service" for details about the Admin service.
Configurator#
The Configurator service includes the settings of all the LP services. Thus, it provides the ability to configure all the services in one place.
See section "Configurator service" for details about the Configurator service.
Tasks#
Tasks requests provide additional possibilities for processing large amounts of data.
The larger the processed data array, the longer it takes to finish the task. When a task is created, you receive the task ID as the result of the request.
See the "Tasks service" section for details about tasks processing.
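Because tasks are asynchronous, a client typically stores the returned task ID and polls for completion. A generic polling sketch; the status-fetching callable is a stand-in for a real request to the Tasks service, and the status strings are illustrative:

```python
import time
from typing import Callable

def wait_for_task(get_status: Callable[[], str],
                  poll_interval: float = 0.01,
                  max_polls: int = 100) -> str:
    """Poll a task's status until it leaves the 'in progress' state."""
    for _ in range(max_polls):
        status = get_status()
        if status != "in progress":
            return status
        time.sleep(poll_interval)
    raise TimeoutError("task did not finish in time")

# Stand-in for an HTTP status check: the task finishes on the third poll.
statuses = iter(["in progress", "in progress", "done"])
print(wait_for_task(lambda: next(statuses)))  # → done
```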
Clustering task#
The clustering task provides the ability to group events and faces that belong to the same person into clusters.
You can specify filters for choosing the objects to be processed.
Using the clustering task you can, for example, receive all the IDs of events that belong to the same person and that occurred during the specified period of time.
See the "task processing" > "clustering task" request for details about the Clustering task request creation.
Reporter task#
The reporter task enables you to receive a report in CSV format with extended information about the objects grouped to clusters.
You can select columns that should be added to the report. You can receive images corresponding to each of the IDs in each cluster in the response.
See the "task processing" > "reporter task" request for details about the reporter task request creation.
Exporter task#
The exporter task enables you to collect event and/or face data and export them from LP to a CSV file.
The file rows represent requested objects and corresponding samples (if they were requested).
See the "task processing" > "exporter task" request for details about the exporter task request creation.
Cross-matching task#
Cross-matching means matching a large number of references with a large number of candidates: every reference is matched with every candidate.
Both references and candidates are specified using filters for faces and events.
See the "task processing" > "cross-matching task" request in "../ReferenceManuals/APIReferenceManual.html" for details about the cross-matching task creation.
Linker task#
The linker task enables you to:
- link the existing faces to lists
- create faces from the existing events and link the faces to lists
The linked faces are selected according to the specified filters.
See the "task processing" > "linker task" request in "../ReferenceManuals/APIReferenceManual.html" for details about the linker task creation.
Estimator task#
The estimator task enables you to perform batch processing of images using the specified policies.
In the request body, you can specify the "handler_id" of an already existing static or dynamic handler. For a dynamic "handler_id", you can set the required policies. In addition, you can create a static handler by specifying policies in the request.
The resource can accept three types of sources with images for processing:
- ZIP archive
- S3-like storage
- Network disk
See the "task processing" > "estimator task" request in "../ReferenceManuals/APIReferenceManual.html" for details about the estimator task creation.
Backports#
In LP 5, Backport is a mechanism that enables the new platform version to mimic older versions.
The backports are implemented for LUNA PLATFORM 3 and LUNA PLATFORM 4 by means of the Backport 3 and Backport 4 services respectively.
Thanks to these services, it is not necessary to write a new integration for LP 5 from scratch when updating from previous versions. The backports enable you to send requests similar to those of LUNA PLATFORM 3 and LUNA PLATFORM 4 and receive responses in the appropriate format.
Use case:
For example, you have a frontend application that sends requests to LUNA PLATFORM 3.
When you use the Backport 3 service, the requests to LP 3 are received by the service. They are formatted and sent to LUNA PLATFORM 5 according to its API format. LP 5 processes the request and sends the response to Backport 3. The Backport 3 service formats all the received results to the LP 3 format and sends the response.
Restrictions for working with Backport services#
As the tables of the saved data and the data itself differ between LP versions, there are features and restrictions on the execution of requests. Information about these features and restrictions is given in the "Backport 3" and "Backport 4" sections of this document.
You can use a Backport service and the API service of LP 5 simultaneously, following these restrictions:
- When you use Backport services, it is strongly recommended not to use the same account for requests to the Backport service and the API service of LUNA PLATFORM 5.
- Perform the admin requests that do not use an account ID to LUNA PLATFORM 5 only.