LP Services Description#
This section provides more details on functions of the LP services.
Databases and message queues may be omitted from the following figures for simplicity.
API service#
API service description#
Luna API is a facial recognition web service. It provides a RESTful interface for interaction with other LUNA PLATFORM services.
Using the API service you can send requests to other LP services and solve the following problems:
- Image processing and analysis:
  - face detection in photos;
  - estimation of face attributes (age, gender, ethnicity) and face parameters (head pose, emotions, gaze direction, eyes attributes, mouth attributes);
- Search for similar faces in the database;
- Storage of the received face attributes in databases;
- Creation of lists to search in;
- Statistics gathering;
- Flexible request management to meet user data processing requirements.
Handlers service#
The Handlers service is used to:
- perform face detection and face parameters estimation;
- create samples;
- extract basic attributes and descriptors, including aggregated ones;
- create and store handlers and verifiers;
- process images using handler and verifier policies.
Face detection, descriptor extraction, estimation of parameters and attributes are performed using neural networks. The algorithm evolves with time and new neural networks appear. They may differ from each other by performance and precision. You should choose a neural network following the business case of your company.
Face detection and face parameters estimation#
Object detection and parameters estimation are performed when the "detect_policy" is specified in a handler or when using the "/detector" resource. The following main steps can be performed:
- Face detection in a photo;
- Normalization of the photo image (obtaining a biometric sample);
- Obtaining face parameters;
- Image quality estimation.
See the "detect faces" request in "../ReferenceManuals/APIReferenceManual.html" for details.
Face detection#
LP tries to detect all human faces in each submitted photo. This process is performed by the Handlers service. For each detected face, the service outputs a bounding box and a set of key points (landmarks) for the eyes, nose, and mouth. They are used to estimate the camera angle and to rotate the face to the optimal frontal position in the image plane. The image is centered using the eye positions and cropped to the required size. This way all samples look the same: for example, the left eye is always within a box defined by certain coordinates.
In addition to faces, the Handlers service can find bodies in images. When a body is found, a sample can be created for it.
The simplified scheme of image processing using the Handlers service is given below.
Samples#
The image received after all these transformations is called a sample (normalized image). The sample corresponds to a specific format and can be further processed by LP services.
Samples are used to create descriptors and to restore or re-create the face database when recovering data or updating neural network models. They may also be used in the user interface as previews for faces (avatars).
To create a sample, send the corresponding "detect faces" request to the Handlers service via the API service. Samples are saved to Image Store.
Input images#
The image may be in one of the standard formats (JPG, PNG, BMP, PORTABLE PIXMAP, TIFF) or BASE64 encoded. After the image is processed, a sample is created and stored in the Image Store service.
In the request, you can send an image or specify the URL to the image.
You can use a multipart/form-data schema to send several images in the request. Each of the sent images must have a unique name. The content-disposition header must contain the actual filename.
You can also specify bounding box parameters using the multipart/form-data schema. This enables you to specify a certain zone with a face on the image. You can specify the following properties for the bounding box:
- top left corner "x" and "y" coordinates,
- height,
- width.
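As an illustration, the bounding box parameters might be assembled as follows. This is a sketch: the property names follow the list above, but the exact multipart field names must be taken from the API reference.

```python
def make_bbox(x, y, width, height):
    """Assemble bounding box parameters for a multipart/form-data request.

    The property names follow the documentation above; the exact multipart
    field names are given in APIReferenceManual.html.
    """
    if width <= 0 or height <= 0:
        raise ValueError("bounding box width and height must be positive")
    return {"x": x, "y": y, "width": width, "height": height}

# A zone with a face whose top left corner is at (120, 45), 200x240 pixels:
bbox = make_bbox(x=120, y=45, width=200, height=240)
```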
Basic attributes and descriptors extraction#
The "/extract" resource and the "extract_policy" are used to estimate basic attributes and extract descriptors. See the "extract attributes" request in "APIReferenceManual.html" for details.
Each detected face is converted into a special set of unique features called a face descriptor. The descriptor stores the set of packed properties as well as some helper parameters that were used to extract these properties from the source image.
The descriptor requires far less storage space than the source image. It is impossible to restore the original face image from the descriptor, which is important for personal data safety reasons.
All the data received after the "extract attributes" request execution is saved to the database by the Faces service. See the "Temporary attributes" section.
The simplified scheme of sample processing using the Handlers service is given below.
The Handlers service also returns information about age, gender and ethnicity of the face in the image.
Aggregation#
Based on all the images transferred in one request, a single set of basic attributes and an aggregated descriptor can be obtained. In addition, during event creation, the received liveness, emotion, and medical mask state values are aggregated.
Matching results are more precise for aggregated descriptors. It is recommended to use aggregation when several images were received from the same camera. It is not guaranteed that aggregated descriptors provide improvements in other cases.
Each parameter is aggregated across the samples from the request. Use the "aggregate_attributes" parameter of the "extract attributes" and "sdk" requests to enable attribute aggregation. Aggregation of liveness, emotion, and mask states is available using the "aggregate_attributes" parameter of the "generate events" request, provided that these parameters were estimated earlier in the handler.
An array of "sample_ids" is returned in the response even if there was only a single sample used in the request. In this case, a single sample ID is included in the array.
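Because "sample_ids" is always an array, client code can treat the single-sample and multi-sample cases uniformly. A minimal sketch of such response handling:

```python
def sample_ids_from_response(response_json):
    """Return "sample_ids" from an extraction response as a list.

    The API returns an array even when a single sample was used in the
    request, so callers can iterate uniformly over the result.
    """
    ids = response_json.get("sample_ids")
    return list(ids) if ids is not None else []

# A single-sample request still yields an array with one ID
# (the ID below is illustrative):
single = {"sample_ids": ["b5d6fd45-fe4a-4c0a-a1a7-f81e9eb70748"]}
assert len(sample_ids_from_response(single)) == 1
```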
Data for extraction#
The following information received during the "detect faces" request is required for the descriptor creation:
- a face bounding box inside the image;
- pre-computed landmarks.
Handlers with GPU#
The Handlers service can utilize a GPU instead of the CPU for calculations. A single GPU is utilized per Handlers service instance.
Attributes extraction on the GPU is engineered for maximum throughput. The input images are processed in batches. This reduces the computation cost per image but does not provide the shortest latency per image.
GPU acceleration is designed for high-load applications where request counts per second consistently reach thousands. GPU acceleration is not beneficial in lightly loaded scenarios where latency matters.
Descriptors version#
Descriptors extraction is performed using neural networks. The following versions of NN for extraction are available in the distribution package: 54, 56, 57, 58, 59. The default version of descriptors NN is 59.
See the "DEFAULT_FACE_DESCRIPTOR_VERSION" parameter in the Configurator service to check the current extraction neural network version.
Descriptors received using different neural network versions are not comparable with each other. That is why you should re-extract all the descriptors from the existing samples if you are going to use a new NN version.
You can store several descriptors for the same image with different NN versions linked to a single face.
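Since descriptors from different NN versions are not comparable, client code can guard matching requests with a simple version check, sketched below:

```python
def can_match(version_a, version_b):
    """Descriptors are comparable only when extracted by the same NN version.

    A guard like this in client code prevents accidentally matching
    descriptors produced by different neural network versions.
    """
    return version_a == version_b

assert can_match(59, 59)
assert not can_match(56, 59)   # re-extraction with the new NN is required first
```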
Temporary attributes#
After the "extract attributes" request is processed, the Faces service saves the received data in the Redis DB as a temporary attribute.
The temporary attribute object includes the data of basic attributes, descriptor and the samples used for their extraction. See "Faces database description" for details.
Temporary attributes have a TTL (time to live) and will be removed from the database after the specified period. You can specify the period in the range from 1 to 86400 seconds in the request. The TTL is set to 5 minutes by default.
When a face is created the temporary attribute data used for the face creation is saved to the Faces database. The temporary attribute ID is not saved to the database.
You can receive the information about a temporary attribute by its ID until its TTL expires. You must create a face using the attribute data to save it for the long term.
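The TTL bounds above can be validated client-side before sending the request. A small sketch:

```python
DEFAULT_TTL = 300            # 5 minutes, the documented default
MIN_TTL, MAX_TTL = 1, 86400  # allowed range in seconds

def normalize_ttl(ttl=None):
    """Validate the TTL (in seconds) for a temporary attribute request."""
    if ttl is None:
        return DEFAULT_TTL
    if not MIN_TTL <= ttl <= MAX_TTL:
        raise ValueError(f"ttl must be within [{MIN_TTL}, {MAX_TTL}] seconds")
    return ttl

assert normalize_ttl() == 300
assert normalize_ttl(3600) == 3600
```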
Create attribute using external data#
You can create a temporary attribute by sending basic attributes and descriptors to LUNA PLATFORM. Thus you can store this data in external storage and send it to LP for the processing of requests only.
You can create an attribute using:
- basic attributes and their samples;
- descriptors (raw descriptor in Base64 or SDK descriptor in Base64) with descriptor version and samples;
- both basic attributes and descriptors with the corresponding data.
Samples are optional and are not required for an attribute creation.
See the "create temporary attribute" request in "APIReferenceManual.html" for details.
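The options above imply a simple completeness rule: a payload must carry basic attributes, a descriptor with its version, or both. The field names in this sketch are assumptions; see APIReferenceManual.html for the real schema.

```python
def has_enough_data(payload):
    """Check a "create temporary attribute" payload for sufficient data.

    An attribute needs basic attributes, a descriptor with its version,
    or both; samples are optional in every case. Field names here are
    illustrative, not the exact LP schema.
    """
    has_basic = payload.get("basic_attributes") is not None
    descriptor = payload.get("descriptor")
    has_descriptor = (descriptor is not None
                      and descriptor.get("version") is not None)
    return has_basic or has_descriptor

assert has_enough_data({"basic_attributes": {"age": 31, "gender": 1}})
assert has_enough_data({"descriptor": {"version": 59, "data": "PHNkaz4="}})
assert not has_enough_data({"samples": ["8f4f0070"]})  # samples alone are not enough
```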
Handlers description#
The Handlers service processes the requests for handler and event creation.
As there can be millions of events in the Events database, a high-performance column-oriented database is recommended. You can also use a relational database, but it will require additional customization.
A handler is an object that stores entry points for image processing. The entry points are called policies. They describe how the image is processed hence define the LP services used for the processing. The handler is created using the "create handler" request.
The table below includes all the existing handler policies. Each policy corresponds to one of the LP services listed in the "Service" column.
Handler policies

| Policy | Description | Service |
|---|---|---|
| detect_policy | Specifies face parameters to be estimated (head pose, gaze direction, emotions, liveness, etc.). | Handlers |
| extract_policy | Specifies the necessity of descriptor and basic attributes (gender, age, ethnicity) extraction. It also determines the threshold for the descriptor quality. | Handlers |
| match_policy | Specifies the array of lists for matching with the current face and additional filters for the matching process for each of the lists. The matching results can be used in create_face_policy and link_to_lists_policy. | Matcher |
| storage_policy | Enables data storage in the database for samples, attributes, faces, and events. You can specify filters for the objects saving. You can perform automatic linking to lists for faces and set TTL for attributes. You can enable and disable notifications sending. | Image Store, Faces, Events |
| conditional_tags_policy | Specifies filters for assigning tags to events. | Handlers |
You can skip or disable policies if they are not required for your case. Skipped policies are executed according to the specified default values. For example, you should disable sample storage in the corresponding policy if samples are not required. If you just skip the "storage_policy" in the handler, samples will be saved according to the default settings.
All the available policies are described in the "../ReferenceManuals/APIReferenceManual.html".
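A minimal "create handler" body might be shaped as follows. The top-level keys are the policies from the table above; the nested parameter names are assumptions, so consult the "/handlers" resource description for the real schema.

```python
# Sketch of a handler body: every key is one of the policies from the
# table above. The nested parameter names are illustrative only.
handler_body = {
    "detect_policy": {"estimate_head_pose": 1, "estimate_emotions": 1},
    "extract_policy": {"extract_descriptor": 1, "extract_basic_attributes": 1},
    "match_policy": [],             # no lists to match against in this sketch
    "storage_policy": {},           # skipped settings fall back to defaults
    "conditional_tags_policy": [],
}

expected_policies = {"detect_policy", "extract_policy", "match_policy",
                     "storage_policy", "conditional_tags_policy"}
assert set(handler_body) == expected_policies
```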
Dynamic handlers
Handlers can be static or dynamic.
When a handler is static, you specify its parameters during the handler creation and then you specify the created handler ID during the event creation.
When a handler is dynamic, you can change its parameters during the event creation request. Dynamic handlers enable you to separate technical parameters (thresholds and quality parameters that should be hidden from frontend users) and business logic.
Only static verifiers are available.
Dynamic handlers usage example:
You need to separate head pose thresholds from other handler parameters.
You can save the thresholds to your external database and implement the logic of automatic substitution of this data when creating events (e. g. in your frontend application).
The frontend user sends requests for events creation and specifies the required lists and other parameters. The user does not know about the thresholds and cannot change them.
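The substitution logic described above can be sketched as a merge of server-side technical parameters into the user-supplied request. All parameter names here are illustrative, not the real LP schema.

```python
# Technical thresholds stored server-side, hidden from frontend users
# (names here are illustrative, not the real LP parameter names):
TECHNICAL_PARAMS = {"detect_policy": {"yaw_threshold": 20, "pitch_threshold": 20}}

def build_event_request(user_params):
    """Substitute hidden technical parameters into a dynamic-handler
    event creation request; the user supplies only business parameters."""
    body = {policy: dict(params) for policy, params in TECHNICAL_PARAMS.items()}
    for policy, params in user_params.items():
        body.setdefault(policy, {}).update(params)
    return body

request = build_event_request({"match_policy": {"lists": ["vip-list-id"]}})
assert request["detect_policy"]["yaw_threshold"] == 20   # injected threshold
assert request["match_policy"]["lists"] == ["vip-list-id"]
```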
Verifiers description#
The Handlers service also processes requests to create verifiers required for the verification process. They are created using the "create verifier" request.
The verifier contains a limited number of handler policies - detect_policy, extract_policy, and storage_policy.
You can specify the "verification_threshold" in the verifier.
The created verifier should be used when sending requests to:
- "/verifiers/{verifier_id}/verifications" resource. You can specify IDs of the objects with which the verification should be performed.
- "/verifiers/{verifier_id}/raw" resource. You can specify raw descriptors as candidates and references for matching. As raw descriptors are processed, "verification_threshold" is the only parameter of the specified verifier that is applied.
The response includes the "status" field. It shows whether the verification was successful for each pair of matched objects. Verification is successful if the similarity of the two objects is greater than the specified "verification_threshold".
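The status rule above reduces to a strict comparison, sketched here:

```python
def verification_status(similarity, verification_threshold):
    """Verification succeeds when the similarity of the two objects is
    greater than the specified "verification_threshold"."""
    return similarity > verification_threshold

assert verification_status(0.92, 0.85) is True
assert verification_status(0.85, 0.85) is False   # strictly greater is required
```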
Handlers usage#
Using a handler, you can:
- Specify face detection parameters and estimated face parameters (see "Face detection").
- Specify human body detection execution.
- Enable basic attributes and descriptors extraction (see "Descriptor extraction and attribute creation").
- Perform descriptors comparison (see "Descriptors matching") to receive matching results and use the result as filters for the policies.
- Configure automatic creation of faces using filters. You can specify filters according to the estimated basic attributes and matching results.
- Configure automatic attachment of the created faces to the lists using filters. You can specify filters according to the estimated basic attributes and matching results.
- Specify the objects to be saved after the image processing.
- Configure automatic tag adding for the event using filters. You can specify filters according to the estimated basic attributes and matching results.
In addition, you can process a batch of images using a handler (see the "Estimator task" section).
See the detailed description of handlers in the "/handlers" resource.
Image Store service#
The Image Store service stores the following data:
- Face and body samples. Samples are stored in Image Store by the Handlers service or you can save an external sample using the "samples" > "save face/body sample" request.
- Reports about tasks. Reports are stored by the Tasks service workers.
- Clusterization information.
Image Store can save data to SSD or S3 compatible storage (Amazon S3, etc.).
Buckets description#
The data is stored in special directories called buckets. Each bucket has a unique name. Bucket names should be set in lower case.
The following buckets are used in LP:
- "visionlabs-samples" bucket stores face samples.
- "visionlabs-bodies-samples" bucket stores human body samples.
- "visionlabs-image-origin" bucket stores source images.
- "task-result" bucket stores the results received after tasks processing using the Tasks service.
- "portraits" bucket is required for the usage of the Backport 3 service. The bucket stores portraits.
Buckets creation is described in "LP_Docker_Installation_Manual" in the "Buckets creation" section.
After running the Image Store container and the commands for containers creation, the buckets are saved to local storage or S3.
By default, local files are stored in the "/var/lib/luna/current/example-docker/image_store" directory on the server. They are saved in the "/srv/local_storage/" directory in the Image Store container.
A bucket includes directories with samples or other data. The samples are distributed among these directories according to the first four characters of their IDs, and each directory is named after those four characters.
Next to the bucket object is a "*.meta.json" file containing the "account_id" used when performing the request. If the bucket object is not a sample (for example, the bucket object is a JSON file in the "task-result" bucket), then the "Content-Type" will also be specified in this file.
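The sharding scheme above can be sketched as a path computation (local storage layout; the root directory is taken from the example below):

```python
import os

def bucket_paths(storage_root, bucket, sample_id, ext=".jpg"):
    """Return the sample path and its "*.meta.json" path inside a bucket.

    Objects are sharded into directories named after the first four
    characters of the object ID.
    """
    shard = os.path.join(storage_root, bucket, sample_id[:4])
    return (os.path.join(shard, sample_id + ext),
            os.path.join(shard, sample_id + ".meta.json"))

sample, meta = bucket_paths("./local_storage", "visionlabs-samples",
                            "8f4f0070-c464-460b-sf78-fac234df32e9")
assert sample.replace(os.sep, "/").startswith(
    "./local_storage/visionlabs-samples/8f4f/")
assert meta.endswith("8f4f0070-c464-460b-sf78-fac234df32e9.meta.json")
```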
An example of the folders structure in the "visionlabs-samples", "task-result" and "visionlabs-bodies-samples" buckets is given below.
./local_storage/visionlabs-samples/8f4f/
8f4f0070-c464-460b-sf78-fac234df32e9.jpg
8f4f0070-c464-460b-sf78-fac234df32e9.meta.json
8f4f1253-d542-621b-9rf7-ha52111hm5s0.jpg
8f4f1253-d542-621b-9rf7-ha52111hm5s0.meta.json
./local_storage/task-result/1b03/
1b0359af-ecd8-4712-8fc0-08401612d39b
1b0359af-ecd8-4712-8fc0-08401612d39b.meta.json
./local_storage/visionlabs-bodies-samples/6e98/
6e987e9c-1c9c-4139-9ef4-4a78b8ab6eb6.jpg
6e987e9c-1c9c-4139-9ef4-4a78b8ab6eb6.meta.json
A significant amount of disk space may be required when storing a large number of samples. A single sample takes about 30 KB of disk space.
It is also recommended to create backups of the samples. Samples are utilized when the NN version is changed or when you need to recover your database of faces.
External samples#
You can send an external sample to Image Store. The external sample is received using third-party software or the VisionLabs software (e. g., FaceStream).
See the POST request on the "/samples/{sample_type}" resource in "APIReferenceManual.html" for details.
The external sample should correspond to certain standards so that LP could process it. Some of them are listed in the "Sample requirements" section.
The samples received using the VisionLabs software satisfy this requirement.
In the case of third-party software, it is not guaranteed that the result of external sample processing will be the same as for a VisionLabs sample. The sample can be of low quality (too dark, blurry, and so on). Low quality leads to incorrect image processing results.
In any case, it is recommended to consult VisionLabs before using external samples.
Faces service#
The Faces service is used for:
- Creating temporary attributes;
- Creating faces;
- Creating lists;
- Attaching faces to lists;
- Managing the general database that stores faces with the attached data and lists;
- Receiving information about the existing faces and lists.
Face creation#
To create a face use the "create face" request in "../ReferenceManuals/APIReferenceManual.html".
To create a list use the "create list" request. See the "../ReferenceManuals/APIReferenceManual.html" for more detail.
Each of the described objects has its table in the Faces database. You can find database tables for the objects and their description in the "Faces database description" section.
Each attribute, face, and list in the Faces database is associated with a specific Account ID. It enables you to restrict access to data for different users.
The external ID field enables you to set your own ID for the face. You can create persons with several attached faces in an external system using the external ID.
A list contains user data, creation time and last update time fields. You can find list table scheme in the "Faces database description" section.
Matching services#
Python Matcher has the following features:
- Matching according to the specified filters. This matching is performed directly on the Faces or the Events database. Matching by DB is beneficial when several filters are set.
- Matching by lists. In this case, it is recommended that descriptors are saved in the Python Matcher cache.
Python Matcher Proxy is used to route requests to Python Matcher services and matching plugins.
Python Matcher description#
Python Matcher utilizes Faces DB for filtration and matching when faces are set as candidates for matching and filters for them are specified. This feature is always enabled for Python Matcher.
Python Matcher utilizes Events DB for filtration and matching when events are set as candidates for matching and filters for them are specified. The matching using the Events DB is optional, and it is not used when the Events service is not utilized.
A VLMatch matching function is required for matching by DB. It should be registered for the Faces DB and the Events DB. The function utilizes a library that should be compiled for your current DB version. You can find information about it in the installation manual in "VLMatch library compilation", "Create VLMatch function for Faces DB", and "Create VLMatch function for Events DB" sections.
When faces are set as candidates for matching, and list IDs are specified as filters, Python Matcher will perform matching by lists. In this case, it caches all the lists to improve performance.
The "CACHE_ENABLED" parameter in the "DESCRIPTORS_CACHE" setting should be set to "true" in the Python Matcher configuration to perform caching.
Python Matcher service additionally uses working processes that process requests.
Python Matcher Proxy description#
The API service sends requests to the Python Matcher Proxy if it is configured in the API configuration. Then the Python Matcher Proxy service redirects requests to the Python Matcher service or to matching plugins (if they are used).
If the matching plugins are not used, the service routes requests only to the Python Matcher service. Thus, you don't need to use Python Matcher Proxy unless you intend to use matching plugins. See the "Matching plugins" section for a description of how the matching plugins work.
Working processes cache#
When multiple worker processes are launched for the Python Matcher service, each of the worker processes uses the same descriptors cache.
A shared cache can both speed up and slow down the service. If you need to ensure that each Python Matcher process keeps its own cache, you should run each of the service instances separately.
Events service#
The Events service is used for:
- Storage of all the created events in the Events database.
- Returning all the events that satisfy filters.
- Gathering statistics on all the existing events according to the specified aggregation and frequency/period.
- Storage of descriptors created for events.
As an event is a report, you cannot modify already existing events.
The Events service should be enabled in the API service configuration file. Otherwise, events will not be saved to the database.
Database for Events#
PostgreSQL is used as a database for the Events service.
The speed of request processing is primarily affected by:
- the number of events in the database;
- the lack of indexes in PostgreSQL.
PostgreSQL shows acceptable requests processing speed with the number of events from 1 000 000 to 10 000 000. If the number of events exceeds 10 000 000, the request to PostgreSQL may fail.
The speed of the statistics requests processing in the PostgreSQL database can be increased by configuring the database and creating indexes.
Geo position#
You can add a geo position during event creation.
The geo position is represented as a JSON with GPS coordinates of the geographical point:
- longitude - geographical longitude in degrees
- latitude - geographical latitude in degrees
The geo position is specified in the "location" body parameter of the event creation request. See the "Create new events" section of the EventsReferenceManual.
You can use the geo position filter to receive all the events that occurred in the required area.
Geo position filter#
A geo position filter is a bounding box specified by coordinates of its center (origin) and some delta.
It is specified using the following parameters:
- origin_longitude
- origin_latitude
- longitude_delta
- latitude_delta
The geo position filter can be used when you get events, get statistics on events, and perform events matching.
Geo position filter is considered as properly specified if:
- both origin_longitude and origin_latitude are set.
- none of origin_longitude, origin_latitude, longitude_delta, or latitude_delta is set.
If both origin_longitude and origin_latitude are set and longitude_delta is not set - the default value is applied (see the default value in the OpenAPI documentation).
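The validity rules above can be checked client-side before sending the request, as in this sketch:

```python
def geo_filter_is_valid(f):
    """A geo position filter is properly specified when both origins are
    set (deltas are optional, defaults are applied) or when none of the
    four parameters is set at all."""
    keys = ("origin_longitude", "origin_latitude",
            "longitude_delta", "latitude_delta")
    present = {k for k in keys if f.get(k) is not None}
    if not present:
        return True                 # no geo filter at all is valid
    return {"origin_longitude", "origin_latitude"} <= present

assert geo_filter_is_valid({})                               # no filter
assert geo_filter_is_valid({"origin_longitude": 16.79,
                            "origin_latitude": 64.92})
assert not geo_filter_is_valid({"origin_longitude": 16.79})  # latitude missing
assert not geo_filter_is_valid({"longitude_delta": 2})       # deltas alone
```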
Read the following recommendations before using geo position filters.
The general recommendations and restrictions for geo position filters are:
- Do not create filters with a vertex or a border on the International Date Line (IDL), the North Pole or the South Pole. They are not fully supported due to the features of database spatial index. The filtering result may be unpredictable;
- Geo position filters with edges more than 180 degrees long are not allowed;
- It is highly recommended to use the geo position filter citywide only. If a larger area is specified, the filtration results on the borders of the area can be unexpected due to the spatial features.
- Avoid creating a filter that is too extended along longitude or latitude. It is recommended to set the values of deltas close to each other.
The last two recommendations exist due to the spatial features of the filter. According to these features, when one or two deltas are set to large values, the result may differ from the expected though it will be correct. See the "Filter features" section for details.
Filter performance#
Geo position filter performance depends on the spatial data type used to store event geo position in the database.
Two spatial data types are supported:
- GEOMETRY: a spatial object with coordinates expressed as (longitude, latitude) pairs, defined in the Cartesian plane. All calculations use Cartesian coordinates.
- GEOGRAPHY: a spatial object with coordinates expressed as (longitude, latitude) pairs, defined as on the surface of a perfect sphere, or a spatial object in the WGS84 coordinate system.
For a detailed description, see geometry vs geography.
Geo position filter is based on the ST_Covers PostGIS function supported for both geometry and geography type.
Filter features#
Geo position filter has some features caused by PostGIS.
When geography type is used and the geo position filter covers a wide portion of the planet surface, filter result may be unexpected but geographically correct due to some spatial features.
The following example illustrates this case.
An event with the following geo position was added in the database:
{
    "longitude": 16.79,
    "latitude": 64.92
}
We apply a geo position filter and try to find the required point on the map. The filter is too extended along the longitude:
{
    "origin_longitude": 16.79,
    "origin_latitude": 64.92,
    "longitude_delta": 2,
    "latitude_delta": 0.01
}
This filter will not return the expected event. The event will be filtered due to spatial features. Here is the illustration showing that the point is outside the filter.
You should consider this feature to create a correct filter.
For details, see Geography.
Events creation#
Events are created using handlers. Handlers are stored in the Handlers database. You should specify the required handler ID in the event creation request. All the data stored in the event will be received according to the handler parameters.
You should perform two separate requests for event creation.
The first request creates a handler. A handler includes policies that describe how the image is processed and hence define the LP services used for the processing.
The second request creates new events using the existing handler. An event is created for each image that has been processed.
You can specify the following additional data for each event creation request:
- external ID (for created faces),
- user data (for created faces),
- source (for created events),
- tags (for created events).
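An example of such additional data is sketched below; the parameter names follow the list above, but their exact placement in the request (query vs. body) is described in APIReferenceManual.html, and the values are illustrative.

```python
# Additional data attached during event creation (values are illustrative):
event_extras = {
    "external_id": "employee-0042",   # for created faces
    "user_data": "John Doe",          # for created faces
    "source": "entrance-camera-1",    # for created events
    "tags": ["vip", "entrance"],      # for created events
}
assert set(event_extras) == {"external_id", "user_data", "source", "tags"}
```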
The handler is processed policy after policy. All the data from the request is processed by a policy before going to the next policy. The "detect" policy is performed for all the images from the request, then "multiface" policy is applied, then the "extract" policy is performed for all the received samples, etc. For more information about handlers, see the "Handlers description" section.
Sender service#
The Sender service is an additional service that is used to send events via web sockets. This service communicates with the Handlers service (in which events are created) through the pub/sub mechanism via the Redis DB channel.
You should configure the web socket connection using a special request. It is recommended to create the web socket connection using the "/ws" resource of the API service. See APIReferenceManual.html for details about the web socket connection creation.
Configuring web sockets directly via Sender is also available (the "/ws" resource of the Sender service). It can be used to reduce the load on the API service.
When an event is created it can be:
- saved to the Events database. The Events service should be enabled to save an event;
- returned in the response without saving to the database.
In both cases, the event is sent via the Redis DB channel to the Sender service.
In this case, the Redis DB acts as a connection between Sender and Handlers services and does not store transferred events.
The Sender service is independent of the Events service. Events can be sent to Sender even if the Events service is disabled.
The general workflow is as follows:
- A user or an application sends requests for new event creation to the API service;
- The API service sends the request to the Handlers service;
- The Handlers service sends requests to the corresponding LP services;
- LP services process the requests and send results. New events are created;
- Handlers sends the events to the Redis DB. Redis has a channel, to which the Sender service is subscribed;
- Redis sends the received events to Sender through the channel;
- Third-party applications should be subscribed to the Sender service via web sockets to receive events. If there is a subscribed third-party application, Sender sends events to it according to the specified filters.
See the OpenAPI documentation for information about the JSON structure returned by the Sender service.
Tasks service#
The Tasks service is used for long tasks processing.
General information about tasks#
As tasks processing takes time, the task ID is returned in the response to the task creation request.
After the task processing is finished, you can receive the task results using the "tasks" > "get task result" request. You should specify the task ID to receive its results.
You can find examples of task processing results in the response section of the "tasks" > "get task result" request. You should select the task type in the Response samples section of the documentation.
You should make sure that the task was finished before requesting its results:
- You can check the task status by specifying the task ID in the "tasks" > "get task" request. There are the following task statuses:
| Task status | Value |
|---|---|
| pending | 0 |
| in progress | 1 |
| cancelled | 2 |
| failed | 3 |
| collect results | 4 |
| done | 5 |
- You can receive information about all the tasks using the "tasks" > "get tasks" request. You can set filters to receive information about tasks of interest only.
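The status values above suggest a simple polling helper; a sketch:

```python
# Status codes from the table above:
TASK_STATUS = {0: "pending", 1: "in progress", 2: "cancelled",
               3: "failed", 4: "collect results", 5: "done"}

def is_finished(status):
    """A task needs no further polling once it is cancelled, failed, or done."""
    return status in (2, 3, 5)

assert TASK_STATUS[5] == "done"
assert is_finished(5) and not is_finished(4)   # "collect results" is not final
```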
Types of tasks#
Clustering task#
As a result of the task, clusters of objects selected according to the specified filters for faces or events are created. Objects corresponding to all of the filters will be added to a cluster. Available filters depend on the object type: events or faces.
You can receive the task status or result using additional requests (see the "General information about tasks").
You can use the reporter task to receive the report about objects added to clusters.
Clustering is performed in several steps:
- objects with descriptors are collected according to the provided filters;
- every object is matched with all the other objects;
- clusters are created as groups of "connected components" of the similarity graph. Here "connected" means that the similarity is greater than the provided threshold or the default "DEFAULT_CLUSTERING_THRESHOLD" from the config;
- if needed, existing images corresponding to each object are downloaded: the avatar for a face, the first sample for an event.
As a result of the task, an array of clusters is returned. A cluster includes the IDs of objects (faces or events) whose similarity is greater than the specified threshold. You can use this information for further data analysis.
{
    "errors": [],
    "result": {
        "clusters": [
            [
                "6c721b90-f5a0-409a-ab70-bc339a70184c"
            ],
            [
                "8bc6e8df-410b-4065-b592-abc5f0432a1c"
            ],
            [
                "e4e3fc66-53b4-448c-9c88-f430c00cb7ea"
            ],
            [
                "02a3a1c4-93d7-4b69-99ec-21d5ef23852e",
                "144244cb-e10e-478c-bdac-18cd2eb27ee6",
                "1f4cdbcb-7b1e-40cc-873b-3ff7fa6a6cf0"
            ]
        ],
        "total_objects": 6,
        "total_clusters": 4
    }
}
The clustering task result can also include information about errors that occurred during the objects processing.
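The "connected components" step described above can be sketched with a minimal implementation (the object IDs and similarity values in the usage example are made up; the real task matches descriptors and uses the configured threshold):

```python
from collections import defaultdict


def cluster_objects(object_ids, similarities, threshold=0.8):
    """Group objects into clusters of connected components.

    similarities: dict mapping (id_a, id_b) pairs to similarity scores.
    Two objects are "connected" if their similarity is greater than the
    threshold; a cluster is a connected component of that graph.
    """
    graph = defaultdict(set)
    for (a, b), sim in similarities.items():
        if sim > threshold:
            graph[a].add(b)
            graph[b].add(a)

    clusters, seen = [], set()
    for obj in object_ids:
        if obj in seen:
            continue
        # Depth-first traversal collects one connected component.
        stack, component = [obj], []
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            component.append(node)
            stack.extend(graph[node] - seen)
        clusters.append(sorted(component))
    return clusters
```

For example, with three objects where only "a" and "b" are similar enough, `cluster_objects(["a", "b", "c"], {("a", "b"): 0.92, ("b", "c"): 0.41})` yields two clusters: `[["a", "b"], ["c"]]`.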
Reporter task#
As a result of the task, the report on the clustering task is created. You can select data that should be added to the report. The report has CSV format.
You can receive the task status or result using additional requests (see the "General information about tasks").
You should specify the clustering task ID and the columns that should be added to the report. The selected columns correspond to the general events and faces fields.
Make sure that the selected columns correspond to the objects selected in the clustering task.
You can also receive the images for all the objects in clusters if they are available.
Exporter task#
The task enables you to collect event and/or face data and export them from LP to a CSV file. The file rows represent requested objects and corresponding samples (if they were requested).
This task accumulates data in memory, so the Tasks worker may be killed by the OOM (Out-Of-Memory) killer if you request a large amount of data.
You can export event or face data using the "/tasks/exporter" request. You should specify the required object type by setting the "objects_type" parameter when creating a request. You can also narrow your request by providing filters for the face and event objects. See the "exporter task" request in "../ReferenceManuals/AdminReferenceManual.html".
As a result of the task a zip archive containing a CSV file is returned.
You can receive the task status or result using additional requests (see the "General information about tasks").
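For illustration, the choice between the two object types can be captured in a small helper that assembles a request body (a sketch; the filter names passed to it are hypothetical and must be checked against the "exporter task" request in the reference manual):

```python
def build_exporter_body(objects_type, filters=None):
    """Assemble a body for the "/tasks/exporter" request.

    objects_type selects what is exported: "events" or "faces".
    filters (optional) narrows the exported objects; the exact filter
    names are defined in the OpenAPI specification, not here.
    """
    if objects_type not in ("events", "faces"):
        raise ValueError("objects_type must be 'events' or 'faces'")
    body = {"objects_type": objects_type}
    if filters:
        body["filters"] = filters
    return body
```

For example, `build_exporter_body("faces")` produces `{"objects_type": "faces"}`, while an unknown object type is rejected before any request is sent.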
Cross-matching task#
When the task is performed, all the references are matched with all the candidates. References and candidates are set using filters for faces and events.
Matching is performed only for objects that contain extracted descriptors.
You can specify the maximum number of matching candidates returned for every match using the "limit" field.
You can set a "threshold" to specify the minimal acceptable similarity value. If the similarity of two descriptors is lower than the specified value, the matching result is ignored and not returned in the response. References that do not match any candidates are also ignored.
Cross-matching is performed in several steps:
- collect objects having descriptors using provided filters
- match every reference object with every candidate object
- match results are sorted (lexicographically) and cropped (limit and threshold are applied)
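The sorting and cropping step above can be illustrated with a minimal sketch (the similarity values in the test data are made up; in the real service they come from descriptor matching):

```python
def crop_candidates(candidates, threshold, limit):
    """Drop candidates whose similarity is lower than the threshold,
    then keep only the top `limit` candidates by similarity."""
    kept = [c for c in candidates if c["similarity"] >= threshold]
    kept.sort(key=lambda c: c["similarity"], reverse=True)
    return kept[:limit]


def cross_match(reference_matches, threshold, limit):
    """Build the task result: references are sorted lexicographically,
    and references without any remaining candidates are ignored."""
    result = []
    for ref_id in sorted(reference_matches):
        kept = crop_candidates(reference_matches[ref_id], threshold, limit)
        if kept:
            result.append({"reference_id": ref_id, "candidates": kept})
    return result
```

The output mirrors the shape of the response example below: one entry per reference with its top candidates.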
You can receive the task status or results using additional requests (see the "General information about tasks").
As a result, an array is returned. Each element of the array includes a reference and the top similar candidates for it. Information about errors that occurred during the task execution is also returned in the response.
{
    "result": [
        {
            "reference_id": "e99d42df-6859-4ab7-98d4-dafd18f47f30",
            "candidates": [
                {
                    "candidate_id": "93de0ea1-0d21-4b67-8f3f-d871c159b740",
                    "similarity": 0.548252
                },
                {
                    "candidate_id": "54860fc6-c726-4521-9c7f-3fa354983e02",
                    "similarity": 0.62344
                }
            ]
        },
        {
            "reference_id": "345af6e3-625b-4f09-a54c-3be4c834780d",
            "candidates": [
                {
                    "candidate_id": "6ade1494-1138-49ac-bfd3-29e9f5027240",
                    "similarity": 0.7123213
                },
                {
                    "candidate_id": "e0e3c474-9099-4fad-ac61-d892cd6688bf",
                    "similarity": 0.9543
                }
            ]
        }
    ],
    "errors": [
        {
            "error_id": 10,
            "task_id": 123,
            "subtask_id": 5,
            "error_code": 0,
            "description": "Faces not found",
            "detail": "One or more faces not found, including face with id '8f4f0070-c464-460b-bf78-fac225df72e9'",
            "additional_info": "8f4f0070-c464-460b-bf78-fac225df72e9",
            "error_time": "2018-08-11T09:11:41.674Z"
        }
    ]
}
Linker task#
The task enables you to attach faces to lists according to the specified filters.
You can specify creation of a new list or specify the already existing list in the requests.
You can specify filters for faces or events to perform the task. When an event is specified for linking to a list, a new face is created based on the event.
If the "create_time_lt" filter is not specified, it is set to the current time.
As a result of the task, you receive the IDs of the faces linked to the list.
You can receive the task status or result using additional requests (see the "General information about tasks").
Task execution process for faces:
- A list is created (if the "create_list" parameter is set to 1) or the existence of the specified "list_id" is checked.
- Face ID boundaries are received. Then one or several subtasks are created, with about 1000 face IDs each. The number depends on the face ID spreading.
- For each subtask:
  - The face IDs specified for the current subtask by the filters in the subtask content are received.
  - A request is sent to the Luna Faces service to link the specified faces to the specified list.
  - The result of each subtask is saved to the Image Store service.
- After the last subtask is finished, the worker collects the results of all the subtasks, merges them and puts them to the Image Store service (as the task result).
Task execution process for events:
- A list is created (if the "create_list" parameter is set to 1) or the existence of the specified "list_id" is checked.
- Event page numbers are received. Then one or several subtasks are created.
- For each subtask:
  - Events with their descriptors are received from the Events service.
  - Faces are created using the Faces service. Attribute(s) and sample(s) are added to the faces.
  - A request is sent to the Luna Faces service to link the specified faces to the specified list.
  - The result of each subtask is saved to the Image Store service.
- After the last subtask is finished, the worker collects the results of all the subtasks, merges them and puts them to the Image Store service (as the task result).
Garbage collection task#
During the task processing, descriptors or events can be deleted.
- When descriptors are set as the GC target, you should specify the descriptor version. All the descriptors of the specified version will be deleted.
- When events are set as the GC target, you should specify one or several of the following parameters:
  - the account ID;
  - the upper excluded boundary of the event creation time;
  - the upper excluded boundary of the event appearance in the video stream;
  - the ID of the handler used for the event creation.
If necessary, you can delete samples or image origins along with events.
A garbage collection task with events set as the target can be created using the API service API, while the Admin or Tasks service API can be used to set both events and descriptors as the target. In the latter case, the specified objects are deleted for all the existing accounts.
You can receive the task status or result using additional requests (see the "General information about tasks").
Re-extraction task (additional extraction)#
The re-extraction task is used when updating to a new neural network for descriptor extraction. All the descriptors of the previous version are re-extracted using the new NN.
The source samples for these descriptors must be stored for the task execution. Descriptors that do not have source samples cannot be updated to the new NN version.
You should run the task with:
- extraction_target - "descriptor"
- missing - true
- descriptor_version - new descriptor version
During the task processing, a descriptor of a new neural network will be extracted for each object (face or event) which has a descriptor of the default version.
The old descriptors are not replaced. They can be deleted using the garbage collection task.
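For illustration only, the three parameters listed above map onto a request body like the following sketch (the descriptor version value is a placeholder, not a real version number; verify the exact body shape against the Admin API reference):

```python
# Sketch of a re-extraction task body using the parameters listed above.
# NEW_DESCRIPTOR_VERSION is a placeholder; use the actual target version.
NEW_DESCRIPTOR_VERSION = 0  # placeholder value

reextract_task = {
    "extraction_target": "descriptor",
    "missing": True,
    "descriptor_version": NEW_DESCRIPTOR_VERSION,
}
```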
Extraction of missing descriptors is performed in several steps:
- the task is split into several subtasks, divided by ID ranges;
- a list of faces that have descriptors of the default descriptor version is received;
- descriptors are extracted using the new neural network;
- the task result is a list with face IDs, their samples and the generation (required for tracking sample changes).
You can receive the task status or result using additional requests (see the "General information about tasks").
The task can be created using the Admin service API only. See the "tasks" > "create additional extract task" request in "../ReferenceManuals/AdminReferenceManual.html".
ROC-curve calculating task#
As a result of the task, the Receiver Operating Characteristic curve with TPR (True Positive Rate) against the FPR (False Positive Rate) is created.
See additional information about ROC-curve creation in "TasksDevelopmentManual".
ROC calculation task
ROC (Receiver Operating Characteristic) is a performance measurement for classification tasks at various threshold settings. The ROC-curve is plotted with TPR (True Positive Rate) against FPR (False Positive Rate). TPR is the true positive match pair count divided by the count of total expected positive match pairs, and FPR is the false positive match pair count divided by the count of total expected negative match pairs. Each point (FPR, TPR) of the ROC-curve corresponds to a certain similarity threshold. See more at wiki.
Using ROC, the model performance is determined by looking at:
- the area under the ROC-curve (AUC);
- the point where the type I and type II error rates are equal, i.e. the intersection point of the ROC-curve and the secondary main diagonal.
The model performance is also determined by the hit-into-the-top-N probability, i.e. the probability that a positive match pair hits the top-N of a match result group sorted by similarity.
Markup is required to create a ROC task. You can optionally specify "threshold_hit_top" (default 0) to calculate the hit-into-the-top-N probability, the match "limit" (default 5), "key_FPRs" (a list of key FPR values to calculate the ROC-curve key points), and "filters" with "account_id". The "account_id" is also required for the task creation.
You can receive the task status or result using additional requests (see the "General information about tasks").
Markup
Markup is expected in the following format:
[{'face_id': <face_id>, 'label': <label>}]
Label (or group id) can be a number or any string.
Example:
[{'face_id': '94ae2c69-277a-4e46-817d-543f7d3446e2', 'label': 0},
{'face_id': 'cd6b52be-cdc1-40a8-938b-a97a1f77d196', 'label': 1},
{'face_id': 'cb9bda07-8e95-4d71-98ee-5905a36ec74a', 'label': 2},
{'face_id': '4e5e32bb-113d-4c22-ac7f-8f6b48736378', 'label': 3},
{'face_id': 'c43c0c0f-1368-41c0-b51c-f78a96672900', 'label': 2}]
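Given markup like the example above, the TPR and FPR for one similarity threshold can be computed as in this sketch (the pair similarities in the test are illustrative; the real task obtains them by matching descriptors):

```python
def roc_point(match_pairs, threshold):
    """Compute one (FPR, TPR) point of the ROC-curve.

    match_pairs: list of (is_positive, similarity) tuples, where
    is_positive is True when both faces of the pair share the same
    markup label (an expected positive match pair).
    """
    tp = sum(1 for pos, sim in match_pairs if pos and sim >= threshold)
    fp = sum(1 for pos, sim in match_pairs if not pos and sim >= threshold)
    total_pos = sum(1 for pos, _ in match_pairs if pos)
    total_neg = len(match_pairs) - total_pos
    tpr = tp / total_pos if total_pos else 0.0
    fpr = fp / total_neg if total_neg else 0.0
    return fpr, tpr
```

Each threshold yields one (FPR, TPR) point; sweeping the threshold over the similarity range traces the whole curve.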
Estimator task#
The estimator task enables you to perform batch processing of images using the specified policies.
As a result of the task, JSON with data for each of the processed images and information about any errors that occurred is returned.
In the request body, you can specify the "handler_id" of an already existing static or dynamic handler. For a dynamic "handler_id", you can set the required policies. In addition, you can create a static handler by specifying policies in the request.
The resource can accept four types of sources with images for processing:
- ZIP archive
- S3-like storage
- Network disk
- FTP server
To obtain correct results of image processing using the Estimator task, all processed images should be either in the source format or in the format of samples. The type of transferred images is specified in the request in the "image_type" parameter.
ZIP archive as image source of estimator task
The resource accepts a link to a ZIP archive with images for processing. The maximum size of the archive is set using the "ARCHIVE_MAX_SIZE" parameter in the "config.py" configuration file of the Tasks service. The default size is 100 GB. An external URL or the URL of an archive saved in the Image Store can be used as the link to the archive. In the latter case, the archive should first be saved to LP using a POST request to the "/objects" resource.
When an external URL is used, the ZIP archive is first downloaded to the Tasks worker container storage, where the images are unpacked and processed. After the task finishes, the archive is deleted from the storage along with the unpacked images.
Make sure there is enough free disk space for the above actions.
The archive can be password protected. The password can be passed in the request using the "authorization" -> "password" parameter.
S3-like storage as image source of estimator task
The following parameters can be set for this type of source:
- bucket_name - bucket name/Access Point ARN/Outpost ARN (required);
- endpoint - storage endpoint (only when specifying the bucket name);
- region - bucket region (only when specifying the bucket name);
- prefix - file key prefix. It can also be used to load images from a specific folder, such as "2022/January".
The following parameters are used to configure authorization:
- Public access key (required);
- Secret access key (required);
- Authorization signature version ("s3v2"/"s3v4").
It is also possible to recursively download images from nested bucket folders and save original images.
For more information about working with S3-like repositories, see AWS User Guide.
Network disk as image source of estimator task
The following parameters can be set for this type of source:
- path - absolute path to the directory with images in the container (required).
- follow_links - enables/disables symbolic link processing;
- prefix - file key prefix;
- postfix - file key postfix.
See an example of using prefixes and postfixes in the "/tasks/estimator" resource description.
When using a network disk as an image source and launching Tasks and Tasks Workers services through Docker containers, it is necessary to mount the directory with images from the network disk to the local directory and synchronize it with the specified directory in the container. You can mount a directory from a network disk in any convenient way. After that, you can synchronize the mounted directory with the directory in the container using the following command when launching the Tasks and Tasks Worker services:
docker run \
...
-v /var/lib/luna/current/images:/srv/images
...
/var/lib/luna/current/images - the path to the previously mounted directory with images from the network disk.
/srv/images - the path to the directory with images in the container, to which they are moved from the network disk. This path should be specified in the request body of the Estimator task (the "path" parameter).
As with S3-like storage, it is possible to recursively download images from nested directories.
FTP server as image source of estimator task
For this type of source, the following parameters can be set in the request body for connecting to the FTP server:
- host - FTP server IP address or hostname (required);
- port - FTP server port;
- max_sessions - maximum number of allowed sessions on the FTP server;
- user, password - authorization parameters (required).
As in Estimator tasks using S3-like storage or network disk as image sources, it is possible to set the path to the directory with images, recursively receive images from nested directories, select the type of transferred images, and specify the prefix and postfix.
See an example of using prefixes and postfixes in the "/tasks/estimator" resource description.
General task processing description#
The Tasks service consists of the Tasks service itself and Tasks workers. The Tasks service receives requests, creates tasks in the database and sends subtasks to the Tasks workers. The Tasks workers receive the subtasks and perform all the requests to other services required to solve them.
The general approach for working with tasks is listed below.
- A user sends the request for creation of a new task;
- Tasks service creates a new task and sends subtasks to workers;
- The Tasks workers process subtasks and create reports;
- If several workers have processed subtasks and have created several reports, the worker, which finished the last subtask, gathers all the reports and creates a single report;
- When the task is finished, the last worker updates its status in the Tasks database;
- The user can send requests to receive information about tasks and subtasks and number of active tasks. The user can cancel or delete tasks;
- The user can receive information about errors that occurred during execution of the tasks;
- After the task is finished the user can send a request to receive results of the task.
See the "Tasks diagrams" section for details about tasks processing.
Admin service#
The Admin service is used to perform general administrative routines:
- Manage user accounts;
- Receive information about objects belonging to different accounts;
- Create garbage collection tasks;
- Create tasks to extract descriptors with a new neural network version;
- Receive reports and errors on processed tasks;
- Cancel and delete existing tasks.
Admin service has access to all the data attached to different accounts.
All the requests to the Admin service are described in "AdminReferenceManual.html".
Admin user interface#
The service has its own user interface to simplify administrative tasks.
Open the Admin interface in your browser: <Admin_server_address>:5010
This URL may differ. In this example, the Admin service interface is opened on the Admin service server. The Admin service is launched on the default port.
The default login and password to access the interface are "root/root".
You can change the default password for the Admin service using the "Change authorization" request.
You can create new account IDs or add existing account IDs to track their data using the UI.
You can manage account IDs using the following buttons:
- Add a new account ID or add an existing account ID for tracking.
- Delete the account ID.
- View the information provided for the account ID.
You can create a garbage collection task and task for the extraction of the new version of descriptors using the "Tasks" tab of the user interface.
You must press the start button to create a new task. After the button is pressed, you can specify additional parameters for the task.
The tasks are performed by the Tasks service after the request from the Admin service.
Get system info#
The Admin service provides the "System info request", which returns technical information about LP:
- versions of the services,
- SDK version,
- number of descriptors,
- configuration files values,
- license information,
- requests and estimations statistics (see section "Requests and estimations statistics gathering").
This information is required for our technical support. When you send us an issue, please attach the received JSON file to your letter. Use one of the following ways to receive system information:
- Create a request to the Admin service using "System info request". The request is described in the Admin OpenAPI documentation;
- Open the Admin service UI and go to the "Help" tab. The button is in the top right corner of the interface. Press the "Get LUNA PLATFORM system info" button. The JSON file with information will be saved on your PC.
Configurator service#
The Configurator service simplifies the configuration of LP services.
The service stores all the required configurations for all the LP services in a single place. You can edit configurations through the user interface or special limitation files.
You can also store configurations for any third-party software in the Configurator.
The general workflow is as follows:
- The user edits configurations in the UI;
-
Configurator stores all changed configurations and other data in the database;
-
LP services request Configurator service during startup and receive all required configurations. All the services should be configured to use the Configurator service.
During Configurator installation, you can also use your limitation file with all the required fields to create limitations and fill in the Configurator database. You can find more details about this process in the "ConfiguratorDevopsManual" documentation.
Settings used by several services are updated for each of the services. For example, if you edit the "LUNA_FACES_ADDRESS" setting for the Handlers service in the Configurator user interface, the setting will be also updated for API, Admin and Python Matcher services.
General Configurator UI description#
Open the Configurator interface in your browser: <Configurator_server_address>:5070
This URL may differ. In this example, the Configurator service interface is opened on the Configurator service server.
LP includes the beta version of the Configurator UI. The UI was tested on Chrome and Yandex browser. The recommended screen resolution for working with the UI is 1920 x 1080.
The following tabs are available in the UI of Configurator:
- Settings. All the data in the Configurator service is stored on the Settings tab. The tab displays all the existing settings and allows you to manage and filter them;
- Limitations. The tab is used to create new limitations for settings. The limitations are templates for JSON files that contain the available data types and other rules for the definition of the parameters;
- Groups. The tab allows you to group all the required settings. When you select a group on the Settings tab, only the settings corresponding to the group are displayed. The Groups tab also makes it possible to get settings by filters and/or tags for a single specific service.
- About. The tab includes information about the Configurator service interface.
Settings#
Each of the Configurator settings contains the following fields:
- Name - the name of the setting;
- Description - the setting description;
- ID - the unique setting ID;
- Create time - the setting creation time;
- Last update time - the setting last update time;
- Value - the body of the setting;
- Schema - a verification template for the setting body;
- Tag - tags for the setting, used to filter settings for the services.
The "Tags" field is not available for the default settings. You should press the Duplicate button and create a new setting on the basis of the existing one.
The following options for the settings are available:
- Create a new setting - press the Create new button, enter the required values and press Create. You should also select an already existing limitation for the setting. The Configurator will try to check the value of the setting if the Check on save flag is enabled and there is a limitation selected for the setting;
- Duplicate an existing setting - press the Duplicate button on the right side of the setting, change the required values and press Create. The Configurator will try to check the setting value if the Check on save flag is enabled on the lower left side of the screen and there is such a possibility;
- Delete an existing setting - press the Delete button on the right side of the setting;
- Update an existing setting - change the name, description, tags or value and press the Save button on the right side of the setting;
- Filter existing settings by name, description, tags, service names, groups - use the filters on the left side of the screen and press Enter or click the Search button;
- Show limitations - the flags are used to enable displaying of the limitations for each of the settings;
- JSON editors - the flag enables you to switch the mode of the value field representation. If the flag is disabled, the name of the parameter and a field for its value are displayed. If the flag is enabled, the Value field is displayed as JSON.
The Filters section on the left side of the window enables you to display all the required settings according to the specified values. You may enter the required name manually or select it from the list:
- Setting. The filter enables you to display the setting with the specified name.
- Description. The filter enables you to display all settings with the specified description or part of the description.
- Tags. The filter enables you to display all settings with the specified tag.
- Service filter. The filter enables you to display all settings that belong to the selected service.
- Group. The filter enables you to display all settings that belong to the specified group. For example, you can select to display all the services belonging to LP.
Limitations#
Limitations are used as service settings validation schema.
Settings and limitations have the same names. A new setting is created upon limitation creation.
The limitations are set by default for each of the LP services. You cannot change them.
Each of the limitations includes the following fields:
- Name is the name of the limitation.
- Description is the description of the limitation.
- Service list is the list of services that can use settings of this limitation.
- Schema is the object with the JSON schema used to validate settings.
- Default value is the default value created with the limitation.
The following actions are available for managing limitations:
- Create a new limitation - press the Create new button, enter required values and press "Create". Also, the setting with default value will be created;
- Duplicate existing limitation - press the Duplicate button on the right side of the limitation, change required values and press Create. Also, the setting with default value will be created;
- Update limitation values - change name/description/service list/validation schema/default values and press the Save button on the right side of the limitation;
- Filter existing limitations by names, descriptions, and groups;
- Delete existing limitation - press the Delete button on the right side of the limitation.
Groups#
Group has a name and a description.
It is possible to:
- Create a new group - press the Create new button, enter the group name and optionally description and press Create;
- Filter existing groups by group names and/or limitation names - use the filters on the left side and press Enter or click the Search button;
- Update group description - update the existing description and press the Save button on the right side of the group;
- Update linked limitation list - to unlink limitation, press "-" button on the right side of the limitation name, to link limitation, enter its name in the field at the bottom of the limitation list and press the "+" button. To accept changes, press the Save button;
- Delete group - press the Delete button on the right side of the group.
Settings dump#
The dump file includes all the settings of all the LP services.
Receive settings dump#
You can fetch the existing service settings from the Configurator by creating a dump file. This may be useful for saving the current service settings.
To receive a dump file, enter the Configurator container and use one of the following options:
- wget:
wget -O settings_dump.json 127.0.0.1:5070/1/dump
- curl:
curl 127.0.0.1:5070/1/dump > settings_dump.json
- a text editor.
The current values, specified in the Configurator service, are received.
Apply settings dump#
To apply the dumped settings, use the db_create.py script with the --dump-file command line argument followed by the created dump file name:
base_scripts/db_create.py --dump-file settings_dump.json
You can apply a full settings dump to an empty database only. If any settings already exist, you should use the drop-database flag before applying the new dump.
If the settings update is required, you should delete the whole "limitations" group from the dump file before applying it.
"limitations":[
...
],
Follow these steps to apply the dump file:
- Enter the Configurator container;
- Run:
python3.9 base_scripts/db_create.py --dump-file settings_dump.json
Limitations from the existing limitations files are replaced with the limitations from the dump file if the limitation names are the same.
Limitations file#
Receive limitation file#
The limitations file includes the limitations of the specified services. It does not include the existing settings and their values.
To download a limitations file for one or more services:
- Enter the Configurator container;
- Create the output directory:
mkdir base_scripts/results
- Run the base_scripts/get_limitation.py script:
python3.9 base_scripts/get_limitation.py --service luna-image-store luna-handlers --output base_scripts/results/my_limitations.json
Note the base_scripts/get_limitation.py script parameters:
- --service - specifies one or more service names (required);
- --output - specifies the directory or file where to save the output. The default value: current_dir/_limitation.json (optional).
Database drop#
Users can wipe out the Configurator database data when needed. After the script finishes, a clean database structure is created in the Configurator DB.
This operation leads to the loss of the stored settings. Create a settings dump file before executing the following commands!
To drop the Configurator database, use the base_scripts/db_create.py script with the --recreate-database command line argument:
- Enter the Configurator container;
- Run:
python3.9 base_scripts/db_create.py --recreate-database
The --recreate-database command line argument can be combined with the --dump-file command line argument to wipe out the data and apply the required settings at the same time.
Existing settings migration#
You can migrate settings in the Configurator DB without changing the already existing values of the settings. The names of the settings are changed according to the current LP build, but their values remain the same.
The migration updates LP parameters only. The parameters added by users and parameters not related to LP5 are not updated.
A settings revision is added to the database after the migration is finished. Starting with LP build 5.1.1, this migration is performed automatically during the Configurator database creation.
It is recommended to manually transfer settings for LP builds of version 5.1.0 and earlier to the updated Configurator database.
Licenses service#
General information#
The Licenses service stores information about the available licensed features and their limits.
Use the GET request to the "/license" resource to receive the license information.
Information about license#
LP license includes the following features:
- License expiration date.
- The maximum number of faces with descriptors available.
- The Liveness feature info.
- Info about the availability of functionality for checking the image according to ISO/IEC 19794-5 standard or checking the image with manual setting the thresholds (for details see the "Image Check" section).
Expiration date#
When the license expires, you cannot use LUNA PLATFORM.
By default, the notification about the end of the license is sent two weeks before the expiration date.
When the license ends, the following message is returned: "License has expired. Please contact VisionLabs for a license extension.".
The Licenses service checks the license expiration date and sends notifications to logs and monitoring (in the "license_period_rest" field).
Faces limit#
The Faces service checks the number of faces left according to the maximum available number of faces received from the Licenses service. Only the faces with linked descriptors are counted.
The percentage of the used limit for faces with descriptors is written in the Faces log and displayed in the Admin GUI.
The Faces service writes data about created faces with attached descriptors to the monitoring database in the "license_faces_limit_rate" field.
The created faces are written to the Faces log and displayed in the Admin GUI as a percentage of the database fullness. You can calculate the number of faces with descriptors left using the current percentage.
You start receiving notifications when there are 15% of available faces left. When you exceed the number of available faces, the message "License limit exceeded. Please contact VisionLabs for license upgrade or delete redundant faces" appears in logs. You cannot attach attributes to faces if the number of faces exceeds 110%.
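The documented thresholds can be summarized in a small helper. This is a sketch of the rules above, not the actual service code; the status labels are invented for the example.

```python
def faces_limit_status(faces_with_descriptors: int, licensed_limit: int) -> str:
    """Mimic the documented faces-limit notification thresholds (a sketch).

    Notifications start when only 15% of the limit remains (85% used);
    attaching attributes is blocked once usage exceeds 110% of the limit.
    """
    used = faces_with_descriptors / licensed_limit * 100  # cf. license_faces_limit_rate
    if used > 110:
        return "blocked"    # attribute attachment is refused
    if used > 100:
        return "exceeded"   # "License limit exceeded..." appears in logs
    if used >= 85:
        return "warning"    # notifications begin: 15% of faces left
    return "ok"

print(faces_limit_status(900_000, 1_000_000))
```

For example, 900,000 faces against a 1,000,000-face license is 90% usage, which already falls into the notification range.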
Liveness#
The following values can be set in the license for the Liveness feature:
- 0 - Liveness feature is not used.
- 1 - Liveness v1 is used.
- 2 - Liveness v2 is used.
For Liveness V2, an unlimited license and a license with a limited number of transactions are available. Liveness V1 is provided with an unlimited license only.
Each use of Liveness in requests reduces the transaction count. It is impossible to use the Liveness score in requests after the transaction limit is exhausted. Requests that do not use Liveness and requests where the Liveness estimation is disabled are not affected by the exhaustion of the limit. They continue to work as usual.
The Licenses service stores information about the Liveness V2 transactions left. The number of transactions left is returned in the response from the "/license" resource.
The Handlers service writes data on the number of available Liveness V2 transactions to the monitoring database in the "liveness_balance" field.
A warning about the exhaustion of the available transactions is sent to the monitoring and logs of the Handlers service when 2000 Liveness V2 transactions remain (this threshold is fixed in the system).
Backport 3#
The Backport 3 service is used to process the requests for LUNA PLATFORM 3 using LUNA PLATFORM 5.
Although most of the requests are performed in the same way as in LUNA PLATFORM 3, there are still some restrictions. See "Backport 3 features and restrictions" for details.
See "Backport3ReferenceManual.html" for details about the Backport 3 API.
Backport 3 new resources#
Liveness#
Backport 3 provides Liveness estimation in addition to the LUNA PLATFORM 3 features. See the "liveness > predict liveness" section in "Backport3ReferenceManual.html".
Handlers#
The Backport 3 service provides several handlers: extractor, identify, verify. The handlers enable you to perform several actions in a single request:
- "handlers" > "face extractor" - enables you to extract a descriptor from an image, create a person with this descriptor, and attach the person to the predefined list.
- "handlers" > "identify face" - enables you to extract a descriptor from an image and match the descriptor with the predefined list of candidates.
- "handlers" > "verify face" - enables you to extract a descriptor from an image and match the descriptor with the person's descriptor.
The description of the handlers and all the parameters of the handlers can be found in the following sections:
- "handlers" > "patch extractor handler"
- "handlers" > "patch verify handler"
- "handlers" > "patch identify handler"
The requests are based on handlers. Unlike the standard "descriptors" > "extract descriptors", "matching" > "identification", and "matching" > "verification" requests, the requests listed above are more flexible.
You can patch the already existing handlers, thus applying additional estimations to the requests. For example, you can specify head angle thresholds or enable/disable basic attributes estimation.
Handlers are created for every new account at the moment the account is created. The created handlers include default parameters.
Each of the handlers has the corresponding handler in the Handlers service. The parameters of the handlers are stored in the luna_backport3 database.
Each handler supports GET and PATCH requests thus it is possible to get and update parameters of each handler.
Each handler has its own version. The version is incremented with every PATCH request. If the current handler is removed, the version is reset to 1:
- For requests with the POST and GET methods: if the Handlers and/or Backport 3 service has no handler for the specified action, it will be created with default parameters.
- For requests with the PATCH method: if the Handlers and/or Backport 3 service has no handler for the specified action, a new handler with a mix of default policies and policies from the request will be created.
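The versioning rules above can be sketched as a tiny in-memory model. This is purely illustrative: the class, its methods, and the assumption that a handler created by PATCH starts at version 1 are all hypothetical, not the real service implementation.

```python
class HandlerStore:
    """Illustrates the documented versioning rules for Backport 3 handlers."""

    def __init__(self, default_policies: dict):
        self.default_policies = default_policies
        self.handlers = {}  # action -> {"policies": dict, "version": int}

    def get(self, action: str) -> dict:
        # GET/POST: a missing handler is created with default parameters.
        if action not in self.handlers:
            self.handlers[action] = {"policies": dict(self.default_policies),
                                     "version": 1}
        return self.handlers[action]

    def patch(self, action: str, policies: dict) -> dict:
        if action not in self.handlers:
            # PATCH on a missing handler: a mix of default policies and
            # policies from the request (initial version assumed to be 1).
            merged = {**self.default_policies, **policies}
            self.handlers[action] = {"policies": merged, "version": 1}
        else:
            # PATCH on an existing handler increments its version.
            h = self.handlers[action]
            h["policies"].update(policies)
            h["version"] += 1
        return self.handlers[action]

    def delete(self, action: str) -> None:
        # Removing a handler resets its version: the next GET starts at 1.
        self.handlers.pop(action, None)
```

A PATCH-GET-DELETE sequence then behaves as described: two PATCH requests raise the version to 3, and after a delete the next GET recreates the handler at version 1.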
Backport 3 architecture#
Backport 3 interacts with the API service and sends requests to LUNA PLATFORM 5 using it.
Backport 3 has its own database (see "Backport 3 database"). Some of its tables are similar to the tables of the Faces database of LP 3. This enables you to create and use the same entities (persons, account tokens, and accounts) as in LP 3.
The backport service uses Image Store to store portraits.
You can configure Backport 3 using the Configurator service.
Backport 3 features and restrictions#
The following features have core differences:
- For the following resources, the default descriptor version extracted from an image on the POST method is 56:
  - /storage/descriptors
  - /handlers/extractor
  - /handlers/verify
  - /handlers/identify
  - /matching/search

  You can still upload existing descriptors of versions 52, 54, and 56. Older descriptor versions are no longer supported.
- For the /storage/descriptors resource on the POST method, estimation of the saturation property is no longer supported, and the value is always set to 1.
- For the /storage/descriptors resource on the POST method, estimation of the eyeglasses attribute is no longer supported. The attributes structure in the response will lack the eyeglasses member.
- For the /storage/descriptors resource on the POST method, head position angle thresholds can still be sent as float values in the range [0, 180], but they will be internally rounded to integer values. As before, thresholds outside the range [0, 180] are not taken into account.
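The head angle threshold rule from the last restriction can be expressed as a small helper. This is a hypothetical illustration of the documented behavior, not the actual Backport 3 code; returning `None` for ignored values is an assumption of the sketch.

```python
def normalize_head_angle_threshold(value: float):
    """Sketch of the Backport 3 rule for head position angle thresholds.

    Float values in the range [0, 180] are accepted but internally rounded
    to integers; values outside the range are not taken into account
    (represented here as None).
    """
    if not 0 <= value <= 180:
        return None  # out-of-range thresholds are ignored
    return round(value)

print(normalize_head_angle_threshold(15.7))
```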
Garbage Collecting (GC) module#
According to LUNA PLATFORM 3 logic, garbage is the descriptors that are linked neither to a person nor to a list.
For normal system operation, you need to regularly delete garbage from the databases. To do this, run the system cleaning script remove_not_linked_descriptors.py from the ./base_scripts/gc/ folder.
According to the Backport 3 architecture, this script removes from the Faces service the faces that do not have links to any lists or persons in the Luna Backport 3 database.
Script execution pipeline#
The script execution pipeline consists of several stages:
1) A temporary table is created in the Faces database. See more info about temporary tables for Oracle or PostgreSQL.
2) IDs of faces that are not linked to lists are obtained. The IDs are stored in the temporary table.
3) While the temporary table is not empty, the following operations are performed:
- A batch of IDs is obtained from the temporary table. The first 10k (or fewer) face IDs are received.
- Filtered IDs are obtained. Filtered IDs are IDs that do not exist in the person_face table of the Backport 3 database.
- Filtered IDs are removed from the Faces database. If some of the faces cannot be removed, the script stops.
- Filtered IDs are removed from the Backport 3 database (a foolproof check). A warning will be printed.
- The IDs are removed from the temporary table.
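The pipeline above can be sketched with in-memory sets standing in for the database tables. This is an illustration of the batching logic only; the real script works against the Faces and Backport 3 databases, and the function name and signature here are invented for the example.

```python
def remove_not_linked_descriptors(faces_db: set, person_face: set,
                                  batch_size: int = 10_000) -> set:
    """Illustrative sketch of the GC pipeline (not the real script).

    faces_db    - face IDs known to the Faces service (assumed list-free here)
    person_face - face IDs referenced in the Backport 3 person_face table
    Returns the set of removed face IDs; faces_db is mutated in place.
    """
    # Stages 1-2: collect IDs of faces not linked to lists into a
    # "temporary table".
    temp_table = set(faces_db)
    removed = set()
    # Stage 3: process the temporary table in batches until it is empty.
    while temp_table:
        batch = set(list(temp_table)[:batch_size])
        # Filtered IDs: those absent from the person_face table.
        filtered = batch - person_face
        faces_db -= filtered   # remove from the Faces database
        removed |= filtered
        temp_table -= batch    # drop the processed batch
    return removed
```

Faces linked to a person survive the sweep, while unlinked faces are removed batch by batch.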
Script launching#
docker run --rm -t --network=host --entrypoint bash dockerhub.visionlabs.ru/luna/luna-backport-3:v.0.3.39 -c "python3.9 ./base_scripts/gc/remove_not_linked_descriptors.py"
The output will include information about the number of removed faces and the number of persons with faces.
Backport 3 database#
This section includes the description of the Backport 3 database.
Account table model#
The table model represents the account entity that was used in LUNA PLATFORM 3.
Each of the accounts is linked with a unique account ID.
Account table model
Name | primary_key | Type | Description |
---|---|---|---|
account_id | True | varchar(36) | The account ID to which all the data belongs. Stored in UUID4 format |
active | | boolean | Represents if the account is active or not |
password | | varchar(128) | The account password |
email | | varchar(64) | An email linked to the account |
organization_name | | varchar(128) | An organization name linked to the account |
Account_token table model#
The table model represents the token entity that was used in LUNA PLATFORM 3.
Account_token table model
Name | primary_key | Type | Description |
---|---|---|---|
token_id | True | varchar(36) | A unique ID of the token |
account_id | | varchar(36) | The account ID to which all the data belongs. Stored in UUID4 format |
token_info | | varchar(128) | Information about the token |
Handler table model#
The table model is used to store handlers that are used for "face extractor", "verify face", and "identify face" requests.
Handler table model
Name | primary_key | Type | Description |
---|---|---|---|
account_id | True | varchar(36) | The account ID to which all the data belongs. Stored in UUID4 format |
type | True | integer | Handler type (extractor - 0, identify - 1, verify - 2) |
handler_id | | varchar(36) | Handler ID. Stored in UUID4 format |
create_time | | timestamp | Date and time of the handler creation |
last_update_time | | timestamp | Date and time of the last handler update |
policies | | text | JSON file with handler policies |
version | | integer | Handler version. Each time a handler is patched, its version is incremented. |
Person table model#
Database table model for persons
Name | primary_key | Type | Description |
---|---|---|---|
person_id | True | varchar(36) | A person ID. Stored in UUID4 format. |
account_id | | varchar(36) | The account ID to which all the data belongs. Stored in UUID4 format |
user_data | | varchar(128) | Information provided with the person |
create_time | | timestamp | Date and time of the person creation |
external_id | | varchar(36) | An external ID of the person. Usually used in external systems |
Persons_list table model#
Database table model for the lists of persons
Name | primary_key | Type | Description |
---|---|---|---|
list_id | True | varchar(36) | The ID of the persons list. Stored in UUID4 format. |
account_id | | varchar(36) | The account ID to which all the data belongs. Stored in UUID4 format |
create_time | | timestamp | Date and time of the list creation |
Person_face table model#
Database table model for links between persons and faces. The faces and all the information about them are stored in the Faces database of LUNA PLATFORM 5.
A person may have several faces linked. A face must be linked to a single person only.
Database table model for the faces of persons
Name | primary_key | Type | Description |
---|---|---|---|
person_id | True | varchar(36) | A person ID. Stored in UUID4 format. |
face_id | True | varchar(36) | A face ID. Stored in UUID4 format. |
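The constraint that a face may be linked to only one person can be expressed with a unique index on face_id. The sqlite3 sketch below is illustrative only: the real database is Oracle or PostgreSQL, and this DDL is an assumption demonstrating the constraint, not the production schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE person_face (
        person_id varchar(36) NOT NULL,
        face_id   varchar(36) NOT NULL UNIQUE,  -- a face belongs to one person only
        PRIMARY KEY (person_id, face_id)
    )
""")
# A person may have several faces linked:
conn.execute("INSERT INTO person_face VALUES ('person-1', 'face-1')")
conn.execute("INSERT INTO person_face VALUES ('person-1', 'face-2')")
# ...but the same face cannot be linked to a second person:
try:
    conn.execute("INSERT INTO person_face VALUES ('person-2', 'face-1')")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```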
List_person table model#
Database table model for links between persons and lists
Name | primary_key | Type | Description |
---|---|---|---|
person_id | True | varchar(36) | A person ID. Stored in UUID4 format. |
list_id | True | varchar(36) | The ID of the persons list. Stored in UUID4 format. |
Descriptors_list table model#
Database table model for the lists of descriptors
Name | primary_key | Type | Description |
---|---|---|---|
list_id | True | varchar(36) | The ID of the descriptors list. Stored in UUID4 format. |
account_id | | varchar(36) | The account ID to which all the data belongs. Stored in UUID4 format |
create_time | | timestamp | Date and time of the list creation |
Backport 4#
The Backport 4 service is used to process the requests for LUNA PLATFORM 4 using LUNA PLATFORM 5.
Although most of the requests are performed in the same way as in LUNA PLATFORM 4, there are still some restrictions. See "Backport 4 features and restrictions" for details.
See "Backport4ReferenceManual.html" for details about the Backport 4 API.
Backport 4 architecture#
Backport 4 interacts with the API service and sends requests to LUNA PLATFORM 5 using it.
Backport 4 directly interacts with the Faces service to receive the number of existing attributes.
Backport 4 directly interacts with the Sender service. All the requests to Sender are sent using the Backport 4 service. See the "ws" > "ws handshake" request in the "Backport4ReferenceManual.html".
You can configure Backport 4 using the Configurator service.
Backport 4 features and restrictions#
The following features have core differences:
The current versions of the LUNA PLATFORM services are returned on a request to the /version resource. For example, the versions of the following services are returned:
- "luna-faces"
- "luna-events"
- "luna-image-store"
- "luna-python-matcher" or "luna-matcher-proxy"
- "luna-tasks"
- "luna-handlers"
- "luna-api"
- "LUNA PLATFORM"
- "luna-backport4" - the current service
Resources changelog:
- Resource `/attributes/count` is available without any query parameters and does not support accounting. The resource works with temporary attributes.
- Resource `/attributes` on method GET: the `attribute_ids` query parameter is allowed instead of the `page`, `page_size`, `time__lt`, and `time__gte` query parameters. Thus you can get attributes by their IDs, not by filters. The resource works with temporary attributes.
- Resource `/attributes/<attribute_id>` on methods GET, HEAD, DELETE and resource `/attributes/<attribute_id>/samples` on method GET interact with temporary attributes and return attribute data if the attribute TTL has not expired. Otherwise, the "Not found" error is returned.
- If you have already used the attribute to create a face, use the `face_id` to receive the attribute data. In this case, the `attribute_id` from the request is equal to `face_id`.
- Resource `/faces` enables you to create more than one face with the same `attribute_id`.
- Resource `/faces/<face_id>` on method DELETE enables you to remove a face without removing its attribute.
- Resource `/faces/<face_id>` on method PATCH enables you to patch the attribute of a face by making the first request to patch `event_id`, `external_id`, `user_data`, `avatar` (if required) and the second request to patch the attribute (if required).
- If the face `attribute_id` is to be changed, the service will try to patch it with temporary attribute data if the temporary attribute exists. Otherwise, the service tries to patch it with attribute data from the face with `face_id` = `attribute_id`.
- The match policy of resource `/handlers` now has a default match limit that is configured using the `MATCH_LIMIT` setting from the Backport 4 config.py file.
- Resource `/events/stats` on method POST: `attribute_id` usage in the `filters` object is prohibited, as this field is no longer stored in the database. A response with the 403 status code will be returned.
- `attribute_id` in events is not null and is equal to `face_id` for backward compatibility.
- The GC task is unavailable because all attributes are temporary and will be removed automatically. Status code 400 is returned on a request to the `/tasks/gc` resource.
- The attribute_id column is not added to the report of the Reporter task, and this column is ignored if specified in the request. The `top_similar_face_id`, `top_similar_face_list`, and `top_similar_face_similarity` columns are replaced by the `top_match` column in the report if any of these columns is passed in the Reporter task request.
- The Linker task always creates new faces from events and ignores faces created during the event processing request.
- Resource `/matcher` does not check the presence of provided faces, thus the `FacesNotFound` error is never returned. If the user specifies a non-existent candidate of type "faces", no error is reported, and no actual matching against that face is performed.
- Resource `/matcher` checks whether a reference of type "attribute" has the ID of a face attribute or the ID of a temporary attribute and performs type substitution. Hence references can be sent for matching in the way it was done in the previous version.
- Resource `/matcher` takes matching limits into account. By default, the maximum number of references or candidates is limited to 30. If you need to overcome these limits, configure `REFERENCE_LIMIT` and `CANDIDATES_LIMIT`.
- Resource `/ws` has been added. There was no `/ws` resource in the LUNA PLATFORM 4 API, as it was a separate resource of the Sender service. The added resource is similar to the Sender service resource, except that the `attribute_id` of candidate faces is equal to `face_id`.
- Resource `/handlers` returns the error "Invalid handler with id " if the handler was created in the LUNA PLATFORM 5 API and is not supported in LUNA Backport 4.
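The `/matcher` limits above mean that a long candidate list must be split across several requests. A minimal sketch of such client-side chunking, assuming the default limit of 30 (the function name is invented for the example):

```python
def split_candidates(candidates: list, limit: int = 30) -> list:
    """Split a candidate list into chunks that respect CANDIDATES_LIMIT.

    By default the /matcher resource accepts at most 30 candidates per
    request, so a longer list must be sent as several requests.
    """
    return [candidates[i:i + limit] for i in range(0, len(candidates), limit)]

batches = split_candidates([f"face-{i}" for i in range(70)])
print([len(b) for b in batches])  # → [30, 30, 10]
```

The same chunking would apply to references when `REFERENCE_LIMIT` is the binding constraint.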
Luna Backport 4 User Interface#
The User Interface service is used for the visual representation of LP features. It does not include all the functionality available in LP. User Interface enables you to:
- Download photos and create faces using them;
- Create lists;
- Match existing faces;
- Show existing events;
- Show existing handlers.
All the information in User Interface is displayed according to the "Luna-Account-Id" specified in the configuration file of the User Interface service (./luna-ui/browser/env.js).
User Interface works with a single "Luna-Account-Id" at a time.
General pages#
You should open your browser and enter the User Interface address. The default value is:
You can select a page on the left side of the window.
Lists/faces page#
The starting page of User Interface is Lists/Faces. It includes all the faces and lists created using the "Luna_account_id".
The left column of the workspace displays existing lists. You can create a new list by pressing the Add list button. In the appeared window you can specify the user data for the list.
The right column shows all the created faces with pagination.
Use the Add new faces button to create new faces.
On the first step, you should select photos to create faces from. You can select one or several images with one or several faces in them.
After you select images, all the found faces will be shown in a new dialog window.
All the correctly preprocessed images will be marked as "Done". If the image does not correspond to any of the requirements, an error will be displayed for it.
Press the Next step button.
On the next step, you should select the attributes to extract for the faces.
Press the Next step button.
On the next step, you can specify user data and external ID for each of the faces. You can also select lists to which each of the faces will be added. Press Add Faces to create faces.
You can change pages using arrow buttons.
You can change the display of faces and filter them using buttons in the top right corner.
- Filter faces. You can filter faces by ID, external ID, or list ID.
- Change the view of the existing faces.
Handlers page#
Handlers page displays all handlers created using the "Luna_account_id".
All the information about specified handler policies is displayed when you select a handler.
You can edit or delete a handler using the edit and delete icons.
Events page#
The events page displays all the events created using the "Luna_account_id".
It also includes filters for displaying events.
Common information#
You can edit or delete an item (face, list, or handler) using special icons. The icons appear when you hover the cursor over an item.
Matching dialog#
The Matching button in the bottom left corner of the window enables you to perform matching.
After pressing the button you can select the number of the received results for each of the references.
On the first step, you should select references for matching. You can select faces and/or events as references.
On the second step, you should select candidates for matching. You can select faces or lists as candidates.
On the last step, you should press the Start matching button to receive results.