
Services description#

This section provides more details on the functions of the LP services.

Databases and message queues may be omitted from the following figures.

API service#

LUNA API is a facial recognition web service. It provides a RESTful interface for interaction with other LUNA PLATFORM services.

Using the API service you can send requests to other LP services and solve the following problems:

  • Image processing and analysis:

    • face/body detection in photos;

    • face attributes (age, gender, ethnicity) and face parameters (head pose, emotions, gaze direction, eyes attributes, mouth attributes) estimation;

    • body parameters (age, gender, accessories, headwear, color of outerwear, type of sleeves) estimation;

  • Search for similar faces/bodies in the database;

  • Storage of the received face attributes in databases;

  • Creation of lists to search in;

  • Statistics gathering;

  • Flexible request management to meet user data processing requirements.

Handlers service#

The Handlers service is used to:

  • perform face detection and face parameter estimation,
  • perform body detection and body parameter estimation,
  • create samples,
  • extract basic attributes and descriptors, including aggregated ones,
  • create and store handlers and verifiers,
  • process images using handler and verifier policies.

Face and body detection, descriptor extraction, and estimation of parameters and attributes are performed using neural networks. The algorithms evolve over time and new neural networks appear. They may differ from each other in performance and precision. Choose a neural network according to the business case of your company.

Face and body detection and estimation of their parameters#

Object detection and parameter estimation are performed when the "detect_policy" is specified in a handler. Face parameters can also be estimated using the "/detector" resource. The following main steps can be performed:

  • Face detection in the image;
  • Body detection in the image ("detect_policy" only);
  • Normalization of the image (obtaining a sample);
  • Obtaining face parameters;
  • Obtaining body parameters;
  • Estimating the face quality in the image.

See the "detect faces" request in the API service reference manual for details.

Face detection#

LP tries to detect all the human faces it can in each submitted photo. This process is performed by the Handlers service. For each detected face, the service outputs a bounding box and a set of key points (landmarks) for the eyes, nose, and mouth. They are used to estimate the camera angle and to rotate the face to the optimal frontal position in the image plane. The image is centered using the eye positions and cropped to the required size. This way all samples look the same; for example, the left eye is always located within a box defined by certain coordinates.

In addition to faces, the Handlers service can find bodies in the images. When a body is found, its sample can be created.

The simplified scheme of image processing using the Handlers service is given below.

Handlers workflow

Checking and estimating image parameters#

The Handlers service enables you to check images against user-specified thresholds (the "face_quality" parameter group of the "detect_policy" policy). For example, you can check whether an image is of an appropriate format by specifying "JPEG" and "JPEG2000" as the satisfactory condition. If the image matches this condition, the system returns the value "1"; if the format of the processed image differs from the specified condition, the system returns the value "0". If no conditions are specified, the system returns the estimated image format value.
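
For illustration, a fragment of a handler "detect_policy" with such a format check might look like the sketch below. The check name and field layout are assumptions; see the "Image check" section and the OpenAPI documentation for the actual schema.

# Hypothetical "detect_policy" fragment with a "face_quality" format check.
detect_policy_fragment = {
    "detect_face": 1,
    "face_quality": {
        "image_format": {                        # check name is an assumption
            "estimate": 1,                       # perform the check
            "threshold": ["JPEG", "JPEG2000"],   # satisfactory condition
        }
    },
}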

The list of estimations and checks performed is described in the "Image check" section.

Note. The ability to perform check and estimation of image parameters is controlled by a special parameter in the license file.

Samples#

The image that is received after all the transformations is called a sample (normalized image). The sample corresponds to a specific format and can be further processed by LP services.

Samples are used to create descriptors and to restore or re-create the face database when recovering data or updating neural network models. They may also be used in the user interface as face previews (avatars).

To create a sample, send the corresponding "detect faces" request to the Handlers service using the API service. Samples are saved to the Image Store.

Input images#

The image may be in one of the standard formats (PNG, JPG, BMP, PORTABLE PIXMAP, TIFF) with the RGB color model or Base64 encoded (see the "Image format requirements" section). After the image is processed, a sample is created and stored in the Image Store service.

In the request, you can send an image or specify the URL to the image.

You can use a multipart/form-data schema to send several images in the request. Each of the sent images must have a unique name. The content-disposition header must contain the actual filename.

You can also specify bounding box parameters using the multipart/form-data schema. This enables you to specify a certain zone with a face in the image. A hedged multipart sketch is given after the list below. You can specify the following properties for the bounding box:

  • top left corner "x" and "y" coordinates,
  • height,
  • width.
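
Below is a hedged multipart/form-data sketch in Python. The base URL, the authorization header, and the use of the "/detector" resource are assumptions; the exact part names for the bounding box parameters must be taken from the API service reference manual.

import requests

API_URL = "http://127.0.0.1:5000/6"               # assumption
HEADERS = {"Luna-Account-Id": "your-account-id"}  # assumption

# Each image part gets a unique name; the filename is passed in the
# Content-Disposition header of the part.
files = [
    ("image1", ("first.jpg", open("first.jpg", "rb"), "image/jpeg")),
    ("image2", ("second.jpg", open("second.jpg", "rb"), "image/jpeg")),
    # Bounding box properties (x, y, width, height) can be attached as additional
    # parts; their names are defined in the reference manual.
]

response = requests.post(f"{API_URL}/detector", headers=HEADERS, files=files)
print(response.status_code, response.json())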

Basic attributes and descriptors extraction#

For faces:

The "/extract" resource and the "extract_policy" are used to estimate basic attributes and extract descriptors. See the "extract attributes" request in "APIReferenceManual.html" for details.

Each detected face is converted into a special set of unique features called a face descriptor. The descriptor stores the set of packed properties as well as some helper parameters that were used to extract these properties from the source image.

The descriptor requires much less storage memory in comparison to the source image. It is impossible to restore the original face image from the descriptor, which is important for personal data safety reasons.

All the data received after the "extract attributes" request execution is saved to the database by the Faces service. See the "Temporary attributes" section.
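
A hedged "extract attributes" sketch is shown below. The base URL, the authorization header, the query parameters, and the way the sample IDs are passed in the body are assumptions; verify them against the "extract attributes" request in "APIReferenceManual.html".

import requests

API_URL = "http://127.0.0.1:5000/6"               # assumption
HEADERS = {"Luna-Account-Id": "your-account-id"}  # assumption

params = {"extract_descriptor": 1, "extract_basic_attributes": 1}     # assumed query parameters
body = [{"face_sample_id": "f6a1969e-a92a-4f67-8b24-e6e6eca3d71a"}]   # hypothetical sample ID

# "extract attributes" request to the "/extract" resource.
response = requests.post(f"{API_URL}/extract", headers=HEADERS, params=params, json=body)

# The response is expected to contain temporary attribute data: basic attributes
# and descriptor information (see the "Temporary attributes" section).
print(response.json())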

The simplified scheme of sample processing using the Handlers service is given below.

Extraction workflow

The Handlers service also returns information about age, gender and ethnicity of the face in the image.

For bodies:

The process of extracting gender, age and descriptor of bodies is performed using the "detect_policy" policy of the Handlers service or the "/sdk" resource.

Aggregation#

Based on all the images transferred in one request, a single set of basic attributes and an aggregated descriptor can be obtained. In addition, during event creation, the received values of Liveness, emotions, and medical mask states are aggregated for faces, and the upper body properties (headwear, sleeves, color of outerwear), gender, age, and body accessories are aggregated for bodies.

Matching results are more precise for aggregated descriptors. It is recommended to use aggregation when several images were received from the same camera. It is not guaranteed that aggregated descriptors provide improvements in other cases.

Each parameter is aggregated based on the samples from the request. Use the "aggregate_attributes" parameter of the "extract attributes" (faces only) and "sdk" requests to enable attribute aggregation. Aggregation of liveness, emotion, and mask states for faces, and of upper body properties, gender, age, and body accessories for bodies, is available using the "aggregate_attributes" parameter in the "generate events" request (provided that these parameters were estimated earlier in the handler), as well as in the "sdk" request.

An array of "sample_ids" is returned in the response even if there was only a single sample used in the request. In this case, a single sample ID is included in the array.

Data for extraction#

The following information received during the "detect faces" request is required for the descriptor creation:

  • a face bounding box inside the image;
  • pre-computed landmarks.

Handlers with GPU#

The Handlers service can utilize a GPU instead of a CPU for calculations. A single GPU is utilized per Handlers service instance.

Attributes extraction on the GPU is engineered for maximum throughput. The input images are processed in batches. This reduces computation cost per image but does not provide the shortest latency per image.

GPU acceleration is designed for high-load applications where request counts per second consistently reach thousands. GPU acceleration is not beneficial in lightly loaded scenarios where latency matters.

Descriptors version#

Descriptors extraction is performed using neural networks. See "Neural networks" section for detailed description.

Temporary attributes#

After the "extract attributes" request is processed, the Faces service saves the received data in the Redis DB as a temporary attribute.

The temporary attribute object includes the data of basic attributes, descriptor and the samples used for their extraction. See "Faces database description" for details.

Temporary attributes have a TTL (time to live) and will be removed from the database after the specified period. You can specify the period in the range from 1 to 86400 seconds in the request. The TTL is set to 5 minutes by default.

When a face is created, the temporary attribute data used for the face creation is saved to the Faces database. The temporary attribute ID is not saved to the database.

You can receive information about a temporary attribute by its ID until its TTL expires. To store the data for the long term, you must create a face using the attribute data.

Create attribute using external data#

You can create a temporary attribute by sending basic attributes and descriptors to LUNA PLATFORM. Thus you can store this data in external storage and send it to LP only when requests need to be processed.

You can create an attribute using:

  • basic attributes and their samples;
  • descriptors (raw descriptor in Base64 or SDK descriptor in Base64) with descriptor version and samples;
  • both basic attributes and descriptors with the corresponding data.

Samples are optional and are not required for an attribute creation.

See the "create temporary attribute" request in "APIReferenceManual.html" for details.

Handlers description#

The Handlers service processes the requests for handler and event creation.

As there can be millions of events in the Events database, a high-performance column-oriented database is recommended. You can also use a relational database, but it will require additional customization.

A handler is an object that stores entry points for image processing. The entry points are called policies. They describe how the image is processed and hence define the LP services used for the processing. The handler is created using the "create handler" request.

The table below includes all the existing handler policies. Each policy corresponds to one of the LP services listed in the "Service" column.

Handler policies

  • detect_policy - Specifies the face, body, and image parameters to be estimated. Service: Handlers.
  • extract_policy - Specifies whether descriptors and basic attributes (gender, age, ethnicity) are extracted. It also determines the threshold for the descriptor quality. Service: Handlers.
  • match_policy - Specifies the array of lists for matching with the current face and additional filters for the matching process for each of the lists. The matching results can be used in create_face_policy and link_to_lists_policy. Service: Matcher.
  • storage_policy - Enables data storage in the database for samples ("face_sample_policy"/"body_sample_policy"), origin images ("image_origin_policy"), attributes ("attribute_policy"), faces ("face_policy"), and events ("events_policy"). You can specify filters for saving the objects, perform automatic linking of faces to lists, and set a TTL for attributes. You can also enable and disable notification sending ("notification_policy"). Services: Image Store, Faces, Events.
  • conditional_tags_policy - Specifies filters for assigning tags to events. Service: Handlers.

You can skip or disable policies if they are not required for your case. Skipped policies are executed according to the specified default values. For example, you should disable sample storage in the corresponding policy if samples are not required. If you just skip the "storage_policy" in the handler, samples will be saved according to the default settings.

All the available policies are described in the "API service reference manual".
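
For illustration, a hedged "create handler" sketch with a minimal set of policies is given below. The policy field layout is an assumption; see the "/handlers" resource in the API service reference manual for the actual schema.

import requests

API_URL = "http://127.0.0.1:5000/6"               # assumption
HEADERS = {"Luna-Account-Id": "your-account-id"}  # assumption

handler = {
    "description": "entrance camera",
    "policies": {
        "detect_policy": {"detect_face": 1},
        "extract_policy": {"extract_descriptor": 1, "extract_basic_attributes": 1},
        "storage_policy": {"face_sample_policy": {"store_sample": 1}},
    },
}

# "create handler" request to the "/handlers" resource.
response = requests.post(f"{API_URL}/handlers", headers=HEADERS, json=handler)

# The response is assumed to contain the ID of the created handler.
print(response.json())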

Dynamic handlers

Handlers can be static or dynamic.

When a handler is static, you specify its parameters during the handler creation and then you specify the created handler ID during the event creation.

When a handler is dynamic, you can change its parameters during the event creation request. Dynamic handlers enable you to separate technical parameters (thresholds and quality parameters that should be hidden from frontend users) and business logic.

Only static verifiers are available.

Dynamic handlers usage example:

You need to separate head pose thresholds from other handler parameters.

You can save the thresholds to your external database and implement the logic of automatic substitution of this data when creating events (e. g. in your frontend application).

The frontend user sends requests for events creation and specifies the required lists and other parameters. The user does not know about the thresholds and cannot change them.

Verifiers description#

The Handlers service also processes requests to create verifiers required for the verification process. They are created using the "create verifier" request.

The verifier contains a limited number of handler policies - detect_policy, extract_policy, and storage_policy.

You can specify the "verification_threshold" in the verifier.

The created verifier should be used when sending requests to:

  • "/verifiers/{verifier_id}/verifications" resource. You can specify IDs of the objects with which the verification should be performed.
  • "/verifiers/{verifier_id}/raw" resource. You can specify raw descriptors as candidates and references for matching. As raw descriptors are processed "verification_threshold" is the general parameter used from the specified verifier.

The response includes the "status" field. It shows whether the verification was successful for each pair of matched objects. Verification is successful if the similarity of the two objects is greater than the specified "verification_threshold".
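
A hedged verification sketch is given below. The base URL, the authorization header, and the way the candidate face ID is passed (a query parameter here) are assumptions; check the "/verifiers/{verifier_id}/verifications" resource description for the actual schema.

import requests

API_URL = "http://127.0.0.1:5000/6"  # assumption
HEADERS = {"Luna-Account-Id": "your-account-id", "Content-Type": "image/jpeg"}  # assumption

verifier_id = "c2a35b9a-4f0e-4a70-8b8f-7f3f5d7a1c11"  # hypothetical verifier ID
face_id = "8f4f0070-c464-460b-bf78-fac225df72e9"      # hypothetical candidate face ID

with open("photo.jpg", "rb") as image_file:
    image_bytes = image_file.read()

response = requests.post(
    f"{API_URL}/verifiers/{verifier_id}/verifications",
    headers=HEADERS,
    params={"face_ids": face_id},  # assumption: candidates passed as a query parameter
    data=image_bytes,
)

# Each matched pair is expected to contain a "status" field: true when the
# similarity exceeds the "verification_threshold" of the verifier.
print(response.json())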

Handlers usage#

Using handlers, you can:

  • Specify face/body detection parameters and estimated face/body parameters (see "Face detection").
  • Specify human body detection execution.
  • Enable basic attributes and descriptors extraction (see "Descriptor extraction and attribute creation").
  • Perform descriptors comparison (see "Descriptors matching") to receive matching results and use the result as filters for the policies.
  • Configure automatic creation of faces using filters. You can specify filters according to the estimated basic attributes and matching results.
  • Configure automatic attachment of the created faces to the lists using filters. You can specify filters according to the estimated basic attributes and matching results.
  • Specify the objects to be saved after the image processing.
  • Configure automatic tag adding for the event using filters. You can specify filters according to the estimated basic attributes and matching results.

In addition, the ability to process a batch of images using a handler is available (see the "Estimator task" section).

See the detailed description of handlers in the "/handlers" resource.

Image Store service#

The Image Store service stores the following data: face and body samples, source images, objects, and task results (see the buckets description below).

Image Store can save data to SSD or to S3-compatible storage (Amazon S3, etc.).

Buckets description#

The data is stored in special directories called buckets. Each bucket has a unique name. Bucket names should be set in lower case.

The following buckets are used in LP:

  • "visionlabs-samples" bucket stores face samples.
  • "visionlabs-bodies-samples" bucket stores human bodies samples.
  • "visionlabs-image-origin" bucket stores source images.
  • "visionlabs-objects" - bucket stores objects.
  • "task-result" bucket stores the results received after tasks processing using the Tasks service.
  • "portraits" - the bucket is required for the usage of Backport 3 service. The bucket stores portraits.

Buckets creation is described in LP 5 installation manual in the "Buckets creation" section.

After you run the Image Store container and the bucket creation commands, the buckets are saved to the local storage or to S3.

By default, local files are stored in the "/var/lib/luna/current/example-docker/image_store" directory on the server. They are saved in the "/srv/local_storage/" directory in the Image Store container.

A bucket includes directories with samples or other data. The names of the directories correspond to the first four characters of the sample ID. All the samples are distributed among these directories according to the first four characters of their IDs.

Next to the bucket object is a "*.meta.json" file containing the "account_id" used when performing the request. If the bucket object is not a sample (for example, the bucket object is a JSON file in the "task-result" bucket), then the "Content-Type" will also be specified in this file.

An example of the folders structure in the "visionlabs-samples", "task-result" and "visionlabs-bodies-samples" buckets is given below.

./local_storage/visionlabs-samples/8f4f/
            8f4f0070-c464-460b-sf78-fac234df32e9.jpg
            8f4f0070-c464-460b-sf78-fac234df32e9.meta.json
            8f4f1253-d542-621b-9rf7-ha52111hm5s0.jpg
            8f4f1253-d542-621b-9rf7-ha52111hm5s0.meta.json
./local_storage/task-result/1b03/
            1b0359af-ecd8-4712-8fc0-08401612d39b
            1b0359af-ecd8-4712-8fc0-08401612d39b.meta.json
./local_storage/visionlabs-bodies-samples/6e98/
            6e987e9c-1c9c-4139-9ef4-4a78b8ab6eb6.jpg
            6e987e9c-1c9c-4139-9ef4-4a78b8ab6eb6.meta.json

A significant amount of disk space may be required when storing a large number of samples. A single sample takes about 30 KB of disk space.

It is also recommended to create backups of the samples. Samples are utilized when the NN version is changed or when you need to recover your database of faces.

External samples#

You can send an external sample to Image Store. An external sample is received using third-party software or the VisionLabs software (e.g., FaceStream).

See the POST request on the "/samples/{sample_type}" resource in "APIReferenceManual.html" for details.
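
A hedged upload sketch is given below. The base URL, the authorization header, and "faces" as the {sample_type} value are assumptions; check the resource description for the actual values.

import requests

API_URL = "http://127.0.0.1:5000/6"  # assumption
HEADERS = {"Luna-Account-Id": "your-account-id", "Content-Type": "image/jpeg"}  # assumption

with open("external_sample.jpg", "rb") as sample_file:
    sample_bytes = sample_file.read()

# POST request to the "/samples/{sample_type}" resource.
response = requests.post(f"{API_URL}/samples/faces", headers=HEADERS, data=sample_bytes)
print(response.json())  # the ID and URL of the stored sample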

The external sample should correspond to certain standards so that LP could process it. Some of them are listed in the "Sample requirements" section.

The samples received using the VisionLabs software satisfy this requirement.

In case of third-party software, it is not guaranteed that the result of the external sample processing will be the same as for the VisionLabs sample. The sample can be of low quality (too dark, blurry and so on). Low quality leads to incorrect image processing results.

In any case, it is recommended to consult VisionLabs before using external samples.

Accounts service#

The Accounts service is intended for:

  • Creation, management and storage of accounts
  • Creation, management and storage of tokens and their permissions
  • Verification of accounts and tokens

See "Accounts, tokens and authorization types" section for more information about the authorization system in LUNA PLATFORM 5.

All created accounts, tokens and their permissions are saved in the Accounts service database.

Faces service#

The Faces service is used for:

  • Creating temporary attributes;
  • Creating faces;
  • Creating lists;
  • Attaching faces to lists;
  • Managing the general database that stores faces with the attached data and lists;
  • Receiving information about the existing faces and lists.

Face creation#

To create a face, use the "create face" request in the API service reference manual.

To create a list use the "create list" request. See the API service reference manual for more detail.

Faces and lists creation

Each of the described objects has its table in the Faces database. You can find database tables for the objects and their description in the "Faces database description" section.

Each attribute, face, and list in the Faces database is associated with a specific Account ID. It enables you to restrict access to data for different users.

The external ID field enables you to set your own ID for the face. You can create persons with several attached faces in an external system using the external ID.

A list contains user data, creation time, and last update time fields. You can find the list table schema in the "Faces database description" section.

Matching services#

Python Matcher has the following features:

  • Matching according to the specified filters. This matching is performed directly on the Faces or the Events database. Matching by DB is beneficial when several filters are set.
  • Matching by lists. In this case, it is recommended that descriptors be saved in the Python Matcher cache.

Python Matcher Proxy is used to route requests to Python Matcher services and matching plugins.

Python Matcher#

Python Matcher utilizes Faces DB for filtration and matching when faces are set as candidates for matching and filters for them are specified. This feature is always enabled for Python Matcher.

Python Matcher utilizes Events DB for filtration and matching when events are set as candidates for matching and filters for them are specified. The matching using the Events DB is optional, and it is not used when the Events service is not utilized.

A VLMatch matching function is required for matching by DB. It should be registered for the Faces DB and the Events DB. The function utilizes a library that should be compiled for your current DB version. You can find information about it in the installation manual in "VLMatch library compilation", "Create VLMatch function for Faces DB", and "Create VLMatch function for Events DB" sections.

When faces are set as candidates for matching, and list IDs are specified as filters, Python Matcher will perform matching by lists. In this case, it caches all the lists to improve performance.

The CACHE_ENABLED parameter in the DESCRIPTORS_CACHE setting should be set to "true" in the Python Matcher configurations to perform caching.

The Python Matcher service additionally uses worker processes that process requests.

Python Matcher Proxy#

The API service sends requests to the Python Matcher Proxy if it is configured in the API configuration. Then the Python Matcher Proxy service redirects requests to the Python Matcher service or to matching plugins (if they are used).

If the matching plugins are not used, the service routes requests only to the Python Matcher service. Thus, you don't need to use Python Matcher Proxy unless you intend to use matching plugins. See the "Matching plugins" section for a description of how the matching plugins work.

Worker processes cache#

When multiple worker processes are launched for the Python Matcher service, each of the worker processes uses the same descriptors cache.

A shared cache can either speed up or slow down the service. If you need to ensure that the cache is stored in each of the Python Matcher processes, you should run each of the server instances separately.

Events service#

The Events service is used for:

  • Storage of all the created events in the Events database.
  • Returning all the events that satisfy filters.
  • Gathering statistics on all the existing events according to the specified aggregation and frequency/period.
  • Storage of descriptors created for events.

As an event is a report, you cannot modify already existing events.

The Events service should be enabled in the API service configuration file. Otherwise, events will not be saved to the database.

Database for Events#

PostgreSQL is used as a database for the Events service.

The speed of request processing is primarily affected by:

  • the number of events in the database
  • lack of indexes for PostgreSQL

PostgreSQL shows acceptable request processing speed when the number of events is between 1,000,000 and 10,000,000. If the number of events exceeds 10,000,000, requests to PostgreSQL may fail.

The speed of the statistics requests processing in the PostgreSQL database can be increased by configuring the database and creating indexes.

Geo position#

You can add a geo position during event creation.

The geo position is represented as a JSON with GPS coordinates of the geographical point:

  • longitude - geographical longitude in degrees
  • latitude - geographical latitude in degrees

The geo position is specified in the "location" body parameter of the event creation request. See the "Create new events" section of the Events service reference manual.

You can use the geo position filter to receive all the events that occurred in the required area.

Geo position filter#

A geo position filter is a bounding box specified by coordinates of its center (origin) and some delta.

It is specified using the following parameters:

  • origin_longitude
  • origin_latitude
  • longitude_delta
  • latitude_delta

The geo position filter can be used when you get events, get statistics on events, and perform events matching.

A geo position filter is considered properly specified if:

  • both origin_longitude and origin_latitude are set;
  • none of origin_longitude, origin_latitude, longitude_delta, or latitude_delta is set.

If both origin_longitude and origin_latitude are set and longitude_delta is not set, the default value is applied (see the default value in the OpenAPI documentation).
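
For illustration, a hedged "get events" sketch with a geo position filter is given below. Passing the filter as query parameters of an "/events" resource is an assumption; verify the parameter names and the resource path in the OpenAPI documentation.

import requests

API_URL = "http://127.0.0.1:5000/6"               # assumption
HEADERS = {"Luna-Account-Id": "your-account-id"}  # assumption

params = {
    "origin_longitude": 16.79,
    "origin_latitude": 64.92,
    "longitude_delta": 0.01,
    "latitude_delta": 0.01,
}

# Only events whose geo position falls into the bounding box are expected to be returned.
response = requests.get(f"{API_URL}/events", headers=HEADERS, params=params)
print(response.json())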

Read the following recommendations before using geo position filters.

The general recommendations and restrictions for geo position filters are:

  • Do not create filters with a vertex or a border on the International Date Line (IDL), the North Pole, or the South Pole. They are not fully supported due to the features of the database spatial index, and the filtering result may be unpredictable.
  • Geo position filters with edges more than 180 degrees long are not allowed.
  • It is highly recommended to use the geo position filter citywide only. If a larger area is specified, the filtration results on the borders of the area can be unexpected due to the spatial features.
  • Avoid creating a filter that is too extended along longitude or latitude. It is recommended to set the delta values close to each other.

The last two recommendations exist due to the spatial features of the filter. According to these features, when one or two deltas are set to large values, the result may differ from the expected though it will be correct. See the "Filter features" section for details.

Filter performance#

Geo position filter performance depends on the spatial data type used to store event geo position in the database.

Two spatial data types are supported:

  • GEOMETRY: a spatial object with coordinates expressed as (longitude, latitude) pairs, defined in the Cartesian plane. All calculations use Cartesian coordinates.
  • GEOGRAPHY: a spatial object with coordinates expressed as (longitude, latitude) pairs, defined as on the surface of a perfect sphere, or a spatial object in the WGS84 coordinate system.

For a detailed description, see geometry vs geography.

The geo position filter is based on the ST_Covers PostGIS function, which is supported for both the geometry and geography types.

Filter features#

Geo position filter has some features caused by PostGIS.

When geography type is used and the geo position filter covers a wide portion of the planet surface, filter result may be unexpected but geographically correct due to some spatial features.

The following example illustrates this case.

An event with the following geo position was added in the database:

{
    "longitude": 16.79,
    "latitude": 64.92
}

We apply a geo position filter and try to find the required point on the map. The filter is too extended along the longitude:

{
    "origin_longitude": 16.79,
    "origin_latitude": 64.92,
    "longitude_delta": 2,
    "latitude_delta": 0.01
}

This filter will not return the expected event. The event will be filtered due to spatial features. Here is the illustration showing that the point is outside the filter.

Too wide zone

You should consider this feature to create a correct filter.

For details, see Geography.

Events creation#

Events are created using handlers. Handlers are stored in the Handlers database. You should specify the required handler ID in the event creation request. All the data stored in the event will be received according to the handler parameters.

You should perform two separate requests for event creation.

The first request creates a handler. A handler includes policies that describe how the image is processed and hence define the LP services used for the processing.

The second request creates new events using the existing handler. An event is created for each image that has been processed.

You can specify the following additional data for each event creation request:

  • external ID (for created faces),
  • user data (for created faces),
  • source (for created events),
  • tags (for created events).

The handler is processed policy after policy. All the data from the request is processed by a policy before going to the next policy. The "detect" policy is performed for all the images from the request, then the "multiface" policy is applied, then the "extract" policy is performed for all the received samples, etc. For more information about handlers, see the "Handlers description" section.
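
A hedged two-step sketch is given below. The policy layout and the "/handlers/{handler_id}/events" path for the "generate events" request are assumptions; verify them against the API service reference manual.

import requests

API_URL = "http://127.0.0.1:5000/6"               # assumption
HEADERS = {"Luna-Account-Id": "your-account-id"}  # assumption

# Step 1: create a handler ("create handler" request).
handler = {
    "description": "turnstile camera",
    "policies": {
        "detect_policy": {"detect_face": 1},
        "extract_policy": {"extract_descriptor": 1},
    },
}
handler_response = requests.post(f"{API_URL}/handlers", headers=HEADERS, json=handler).json()
handler_id = handler_response["handler_id"]  # response field name is an assumption

# Step 2: generate events using the existing handler; one event is created per processed image.
with open("photo.jpg", "rb") as image_file:
    image_bytes = image_file.read()

response = requests.post(
    f"{API_URL}/handlers/{handler_id}/events",
    headers={**HEADERS, "Content-Type": "image/jpeg"},
    data=image_bytes,
)
print(response.json())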

Sender service#

The Sender service is an additional service that is used to send events via web sockets. This service communicates with the Handlers service (in which events are created) through the pub/sub mechanism via the Redis DB channel.

You should configure the web socket connection using a special request. It is recommended to create the web socket connection using the "/ws" resource of the API service. See "APIReferenceManual.html" for details about configuring the web socket connection.

Configuring web sockets directly via the Sender service is also available (the "/ws" resource of the Sender service). This can be used to reduce the load on the API service.

When an event is created it can be:

  • saved to the Events database. The Events service should be enabled to save an event;

  • returned in the response without saving to the database.

In both cases, the event is sent via the Redis DB channel to the Sender service.

In this case, the Redis DB acts as a connection between Sender and Handlers services and does not store transferred events.

The Sender service is independent of the Events service. Events can be sent to Sender even if the Events service is disabled.

Sender workflow

The general workflow is as follows:

  1. A user or an application sends requests for new event creation to the API service;
  2. The API service sends the request to the Handlers service;
  3. The Handlers service sends requests to the corresponding LP services;
  4. LP services process the requests and send results. New events are created;
  5. Handlers sends the events to the Redis DB. Redis has a channel to which the Sender service is subscribed;
  6. Redis sends the received events to Sender via the channel;
  7. Third-party applications should be subscribed to the Sender service via web sockets to receive events. If there is a subscribed third-party application, Sender sends events to it according to the specified filters.

See the OpenAPI documentation for information about the JSON structure returned by the Sender service.
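
A hedged subscriber sketch using the third-party "websockets" package is given below. The URI (API service address, "/6" prefix, "/ws" resource) and the omission of filter query parameters are assumptions; the actual connection parameters and filters are described in the OpenAPI documentation.

import asyncio
import websockets  # third-party package: pip install websockets

async def listen_events():
    # Assumption: web socket connection through the API service "/ws" resource;
    # event filters can usually be passed as query parameters (see the OpenAPI docs).
    uri = "ws://127.0.0.1:5000/6/ws"
    async with websockets.connect(uri) as connection:
        while True:
            event_message = await connection.recv()
            print(event_message)  # JSON with the event data

asyncio.run(listen_events())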

Tasks service#

The Tasks service is used for long tasks processing.

General information about tasks#

As tasks processing takes time, the task ID is returned in the response to the task creation.

After the task processing is finished, you can receive the task results using the "tasks" > "get task result" request. You should specify the task ID to receive its results.

You can find examples of task processing results in the response section of the "tasks" > "get task result" request. You should select the task type in the Response samples section of the documentation.

Select required example

You should make sure that the task was finished before requesting its results:

  • You can check the task status by specifying the task ID in the "tasks" > "get task" request. There are the following task statuses:

    • pending - 0
    • in progress - 1
    • cancelled - 2
    • failed - 3
    • collect results - 4
    • done - 5
  • You can receive information about all the tasks using the "tasks" > "get tasks" request. You can set filters to receive information about the tasks of interest only. A hedged polling sketch is given after this list.
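
Once the status is "done", the result can be downloaded. Below is a hedged polling sketch in Python; the base URL, the "Luna-Account-Id" header, the "/tasks/{task_id}" and "/tasks/{task_id}/result" paths, and the "task_status" response field are assumptions, so verify them against the "get task" and "get task result" requests in the reference manual.

import time
import requests

API_URL = "http://127.0.0.1:5000/6"               # assumption: API address and version prefix
HEADERS = {"Luna-Account-Id": "your-account-id"}  # assumption: authorization scheme

task_id = 123                      # hypothetical ID returned by a task creation request
CANCELLED, FAILED, DONE = 2, 3, 5  # status values from the table above

# Poll the task until it reaches a terminal status ("get task" request).
while True:
    task = requests.get(f"{API_URL}/tasks/{task_id}", headers=HEADERS).json()
    if task["task_status"] in (CANCELLED, FAILED, DONE):
        break
    time.sleep(5)

# Download the result only for successfully finished tasks ("get task result" request).
if task["task_status"] == DONE:
    result = requests.get(f"{API_URL}/tasks/{task_id}/result", headers=HEADERS)
    print(result.json())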

Types of tasks#

Clustering task#

As the result of the task, clusters are created from objects selected according to the specified filters for faces or events. Objects corresponding to all of the filters will be added to the clusters. Available filters depend on the object type: events or faces.

You can receive the task status or result using additional requests (see the "General information about tasks").

You can use the reporter task to receive the report about objects added to clusters.

Clustering is performed in several steps:

  • Objects with descriptors are collected according to the provided filters.

  • Every object is matched with all the other objects.

  • Clusters are created as groups of "connected components" of the similarity graph. Here "connected" means that the similarity is greater than the provided threshold or the default "DEFAULT_CLUSTERING_THRESHOLD" from the config.

  • If needed, the existing images corresponding to each object are downloaded: the avatar for a face, the first sample for an event.

As a result of the task, an array of clusters is returned. A cluster includes the IDs of objects (faces or events) whose similarity is greater than the specified threshold. You can use this information for further data analysis.

{
    "errors": [],
    "result": {
        "clusters": [
            [
                "6c721b90-f5a0-409a-ab70-bc339a70184c"
            ],
            [
                "8bc6e8df-410b-4065-b592-abc5f0432a1c"
            ],
            [
                "e4e3fc66-53b4-448c-9c88-f430c00cb7ea"
            ],
            [
                "02a3a1c4-93d7-4b69-99ec-21d5ef23852e",
                "144244cb-e10e-478c-bdac-18cd2eb27ee6",
                "1f4cdbcb-7b1e-40cc-873b-3ff7fa6a6cf0"
            ]
        ],
        "total_objects": 6,
        "total_clusters": 4
    }
}

The clustering task result can also include information about errors that occurred during object processing.

Reporter task#

As a result of the task, the report on the clustering task is created. You can select data that should be added to the report. The report has CSV format.

You can receive the task status or result using additional requests (see the "General information about tasks").

You should specify the clustering task ID and the columns that should be added to the report. The selected columns correspond to the general events and faces fields.

Make sure that the selected columns correspond to the objects selected in the clustering task.

You can also receive the images for all the objects in clusters if they are available.

Exporter task#

The task enables you to collect event and/or face data and export them from LP to a CSV file. The file rows represent requested objects and corresponding samples (if they were requested).

This task uses memory when collecting data, so it is possible that the Tasks worker will be killed by the OOM (Out-Of-Memory) killer if you request a lot of data.

You can export event or face data using the "/tasks/exporter" request. You should specify which type of object is required by setting the objects_type parameter when creating a request. You can also narrow your request by providing filters for face and event objects. See the "exporter task" request in the API service reference manual.
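
A hedged task creation sketch is given below. The filter field names are assumptions; see the "exporter task" request in the API service reference manual for the actual schema.

import requests

API_URL = "http://127.0.0.1:5000/6"               # assumption
HEADERS = {"Luna-Account-Id": "your-account-id"}  # assumption

body = {
    "objects_type": "events",                                  # or "faces"
    "filters": {"create_time__gte": "2023-01-01T00:00:00Z"},   # hypothetical filter
}

# The response is expected to contain the created task ID; poll it as described
# in "General information about tasks".
response = requests.post(f"{API_URL}/tasks/exporter", headers=HEADERS, json=body)
print(response.json())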

As a result of the task a zip archive containing a CSV file is returned.

You can receive the task status or result using additional requests (see the "General information about tasks").

Cross-matching task#

When the task is performed, all the references are matched with all the candidates. References and candidates are set using filters for faces and events.

Matching is performed only for objects that contain extracted descriptors.

You can specify the maximum number of matching candidates returned for every match using the limit field.

You can set a threshold to specify the minimal acceptable value of similarity. If the similarity of two descriptors is lower than the specified value, the matching result will be ignored and not returned in the response. References without matches with any candidates are also ignored.

Cross-matching is performed in several steps:

  • Objects having descriptors are collected using the provided filters.
  • Every reference object is matched with every candidate object.
  • Match results are sorted (lexicographically) and cropped (the limit and threshold are applied).

You can receive the task status or results using additional requests (see the "General information about tasks").

As a result, an array is returned. Each element of the array includes a reference and the top similar candidates for it. Information about errors that occurred during the task execution is also returned in the response.

{
"result": [
    {
        "reference_id": "e99d42df-6859-4ab7-98d4-dafd18f47f30",
        "candidates": [
            {
                "candidate_id": "93de0ea1-0d21-4b67-8f3f-d871c159b740",
                "similarity": 0.548252
            },
            {
                "candidate_id": "54860fc6-c726-4521-9c7f-3fa354983e02",
                "similarity": 0.62344
            }
        ]
    },
    {
        "reference_id": "345af6e3-625b-4f09-a54c-3be4c834780d",
        "candidates": [
            {
                "candidate_id": "6ade1494-1138-49ac-bfd3-29e9f5027240",
                "similarity": 0.7123213
            },
            {
                "candidate_id": "e0e3c474-9099-4fad-ac61-d892cd6688bf",
                "similarity": 0.9543
            }
        ]
    }
],
"errors": [
    {
        "error_id": 10,
        "task_id": 123,
        "subtask_id": 5,
        "error_code": 0,
        "description": "Faces not found",
        "detail": "One or more faces not found, including face with id '8f4f0070-c464-460b-bf78-fac225df72e9'",
        "additional_info": "8f4f0070-c464-460b-bf78-fac225df72e9",
        "error_time": "2018-08-11T09:11:41.674Z"
    }
]
}

Linker task#

The task enables you to attach faces to lists according to the specified filters.

You can specify the creation of a new list or specify an already existing list in the request.

You can specify filters for faces or events to perform the task. When an event is specified for linking to a list, a new face is created based on the event.

If the create_time_lt filter is not specified, it will be set to the current time.

As the result of the task you receive IDs of faces linked to the list.

You can receive the task status or result using additional requests (see the "General information about tasks").

Task execution process for faces:

  • A list is created (if create_list parameter is set to 1) or the specified list_id existence is checked.
  • Face ID boundaries are received. Then one or several subtasks are created, with about 1000 face IDs each. The number depends on the face ID distribution.
  • For each subtask:

    • Face IDs are received. They are specified for the current subtask by filters in the subtask content.
    • The request is sent to the Luna Faces to link specified faces to the specified list.
    • The result for each subtask is saved to the Image Store service.
  • After the last subtask is finished, the worker collects results of all the subtasks, merges them and puts them to the Image Store service (as task result).

Task execution process for events:

  • A list is created (if create_list parameter is set to 1) or the specified list_id existence is checked.
  • Events page numbers are received. Then one or several subtasks are created.
  • For each subtask:

    • Events with their descriptors are received from the Events service.
    • Faces are created using the Faces service. Attribute(s) and sample(s) are added to the faces.
    • The request is sent to the Luna Faces to link specified faces to the specified list.
    • The result for each subtask is saved to the Image Store service.
  • After the last subtask is finished, the worker collects results of all the subtasks, merges them and puts them to the Image Store service (as task result).

Garbage collection task#

During the task processing, descriptors or events can be deleted.

  • when descriptors are set as a GC target, you should specify the descriptor version. All the descriptors of the specified version will be deleted.
  • when events are set as a GC target, you should specify one or several of the following parameters:
    • account ID.
    • the upper excluded boundary of event creation time.
    • the upper excluded boundary of the event appearance in the video stream.
    • the ID of the handler used for the event creation.

If necessary, you can delete samples or image origins along with events.

A garbage collection task with events set as the target can be processed using the API service, while the Admin or Tasks service API can be used to set both events and descriptors as the target. In the latter case, the specified objects will be deleted for all the existing accounts.

You can receive the task status or result using additional requests (see the "General information about tasks").

Re-extraction task (additional extraction)#

The re-extraction task is used to update to a new neural network for descriptor extraction. All the descriptors of the previous version will be re-extracted using the new NN.

The samples for these descriptors should be stored for the task execution. If any descriptors do not have source samples, they cannot be updated to a new NN version.

You should run the task with:

  • extraction_target - "descriptor"
  • missing - true
  • descriptor_version - new descriptor version

During the task processing, a descriptor of a new neural network will be extracted for each object (face or event) which has a descriptor of the default version.

The old descriptors are not replaced. They can be deleted using the garbage collection task.

Extraction of missing descriptors is done in several steps:

  • The task is split into several subtasks, divided by different ranges.
  • A list of faces that have descriptors of the default descriptor version is received.
  • Descriptors are extracted using the new neural network.
  • The task result is a list with face IDs, their samples, and generation (required for tracking sample changes).

You can receive the task status or result using additional requests (see the "General information about tasks").

The task can be created using the Admin service API only. See the "tasks" > "create additional extract task" request in the Admin service reference manual.

ROC-curve calculating task#

As a result of the task, the Receiver Operating Characteristic curve with TPR (True Positive Rate) against the FPR (False Positive Rate) is created.

See additional information about ROC-curve creation in "TasksDevelopmentManual".

ROC calculation task

ROC (or Receiver Operating Characteristic) is a performance measurement for classification tasks at various threshold settings. The ROC-curve is plotted with TPR (True Positive Rate) against FPR (False Positive Rate). TPR is the true positive match pair count divided by the count of total expected positive match pairs, and FPR is the false positive match pair count divided by the count of total expected negative match pairs. Each point (FPR, TPR) of the ROC-curve corresponds to a certain similarity threshold. See more at wiki.

Using ROC the model performance is determined by looking at:

  • the area under the ROC-curve (or AUC);
  • type I and type II error rates equal point, i.e. the ROC-curve and the secondary main diagonal intersection point.

The model performance is also determined by the hit-into-top-N probability, i.e., the probability that a positive match pair falls into the top N of any match result group sorted by similarity.

Markup is required to create a ROC task. One can optionally specify threshold_hit_top (default 0) to calculate the hit-into-top-N probability, the match limit (default 5), key_FPRs (a list of key FPR values used to calculate key points of the ROC-curve), and filters with account_id. The account_id is also needed for task creation.

You can receive the task status or result using additional requests (see the "General information about tasks").

Markup

Markup is expected in the following format:

[{'face_id': <face_id>, 'label': <label>}]

Label (or group id) can be a number or any string.

Example:

[{'face_id': '94ae2c69-277a-4e46-817d-543f7d3446e2', 'label': 0},
 {'face_id': 'cd6b52be-cdc1-40a8-938b-a97a1f77d196', 'label': 1},
 {'face_id': 'cb9bda07-8e95-4d71-98ee-5905a36ec74a', 'label': 2},
 {'face_id': '4e5e32bb-113d-4c22-ac7f-8f6b48736378', 'label': 3},
 {'face_id': 'c43c0c0f-1368-41c0-b51c-f78a96672900', 'label': 2}]

Estimator task#

The estimator task enables you to perform batch processing of images using the specified policies.

As a result of the task, JSON is returned with data for each of the processed images and information about the errors that have occurred.

In the request body, you can specify the handler_id of an already existing static or dynamic handler. For the dynamic handler_id, the ability to set the required policies is available. In addition, you can create a static handler specifying policies in the request.

The resource can accept four types of sources with images for processing:

  • ZIP archive
  • S3-like storage
  • Network disk
  • FTP server

To obtain correct results of image processing using the Estimator task, all processed images should be either in the source format or in the format of samples. The type of transferred images is specified in the request in the "image_type" parameter.

ZIP archive as image source of estimator task

The resource accepts a link to a ZIP archive with images for processing. The maximum size of the archive is set using the "ARCHIVE_MAX_SIZE" parameter in the "config.py" configuration file of the Tasks service. The default size is 100 GB. An external URL or the URL of an archive saved in the Image Store can be used as a link to the archive. In the latter case, the archive should first be saved to LP using a POST request to the "/objects" resource.

When using an external URL, the ZIP archive is first downloaded to the Task Worker container storage, where the images are unpacked and processed. After the end of the task, the archive is deleted from the repository along with the unpacked images.

Take into account that free disk space is required for the above actions.

The archive can be password protected. The password can be passed in the request using the "authorization" -> "password" parameter.
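
A hedged task creation sketch with a ZIP archive source is given below. The body layout (the "content"/"source" nesting and field names) is an assumption; see the "/tasks/estimator" resource description for the actual schema.

import requests

API_URL = "http://127.0.0.1:5000/6"               # assumption
HEADERS = {"Luna-Account-Id": "your-account-id"}  # assumption

body = {
    "content": {
        "handler_id": "2e7e5b9a-3c2f-4a36-8e6d-0d6a9d8e5f11",   # hypothetical handler ID
        "source": {
            "source_type": "zip",                               # assumed field names
            "reference": "https://example.com/images.zip",      # external URL or Image Store URL
            "authorization": {"password": "archive-password"},  # for protected archives
        },
        "image_type": 0,  # assumption: 0 - source images, 1 - samples
    }
}

response = requests.post(f"{API_URL}/tasks/estimator", headers=HEADERS, json=body)
print(response.json())  # the created task ID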

S3-like storage as image source of estimator task

The following parameters can be set for this type of source:

  • bucket_name - bucket name/Access Point ARN/Outpost ARN (required);
  • endpoint - storage endpoint (only when specifying the bucket name);
  • region - bucket region (only when specifying the bucket name);
  • prefix - file key prefix. It can also be used to load images from a specific folder, such as "2022/January".

The following parameters are used to configure authorization:

  • Public access key (required);
  • Secret access key (required);
  • Authorization signature version ("s3v2"/"s3v4").

It is also possible to recursively download images from nested bucket folders and save original images.

For more information about working with S3-like repositories, see AWS User Guide.

Network disk as image source of estimator task

The following parameters can be set for this type of source:

  • path - absolute path to the directory with images in the container (required).
  • follow_links - enables/disables symbolic link processing;
  • prefix - file key prefix;
  • postfix - file key postfix.

See an example of using prefixes and postfixes in the "/tasks/estimator" resource description.

When using a network disk as an image source and launching the Tasks and Tasks Worker services in Docker containers, it is necessary to mount the directory with images from the network disk to a local directory and synchronize it with the specified directory in the container. You can mount a directory from a network disk in any convenient way. After that, you can synchronize the mounted directory with the directory in the container using the following command when launching the Tasks and Tasks Worker services:

docker run \
...
-v /var/lib/luna/current/images:/srv/images
...

/var/lib/luna/current/images - path to the previously mounted directory with images from the network disk.

/srv/images - path to the directory with the images in the container where they will be moved from the network disk. This path should be specified in the request body of the Estimator task (the "path" parameter).

As with S3-like storage, the ability to recursively download images from nested folders is available.

FTP server as image source of estimator task

For this type of source, the following parameters can be set in the request body for connecting to the FTP server:

  • host - FTP server IP address or hostname (required);
  • port - FTP server port;
  • max_sessions - maximum number of allowed sessions on the FTP server;
  • user, password - authorization parameters (required).

As in Estimator tasks using S3-like storage or network disk as image sources, it is possible to set the path to the directory with images, recursively receive images from nested directories, select the type of transferred images, and specify the prefix and postfix.

See an example of using prefixes and postfixes in the "/tasks/estimator" resource description.

Task processing#

The Tasks mechanism includes the Tasks service and Tasks workers. The Tasks service receives requests, creates tasks in the DB, and sends subtasks to Tasks workers. Tasks workers receive subtasks and perform all the required requests to other services to solve the subtasks.

The general approach for working with tasks is listed below.

  • A user sends the request for creation of a new task;
  • Tasks service creates a new task and sends subtasks to workers;
  • The Tasks workers process subtasks and create reports;
  • If several workers have processed subtasks and have created several reports, the worker, which finished the last subtask, gathers all the reports and creates a single report;
  • When the task is finished, the last worker updates its status in the Tasks database;
  • The user can send requests to receive information about tasks and subtasks and the number of active tasks. The user can cancel or delete tasks;
  • The user can receive information about errors that occurred during execution of the tasks;
  • After the task is finished the user can send a request to receive results of the task.

See the "Tasks diagrams" section for details about tasks processing.

Admin service#

The Admin service is used to perform general administrative routines:

  • Manage user accounts;
  • Receive information about objects belonging to different accounts;
  • Create garbage collection tasks;
  • Create tasks to extract descriptors with a new neural network version;
  • Receive reports and errors on processed tasks;
  • Cancel and delete existing tasks.

Admin service has access to all the data attached to different accounts.

All the requests to the Admin service are described in the Admin service reference manual.

Admin user interface#

The service has its own user interface to simplify administrative tasks.

Admin user interface window

Open the Admin interface in your browser: <Admin_server_address>:5010

This URL may differ. In this example, the Admin service interface is opened on the Admin service server. The Admin service is launched on the default port.

The default login and password to access the interface are root@visionlabs.ai/root. You can also use default login and password in Base64 format - cm9vdEB2aXNpb25sYWJzLmFpOnJvb3Q=.

You can change the default password for the Admin service using the "Change authorization" request.

You can create new account IDs or add existing account IDs to track their data using the UI.

You can manage account IDs using the following buttons:

  • Add a new account.
  • Delete the account ID.
  • View the information provided for the account ID.

You can create a garbage collection task and task for the extraction of the new version of descriptors using the "Tasks" tab of the user interface.

You must press the start button to create a new task. After the button is pressed, you can specify additional parameters for the task.

The tasks are performed by the Tasks service after the request from the Admin service.

Account creation using Admin service#

Three types of accounts can be created in the Admin service - "user", "advanced_user" and "admin". The first two types are created using an account creation request to the API service, but the third type can only be created using the Admin service.

Using the "admin" account type, you can log in to the interface and perform the above tasks. An account with the "admin" type can be created either in the user interface (see above) or by requesting the "/4/accounts" resource of the Admin service. To create an account in the last way, you need to specify a username and password.

If you are creating an account for the first time, you must use the default login and password.

Example of CURL request to the "/4/accounts" resource of the Admin service:

curl --location --request POST 'http://127.0.0.1:5010/4/accounts' \
--header 'Authorization: Basic cm9vdEB2aXNpb25sYWJzLmFpOnJvb3Q=' \
--header 'Content-Type: application/json' \
--data '{
  "login": "mylogin@gmail.com",
  "password": "password",
  "account_type": "admin",
  "description": "description"
}' 

Get system info#

The Admin service provides a "System info request" that returns technical information about LP.

This information is required for our technical support. When you send us an issue, please attach the received JSON file to your letter. Use one of the following ways to receive system information:

  • Create a request to the Admin service using "System info request". The request is described in the Admin OpenAPI documentation;
  • Open the Admin service UI and go to the "Help" tab. The button is in the top right corner of the interface. Press the "Get LUNA PLATFORM system info" button. The JSON file with information will be saved on your PC.

Configurator service#

The Configurator service simplifies the configuration of LP services.

The service stores all the required configurations for all the LP services in a single place. You can edit configurations through the user interface or special limitation files.

You can also store configurations for any third-party software in Configurator.

The general workflow is as follows:

  • The user edits configurations in the UI;
  • Configurator stores all changed configurations and other data in the database;

  • LP services request Configurator service during startup and receive all required configurations. All the services should be configured to use the Configurator service.

Configurator workflow

During Configurator installation, you can also use your limitation file with all the required fields to create limitations and fill in the Configurator database. You can find more details about this process in the "ConfiguratorDevopsManual" documentation.

Settings used by several services are updated for each of the services. For example, if you edit the "LUNA_FACES_ADDRESS" setting for the Handlers service in the Configurator user interface, the setting will be also updated for API, Admin and Python Matcher services.

Configurator UI#

Open the Configurator interface in your browser: <Configurator_server_address>:5070

This URL may differ. In this example, the Configurator service interface is opened on the Configurator service server.

LP includes the beta version of the Configurator UI. The UI was tested in the Chrome and Yandex browsers. The recommended screen resolution for working with the UI is 1920 x 1080.

The following tabs are available in the UI of Configurator:

  • Settings. All the data in the Configurator service is stored on the Settings tab. The tab displays all the existing settings and allows you to manage and filter them;
  • Limitations. The tab is used to create new limitations for settings. The limitations are templates for JSON files that contain the available data types and other rules for defining the parameters;
  • Groups. The tab allows you to group all the required settings. When you select a group on the Settings tab, only the settings corresponding to the group are displayed. The Groups tab also makes it possible to get settings by filters and/or tags for a single specific service;
  • About. The tab includes information about the Configurator service interface.

Settings#

Each of the Configurator settings contains the following fields:

  • Name - a name for the setting;
  • Description - a description of the setting;
  • ID - a unique setting ID;
  • Create time - the setting creation time;
  • Last update time - the time of the last setting update;
  • Value - the body of the setting;
  • Schema - a verification template for the setting body;
  • Tags - tags for the setting, used to filter settings for the services.

Configurator interface
Configurator interface

The "Tags" field is not available for the default settings. You should press the Duplicate button and create a new setting on the basis of the existing one.

The following options for the settings are available:

  • Create a new setting - press the Create new button, enter required values and press Create. You should also select an already existing limitation for the setting. The Configurator will try to check the value of a setting if the Check on save flag is enabled and there is a limitation selected for the setting;

  • Duplicate existing setting - press the Duplicate button on the right side of the setting, change the required values and press Create. The Configurator will try to check the setting value if the Check on save flag is enabled on the lower left side of the screen and a limitation is selected for the setting;

Duplicate setting window
Duplicate setting window
  • Delete existing setting - press the Delete button on the right side of the setting.

  • Update existing setting - change name, description, tags, value and press Save button on the right side of the setting.

  • Filter existing settings by name, description, tags, service names, groups - use the filters on the left side of the screen and press Enter or click the Search button.

Show limitations - the flag enables displaying the limitation for each of the settings.

JSON editors - the flag enables you to switch the representation mode of the Value field. If the flag is disabled, the name of the parameter and a field for its value are displayed. If the flag is enabled, the Value field is displayed as JSON.

The Filters section on the left side of the window enables you to display all the required settings according to the specified values. You may enter the required name manually or select it from the list:

  • Setting. The filter enables you to display the setting with the specified name;
  • Description. The filter enables you to display all settings with the specified description or part of a description;
  • Tags. The filter enables you to display all settings with the specified tag;
  • Service. The filter enables you to display all settings that belong to the selected service;
  • Group. The filter enables you to display all settings that belong to the specified group. For example, you can choose to display all the settings belonging to LP.

Limitations#

Limitations are used as validation schemas for service settings.

Settings and limitations have the same names. A new setting is created upon limitation creation.

Limitations are set by default for each of the LP services. You cannot change the default limitations.

Each of the limitations includes the following fields:

  • Name is the name of the limitation.
  • Description is the description of the limitation.
  • Service list is the list of services that can use settings of this limitation.
  • Schema is the object with the JSON schema used to validate settings.
  • Default value is the default value created with the limitation.

The following actions are available for managing limitations:

  • Create a new limitation - press the Create new button, enter the required values and press Create. A setting with the default value will also be created;
  • Duplicate an existing limitation - press the Duplicate button on the right side of the limitation, change the required values and press Create. A setting with the default value will also be created;
  • Update limitation values - change name/description/service list/validation schema/default values and press the Save button on the right side of the limitation;
  • Filter existing limitations by names, descriptions, and groups;
  • Delete existing limitation - press the Delete button on the right side of the limitation.

Groups#

A group has a name and a description.

It is possible to:

  • Create a new group - press the Create new button, enter the group name and optionally description and press Create;
  • Filter existing groups by group names and/or limitation names - use the filters on the left side and press Enter or click the Search button;
  • Update group description - update the existing description and press the Save button on the right side of the group;
  • Update linked limitation list - to unlink limitation, press "-" button on the right side of the limitation name, to link limitation, enter its name in the field at the bottom of the limitation list and press the "+" button. To accept changes, press the Save button;
  • Delete group - press the Delete button on the right side of the group.

Settings dump#

The dump file includes all the settings of all the LP services.

Receive settings dump#

You can fetch the existing service settings from the Configurator by creating a dump file. This may be useful for saving the current service settings.

To receive a dump file, enter the Configurator container and use one of the following options:

  • wget: wget -O settings_dump.json 127.0.0.1:5070/1/dump;
  • curl: curl 127.0.0.1:5070/1/dump > settings_dump.json;
  • text editor.

The current values, specified in the Configurator service, are received.

Apply settings dump#

To apply the dumped settings, use the db_create.py script with the --dump-file command line argument followed by the created dump file name: base_scripts/db_create.py --dump-file settings_dump.json

You can apply a full settings dump to an empty database only. If any settings already exist, you should drop the database (see "Database drop" below) before applying the new dump.

If the settings update is required, you should delete the whole "limitations" group from the dump file before applying it.

    "limitations":[
      ...
    ],

Follow these steps to apply the dump file:

  1. Enter the Configurator container;

  2. Run python3 base_scripts/db_create.py --dump-file settings_dump.json

Limitations from the existing limitation files are replaced with limitations from the dump file if their names are the same.

Limitations file#

Receive limitation file#

The limitations file includes the limitations of the specified services. It does not include the existing settings and their values.

To download a limitations file for one or more services, perform the following steps:

  1. Enter the Configurator container;

  2. Create the output base_scripts/results directory: mkdir base_scripts/results;

  3. Run the base_scripts/get_limitation.py script: python3 base_scripts/get_limitation.py --service luna-image-store luna-handlers --output base_scripts/results/my_limitations.json.

Note the base_scripts/get_limitation.py script parameters:

  • --service for specifying one or more service names (required);
  • --output for specifying the directory or a file where to save the output. The default value: current_dir/_limitation.json (optional).

Database drop#

Users can wipe out the Configurator database data when needed. After the script finishes processing, a new database structure is created in the Configurator DB.

This operation leads to the loss of all stored settings. Create a settings dump file before executing the following commands!

To drop the Configurator database, use the base_scripts/db_create.py script with the --recreate-database command line argument:

  1. Enter the Configurator container;

  2. Run python3 base_scripts/db_create.py --recreate-database

The --recreate-database command line argument can be combined with the --dump-file command line argument to wipe out the data and apply the required settings in a single run, when needed.
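
For example, the two arguments can be combined to recreate the database and apply a previously saved dump in a single run (the dump file name is the one used in the examples above):

python3 base_scripts/db_create.py --recreate-database --dump-file settings_dump.json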

Existing settings migration#

You can migrate settings in the Configurator DB without changing the already existing values of the settings. The names of the settings are updated according to the current LP build, but their values remain unchanged.

The migration updates LP parameters only. The parameters added by users and parameters not related to LP5 are not updated.

A settings revision is added to the database after the migration is finished. Starting with LP build 5.1.1, this migration is performed automatically during the Configurator database creation.

It is recommended to manually transfer settings for LP builds of version 5.1.0 and earlier to the updated Configurator database.

Licenses service#

General information#

The Licenses service stores information about the available licensed features and their limits.

Use the GET request to the "/license" resource to receive the license information.
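
A minimal sketch of such a request with curl is shown below; the address, port, and possible API version prefix are placeholders and should be checked against the Licenses service OpenAPI documentation:

curl --location --request GET 'http://<licenses_server_address>:<port>/license'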

Information about license#

LP license includes the following features:

  • License expiration date.
  • The maximum number of faces with descriptors available.
  • Info about the availability of the functionality for determining whether the person in the photo is real or whether a presentation attack is being performed (see the "Liveness description" section).
  • Info about the availability of the functionality for checking the image according to the ISO/IEC 19794-5 standard or checking the image with manually set thresholds (see the "Image Check" section).
  • Info about the availability of the functionality for estimating body parameters (see the "Body parameters" section).
  • Info about the possibility of using the Index Matcher service in the LUNA Index Module.

Expiration date#

When the license expires, you cannot use LUNA PLATFORM.

By default, the notification about the end of the license is sent two weeks before the expiration date.

When the license ends, the following message is returned: "License has expired. Please contact VisionLabs for a license extension."

The Licenses service checks the license expiration date and sends notifications to logs and monitoring (in the "license_period_rest" field).

Faces limit#

The Faces service checks the number of faces left according to the maximum available number of faces received from the Licenses service. Only faces with linked descriptors are counted.

The percentage of the used limit for faces with descriptors is written in the Faces log and displayed in the Admin GUI.

The Faces service writes data about created faces with attached descriptors to the monitoring database in the "license_faces_limit_rate" field.

The created faces are written in the Faces log and displayed in the Admin GUI as a percentage of the database fullness. You should calculate the number of faces with descriptors left using the current percentage.
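
For example, if the license allows 1,000,000 faces with descriptors and the log shows that 85% of the limit is used, about 150,000 more faces with descriptors can be created (the numbers are purely illustrative).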

You start receiving notifications when 15% of the available faces are left. When you exceed the number of available faces, the message "License limit exceeded. Please contact VisionLabs for license upgrade or delete redundant faces" appears in the logs. You cannot attach attributes to faces if the number of faces exceeds 110% of the limit.

Liveness#

The following values can be set in the license for the Liveness feature:

  • 0 - Liveness feature is not used.
  • 1 - Liveness v1 is used.
  • 2 - Liveness v2 is used.

For Liveness V2, an unlimited license and a license with a limited number of transactions are available. Liveness V1 is provided with an unlimited license only.

Each use of Liveness in requests reduces the transaction count. It is impossible to use the Liveness score in requests after the transaction limit is exhausted. Requests that do not use Liveness and requests where the Liveness estimation is disabled are not affected by the exhaustion of the limit. They continue to work as usual.

The Licenses service stores information about the liveness V2 transactions left. The number of transactions left is returned in the response from the "/license" resource.

The Handlers service writes data on the number of available Liveness V2 transactions to the monitoring database in the "liveness_balance" field.

A warning about the exhaustion of the available transactions is sent to the monitoring system and the logs of the Handlers service when 2000 Liveness V2 transactions remain (this threshold is set in the system).

Backport 3#

The Backport 3 service is used to process the requests for LUNA PLATFORM 3 using LUNA PLATFORM 5.

Although most of the requests are performed in the same way as in LUNA PLATFORM 3, there are still some restrictions. See "Backport 3 features and restrictions" for details.

See "Backport3ReferenceManual.html" for details about the Backport 3 API.

Backport 3 new resources#

Liveness#

Backport 3 provides Liveness estimation in addition to the LUNA PLATFORM 3 features. See the "liveness > predict liveness" section in "Backport3ReferenceManual.html".

Handlers#

The Backport 3 service provides several handlers: extractor, identify, verify. The handlers enable you to perform several actions in a single request:

  • "handlers" > "face extractor" - enables you to extract a descriptor from an image, create a person with this descriptor, attach the person to the predefined list.

  • "handlers" > "identify face" - enables you to extract a descriptor from an image and match the descriptor with the predefined list of candidates.

  • "handlers" > "verify face" - enables you to extract a descriptor from an image and match the descriptor with the person's descriptor.

The description of the handlers and all their parameters can be found in the corresponding sections of "Backport3ReferenceManual.html".

The requests are based on handlers. Unlike the standard "descriptors" > "extract descriptors", "matching" > "identification", and "matching" > "verification" requests, the requests listed above are more flexible.

You can patch the already existing handlers, thus applying additional estimations to the requests. For example, you can specify head angle thresholds or enable/disable basic attributes estimation.

The Handlers are created for every new account at the moment the account is created. The created handlers include default parameters.

Each of the handlers has the corresponding handler in the Handlers service. The parameters of the handlers are stored in the luna_backport3 database.

Each handler supports GET and PATCH requests, so it is possible to get and update the parameters of each handler.
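
As an illustration, the parameters of a handler can be requested as sketched below; the address and port are placeholders, and the required authorization headers, described in "Backport3ReferenceManual.html", must be added to the request:

curl --location --request GET 'http://<backport3_server_address>:<port>/handlers/extractor'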

Each handler has its version. The version is incremented with every PATCH request. If the current handler is removed, the version will be reset to 1:

  • For the requests with POST and GET methods:

    If the Handlers and/or Backport 3 service has no handler for the specified action, it will be created with default parameters.

  • For requests with PATCH methods:

    If Handlers and/or Backport 3 service has no handler for the specified action, a new handler with a mix of default policies and policies from the request will be created.

Backport 3 architecture#

Interaction of Backport 3 and LP 5 services
Interaction of Backport 3 and LP 5 services

Backport 3 interacts with the API service and sends requests to LUNA PLATFORM 5 using it. In turn, the API service interacts with the Accounts service to check the authentication data.

Backport 3 has its own database (see "Backport 3 database"). Some of its tables are similar to the tables of the Faces database of LP 3. It enables you to create and use the same entities (persons, account tokens and accounts) as in LP 3.

The Backport 3 service uses the Image Store service to store portraits.

You can configure Backport 3 using the Configurator service.

Backport 3 features and restrictions#

The following features have core differences:

For the following resources, the default descriptor version to extract from an image on the POST method is 56:

  • /storage/descriptors
  • /handlers/extractor
  • /handlers/verify
  • /handlers/identify
  • /matching/search

You can still upload the existing descriptors of versions 52, 54, and 56. The older descriptor versions are no longer supported.

  • For the resource /storage/descriptors on method POST, estimation of the saturation property is no longer supported, and the value is always set to 1.
  • For the resource /storage/descriptors on method POST, estimation of the eyeglasses attribute is no longer supported. The attributes structure in the response will lack the eyeglasses member.
  • For the resource /storage/descriptors on method POST, head position angle thresholds can still be sent as float values in the range [0, 180], but they will be internally rounded to integer values. As before, thresholds outside the range [0, 180] are not taken into account.

Garbage Collecting (GC) module#

According to LUNA Platform 3 logic, garbage is the descriptors that are linked neither to a person nor to a list.

For normal system operation, you need to regularly delete garbage from the databases. To do this, run the system cleaning script remove_not_linked_descriptors.py from the ./base_scripts/gc/ folder.

According to the Backport 3 architecture, this script removes from the Faces service the faces that are not linked to any lists or persons in the LUNA Backport 3 database.

Script execution pipeline#

The script execution pipeline consists of several stages:

  1. A temporary table is created in the Faces database. See more info about temporary tables for Oracle or PostgreSQL.

  2. IDs of faces that are not linked to lists are obtained. The IDs are stored in the temporary table.

  3. While the temporary table is not empty, the following operations are performed:

  • A batch of IDs from the temporary table is obtained. The first 10,000 (or fewer) face IDs are received.
  • Filtered IDs are obtained. Filtered IDs are the IDs that do not exist in the person_face table of the Backport 3 database.
  • Filtered IDs are removed from the Faces database. If some of the faces cannot be removed, the script stops.
  • Filtered IDs are removed from the Backport 3 database (a fail-safe check). A warning will be printed.
  • The IDs are removed from the temporary table.

Script launching#

docker run --rm -t --network=host --entrypoint bash dockerhub.visionlabs.ru/luna/luna-backport-3:v.0.4.8 -c "python3 ./base_scripts/gc/remove_not_linked_descriptors.py"

The output will include information about the number of removed faces and the number of persons with faces.

Backport 4#

The Backport 4 service is used to process the requests for LUNA PLATFORM 4 using LUNA PLATFORM 5.

Although most of the requests are performed in the same way as in LUNA PLATFORM 4, there are still some restrictions. See "Backport 4 features and restrictions" for details.

See "Backport4ReferenceManual.html" for details about the Backport 4 API.

Backport 4 architecture#

Interaction of Backport 4 and LP 5 services
Interaction of Backport 4 and LP 5 services

Backport 4 interacts with the API service and sends requests to LUNA PLATFORM 5 using it.

Backport 4 directly interacts with the Faces service to receive the number of existing attributes.

Backport 4 directly interacts with the Sender service. All the requests to Sender are sent using the Backport 4 service. See the "ws" > "ws handshake" request in the "Backport4ReferenceManual.html".

You can configure Backport 4 using the Configurator service.

Backport 4 features and restrictions#

The following features have core differences:

The current versions of the LUNA PLATFORM services are returned in response to a request to the /version resource (see the example request after the list below). For example, the versions of the following services are returned:

  • "luna-faces"
  • "luna-events"
  • "luna-image-store"
  • "luna-python-matcher" or "luna-matcher-proxy"
  • "luna-tasks"
  • "luna-handlers"
  • "luna-api"
  • "LUNA PLATFORM"
  • "luna-backport4" - the current service

Resources changelog:

  • Resource /attributes/count is available without any query parameters and does not support accounting. The resource works with temporary attributes.
  • Resource /attributes on method GET: the attribute_ids query parameter is allowed instead of the page, page_size, time__lt and time__gte query parameters. Thus you can get attributes by their IDs, not by filters. The resource works with temporary attributes.
  • Resource /attributes/<attribute_id> on methods GET, HEAD, DELETE and resource /attributes/<attribute_id>/samples on method GET interact with temporary attributes and return attribute data if the attribute TTL has not expired. Otherwise, the "Not found" error is returned.
  • If you already used the attribute to create a face, use the face_id to receive the attribute data. In this case, the attribute_id from the request is equal to face_id.
  • Resource /faces enables you to create more than one face with the same attribute_id.
  • Resource /faces/<face_id> on method DELETE enables you to remove face without removing its attribute.
  • Resource /faces/<face_id> on method PATCH enables you to patch attribute of the face making the first request to patch event_id, external_id, user_data, avatar (if required) and the second request to patch attribute (if required).
  • If face attribute_id is to be changed, the service will try to patch it with temporary attribute data if the temporary attribute exists. Otherwise, the service tries to patch it with attribute data from the face with face_id = attribute_id.
  • The match policy of resource /handlers now has the default match limit that is configured using the MATCH_LIMIT setting from the Backport 4 config.py file.
  • Resource /events/stats on method POST: attribute_id usage in the filters object is prohibited as this field is no longer stored in the database. The response with the 403 status code will be returned.
  • Attribute_id in events is not null and is equal to face_id for backward compatibility. The GC task is unavailable because all the attributes are temporary and will be removed automatically. Status code 400 is returned on a request to the /tasks/gc resource.
  • The column attribute_id is not added to the report of the Reporter task and this column is ignored if specified in the request. Columns top_similar_face_id, top_similar_face_list, top_similar_face_similarity are replaced by the top_match column in the report if any of these columns is passed in the reporter task request.
  • Linker task always creates new faces from events and ignores faces created during the event processing request.
  • Resource /matcher does not check the presence of provided faces thus error FacesNotFound is never returned. If the user has specified a non-existent candidate of type "faces", no error will be reported, and no actual matching against that face will be made.
  • Resource /matcher checks whether reference with type attribute has the ID of face attribute or the ID of temporary attribute and performs type substitution. Hence it provides sending references for matching in the way it was done in the previous version.
  • Resource /matcher takes matching limits into account. By default, the maximum number of references or candidates is limited to 30. If you need to overcome these limits, configure REFERENCE_LIMIT and CANDIDATES_LIMIT.
  • Resource /ws has been added. There was no /ws resource in the LUNA PLATFORM 4 API as it was a separate resource of the Sender service. This added resource is similar to the Sender service resource, except that attribute_id of candidates faces is equal to face_id.
  • Resource /handlers returns the error "Invalid handler with id ", if the handler was created in the LUNA PLATFORM 5 API and is not supported in LUNA Backport 4.

Backport 4 User Interface#

The User Interface service is used for the visual representation of LP features. It does not include all the functionality available in LP. User Interface enables you to:

  • Upload photos and create faces using them;
  • Create lists;
  • Match existing faces;
  • Show existing events;
  • Show existing handlers.

All the information in User Interface is displayed according to the "Luna-Account-Id" specified in the configuration file of the User Interface service (./luna-ui/browser/env.js).

User Interface works with a single "Luna-Account-Id" at a time.

General pages#

You should open your browser and enter the User Interface address. The default port is 4200.

You can select a page on the left side of the window.

Lists/faces page#

The starting page of User Interface is Lists/Faces. It includes all the faces and lists created using the "Luna_account_id".

Lists/Faces Page
Lists/Faces Page

The left column of the workspace displays the existing lists. You can create a new list by pressing the Add list button. In the window that appears, you can specify the user data for the list.

The right column shows all the created faces with pagination.

Use the Add new faces button to create new faces.

On the first step, you should select photos to create faces from. You can select one or several images with one or several faces in them.

After you select images, all the found faces will be shown in a new dialog window.

All the correctly preprocessed images will be marked as "Done". If the image does not correspond to any of the requirements, an error will be displayed for it.

Press the Next step button.

Select images
Select images

On the next step, you should select the attributes to extract for the faces.

Press the Next step button.

Select attributes
Select attributes

On the next step, you can specify user data and external ID for each of the faces. You can also select lists to which each of the faces will be added. Press Add Faces to create faces.

Add user data, external ID and specify lists
Add user data, external ID and specify lists

You can change pages using arrow buttons.

You can change the display of faces and filter them using buttons in the top right corner.

Filters_icon
Filters_icon

Filter faces. You can filter faces by ID, external ID or list ID;

View_icon
View_icon

/

View_icon_2
View_icon_2

Change view of the existing faces.

Handlers page#

The Handlers page displays all handlers created using the "Luna_account_id".

All the information about specified handler policies is displayed when you select a handler.

You can edit or delete a handler using edit

Edit
Edit

and delete
Delete
Delete

icons.

Handlers page
Handlers page

Events page#

The events page displays all the events created using the "Luna_account_id".

Events Page
Events Page

It also includes filters for displaying events

Filters_icon
Filters_icon

.

Common information#

You can edit

Edit
Edit

or delete
Delete
Delete

an item (face, list or handler) using special icons. The icons appear when you hover the cursor on an item.

Icons for element
Icons for element

Matching dialog#

The Matching button in the bottom left corner of the window enables you to perform matching.

After pressing the button, you can select the number of results to be received for each of the references.

Select number of results
Select number of results

On the first step, you should select references for matching. You can select faces and/or events as references.

Select references
Select references

On the second step, you should select candidates for matching. You can select faces or lists as candidates.

Select candidates
Select candidates

On the last step, you should press the Start matching button to receive results.

Start matching
Start matching