
Services description#

This section provides details on the functions of the LP services.

Databases and message queues are omitted in the following figures.

API service#

LUNA API is a facial recognition web service. It provides a RESTful interface for interaction with other LUNA PLATFORM services.

Using the API service you can send requests to other LP services and solve the following problems:

  • Image processing and analysis:

    • face/body detection in photos;

    • face attributes (age, gender, ethnicity) and face parameters (head pose, emotions, gaze direction, eyes attributes, mouth attributes) estimation;

    • body parameters (age, gender, accessories, headwear, colors of upper and lower clothing, type of sleeves) estimation;

  • Search for similar faces/bodies in the database;

  • Storage of the received face attributes in databases;

  • Creation of lists to search in;

  • Statistics gathering;

  • Flexible request management to meet user data processing requirements.
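
As a sketch, a client request to the RESTful interface might be composed as below. The resource name "detector" and the query parameter names are placeholders, not the actual API; the real resources and parameters are defined in the OpenAPI specification.

```python
# Sketch of composing a request URL for the API service's RESTful
# interface. Resource name and parameters are illustrative placeholders.
from urllib.parse import urlencode

def build_detector_url(base_url, api_version, params):
    """Build the URL of a hypothetical face-detection resource."""
    query = urlencode(params)
    return f"{base_url}/{api_version}/detector?{query}"

url = build_detector_url("http://127.0.0.1:5000", 6,
                         {"estimate_emotions": 1, "estimate_head_pose": 1})
print(url)
# http://127.0.0.1:5000/6/detector?estimate_emotions=1&estimate_head_pose=1
```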

Remote SDK service#

The Remote SDK service is used to:

  • perform face detection and face parameters estimation,
  • perform body detection and body parameters estimation,
  • create samples,
  • extract basic attributes and descriptors, including aggregated ones,
  • process images using handlers and verifiers policies.

Face and body detection, descriptor extraction, and estimation of parameters and attributes are performed using neural networks. The algorithms evolve over time, and new neural networks appear. They may differ from each other in performance and precision. Choose a neural network according to the business case of your company.

Remote SDK with GPU#

Remote SDK service can utilize GPU instead of CPU for calculations. A single GPU is utilized per Remote SDK service instance.

Attribute extraction on the GPU is engineered for maximum throughput. Input images are processed in batches. This reduces the computation cost per image but does not provide the shortest latency per image.

GPU acceleration is designed for high-load applications where the request rate consistently reaches thousands of requests per second. GPU acceleration is not beneficial in lightly loaded scenarios where latency matters.

Aggregation#

Based on all images transferred in one request, a single set of basic attributes and an aggregated descriptor can be obtained. In addition, during event creation, the received values of Liveness, emotions, and medical mask states are aggregated for faces, and upper/lower body parameters, gender, age, and body accessories are aggregated for bodies.

The matching results are more precise for aggregated descriptors. It is recommended to use aggregation when several images are received from the same camera. Improvements from aggregated descriptors are not guaranteed in other cases.

Each parameter is aggregated across the samples. Use the "aggregate_attributes" parameter of the "extract attributes" (faces only) and "sdk" requests to enable attribute aggregation. Aggregation of Liveness, emotions, and mask states for faces, and of upper body parameters, gender, age, and body accessories for bodies, is available using the "aggregate_attributes" parameter in the "generate events" request (provided that these parameters were estimated earlier in the handler), as well as in the "sdk" request.

An array of "sample_ids" is returned in the response even if there was only a single sample used in the request. In this case, a single sample ID is included in the array.
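
The idea of attribute aggregation can be illustrated with a small sketch. This is not the platform's internal algorithm, only a toy model assuming that a numeric attribute (age) is averaged across samples and a categorical one (emotion) is decided by majority vote:

```python
# Toy model of aggregating per-sample estimates into a single result.
# Assumption for illustration: mean for numeric, majority vote for
# categorical attributes; the real aggregation algorithm is internal.
from collections import Counter
from statistics import mean

def aggregate(samples):
    ages = [s["age"] for s in samples]
    emotions = [s["emotion"] for s in samples]
    return {
        "age": round(mean(ages)),                           # numeric: average
        "emotion": Counter(emotions).most_common(1)[0][0],  # categorical: majority
    }

result = aggregate([
    {"age": 31, "emotion": "happiness"},
    {"age": 29, "emotion": "happiness"},
    {"age": 33, "emotion": "neutral"},
])
print(result)  # {'age': 31, 'emotion': 'happiness'}
```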

Descriptor formats#

LUNA PLATFORM supports the following descriptor formats:

Descriptor format  File content                                            Size
SDK                set of bytes (the descriptor itself)                    depends on the neural network version (see "Neural networks")
                   set of bytes indicating the version                     4 bytes
                   set of signature bytes                                  4 bytes
Raw                set of bytes (the descriptor itself) encoded in Base64  depends on the neural network version (see "Neural networks")
XPK files          files that store descriptors in SDK format              depends on the number of descriptors inside the file

Descriptors in the SDK and Raw formats can be directly linked to a face or stored in a temporary attribute (see "Create objects using external data" below).

In most extraction requests, the descriptor is saved to the database as a set of bytes and is not returned in the response body.
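
To make the SDK-format layout from the table concrete, here is a sketch of splitting such a blob into its three parts. The field order and byte order are assumptions made for illustration; the authoritative layout is defined by the SDK.

```python
# Sketch of unpacking an SDK-format descriptor, assuming a layout of
# 4 signature bytes, then 4 version bytes, then the descriptor payload.
# Field placement and endianness are illustrative assumptions.
import struct

def split_sdk_descriptor(blob):
    signature = blob[:4]
    (version,) = struct.unpack("<I", blob[4:8])  # little-endian assumed
    payload = blob[8:]
    return signature, version, payload

blob = b"DESC" + struct.pack("<I", 59) + b"\x01\x02\x03\x04"
sig, ver, payload = split_sdk_descriptor(blob)
print(sig, ver, len(payload))  # b'DESC' 59 4
```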

There are several requests that can be used to get descriptor in SDK format:

Using LUNA PLATFORM, it is not possible to get descriptors in the Raw and XPK formats. You can use other VisionLabs software (e.g., LUNA SDK) to get these formats. Descriptors obtained using the above resources or other VisionLabs software are referred to as raw descriptors.

Use raw descriptors for matching#

The descriptor formats described above can be used in requests that accept raw descriptors.

An external raw descriptor can be used as reference in the following resources:

An external raw descriptor can be used as a candidate in the following resources:

Create objects using external data#

You can create a temporary face attribute by sending basic attributes and descriptors to LUNA PLATFORM. This way, you can keep the data in external storage and send it to LP only when processing requests.

You can create an attribute or face using:

  • basic attributes and their samples;
  • descriptors (raw descriptor in Base64 or SDK descriptor in Base64);
  • both basic attributes and descriptors with the corresponding data.

Samples are optional and are not required for an attribute or face creation.

See the "create temporary attribute" and "create face" requests for details.

Checking images for compliance with standards#

The Remote SDK service enables you to check images for compliance with the ISO/IEC 19794-5:2011 standard or user-specified thresholds in three ways:

For example, you can check whether the image is in a suitable format by specifying the "JPEG" and "JPEG2000" formats as a satisfactory condition. If the image matches this condition, the system returns the value "1"; if the format of the processed image differs from the specified condition, the system returns the value "0". If no conditions are set, the system returns the estimated value of the image format.
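
The check logic described above can be sketched as a small function (the names are illustrative, not the service's internal code):

```python
# Sketch of the condition check: with allowed formats given, return 1/0
# depending on whether the estimated format matches; with no condition
# set, return the estimated value itself.
def check_format(estimated_format, allowed_formats=None):
    if allowed_formats is None:
        return estimated_format
    return 1 if estimated_format in allowed_formats else 0

print(check_format("JPEG", ["JPEG", "JPEG2000"]))  # 1
print(check_format("PNG", ["JPEG", "JPEG2000"]))   # 0
print(check_format("PNG"))                         # PNG
```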

The list of estimations and checks performed is described in the "Image check" section.

The ability to perform check and estimation of image parameters is regulated by a special parameter in the license file.

Enable/disable several estimators and detectors#

By default, the Remote SDK service is launched with all estimators and detectors enabled. If necessary, you can disable some estimators or detectors when launching the Remote SDK container. Disabling unnecessary estimators saves RAM or GPU memory: when the Remote SDK service launches, it checks which estimations can be performed and loads the corresponding neural networks into memory.

If you disable an estimator or detector, you can also remove its neural network from the Remote SDK container.

Estimators and detectors are disabled by passing arguments with their names to the launch command of the Remote SDK service. Arguments are passed to the container using the "EXTEND_CMD" variable.

List of available estimators:

Argument Description
--enable-all-estimators-by-default enable all estimators by default
--enable-human-detector simultaneous detector of faces and bodies
--enable-face-detector face detector
--enable-body-detector body detector
--enable-people-count-estimator people count estimator
--enable-face-landmarks5-estimator face landmarks5 estimator
--enable-face-landmarks68-estimator face landmarks68 estimator
--enable-head-pose-estimator head pose estimator
--enable-liveness-estimator Liveness estimator
--enable-fisheye-estimator FishEye effect estimator
--enable-face-detection-background-estimator image background estimator
--enable-face-warp-estimator face sample estimator
--enable-body-warp-estimator body sample estimator
--enable-quality-estimator image quality estimator
--enable-image-color-type-estimator image color type estimator
--enable-face-natural-light-estimator natural light estimator
--enable-eyes-estimator eyes estimator
--enable-gaze-estimator gaze estimator
--enable-mouth-attributes-estimator mouth attributes estimator
--enable-emotions-estimator emotions estimator
--enable-mask-estimator mask estimator
--enable-glasses-estimator glasses estimator
--enable-eyebrow-expression-estimator eyebrow estimator
--enable-red-eyes-estimator red eyes estimator
--enable-headwear-estimator headwear estimator
--enable-basic-attributes-estimator basic attributes estimator
--enable-face-descriptor-estimator face descriptor extraction estimator
--enable-body-descriptor-estimator body descriptor extraction estimator
--enable-body-attributes-estimator body attributes estimator

You can explicitly specify which estimators and detectors are enabled or disabled by passing the appropriate arguments in the "EXTEND_CMD" variable, or you can enable (the default) or disable all of them with the "--enable-all-estimators-by-default" argument. You can also disable all estimators and detectors and then turn specific ones back on by passing the appropriate arguments.

Example of a command to start the Remote SDK service using only the face detector and the face sample and emotions estimators:

docker run \
...
--env=EXTEND_CMD="--enable-all-estimators-by-default=0 --enable-face-detector=1 --enable-face-warp-estimator=1 --enable-emotions-estimator=1" \
...

Handlers service#

The Handlers service is used to create and store handlers and verifiers.

The data of handlers and verifiers are stored in the Handlers database.

Image Store service#

The Image Store service stores the following data:

Image Store can save data either on a local storage device or in S3-compatible cloud storage (Amazon S3, etc.).

Buckets description#

The data is stored in special directories called buckets. Each bucket has a unique name. Bucket names should be set in lower case.

The following buckets are used in LP:

  • "visionlabs-samples" bucket stores face samples.
  • "visionlabs-bodies-samples" bucket stores body samples.
  • "visionlabs-image-origin" bucket stores source images.
  • "visionlabs-objects" bucket stores objects.
  • "task-result" bucket stores the results received after task processing by the Tasks service.
  • "portraits" bucket stores portraits. It is required for the usage of the Backport 3 service.

Buckets creation is described in the LP 5 installation manual in the "Buckets creation" section.

After the Image Store container is running and the bucket creation commands are executed, the buckets are created in local storage or S3.

By default, local files are stored in the "/var/lib/luna/current/example-docker/image_store" directory on the server. They are saved in the "/srv/local_storage/" directory in the Image Store container.

A bucket includes directories with samples or other data. The names of the directories correspond to the first four characters of an object ID; all samples are distributed into these directories according to the first four characters of their IDs.

Next to the bucket object is a "*.meta.json" file containing the "account_id" used when performing the request. If the bucket object is not a sample (for example, the bucket object is a JSON file in the "task-result" bucket), then the "Content-Type" will also be specified in this file.
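
The layout above can be sketched as a small path helper. This is illustrative only; the Image Store service computes these paths internally.

```python
# Sketch of the bucket layout: each object lives in a subdirectory named
# after the first four characters of its ID, with a companion
# "<id>.meta.json" file next to it.
import posixpath

def bucket_paths(bucket_root, sample_id, ext=".jpg"):
    shard = sample_id[:4]
    obj = posixpath.join(bucket_root, shard, sample_id + ext)
    meta = posixpath.join(bucket_root, shard, sample_id + ".meta.json")
    return obj, meta

obj, meta = bucket_paths("./local_storage/visionlabs-samples",
                         "8f4f0070-c464-460b-sf78-fac234df32e9")
print(obj)
# ./local_storage/visionlabs-samples/8f4f/8f4f0070-c464-460b-sf78-fac234df32e9.jpg
```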

An example of the folders structure in the "visionlabs-samples", "task-result" and "visionlabs-bodies-samples" buckets is given below.

./local_storage/visionlabs-samples/8f4f/
            8f4f0070-c464-460b-sf78-fac234df32e9.jpg
            8f4f0070-c464-460b-sf78-fac234df32e9.meta.json
            8f4f1253-d542-621b-9rf7-ha52111hm5s0.jpg
            8f4f1253-d542-621b-9rf7-ha52111hm5s0.meta.json
./local_storage/task-result/1b03/
            1b0359af-ecd8-4712-8fc0-08401612d39b
            1b0359af-ecd8-4712-8fc0-08401612d39b.meta.json
./local_storage/visionlabs-bodies-samples/6e98/
            6e987e9c-1c9c-4139-9ef4-4a78b8ab6eb6.jpg
            6e987e9c-1c9c-4139-9ef4-4a78b8ab6eb6.meta.json

A significant amount of disk space may be required when storing a large number of samples. A single sample takes about 30 KB of disk space, so, for example, 1,000,000 samples take about 30 GB.

It is also recommended to create backups of the samples. Samples are needed when the neural network version is changed or when you need to recover your database of faces.

Use S3-compatible storage#

To enable the use of S3-compatible storage, you must perform the following steps:

  • make sure that the access key has sufficient authority to access the buckets of the S3-compatible storage;
  • launch the Image Store service (see "Image Store" section in the installation manual);
  • set the "S3" value for the "storage_type" setting in the "OTHER" section of the Image Store service settings;
  • fill in the settings for connecting to an S3-compatible storage (host, Access Key and Secret Key, etc.) in the "S3" section of the Image Store service settings;
  • run the script for creating buckets lis_bucket_create.py (see the "Create buckets" section in the installation manual)

If necessary, you can disable SSL certificate verification using the "verify_ssl" setting in the "S3" section of the Image Store service settings. This enables you to use a self-signed SSL certificate.

External samples#

You can send an external sample to Image Store. An external sample is a sample received using third-party software or VisionLabs software (e.g., FaceStream).

See the POST request on the "/samples/{sample_type}" resource in "APIReferenceManual.html" for details.

The external sample should correspond to certain standards so that LP could process it. Some of them are listed in the "Sample requirements" section.

The samples received using the VisionLabs software satisfy this requirement.

If the sample comes from third-party software, it is not guaranteed that the result of its processing will be the same as for a VisionLabs sample. The sample can be of low quality (too dark, blurry, and so on), which leads to incorrect image processing results.

In any case, it is recommended to consult VisionLabs before using external samples.

Accounts service#

The Accounts service is intended for:

  • Creation, management and storage of accounts
  • Creation, management and storage of tokens and their permissions
  • Verification of accounts and tokens

See "Accounts, tokens and authorization types" section for more information about the authorization system in LUNA PLATFORM 5.

All created accounts, tokens and their permissions are saved in the Accounts service database.

Faces service#

Faces service is used for:

  • Creating temporary attributes;
  • Creating faces;
  • Creating lists;
  • Attaching faces to lists;
  • Managing the general database that stores faces with the attached data and lists;
  • Receiving information about the existing faces and lists.

Matching services#

Python Matcher has the following features:

  • Matching according to the specified filters. This matching is performed directly on the Faces or the Events database. Matching by DB is beneficial when several filters are set.
  • Matching by lists. In this case, it is recommended that descriptors are saved in the Python Matcher cache.

Python Matcher Proxy is used to route requests to Python Matcher services and matching plugins.

Python Matcher#

Python Matcher utilizes Faces DB for filtration and matching when faces are set as candidates for matching and filters for them are specified. This feature is always enabled for Python Matcher.

Python Matcher utilizes Events DB for filtration and matching when events are set as candidates for matching and filters for them are specified. The matching using the Events DB is optional, and it is not used when the Events service is not utilized.

A VLMatch matching function is required for matching by DB. It should be registered for the Faces DB and the Events DB. The function utilizes a library that should be compiled for your current DB version. You can find information about it in the installation manual in "VLMatch library compilation", "Create VLMatch function for Faces DB", and "Create VLMatch function for Events DB" sections.

When faces are set as candidates for matching, and list IDs are specified as filters, Python Matcher will perform matching by lists. In this case, it caches all the lists to improve performance.

The CACHE_ENABLED parameter in the DESCRIPTORS_CACHE setting should be set to "true" in the Python Matcher configurations to perform caching.

The Python Matcher service additionally uses worker processes that process requests.

Python Matcher Proxy#

The API service sends requests to the Python Matcher Proxy if it is configured in the API configuration. Then the Python Matcher Proxy service redirects requests to the Python Matcher service or to matching plugins (if they are used).

If the matching plugins are not used, the service routes requests only to the Python Matcher service. Thus, you do not need Python Matcher Proxy unless you intend to use matching plugins. See the "Matching plugins" section for a description of how the matching plugins work.

Working processes cache#

When multiple worker processes are launched for the Python Matcher service, each of the worker processes uses the same descriptors cache.

A shared cache can either speed up or slow down the service, depending on the workload. If you need each of the Python Matcher processes to keep its own cache, run each of the server instances separately.

Events service#

The Events service is used for:

  • Storage of all the created events in the Events database.
  • Returning all the events that satisfy filters.
  • Gathering statistics on all the existing events according to the specified aggregation and frequency/period.
  • Storage of descriptors created for events.

As an event is a report, already existing events cannot be modified.

The Events service should be enabled in the API service configuration file. Otherwise, events will not be saved to the database.

Database for Events#

PostgreSQL is used as a database for the Events service.

The speed of request processing is primarily affected by:

  • the number of events in the database
  • lack of indexes for PostgreSQL

PostgreSQL shows acceptable request processing speed with 1,000,000 to 10,000,000 events in the database. If the number of events exceeds 10,000,000, requests to PostgreSQL may fail.

The speed of the statistics requests processing in the PostgreSQL database can be increased by configuring the database and creating indexes.

Geo position#

You can add a geo position during event creation.

The geo position is represented as a JSON with GPS coordinates of the geographical point:

  • longitude - geographical longitude in degrees
  • latitude - geographical latitude in degrees

The geo position is specified in the "location" body parameter of the event creation request. See the "Create new events" section of the Events service reference manual.

You can use the geo position filter to receive all the events that occurred in the required area.

Geo position filter#

A geo position filter is a bounding box specified by coordinates of its center (origin) and some delta.

It is specified using the following parameters:

  • origin_longitude
  • origin_latitude
  • longitude_delta
  • latitude_delta

The geo position filter can be used when you get events, get statistics on events, and perform events matching.

A geo position filter is considered properly specified if either:

  • both origin_longitude and origin_latitude are set;
  • none of origin_longitude, origin_latitude, longitude_delta, and latitude_delta are set.

If both origin_longitude and origin_latitude are set and longitude_delta is not set, the default value is applied (see the default value in the OpenAPI documentation).
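
The bounding-box semantics can be sketched naively as below: a point passes the filter when it lies within origin plus or minus delta on each axis. The real filter is evaluated by PostGIS on spatial types, so results near the poles, the IDL, or for very wide boxes can differ from this flat model.

```python
# Naive flat-plane sketch of the geo position filter; PostGIS evaluates
# the real filter on spatial types, which may differ for wide boxes.
def in_bbox(lon, lat, origin_longitude, origin_latitude,
            longitude_delta, latitude_delta):
    return (abs(lon - origin_longitude) <= longitude_delta and
            abs(lat - origin_latitude) <= latitude_delta)

print(in_bbox(16.79, 64.92, 16.79, 64.92, 0.1, 0.1))  # True
print(in_bbox(20.00, 64.92, 16.79, 64.92, 0.1, 0.1))  # False
```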

Read the following recommendations before using geo position filters.

The general recommendations and restrictions for geo position filters are:

  • Do not create filters with a vertex or a border on the International Date Line (IDL), the North Pole or the South Pole. They are not fully supported due to the features of database spatial index. The filtering result may be unpredictable;
  • Geo position filters with edges more than 180 degrees long are not allowed;
  • It is highly recommended to use the geo position filter citywide only. If a larger area is specified, the filtration results on the borders of the area can be unexpected due to the spatial features.
  • Avoid creating a filter that is too extended along longitude or latitude. It is recommended to set the values of deltas close to each other.

The last two recommendations exist due to the spatial features of the filter. According to these features, when one or two deltas are set to large values, the result may differ from the expected though it will be correct. See the "Filter features" section for details.

Filter performance#

Geo position filter performance depends on the spatial data type used to store event geo position in the database.

Two spatial data types are supported:

  • GEOMETRY: a spatial object with coordinates expressed as (longitude, latitude) pairs, defined in the Cartesian plane. All calculations use Cartesian coordinates.
  • GEOGRAPHY: a spatial object with coordinates expressed as (longitude, latitude) pairs, defined as on the surface of a perfect sphere, or a spatial object in the WGS84 coordinate system.

For a detailed description, see geometry vs geography.

Geo position filter is based on the ST_Covers PostGIS function supported for both geometry and geography type.

Filter features#

Geo position filter has some features caused by PostGIS.

When geography type is used and the geo position filter covers a wide portion of the planet surface, filter result may be unexpected but geographically correct due to some spatial features.

The following example illustrates this case.

An event with the following geo position was added in the database:

{
    "longitude": 16.79,
    "latitude": 64.92
}

We apply a geo position filter and try to find the required point on the map. The filter is too extended along the longitude:

{
    "origin_longitude": 16.79,
    "origin_latitude": 64.92,
    "longitude_delta": 2,
    "latitude_delta": 0.01
}

This filter will not return the expected event: the event is filtered out due to spatial features. The illustration below shows that the point is outside the filter.

Too wide zone

You should consider this feature when creating a filter.

For details, see Geography.

Events creation#

Events are created using handlers. Handlers are stored in the Handlers database. You should specify the required handler ID in the event creation request. All the data stored in the event will be received according to the handler parameters.

You should perform two separate requests for event creation.

The first request creates a handler. A handler includes policies that describe how the image is processed, hence defining the LP services used for the processing.

The second request creates new events using the existing handler. An event is created for each image that has been processed.

You can specify the following additional data for each event creation request:

  • external ID (for created faces),
  • user data (for created faces),
  • source (for created events),
  • tags (for created events).

The handler is processed policy by policy. All the data from the request is processed by one policy before going to the next one: the "detect" policy is performed for all the images from the request, then the "multiface" policy is applied, then the "extract" policy is performed for all the received samples, and so on. For more information about handlers, see the "Handlers description" section.

Events meta-information#

If any additional data needs to be stored along with the event, the "meta" field should be used. The "meta" field stores data in JSON format. The total size of the data stored in the "meta" field for one event cannot exceed 2 MB. It is assumed that, with the help of this functionality, users will create their own data model (event structure) and use it to store the necessary data.

Data in the "meta" field can be set in the following ways:

  • in the "generate events" request body with the content type application/json or multipart/form-data
  • in the "save event" request body
  • using a custom plugin or client application.

In the "generate events" request body, it is possible to set the "meta" field both for specific images and for all images at once (mutual meta-information). For requests with aggregation enabled, only mutual meta-information will be used for the aggregated event, and meta-information for specific images will be ignored. See the detailed information in the "generate events" request body in the OpenAPI specification.

Example of recording the "meta" field:

{
    "meta": {
        "user_info": {
            "temperature": 36.6
        }
    }
}

In order to store multiple structures, it is necessary to explicitly separate them to avoid overlapping fields. For example, as follows:

{
    "struct1": {
        ...
    },
    "struct2": {
        ...
    }
}

Search by "meta" field#

You can get the contents of the "meta" field using the appropriate filter in the "get events" request.

The filter should be specified using the following syntax - meta.<path.to.field>__<operator>:<type>, where:

  • meta. - an indication that the "meta" field of the Events database is being accessed;
  • <path.to.field> - the path to the object. A dot (.) is used to navigate nested objects. For example, in the string {"user_info":{"temperature":"36.6"}}, to refer to the temperature object, use the filter meta.user_info.temperature;
  • __<operator> - one of the following operators: eq (default), neq, like, nlike, in, nin, gt, gte, lt, lte. For example, meta.user_info.temperature__gte;
  • :<type> - one of the following data types: string, integer, numeric. For example, meta.user_info.temperature__gte:numeric.

For each operator, the use of certain data types is available. See the table of operator dependency on data types in the OpenAPI specification.
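
A helper that assembles a filter key following the syntax above can be sketched as follows. Since eq and string are the defaults, they are omitted when not given:

```python
# Sketch of composing a "meta" filter key:
# meta.<path.to.field>__<operator>:<type>
def meta_filter(path, operator=None, value_type=None):
    key = "meta." + path
    if operator:
        key += "__" + operator
    if value_type:
        key += ":" + value_type
    return key

print(meta_filter("user_info.temperature", "gte", "numeric"))
# meta.user_info.temperature__gte:numeric
print(meta_filter("user_info.temperature"))
# meta.user_info.temperature
```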

If necessary, you can build an index to improve the search. See the Events developer manual for details on building an index.

Important notes#

When working with the "meta" field, remember the following:

  • you need to keep data consistent with given schemes; in case of a mismatch, PostgreSQL will not allow inserting a row with a type that cannot be added to the existing index (if any);
  • if necessary, you can migrate data;
  • if necessary, you can build an index;
  • specify the data type when performing a request (by default, all values are assumed to be strings);
  • you need to pay attention to the names of the fields; fields to be filtered by must not contain reserved keywords like :int, double underscores, special symbols, and so on.

Sender service#

The Sender service is an additional service that is used to send events via web sockets. This service communicates with the Handlers service (in which events are created) through the pub/sub mechanism via the Redis DB channel.

Events are created based on handlers. To receive notifications, the "notification_policy" must be enabled. This policy has filters that enable you to send notifications only under certain conditions, for example, to send only if the candidate is very similar to the reference (the "similarity__lte" parameter).

You should configure the web socket connection using a special request. It is recommended to create the web socket connection using the "/ws" resource of the API service. You can specify filters (query parameters) in the request, i.e., you can configure the Sender service to send only certain events. See the OpenAPI specification for detailed information about creating a connection to a web socket.

Configuring web sockets directly via Sender is also available (see the "/ws" resource of the Sender service). It can be used to reduce the load on the API service.
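
As a sketch, a subscription URL with filters passed as query parameters could be composed as below. The filter name "handler_ids" is a placeholder; take the actual query parameters from the OpenAPI specification.

```python
# Sketch of composing a web-socket subscription URL for the "/ws"
# resource; filter names are illustrative placeholders.
from urllib.parse import urlencode

def ws_url(host, port, filters):
    return f"ws://{host}:{port}/ws?{urlencode(filters)}"

print(ws_url("127.0.0.1", 5000, {"handler_ids": "some-handler-id"}))
# ws://127.0.0.1:5000/ws?handler_ids=some-handler-id
```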

When an event is created it can be:

  • saved to the Events database. The Events service should be enabled to save an event;

  • returned in the response without saving to the database.

In both cases, the event is sent via the Redis DB channel to the Sender service.

In this case, the Redis DB acts as a connection between Sender and Handlers services and does not store transferred events.

The Sender service is independent of the Events service. Events can be sent to Sender even if the Events service is disabled.

Creating handlers and specifying filters for sending notifications

  1. The user sends the "create handler" request to the API service, where it enables the "notification_policy" and sets filters according to which events will be sent to the Sender service;
  2. The API service sends a request to the Handlers service;
  3. The Handlers service sends a response to the API service;
  4. The API service sends the "handler_id" to the user.

The user saves the "handler_id", which is required for creating events.

Creating handlers and specifying filters for sending notifications

Activation of subscription to events and filtering of their sending

  1. The user or application sends a "ws handshake" request to the API service and sets filters that will be used to filter the data received from the Handlers service;
  2. The API service sends a request to the Sender service;
  3. The Sender service establishes a connection via web sockets with the user application.

Now, when an event is generated, it will be automatically redirected to the Sender service (see below) in accordance with the specified filters.

Activating event subscriptions and filtering their sending

Event generation and sending to Sender

The general workflow is as follows:

  1. A user or an application sends the "generate events" request to the API service;
  2. The API service sends the request to the Handlers service;
  3. The Handlers service sends requests to the corresponding LP services;
  4. LP services process the requests and send results. New events are created;
  5. The Handlers service sends an event to the Redis database using the pub/sub model. Redis has a channel to which the Sender service is subscribed, and it is waiting for messages to be received from this channel;
  6. Redis sends the received events to Sender by the channel;
  7. Third-party applications should be subscribed to the Sender service via web sockets to receive events. If there is a subscribed third-party application, Sender sends events to it according to the specified filters.
Sender workflow

See the OpenAPI documentation for information about the JSON structure returned by the Sender service.
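
The pub/sub mechanism in the workflow above can be modeled in a few lines. In LP the channel lives in the Redis DB and the messages are events; this in-process stand-in only illustrates the publish/subscribe relationship between Handlers and Sender.

```python
# Minimal in-process model of the pub/sub flow: the publisher (Handlers)
# sends each event to a channel, and every subscriber (Sender) receives
# a copy. In LP the channel is provided by the Redis DB.
class Channel:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        for callback in self.subscribers:
            callback(message)

received = []
channel = Channel()
channel.subscribe(received.append)    # Sender subscribes to the channel
channel.publish({"event_id": "e-1"})  # Handlers publishes a new event
print(received)  # [{'event_id': 'e-1'}]
```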

Tasks service#

The Tasks service is used for long tasks processing.

General information about tasks#

As tasks processing takes time, the task ID is returned in the response to the task creation.

After the task processing is finished, you can receive the task results using the "tasks" > "get task result" request. You should specify the task ID to receive its results.

You can find examples of task processing results in the response section of the "tasks" > "get task result" request. Select the task type in the Response samples section of the documentation.

Select required example

You should make sure that the task was finished before requesting its results:

  • You can check the task status by specifying the task ID in the "tasks" > "get task" request. There are the following task statuses:

    Task status      Value
    pending          0
    in progress      1
    cancelled        2
    failed           3
    collect results  4
    done             5
  • You can receive information about all the tasks using the "tasks" > "get tasks" request. You can set filters to receive information about the tasks of interest only.

Clustering task#

As a result of the task, clusters of objects selected according to the specified filters for faces or events are created. Objects corresponding to all of the filters will be added to a cluster. Available filters depend on the object type: events or faces.

You can receive the task status or result using additional requests (see the "General information about tasks").

You can use the reporter task to receive the report about objects added to clusters.

Clustering is performed in several steps:

  • objects with descriptors are collected according to the provided filters;
  • every object is matched with all the other objects;
  • clusters are created as groups of "connected components" of the similarity graph, where "connected" means that the similarity is greater than the provided threshold or the default "DEFAULT_CLUSTERING_THRESHOLD" value from the config;
  • if needed, existing images corresponding to each object are downloaded: an avatar for a face, the first sample for an event.
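The grouping step above can be sketched as a standard connected-components pass over the similarity graph. This is an illustration only, not the service implementation; the object IDs, similarity values, and threshold below are hypothetical.

```python
# Sketch of the clustering step: objects whose pairwise similarity exceeds
# a threshold end up in the same cluster (connected components of the
# similarity graph). All data here is hypothetical.

def cluster(objects, similarity, threshold):
    """Group object IDs into connected components of the similarity graph."""
    adjacency = {obj: set() for obj in objects}
    for (a, b), value in similarity.items():
        if value > threshold:
            adjacency[a].add(b)
            adjacency[b].add(a)

    clusters, seen = [], set()
    for obj in objects:
        if obj in seen:
            continue
        component, stack = [], [obj]
        while stack:  # depth-first traversal of one component
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            component.append(node)
            stack.extend(adjacency[node])
        clusters.append(sorted(component))
    return clusters

objects = ["a", "b", "c", "d"]
similarity = {("a", "b"): 0.91, ("b", "c"): 0.88, ("c", "d"): 0.42}
print(cluster(objects, similarity, threshold=0.8))  # [['a', 'b', 'c'], ['d']]
```

With a higher threshold, fewer pairs are "connected", so the same objects split into more, smaller clusters.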

As a result of the task, an array of clusters is returned. A cluster includes IDs of objects (faces or events) whose similarity is greater than the specified threshold. You can use this information for further data analysis.

{
    "errors": [],
    "result": {
        "clusters": [
            [
                "6c721b90-f5a0-409a-ab70-bc339a70184c"
            ],
            [
                "8bc6e8df-410b-4065-b592-abc5f0432a1c"
            ],
            [
                "e4e3fc66-53b4-448c-9c88-f430c00cb7ea"
            ],
            [
                "02a3a1c4-93d7-4b69-99ec-21d5ef23852e",
                "144244cb-e10e-478c-bdac-18cd2eb27ee6",
                "1f4cdbcb-7b1e-40cc-873b-3ff7fa6a6cf0"
            ]
        ],
        "total_objects": 6,
        "total_clusters": 4
    }
}

The clustering task result can also include information about errors that occurred during object processing.

Reporter task#

As a result of the task, a report on the clustering task is created. You can select the data that should be added to the report. The report is in CSV format.

You can receive the task status or result using additional requests (see the "General information about tasks").

You should specify the clustering task ID and the columns that should be added to the report. The selected columns correspond to the general events and faces fields.

Make sure that the selected columns correspond to the objects selected in the clustering task.

You can also receive the images for all the objects in clusters if they are available.

Exporter task#

The task enables you to collect event and/or face data and export them from LP to a CSV file. The file rows represent requested objects and corresponding samples (if they were requested).

This task accumulates data in memory, so the Tasks Worker may be killed by the OOM (Out-Of-Memory) killer if you request a large amount of data.

You can export event or face data using the "/tasks/exporter" request. You should specify which type of object is required by setting the objects_type parameter when creating a request. You can also narrow your request by providing filters for faces and events objects. See the "exporter task" request in the API service reference manual.

As a result of the task, a ZIP archive containing a CSV file is returned.

You can receive the task status or result using additional requests (see the "General information about tasks").

When executing the Exporter task with a large number of faces in the Faces database (for example, 90,000,000 faces), the execution time of requests to the Faces service can increase significantly. To speed up request execution, you can set the PostgreSQL "parallel_setup_cost" setting to 500. However, changing this setting may have other consequences, so be careful when changing it.
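The setting can be changed with regular SQL statements on the PostgreSQL instance hosting the Faces database, for example (apply with the caution noted above, since this is a server-wide change):

```sql
-- Lower the planner's parallel setup cost so large Exporter queries
-- are more likely to use parallel plans (affects the whole server).
ALTER SYSTEM SET parallel_setup_cost = 500;
SELECT pg_reload_conf();
```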

Cross-matching task#

When the task is performed, all the references are matched with all the candidates. References and candidates are set using filters for faces and events.

Matching is performed only for objects that contain extracted descriptors.

You can specify the maximum number of matching candidates returned for every match using the limit field.

You can set a threshold to specify the minimal acceptable value of similarity. If the similarity of two descriptors is lower than the specified value, the matching result will be ignored and not returned in the response. References without matches with any candidates are also ignored.

Cross-matching is performed in several steps:

  • collect objects having descriptors using provided filters
  • match every reference object with every candidate object
  • match results are sorted (lexicographically) and cropped (limit and threshold are applied)
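The threshold and limit from the steps above can be sketched as follows. This is a minimal illustration with hypothetical data, not the service implementation; here candidates are ordered by descending similarity for clarity.

```python
# Illustration of how threshold and limit shape cross-matching output:
# candidates below the threshold are dropped, the rest are ordered by
# similarity and cropped to the limit. All data below is hypothetical.

def crop_matches(candidates, threshold, limit):
    kept = [c for c in candidates if c["similarity"] >= threshold]
    kept.sort(key=lambda c: c["similarity"], reverse=True)
    return kept[:limit]

candidates = [
    {"candidate_id": "93de0ea1", "similarity": 0.55},
    {"candidate_id": "54860fc6", "similarity": 0.62},
    {"candidate_id": "6ade1494", "similarity": 0.31},
]
print(crop_matches(candidates, threshold=0.5, limit=2))
```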

You can receive the task status or results using additional requests (see the "General information about tasks").

As a result, an array is returned. Each element of the array includes a reference and the top similar candidates for it. Information about errors that occurred during the task execution is also returned in the response.

{
    "result": [
        {
            "reference_id": "e99d42df-6859-4ab7-98d4-dafd18f47f30",
            "candidates": [
                {
                    "candidate_id": "93de0ea1-0d21-4b67-8f3f-d871c159b740",
                    "similarity": 0.548252
                },
                {
                    "candidate_id": "54860fc6-c726-4521-9c7f-3fa354983e02",
                    "similarity": 0.62344
                }
            ]
        },
        {
            "reference_id": "345af6e3-625b-4f09-a54c-3be4c834780d",
            "candidates": [
                {
                    "candidate_id": "6ade1494-1138-49ac-bfd3-29e9f5027240",
                    "similarity": 0.7123213
                },
                {
                    "candidate_id": "e0e3c474-9099-4fad-ac61-d892cd6688bf",
                    "similarity": 0.9543
                }
            ]
        }
    ],
    "errors": [
        {
            "error_id": 10,
            "task_id": 123,
            "subtask_id": 5,
            "error_code": 0,
            "description": "Faces not found",
            "detail": "One or more faces not found, including face with id '8f4f0070-c464-460b-bf78-fac225df72e9'",
            "additional_info": "8f4f0070-c464-460b-bf78-fac225df72e9",
            "error_time": "2018-08-11T09:11:41.674Z"
        }
    ]
}

Linker task#

The task enables you to attach faces to lists according to the specified filters.

You can specify creation of a new list or specify the already existing list in the requests.

You can specify filters for faces or events to perform the task. When an event is specified for linking to a list, a new face is created based on the event.

If the create_time_lt filter is not specified, it will be set to the current time.

As a result of the task, you receive the IDs of the faces linked to the list.

You can receive the task status or result using additional requests (see the "General information about tasks").
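Putting the parameters above together, a linker task request body might look like the following sketch. Only the parameters mentioned in this section are used; the exact field layout and all values are assumptions, so check the "linker task" request in the API service reference manual for the actual schema:

```json
{
    "content": {
        "create_list": 1,
        "filters": {
            "object_type": "faces",
            "create_time__lt": "2023-01-01T00:00:00.000Z"
        }
    }
}
```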

Task execution process for faces:

  • A list is created (if create_list parameter is set to 1) or the specified list_id existence is checked.
  • Face ID boundaries are received. Then one or several subtasks are created, with about 1000 face IDs each. The number of subtasks depends on how the face IDs are distributed.
  • For each subtask:

    • Face IDs are received. They are specified for the current subtask by filters in the subtask content.
    • The request is sent to the Luna Faces service to link the specified faces to the specified list.
    • The result for each subtask is saved to the Image Store service.
  • After the last subtask is finished, the worker collects results of all the subtasks, merges them and puts them to the Image Store service (as task result).

Task execution process for events:

  • A list is created (if create_list parameter is set to 1) or the specified list_id existence is checked.
  • Events page numbers are received. Then one or several subtasks are created.
  • For each subtask:

    • Events with their descriptors are received from the Events service.
    • Faces are created using the Faces service. Attribute(s) and sample(s) are added to the faces.
    • The request is sent to the Luna Faces service to link the specified faces to the specified list.
    • The result for each subtask is saved to the Image Store service.
  • After the last subtask is finished, the worker collects results of all the subtasks, merges them and puts them to the Image Store service (as task result).

Garbage collection task#

During the task processing, faces, events or descriptors can be deleted.

  • when descriptors are set as a GC target, you should specify the descriptor version. All the descriptors of the specified version will be deleted.
  • when events are set as a GC target, you should specify one or several of the following parameters:
    • account ID;
    • the upper excluded boundary of event creation time;
    • the upper excluded boundary of the event appearance in the video stream;
    • the ID of the handler used for the event creation.
  • when faces are set as a GC target, you should specify one or several of the following parameters:
    • the upper excluded boundary of face creation time;
    • the lower included boundary of face creation time;
    • user data;
    • list ID.

If necessary, you can delete samples along with faces or events. You can also delete image origins for events.

A garbage collection task with faces or events set as the target can be created using the API service, while the Admin or Tasks service API can set faces, events, or descriptors as the target. In the latter case, the specified objects are deleted for all existing accounts.
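Based on the parameters above (their request-body paths are also listed in the Admin UI "Tasks tab" description), a garbage collection request body for faces might look like this sketch. All values are hypothetical and the exact schema should be checked in the "garbage collecting task" request description:

```json
{
    "description": "remove old faces",
    "content": {
        "target": "faces",
        "remove_samples": true,
        "remove_image_origins": false,
        "create_time__lt": "2023-01-01T00:00:00.000Z",
        "filters": {
            "account_id": "6d071cca-fda5-4a03-84d5-5bea65904480"
        }
    }
}
```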

You can receive the task status or result using additional requests (see the "General information about tasks").

Additional extraction task#

The Additional extraction task re-extracts descriptors that were extracted with the previous neural network model using a new version of the neural network. This enables you to keep previously created descriptors usable when updating the neural network model. If there is no need to use the old descriptors, you can skip this task and only update the neural network model in the Configurator settings.

This section describes how to work with the Additional extraction task. See detailed information about neural networks, the process of updating a neural network to a new model and relevant examples in the "Neural networks" section.

Re-extraction can be performed for face and event objects. You can re-extract the descriptors of faces, descriptors of bodies (for events) or basic attributes if they were not extracted earlier.

The samples for descriptors should be stored for the task execution. If any descriptors do not have source samples, they cannot be updated to a new NN version.

The re-extraction tasks are used for the update to a new neural network for descriptors extraction. All the descriptors of the previous version will be re-extracted using a new NN.

It is highly recommended not to perform any requests changing the state of databases during the descriptor version updates. It can lead to data loss.

Create backups of LP databases and the Image Store storage before launching the additional extraction task.

When processing the task, a new neural network descriptor is extracted for each object (face or event) whose descriptor version matches the version specified in the "DEFAULT_FACE_DESCRIPTOR_VERSION" (for faces) or "DEFAULT_HUMAN_DESCRIPTOR_VERSION" (for bodies) settings. Descriptors whose version does not match the version specified in these settings are not re-extracted. They can be removed using the Garbage collection task.

Request to the Admin service

You need to make a request to the "additional_extract" resource, specifying the following parameters in the request body:

  • content > extraction_target – face descriptors, body descriptors, basic attributes
  • content > options > descriptor_version – new neural network version (not applicable for basic attributes)
  • content > filters > object_type – faces or events

If necessary, you can additionally filter the object type by "account_id", "face_id__lt", etc.

See the "create additional extract task" request in the Admin service OpenAPI specification for more information.
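Putting these parameters together, a request body for re-extracting face descriptors might look like the following sketch. The enum values and the version number are hypothetical; see the Admin service OpenAPI specification for the exact schema:

```json
{
    "content": {
        "extraction_target": "face_descriptors",
        "options": {
            "descriptor_version": 59
        },
        "filters": {
            "object_type": "faces",
            "account_id": "6d071cca-fda5-4a03-84d5-5bea65904480"
        }
    }
}
```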

You can receive the task status or result using additional requests (see the "General information about tasks").

Admin user interface

You need to do the following:

  • Go to the Admin user interface: http://<admin_server_ip>:5010/tasks;

  • Run the additional extraction task using the corresponding button;

  • In the window that appears, set the object type (face or event), the extraction type (face descriptor, body descriptor or basic attributes), new neural network model (not applicable for basic attributes) and click "Start", confirming the start of the task.

Set required settings

If necessary, you can additionally filter the object type by "account_id".

See the detailed information about the Admin user interface in the "Admin user interface" section.

ROC-curve calculating task#

As a result of the task, the Receiver Operating Characteristic curve with TPR (True Positive Rate) against the FPR (False Positive Rate) is created.

See additional information about ROC-curve creation in "TasksDevelopmentManual".

ROC calculation task

ROC (or Receiver Operating Characteristic) is a performance measurement for classification tasks at various threshold settings. The ROC-curve is plotted with TPR (True Positive Rate) against FPR (False Positive Rate). TPR is the true positive match pair count divided by the count of total expected positive match pairs, and FPR is the false positive match pair count divided by the count of total expected negative match pairs. Each point (FPR, TPR) of the ROC-curve corresponds to a certain similarity threshold. See the Wikipedia article on ROC curves for more details.

Using ROC, the model performance is determined by looking at:

  • the area under the ROC-curve (or AUC);
  • the point where type I and type II error rates are equal, i.e. the intersection of the ROC-curve with the secondary main diagonal.

The model performance is also determined by the top-N hit probability, i.e. the probability that a positive match pair appears in the top-N of a match result group sorted by similarity.
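The TPR and FPR definitions above can be illustrated with a minimal sketch. The match pairs and similarity scores below are hypothetical; this is not the service implementation:

```python
# One (FPR, TPR) point of a ROC curve at a given similarity threshold.
# pairs: (similarity, is_positive) for each match pair; hypothetical data.

def roc_point(pairs, threshold):
    tp = sum(1 for s, positive in pairs if positive and s >= threshold)
    fp = sum(1 for s, positive in pairs if not positive and s >= threshold)
    positives = sum(1 for _, positive in pairs if positive)
    negatives = len(pairs) - positives
    # TPR = true positives / expected positives,
    # FPR = false positives / expected negatives.
    return fp / negatives, tp / positives

pairs = [(0.95, True), (0.80, True), (0.70, False), (0.40, False), (0.30, True)]
print(roc_point(pairs, threshold=0.6))  # (0.5, 0.666...)
```

Sweeping the threshold from 1 down to 0 traces the full curve from (0, 0) to (1, 1).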

A ROC task requires markup. You can optionally specify threshold_hit_top (default 0) to calculate the top-N hit probability, the match limit (default 5), and key_FPRs, a list of key FPR values used to calculate key points of the ROC-curve, as well as filters with account_id. An account_id is also required for task creation.

You can receive the task status or result using additional requests (see the "General information about tasks").

Markup

Markup is expected in the following format:

[{'face_id': <face_id>, 'label': <label>}]

A label (or group ID) can be a number or any string.

Example:

[{'face_id': '94ae2c69-277a-4e46-817d-543f7d3446e2', 'label': 0},
 {'face_id': 'cd6b52be-cdc1-40a8-938b-a97a1f77d196', 'label': 1},
 {'face_id': 'cb9bda07-8e95-4d71-98ee-5905a36ec74a', 'label': 2},
 {'face_id': '4e5e32bb-113d-4c22-ac7f-8f6b48736378', 'label': 3},
 {'face_id': 'c43c0c0f-1368-41c0-b51c-f78a96672900', 'label': 2}]

Estimator task#

The estimator task enables you to perform batch processing of images using the specified policies.

As a result of the task, JSON is returned with data for each of the processed images and information about any errors that have occurred.

In the request body, you can specify the handler_id of an already existing static or dynamic handler. For a dynamic handler_id, you can set the required policies in the request. In addition, you can create a static handler by specifying policies in the request.

The resource can accept five types of sources with images for processing:

  • ZIP archive
  • S3-like storage
  • Network disk
  • FTP server
  • Samba network file system

To obtain correct results of image processing using the Estimator task, all processed images should be either in the source format or in the format of samples. The type of transferred images is specified in the request in the "image_type" parameter.

ZIP archive as image source of estimator task

The resource accepts for processing a link to a ZIP archive with images. The maximum size of the archive is set using the "ARCHIVE_MAX_SIZE" parameter in the "config.py" configuration file of the Tasks service. The default size is 100 GB. An external URL or the URL of an archive saved in the Image Store can be used as the link to the archive. In the second case, the archive should first be saved to LP using a POST request to the "/objects" resource.

When using an external URL, the ZIP archive is first downloaded to the Tasks Worker container storage, where the images are unpacked and processed. After the task is finished, the archive is deleted from the storage along with the unpacked images.

Make sure there is enough free disk space for these actions.

The archive can be password protected. The password can be passed in the request using the "authorization" -> "password" parameter.
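For example, a request body with a ZIP archive as the image source might look like the following sketch. Only "image_type" and the "authorization" > "password" parameters are taken from the text above; the remaining field names and all values are assumptions, so check the "/tasks/estimator" resource description for the exact schema:

```json
{
    "content": {
        "source": {
            "zip": {
                "reference": "https://example.com/images.zip",
                "authorization": {
                    "password": "secret"
                }
            }
        },
        "image_type": 0,
        "handler_id": "92b50ba2-c1a1-4207-84d5-a0fb85f8bea6"
    }
}
```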

S3-like storage as image source of estimator task

The following parameters can be set for this type of source:

  • bucket_name - bucket name/Access Point ARN/Outpost ARN (required);
  • endpoint - storage endpoint (only when specifying the bucket name);
  • region - bucket region (only when specifying the bucket name);
  • prefix - file key prefix. It can also be used to load images from a specific folder, such as "2022/January".

The following parameters are used to configure authorization:

  • Public access key (required);
  • Secret access key (required);
  • Authorization signature version ("s3v2"/"s3v4").

It is also possible to recursively download images from nested bucket folders and save original images.

For more information about working with S3-like repositories, see AWS User Guide.

Network disk as image source of estimator task

The following parameters can be set for this type of source:

  • path - absolute path to the directory with images in the container (required);
  • follow_links - enables/disables symbolic link processing;
  • prefix - file key prefix;
  • postfix - file key postfix.

See an example of using prefixes and postfixes in the "/tasks/estimator" resource description.

When using a network disk as an image source and launching Tasks and Tasks Worker services through Docker containers, it is necessary to mount the directory with images from the network disk to the local directory and synchronize it with the specified directory in the container. You can mount a directory from a network disk in any convenient way. After that, you can synchronize the mounted directory with the directory in the container using the following command when launching the Tasks and Tasks Worker services:

docker run \
...
-v /var/lib/luna/current/images:/srv/images
...

/var/lib/luna/current/images - path to the previously mounted directory with images from the network disk.

/srv/images - path to the directory with the images in the container where they will be moved from the network disk. This path should be specified in the request body of the Estimator task (the "path" parameter).

As with S3-like storage, it is possible to recursively download images from nested directories.

FTP server as image source of estimator task

For this type of source, the following parameters can be set in the request body for connecting to the FTP server:

  • host - FTP server IP address or hostname (required);
  • port - FTP server port;
  • max_sessions - maximum number of allowed sessions on the FTP server;
  • user, password - authorization parameters (required).

As in Estimator tasks using S3-like storage or network disk as image sources, it is possible to set the path to the directory with images, recursively receive images from nested directories, select the type of transferred images, and specify the prefix and postfix.

See an example of using prefixes and postfixes in the "/tasks/estimator" resource description.

Samba as image source of estimator task

For this type of source, the parameters are similar to those of an FTP server, except for the "max_sessions" parameter. Also, if authorization data is not specified, the connection to Samba will be performed as a guest.

Task processing#

The Tasks service consists of the Tasks service itself and Tasks workers. The Tasks service receives requests, creates tasks in the DB, and sends subtasks to Tasks workers. The workers are implemented as a separate Tasks Worker container. Tasks workers receive subtasks and perform all the required requests to other services to solve the subtasks.

The general approach for working with tasks is listed below.

  • A user sends the request for creation of a new task;
  • Tasks service creates a new task and sends subtasks to workers;
  • The Tasks workers process subtasks and create reports;
  • If several workers have processed subtasks and have created several reports, the worker, which finished the last subtask, gathers all the reports and creates a single report;
  • When the task is finished, the last worker updates its status in the Tasks database;
  • The user can send requests to receive information about tasks and subtasks and number of active tasks. The user can cancel or delete tasks;
  • The user can receive information about errors that occurred during execution of the tasks;
  • After the task is finished the user can send a request to receive results of the task.

See the "Tasks diagrams" section for details about tasks processing.

Running scheduled tasks#

In LUNA PLATFORM, it is possible to set a schedule for Garbage collection and Linker tasks.

The schedule is created using the "create tasks schedule" request to the API service, which specifies the contents of the task being created and the time interval for its launch. The time interval is defined using Cron expressions.

Cron expressions consist of five fields separated by spaces (minute, hour, day of month, month, day of week). Together, the fields define the schedule on which the task should be run.

For tasks that can only be performed using the Admin service (for example, the task of removing some objects using the GC task), you can assign a schedule only in the Admin service.

In response to the request, a "schedule_id" is issued, which can be used to get information about the status of the schedule, the time of the next task run, etc. (the "get tasks schedule" and "get tasks schedules" requests). The ID and all additional information are stored in the "schedule" table of the Tasks database.

If necessary, you can create a delayed schedule, and then activate it using the "action" = "start" parameter of the "patch tasks schedule" request. Similarly, you can stop the scheduled task using "action" = "stop". To delete a schedule, you can use the "delete tasks schedule" request.

Permissions to work with schedules are included in the "task" permission of the token. This means that if a user has permission to work with tasks, they will also be able to use schedules.

Examples of Cron expressions#

This section describes various examples of Cron expressions.

  1. Run the task every day at 3 a.m.:
0 3 * * *
  2. Run the task every Friday at 18:30:
30 18 * * 5
  3. Run the task on the first day of every month at noon:
0 12 1 * *
  4. Run the task every 15 minutes:
*/15 * * * *
  5. Run the task every morning at 8:00, except weekends (Saturday and Sunday):
0 8 * * 1-5
  6. Run the task at 9:00 am on the 1st and 15th day of each month, but only if it is Monday:
0 9 1,15 * 1
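As an illustration of how a single Cron field maps to concrete values, here is a simplified sketch; real Cron implementations support additional syntax (names, combined ranges with steps, and so on):

```python
# Expand one Cron field (e.g. the minutes field) into the set of matching
# values. Supports "*", "*/step", ranges "a-b", and lists "a,b".
# Simplified illustration only.

def expand_field(field, lo, hi):
    values = set()
    for part in field.split(","):
        step = 1
        if "/" in part:
            part, step_text = part.split("/")
            step = int(step_text)
        if part == "*":
            start, end = lo, hi
        elif "-" in part:
            start, end = (int(x) for x in part.split("-"))
        else:
            start = end = int(part)
        values.update(range(start, end + 1, step))
    return sorted(values)

print(expand_field("*/15", 0, 59))  # [0, 15, 30, 45] -- "every 15 minutes"
print(expand_field("1-5", 0, 6))    # Monday-Friday, as in "0 8 * * 1-5"
```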

Admin service#

The Admin service is used to perform general administrative routines:

  • Manage user accounts;
  • Receive information about objects belonging to different accounts;
  • Create garbage collection tasks;
  • Create tasks to extract descriptors with a new neural network version;
  • Receive reports and errors on processed tasks;
  • Cancel and delete existing tasks.

Admin service has access to all the data attached to different accounts.

Three types of accounts can be created in the Admin service - "user", "advanced_user" and "admin". The first two types are created using an account creation request to the API service, but the third type can only be created using the Admin service.

Using the "admin" account type, you can log in to the Admin user interface and perform the above tasks. An account with the "admin" type can be created either in the user interface (see above) or by a request to the "/4/accounts" resource of the Admin service. To create an account in the latter way, you need to specify a login and password.

If you are creating an account for the first time, you must use the default login and password.

Example of CURL request to the "/4/accounts" resource of the Admin service:

curl --location --request POST 'http://127.0.0.1:5010/4/accounts' \
--header 'Authorization: Basic cm9vdEB2aXNpb25sYWJzLmFpOnJvb3Q=' \
--header 'Content-Type: application/json' \
--data '{
  "login": "mylogin@gmail.com",
  "password": "password",
  "account_type": "admin",
  "description": "description"
}' 

All the requests to the Admin service are described in the Admin service reference manual.

Admin user interface#

The user interface of the Admin service is designed to simplify the work with administrative tasks.

The interface can be opened in a browser by specifying the address and port of the Admin service: <Admin_server_address>:<Admin_server_port>.

The default Admin service port is 5010.

The default login and password to access the interface are root@visionlabs.ai/root. You can also use default login and password in Base64 format - cm9vdEB2aXNpb25sYWJzLmFpOnJvb3Q=.
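The Base64 string is simply the "login:password" pair encoded for HTTP Basic authorization, which can be reproduced as follows:

```python
import base64

# HTTP Basic authorization value for the default Admin credentials:
# base64("login:password").
token = base64.b64encode(b"root@visionlabs.ai:root").decode()
print(token)  # cm9vdEB2aXNpb25sYWJzLmFpOnJvb3Q=
```

This is the value passed in the "Authorization: Basic ..." header of the curl example above.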

You can change the default password for the Admin service using the "Change authorization" request.

There are the following tabs on the page:

  • Accounts - the tab provides information about all created accounts and allows creating new accounts.
  • Tasks - the tab is designed for working with Garbage collection and Additional extraction tasks.
  • Schedules - the tab is designed for working with task schedules.
  • Info - the tab contains information about the user interface and the LUNA PLATFORM license.

Accounts tab#

This tab displays all existing accounts.

Accounts tab

You can manage existing accounts using the following buttons:

– view account information.

– delete account.

Clicking the view info button opens a page containing general information about the account, lists created with that account, and faces.

When you click the "Create account" button, an account creation window opens, containing the standard account creation settings - login, password, account type, description and the desired "account_id".

See "Account" for details on accounts and their types.

Tasks tab#

This tab displays running/completed Garbage collection and Additional extraction tasks.

Tasks tab

Tasks are displayed in a table whose columns can be sorted and also filtered by the date the tasks were completed.

When you press the "Start Garbage collection" and "Start Additional extraction" buttons, windows for creating the corresponding tasks open.

The "Garbage collection" window contains the following settings, similar to the parameters of the "garbage collecting task" request body to the Tasks service:

  • Description - "description" parameter
  • Target - "content > target" parameter
  • Account ID - "content > filters > account_id" parameter
  • Remove sample - "content > remove_samples" parameter
  • Remove image origins - "content > remove_image_origins" parameter
  • Delete data before - "content > create_time__lt" parameter

See "Garbage collection task" for details.

The "Additional extraction" window contains the following settings, similar to the parameters of the "additional extract task" request body to the Tasks service:

  • Objects type - "content > filters > object_type" parameter
  • Extraction type - "content > extraction_target" parameter
  • Descriptor version - "content > options > descriptor_version" parameter
  • Description - "description" parameter
  • Account ID - "content > filters > account_id" parameter

See "Additional extraction task" for details.

After creating a task, its execution begins. The progress of the task is displayed by an icon. The task is considered completed when the "Parts done" value matches the "Parts total" value and the icon changes. If necessary, you can stop the task execution using the corresponding icon.

The following buttons are available for each task:

– download the task result as a JSON file.

– go to the page with a detailed description of the task and errors received during its execution.

– delete task.

Tasks are executed by the Tasks service after receiving a request from the Admin service.

Schedules tab#

This tab is intended for working with task scheduling.

Schedules tab

The tab displays all created task schedules and all relevant information (status, ID, Cron string, etc.).

When you click on the "Create schedule" button, the schedule creation window opens.

Schedule creation window

In the window you can specify schedule settings for the Garbage collection task. The parameters in this window correspond to the parameters of the "create tasks schedule" request.

After filling in the parameters and clicking the "Create schedule" button, the schedule will appear in the Schedules tab.

You can control delayed start using the following buttons:

– start the schedule.

– pause the schedule.

You can also edit or delete a schedule using the corresponding buttons.

Info tab#

This tab displays complete license information and the features available in the Admin UI.

Info tab

See the detailed license description in the "License information" section.

By clicking the "Download system info" button, you can also get technical information about the LP.

You can also get this system information using the "get system info" request to the Admin service.

Configurator service#

The Configurator service simplifies the configuration of LP services.

The service stores all the required configurations for all the LP services in a single place. You can edit configurations through the user interface or special limitation files.

You can also store configurations for any third-party software in Configurator.

The general workflow is as follows:

  • The user edits configurations in the UI;
  • Configurator stores all changed configurations and other data in the database;
  • LP services request the Configurator service during startup and receive all required configurations. All the services should be configured to use the Configurator service.

Configurator workflow

During Configurator installation, you can also use your limitation file with all the required fields to create limitations and fill in the Configurator database. You can find more details about this process in the "ConfiguratorDevopsManual" documentation.

Settings used by several services are updated for each of the services. For example, if you edit the "LUNA_FACES_ADDRESS" setting for the Remote SDK service in the Configurator user interface, the setting will be also updated for API, Admin and Python Matcher services.

Configurator UI#

Open the Configurator interface in your browser: <Configurator_server_address>:5070

This URL may differ. In this example, the Configurator service interface is opened on the Configurator service server.

LP includes the beta version of the Configurator UI. The UI was tested on Chrome and Yandex browser. The recommended screen resolution for working with the UI is 1920 x 1080.

The following tabs are available in the UI of Configurator:

  • Settings. All the data in the Configurator service is stored on the Settings tab. The tab displays all the existing settings and allows you to manage and filter them.
  • Limitations. The tab is used to create new limitations for settings. Limitations are templates for JSON files that contain the available data types and other rules for defining the parameters.
  • Groups. The tab allows you to group all the required settings. When you select a group on the Settings tab, only the settings corresponding to the group are displayed. Use this tab to get settings by filters and/or tags for a single specific service.
  • About. The tab includes information about the Configurator service interface.

Settings#

Each of the Configurator settings contains the following fields:

  • Name - the name of the setting;
  • Description - the setting description;
  • ID - the unique setting ID;
  • Create time - the setting creation time;
  • Last update time - the setting last update time;
  • Value - the body of the setting;
  • Schema - a validation template for the setting body;
  • Tag - tags used to filter settings for the services.
Configurator interface
Configurator interface

The "Tags" field is not available for the default settings. To add tags, press the Duplicate button and create a new setting based on the existing one.

The following options for the settings are available:

  • Create a new setting - press the Create new button, enter the required values and press Create. You should also select an already existing limitation for the setting. The Configurator will try to check the value of the setting if the Check on save flag is enabled and a limitation is selected for the setting;

  • Duplicate an existing setting - press the Duplicate button on the right side of the setting, change the required values and press Create. The Configurator will try to check the setting value if the Check on save flag (on the lower left side of the screen) is enabled and a limitation is selected for the setting;

Duplicate setting window
Duplicate setting window
  • Delete an existing setting - press the Delete button on the right side of the setting.

  • Update an existing setting - change the name, description, tags or value and press the Save button on the right side of the setting.

  • Filter existing settings by name, description, tags, service names or groups - use the filters on the left side of the screen and press Enter or click the Search button.

Show limitations - the flag enables displaying the limitations for each of the settings.

JSON editors - the flag switches the mode of the Value field representation. If the flag is disabled, the name of each parameter and a field for its value are displayed. If the flag is enabled, the Value field is displayed as JSON.

The Filters section on the left side of the window enables you to display all the required settings according to the specified values. You may enter the required name manually or select it from the list:

  • Setting. The filter displays the setting with the specified name.
  • Description. The filter displays all settings with the specified description or part of a description.
  • Tags. The filter displays all settings with the specified tag.
  • Service. The filter displays all settings that belong to the selected service.
  • Group. The filter displays all settings that belong to the specified group. For example, you can display all the settings belonging to LP.

Limitations#

Limitations are used as a validation schema for service settings.

Settings and limitations have the same names. A new setting is created upon limitation creation.

The limitations are set by default for each of the LP services. You cannot change them.

Each of the limitations includes the following fields:

  • Name is the name of the limitation.
  • Description is the description of the limitation.
  • Service list is the list of services that can use settings of this limitation.
  • Schema is the object with the JSON schema used to validate settings.
  • Default value is the default value created with the limitation.
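The relationship between a limitation's schema and a setting's value can be illustrated with a minimal sketch. The limitation below is hypothetical, not an actual LP limitation, and the validator implements only a tiny subset of JSON Schema:

```python
# Hypothetical limitation: a JSON Schema template plus a default value.
limitation = {
    "name": "EXAMPLE_TIMEOUTS",
    "schema": {
        "type": "object",
        "properties": {
            "request_timeout": {"type": "integer"},
            "connect_timeout": {"type": "integer"},
        },
        "required": ["request_timeout", "connect_timeout"],
    },
    "default_value": {"request_timeout": 60, "connect_timeout": 30},
}

def validates(value, schema) -> bool:
    """Tiny subset of JSON Schema validation: required keys + integer types."""
    if schema["type"] == "object":
        if not isinstance(value, dict):
            return False
        for key in schema.get("required", []):
            if key not in value:
                return False
        for key, sub in schema.get("properties", {}).items():
            if key in value and sub["type"] == "integer" and not isinstance(value[key], int):
                return False
    return True

# The default value created with the limitation passes its own schema.
print(validates(limitation["default_value"], limitation["schema"]))  # True
print(validates({"request_timeout": "60s"}, limitation["schema"]))   # False
```

This mirrors what the Check on save flag does in the UI: a setting value is accepted only if it conforms to the schema of the selected limitation.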

The following actions are available for managing limitations:

  • Create a new limitation - press the Create new button, enter the required values and press Create. A setting with the default value will also be created;
  • Duplicate an existing limitation - press the Duplicate button on the right side of the limitation, change the required values and press Create. A setting with the default value will also be created;
  • Update limitation values - change the name, description, service list, validation schema or default value and press the Save button on the right side of the limitation;
  • Filter existing limitations by names, descriptions, and groups;
  • Delete an existing limitation - press the Delete button on the right side of the limitation.

Groups#

A group has a name and a description.

It is possible to:

  • Create a new group - press the Create new button, enter the group name and, optionally, a description, and press Create;
  • Filter existing groups by group names and/or limitation names - use the filters on the left side and press Enter or click the Search button;
  • Update a group description - update the existing description and press the Save button on the right side of the group;
  • Update the linked limitation list - to unlink a limitation, press the "-" button on the right side of the limitation name; to link a limitation, enter its name in the field at the bottom of the limitation list and press the "+" button. To accept the changes, press the Save button;
  • Delete a group - press the Delete button on the right side of the group.

Settings dump#

The dump file includes all the settings of all the LP services.

Receive settings dump#

You can fetch the existing service settings from the Configurator by creating a dump file. This may be useful for saving the current service settings.

To receive a dump file, enter the Configurator container and use one of the following options:

  • wget: wget -O settings_dump.json 127.0.0.1:5070/1/dump;
  • curl: curl 127.0.0.1:5070/1/dump > settings_dump.json;
  • text editor.

The dump contains the current values specified in the Configurator service.

Apply settings dump#

To apply the dumped settings, use the db_create.py script with the --dump-file command line argument followed by the created dump file name: base_scripts/db_create.py --dump-file settings_dump.json.

You can apply a full settings dump to an empty database only. If any settings already exist, you should use the --recreate-database flag before applying a new dump.

If only a settings update is required, delete the whole "limitations" group from the dump file before applying it:

    "limitations":[
      ...
    ],

Follow these steps to apply the dump file:

  1. Enter the Configurator container;

  2. Run python3 base_scripts/db_create.py --dump-file settings_dump.json

Limitations from the existing limitation files are replaced with limitations from the dump file if the limitation names are the same.
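The deletion of the "limitations" group can be scripted before re-applying the dump. A minimal sketch, assuming the dump is a JSON object with a top-level "limitations" key as shown above (file names are hypothetical):

```python
import json

def strip_limitations(dump_path: str, out_path: str) -> None:
    """Remove the top-level "limitations" group so only settings are applied."""
    with open(dump_path) as f:
        dump = json.load(f)
    dump.pop("limitations", None)  # drop the whole group if present
    with open(out_path, "w") as f:
        json.dump(dump, f, indent=2)

# Example: a toy dump with a settings group and a limitations group.
with open("settings_dump.json", "w") as f:
    json.dump({"settings": [{"name": "EXAMPLE"}], "limitations": [{"name": "EXAMPLE"}]}, f)

strip_limitations("settings_dump.json", "settings_only.json")
with open("settings_only.json") as f:
    print("limitations" in json.load(f))  # False
```

The resulting file can then be passed to db_create.py with --dump-file as described above.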

Limitations file#

Receive limitation file#

The limitations file includes the limitations of the specified service. It does not include existing settings and their values.

To download a limitations file for one or more services, use the following commands:

  1. Enter the Configurator container;

  2. Create the output base_scripts/results directory: mkdir base_scripts/results;

  3. Run the base_scripts/get_limitation.py script: python3 base_scripts/get_limitation.py --service luna-image-store luna-handlers --output base_scripts/results/my_limitations.json.

Note the base_scripts/get_limitation.py script parameters:

  • --service for specifying one or more service names (required);
  • --output for specifying the directory or a file where to save the output. The default value: current_dir/_limitation.json (optional).

Database drop#

Users can wipe the Configurator database data when needed. After the script finishes processing, a clean database structure is created in the Configurator DB.

This operation leads to the loss of all stored settings. Create a settings dump file before executing the following commands!

To drop the Configurator database, use the base_scripts/db_create.py script with --recreate-database command line argument:

  1. Enter the Configurator container;

  2. Run python3 base_scripts/db_create.py --recreate-database

The --recreate-database command line argument can be combined with the --dump-file command line argument to wipe the data and apply the required settings in a single run.

Existing settings migration#

You can migrate settings in the Configurator DB without changing the already existing values of the settings: the names of the settings are updated according to the current LP build, but their values are preserved.

The migration updates LP parameters only. The parameters added by users and parameters not related to LP5 are not updated.

A settings revision is added to the database after the migration finishes. Starting with LP build 5.1.1, this migration is performed automatically during the Configurator database creation.

For LP builds of version 5.1.0 and earlier, it is recommended to transfer settings to the updated Configurator database manually.

Licenses service#

General information#

The Licenses service stores information about the available licensed features and their limits.

There are three ways to get license information:

You can also use the "get platform features" request to the API service. The response contains the license status, the enabled license features ("face_quality", "body_attributes" and "liveness"), and the status of optional services (Image Store, Events, Tasks and Sender) taken from the "ADDITIONAL_SERVICES_USAGE" setting of the Configurator service.

If you disable a license feature and send a request that requires it, error 33002 will be returned with the description "License problem Failed to get value of License feature {value}".
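A client can branch on this error code when a disabled feature is requested. The response body shape below is an assumption for illustration only, not the exact LP error schema:

```python
# Hypothetical error body returned when a disabled license feature is used.
# Field names are assumptions; only the 33002 code comes from the documentation.
response_body = {
    "error_code": 33002,
    "desc": "License problem",
    "detail": "Failed to get value of License feature 'liveness'",
}

LICENSE_FEATURE_ERROR = 33002  # error code from the documentation

def is_license_feature_error(body: dict) -> bool:
    """Check whether a response body reports a disabled license feature."""
    return body.get("error_code") == LICENSE_FEATURE_ERROR

print(is_license_feature_error(response_body))  # True
```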

License information#

LP license includes the following features:

  • License expiration date.
  • Maximum number of faces with linked descriptors or basic attributes.
  • OneShotLiveness availability.
  • OneShotLiveness current balance.
  • Image check according to ISO/IEC 19794-5:2011 standard availability.
  • Body parameters estimation availability.
  • People count estimation availability.
  • Using Lambda service availability.
  • Possibility of using the Index Matcher service in the LUNA Index Module.
  • Maximum number of streams created by the LUNA Streams service.

When ordering the license, you need to inform technical support about the need to use any of the above features.

The features "Possibility of using the Index Matcher service in the LUNA Index Module" and "Maximum number of streams created by the LUNA Streams service" are described in the LUNA Index Module and FaceStream documentation, respectively.

Notifications are available for some features when approaching the limit. Notifications are sent as messages to the logs of the corresponding service. For example, when approaching the allowed number of created faces with descriptors, the following message is displayed in the Faces service logs: "License limit exceeded: 8% of the available license limit is used. Please contact VisionLabs for license upgrade or delete redundant faces". Notifications rely on constant monitoring implemented using the Influx database; the monitoring data is stored in the corresponding fields of that database.

See the detailed information in the section "Monitoring".

Expiration date#

When the license expires, you cannot use LUNA PLATFORM.

By default, the notification about the end of the license is sent two weeks before the expiration date.

When the license ends, the following message is returned "License has expired. Please contact VisionLabs for a license extension.".

The Licenses service writes data about the license expiration date to the logs and the Influx database in the "license_period_rest" field.

Faces limit#

The Faces service checks the number of faces left against the maximum available number of faces received from the Licenses service. Only faces with linked descriptors or basic attributes are counted.

The percentage of the used limit for faces is written in the Faces log and displayed in the Admin GUI.

The Faces service writes data about the created faces to the logs and the Influx database in the "license_faces_limit_rate" field.

The created faces are written to the Faces log and displayed in the Admin GUI as a percentage of the database fullness. You should calculate the number of faces with descriptors left using the current percentage.

You start receiving notifications when 15% of the available faces are left. When you exceed the number of available faces, the message "License limit exceeded. Please contact VisionLabs for license upgrade or delete redundant faces" appears in the logs. You cannot attach attributes to faces if the number of faces exceeds 110% of the limit.
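Since the logs report a percentage rather than an absolute count, the number of faces left has to be derived from the licensed maximum. A sketch with hypothetical numbers:

```python
def faces_left(license_limit: int, used_percent: float) -> int:
    """Derive the remaining number of faces with descriptors from the used percentage."""
    return int(license_limit * (1 - used_percent / 100))

# Hypothetical license limit of 1,000,000 faces and 85% fullness reported in the logs.
print(faces_left(1_000_000, 85.0))  # 150000
```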

Consequences of missing a feature

If this feature is disabled, it will be impossible to perform the following requests:

OneShotLiveness#

Liveness estimation using the OneShotLiveness estimator is available with either an unlimited license or a license with a limited number of transactions.

Each use of Liveness in requests reduces the transaction count. After the transaction limit is exhausted, it is impossible to use the Liveness score in requests. Requests that do not use Liveness, and requests where the Liveness estimation is disabled, are not affected by the exhaustion of the limit and continue to work as usual.
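The transaction accounting described above can be sketched as a toy model (this is an illustration, not the actual Licenses service logic):

```python
class LivenessBalance:
    """Toy model of the OneShotLiveness transaction counter."""

    def __init__(self, balance: int):
        self.balance = balance

    def process(self, uses_liveness: bool) -> bool:
        """Return True if the request succeeds."""
        if not uses_liveness:
            return True   # requests without Liveness are unaffected by the limit
        if self.balance <= 0:
            return False  # limit exhausted: Liveness requests fail
        self.balance -= 1  # each use of Liveness reduces the transaction count
        return True

counter = LivenessBalance(balance=2)
print([counter.process(u) for u in (True, True, True, False)])
# [True, True, False, True]
```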

The Licenses service stores information about the liveness transactions left. The number of transactions left is returned in the response from the "/license" resource.

The Remote SDK service writes data on the number of available Liveness transactions to the logs and the Influx database in the "liveness_balance" field.

A warning about the exhaustion of available transactions is sent to the monitoring and the logs of the Remote SDK service when 2000 remaining Liveness transactions are reached (this threshold is set in the system).

See the "OneShotLiveness description" section for more information on how Liveness works.

Consequences of missing a feature

If this feature is disabled, it will be impossible to estimate Liveness (the "estimate_liveness" parameter) in the following requests:

Body parameters estimation#

This feature enables you to estimate body parameters. Two values can be set in the license - 0 or 1. Monitoring is not intended for this parameter.

Consequences of missing a feature

If this feature is disabled, it will be impossible to estimate the body parameters (parameters "estimate_upper_body", "estimate_lower_body", "estimate_body_basic_attributes", "estimate_accessories") in the following requests:

People count estimation#

This feature enables you to estimate the number of people. Two values can be set in the license - 0 or 1. Monitoring is not intended for this parameter.

Consequences of missing a feature

If this feature is disabled, it will be impossible to estimate the number of people (the "estimate_people_count" parameter) in the following requests:

Image check by ISO/IEC 19794-5:2011 standard#

This feature enables you to perform various image checks by ISO/IEC 19794-5:2011 standard. Two values can be set in the license - 0 or 1. Monitoring is not intended for this parameter.

Consequences of missing a feature

If this feature is disabled, it will be impossible to perform the following requests:

Lambda service#

The Lambda service is intended to work with user modules that mimic the functionality of a separate service. The service enables you to write and use your own handler or write an external service that will closely interact with the LUNA PLATFORM and immediately have several functions typical of LP services (such as logging, automatic configuration reload, etc.).

The Lambda service creates a Docker image and then runs it in a Kubernetes cluster. It is impossible to manage a custom module without Kubernetes. Full-fledged work with the Lambda service is possible when deploying LUNA PLATFORM services in Kubernetes. To use it, you must independently deploy LUNA PLATFORM services in Kubernetes or consult VisionLabs specialists. If necessary, you can use Minikube for local development and testing, thus providing a Kubernetes-like environment without the need to manage a full production Kubernetes cluster.

This functionality should not be confused with the plugin mechanism. Plugins are designed to implement narrow targeted functionality, while the Lambda service enables you to implement the functionality of full-fledged services.

It is strongly recommended to learn as much as possible about the objects and mechanisms of the LUNA PLATFORM (especially about handlers) before starting to work with this service.

A custom module running in a Kubernetes cluster is called lambda. Information about the created lambda is stored in the Lambda database.

The number of lambda created is unlimited. Each lambda has the option to add its own OpenAPI specification.

To work with the Lambda service, you need a special license feature. If the feature is not available, the corresponding error will be returned when requesting the creation of lambda.

Note. The description given below is intended for general acquaintance with the functionality of the Lambda service. See the developer manual for more details. The developer manual includes a "Quick start guide" section that enables you to start working with the service.

Before start#

Before you start working with the Lambda service, you need to familiarize yourself with all the requirements and set the service settings correctly.

Code and archive requirements#

The module is written in Python and must be transferred to the Lambda service in a ZIP archive.

The code and archive must meet certain requirements, the main of which are listed below:

  • Python version 3.11 or higher must be used.
  • Development requires the "luna-lambda-tools" library, available in VisionLabs PyPI.
  • Archive should not be password-protected.

Also, the files in the archive must have a certain structure. See the detailed information in the section "Requirements" of the developer manual.

Environment requirements#

To work with the Lambda service, the environment must meet the following requirements:

  • Availability of running Licenses and Configurator services*.
  • Availability of S3 bucket for storing archives.
  • Availability of Docker registry for storing images.
  • Availability of Kubernetes cluster.

* during its operation, lambda will additionally interact with some LUNA PLATFORM services. The list of services depends on the lambda type (see "Lambda types").

Write/read access to the S3 storage bucket must be provided and certain access rights in the Kubernetes cluster must be configured. You need to transfer the basic Lambda images to your Docker registry. The commands for transferring images are given in the LUNA PLATFORM installation manual.

For more information, see the "Requirements" section of the developer manuals.

Service configuration#

In the Lambda service settings, you must specify the following data:

  • Location of the Kubernetes cluster (see the "CLUSTER_LOCATION" setting):

    • "internal" - the Lambda service works in a Kubernetes cluster and does not require other additional settings.
    • "remote" - the Lambda service works with a remote Kubernetes cluster and requires correctly defined "CLUSTER_CREDENTIALS" settings (host, token and certificate).
    • "local" - the Lambda service works in the same place where the Kubernetes cluster is running.

In the classic scenario of working with the Lambda service, the "internal" parameter is assumed. Using the "local" and "remote" parameters is highly discouraged for anything except development.

For more information, see the "Configuration requirements" section of the developer manual.

Lambda types#

Lambda can be of two types:

  • Handlers-lambda, intended to replace the functionality of the classic handler.
  • Standalone-lambda, intended to implement independent functionality that is closely integrated with the LUNA PLATFORM.

Each type has certain requirements for LUNA PLATFORM services, whose actual settings will be automatically used to process requests. Before starting work, decide which lambda type you need.

Handlers-lambda#

Examples of possible functionality:

  • Performing verification with the possibility of saving an event.
  • Matching two images without performing the rest of the functionality of the classic handler.
  • Adding your own filtering logic to the matching functionality.
  • Circumventing certain limitations of the LUNA PLATFORM (for example, specifying a maximum number of candidates greater than 100).
  • Embedding the neural network SDK, bypassing the LUNA PLATFORM.

During its operation, Handlers-lambda will interact with the following LUNA PLATFORM services:

  • Configurator - to get the settings,
  • Faces - for working with faces and lists,
  • Remote SDK - for performing detections, estimations and extractions,
  • Events* - for working with events,
  • Python Matcher/Python Matcher Proxy** - for classical/cross-matching of faces or bodies,
  • Image Store* - for storing samples and source images of faces or bodies.

The Lambda service does not check the connection to disabled services and will return an error if the user tries to make a request to a disabled service.

To run the Lambda service, only the presence of the Configurator and Licenses services is required.

* The service can be disabled in the "ADDITIONAL_SERVICES_USAGE" setting.

** The service is disabled by default. To enable it, see the "ADDITIONAL_SERVICES_USAGE" setting.

The Handlers-lambda can be used in two cases:

  • As a custom handler that has its own response scheme, which may differ from the response of classic handlers and cannot be properly used in other LUNA PLATFORM services.
  • As a custom handler that mimics the response of a classic handler. There are some requirements for such a case:

    • The response must match the response scheme of the event generation request.
    • The handler must process incoming data correctly so that other services can use it; otherwise there is no guarantee of compatibility with other services. That is, if such a handler implies face recognition, the module should return face recognition information in the response; if the handler implies body detection, the module should return body detection in the response, and so on.

    For example, if a Handlers-lambda satisfies the above conditions, it can be used in the Estimator task as a classic handler.

For more information and code examples for Handlers-lambda, see the "Handlers lambda development" section of the developer manual.

Standalone-lambda#

Examples of possible functionality:

  • Filtering incoming images by format for subsequent sending to the Remote SDK service.
  • Creation of a service for sending notifications by analogy with the Sender service.
  • Creation of a service for recording a video stream and saving it as a video file to the Image Store service for subsequent processing by the FaceStream application.

During its operation, Standalone-lambda will interact at least with the Configurator service, which enables lambda to receive its settings (for example, logging settings).

To run the Lambda service, only the presence of the Configurator and Licenses services is required.

For more information and sample code for Standalone-lambda, see the "Standalone lambda development" section of the developer manual.

Create lambda#

To create a lambda, you need to do the following:

  1. Write Python code in accordance with the type of the future lambda and the code requirements.
  2. Pack the files into an archive in accordance with the archive requirements.
  3. Perform the "create lambda" request, specifying the following mandatory data:
    • "archive" - the address of the archive with the user module.
    • "credentials" > "lambda_name" - the name for the lambda being created.
    • "parameters" > "lambda_type" - the type of lambda being created ("handlers" or "standalone").

If necessary, you can specify a list of additional Docker commands to create a lambda container. See the "Lambda - Archive requirements" section of the developer manual.

In response to a successful request, the "lambda_id" will be issued.
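The mandatory fields of the "create lambda" request can be sketched as a request body. The values below are placeholders; the exact schema is defined by the service's OpenAPI specification:

```python
import json

# Placeholder values; only the three mandatory fields described above are shown.
create_lambda_body = {
    "archive": "https://example.com/my-lambda.zip",  # address of the archive with the user module
    "credentials": {"lambda_name": "my-lambda"},     # name for the lambda being created
    "parameters": {"lambda_type": "standalone"},     # "handlers" or "standalone"
}

print(json.dumps(create_lambda_body, indent=2))
```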

The creation of a lambda consists of several stages, namely:

  • Creating a Docker image:
    • Getting the provided ZIP archive.
    • Supplementing the archive with the necessary files.
    • Saving the archive in S3 storage.
  • Publishing the image to the registry.
  • Creating a service in the Kubernetes cluster.

During lambda creation, you can run the following requests:

See the lambda creation sequence diagram in "Lambda creation diagram".

See the detailed description of the lambda creation process in the "Creation pipeline" section of the developer manual.

Create handler for Handlers-lambda#

If lambda is supposed to be used as a custom handler simulating the response of a classic handler, then it is necessary to create a handler, specifying "handler_type" = "2" and the resulting "lambda_id".

During the creation of the handler, the Handlers service will perform a health check to the Kubernetes cluster.

The resulting "handler_id" can be used in requests "generate events" or "estimator task".

Use lambda#

The table below shows resources for working with lambda, depending on its type.

| Resource | Lambda type | Request and response body format |
|---|---|---|
| "/lambdas/{lambda_id}/proxy" | Standalone-lambda, Handlers-lambda with own response scheme | Own |
| "/handlers/{handler_id}/events" | Handlers-lambda | Corresponding OpenAPI specification |
| "/tasks/estimator" | Handlers-lambda | Corresponding OpenAPI specification |

See the lambda processing sequence diagram in "Lambda processing diagram".

Each lambda has its own API, the description of which is also available using the "get lambda open api documentation" request.

Each lambda response will contain multiple headers, including:

  • "Luna-Request-Id" - Classic external ID of the LUNA PLATFORM request.
  • "Lambda-Version" - Contains the current lambda version.

Useful requests when working with lambda:

  • "get lambda status" to get lambda creation status ("running", "waiting", "terminated", "not_found")
  • "get lambda" to get complete information about the created lambda (creation time, name, status, etc.)
  • "get lambda logs" to get lambda creation logs

Update lambda#

The base image for lambda containers is updated periodically. This image contains the modules necessary to interact with LUNA PLATFORM services. Applying updates requires recreating the lambda so that the container can be rebuilt based on the new image. Note that after the base image and the "luna-lambda-tools" library are updated, existing lambda functionality may break.

You can update the lambda to the latest base image using the "update lambda" request. It is recommended to keep a backup copy of the archive in S3.

See the "Lambda updates" section of the developer manual for detailed information about the update mechanism, backup strategies, and restoring from a backup.

Backport 3#

The Backport 3 service is used to process the requests for LUNA PLATFORM 3 using LUNA PLATFORM 5.

Although most of the requests are performed in the same way as in LUNA PLATFORM 3, there are still some restrictions. See "Backport 3 features and restrictions" for details.

See "Backport3ReferenceManual.html" for details about the Backport 3 API.

Backport 3 new resources#

Liveness#

Backport 3 provides Liveness estimation in addition to the LUNA PLATFORM 3 features. See the "liveness > predict liveness" section in "Backport3ReferenceManual.html".

Handlers#

The Backport 3 service provides several handlers: extractor, identify, verify. The handlers enable you to perform several actions in a single request:

  • "handlers" > "face extractor" - enables you to extract a descriptor from an image, create a person with this descriptor, attach the person to the predefined list.

  • "handlers" > "identify face" - enables you to extract a descriptor from an image and match the descriptor with the predefined list of candidates.

  • "handlers" > "verify face" - enables you to extract a descriptor from an image and match the descriptor with the person's descriptor.

The description of the handlers and all the parameters of the handlers can be found in the following sections:

The requests listed above are based on handlers and are more flexible than the standard "descriptors" > "extract descriptors", "matching" > "identification", and "matching" > "verification" requests.

You can patch the already existing handlers, thus applying additional estimations to the requests. For example, you can specify head angle thresholds or enable/disable basic attributes estimation.

The handlers are created for every new account at the moment of account creation. The created handlers include default parameters.

Each of the handlers has the corresponding handler in the Handlers service. The parameters of the handlers are stored in the luna_backport3 database.

Each handler supports GET and PATCH requests, so it is possible to get and update the parameters of each handler.

Each handler has its version. The version is incremented with every PATCH request. If the current handler is removed, the version will be reset to 1:

  • For the requests with POST and GET methods:

    If the Handlers and/or Backport 3 service has no handler for the specified action, it will be created with default parameters.

  • For requests with PATCH methods:

    If Handlers and/or Backport 3 service has no handler for the specified action, a new handler with a mix of default policies and policies from the request will be created.

Backport 3 architecture#

Interaction of Backport 3 and LP 5 services
Interaction of Backport 3 and LP 5 services

Backport 3 interacts with the API service and sends requests to LUNA PLATFORM 5 using it. In turn, the API service interacts with the Accounts service to check the authentication data.

Backport 3 has its own database (see "Backport 3 database"). Some of its tables are similar to the tables of the Faces database of LP 3, which enables you to create and use the same entities (persons, account tokens and accounts) as in LP 3.

The Backport 3 service uses the Image Store service to store portraits.

You can configure Backport 3 using the Configurator service.

Backport 3 features and restrictions#

The following features have core differences:

For the following resources, the default descriptor version extracted from an image on the POST method is 56:

  • /storage/descriptors
  • /handlers/extractor
  • /handlers/verify
  • /handlers/identify
  • /matching/search

You can still upload the existing descriptors of versions 52, 54, 56. The older descriptor versions are no longer supported.

  • For resource /storage/descriptors on method POST, estimation of the saturation property is no longer supported, and the value is always set to 1.
  • For resource /storage/descriptors on method POST, estimation of the eyeglasses attribute is no longer supported. The attributes structure in the response will lack the eyeglasses member.
  • For resource /storage/descriptors on method POST, head position angle thresholds can still be sent as float values in the range [0, 180], but they will be internally rounded to integer values. As before, thresholds outside the range [0, 180] are not taken into account.

Garbage Collection (GC) module#

According to LUNA Platform 3 logic, garbage is the descriptors that are linked neither to a person nor to a list.

For normal system operation, you need to regularly delete garbage from the databases. To do so, run the system cleaning script remove_not_linked_descriptors.py from the ./base_scripts/gc/ folder.

According to the Backport 3 architecture, this script removes from the Faces service those faces that are not linked to any list or person in the Luna Backport 3 database.

Script execution pipeline#

The script execution pipeline consists of several stages:

1) A temporary table is created in the Faces database. See more info about temporary tables for oracle or postgres.

2) IDs of faces that are not linked to lists are obtained. The IDs are stored in the temporary table.

3) While the temporary table is not empty, the following operations are performed:

  • A batch of IDs is obtained from the temporary table: the first 10,000 (or fewer) face IDs.
  • Filtered IDs are obtained. Filtered IDs are those that do not exist in the person_face table of the Backport 3 database.
  • Filtered IDs are removed from the Faces database. If some of the faces cannot be removed, the script stops.
  • Filtered IDs are removed from the Backport 3 database (foolcheck); a warning is printed.
  • The IDs are removed from the temporary table.
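The batching logic of the pipeline above can be sketched with in-memory collections standing in for the database tables. This is a simplified model, not the actual script.

```python
# In-memory sketch of the GC pipeline. A list stands in for the temporary
# table; a set stands in for the person_face table of the Backport 3 database.

BATCH_SIZE = 10_000

def collect_garbage(not_listed_face_ids, person_face_ids):
    """Remove faces that are linked neither to a list nor to a person."""
    temp_table = list(not_listed_face_ids)    # stages 1-2: fill the temp table
    removed = []
    while temp_table:                          # stage 3: drain in batches
        batch = temp_table[:BATCH_SIZE]        # first 10k (or fewer) face IDs
        # keep only IDs absent from person_face, i.e. true garbage
        filtered = [fid for fid in batch if fid not in person_face_ids]
        removed.extend(filtered)               # delete from the Faces database
        temp_table = temp_table[BATCH_SIZE:]   # delete the batch from the temp table
    return removed

# "f2" is linked to a person, so only "f1" and "f3" are garbage
garbage = collect_garbage(["f1", "f2", "f3"], {"f2"})
```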

Script launching#

docker run --rm -t --network=host --entrypoint bash dockerhub.visionlabs.ru/luna/luna-backport-3:v.0.8.21 -c "python3 ./base_scripts/gc/remove_not_linked_descriptors.py"

The output will include information about the number of removed faces and the number of persons with faces.

Backport 4#

The Backport 4 service is used to process the requests for LUNA PLATFORM 4 using LUNA PLATFORM 5.

Although most of the requests are performed in the same way as in LUNA PLATFORM 4, there are still some restrictions. See "Backport 4 features and restrictions" for details.

See "Backport4ReferenceManual.html" for details about the Backport 4 API.

Backport 4 architecture#

Interaction of Backport 4 and LP 5 services

Backport 4 interacts with the API service and sends requests to LUNA PLATFORM 5 using it.

Backport 4 directly interacts with the Faces service to receive the number of existing attributes.

Backport 4 directly interacts with the Sender service. All the requests to Sender are sent using the Backport 4 service. See the "ws" > "ws handshake" request in the "Backport4ReferenceManual.html".

You can configure Backport 4 using the Configurator service.

Backport 4 features and restrictions#

The following features have core differences:

A request to the /version resource returns the current versions of the LUNA PLATFORM services. For example, the versions of the following services are returned:

  • "luna-faces"
  • "luna-events"
  • "luna-image-store"
  • "luna-python-matcher" or "luna-matcher-proxy"
  • "luna-tasks"
  • "luna-handlers"
  • "luna-api"
  • "LUNA PLATFORM"
  • "luna-backport4" - the current service

Resources changelog:

  • Resource /attributes/count is available without any query parameters and does not support accounting. The resource works with temporary attributes.
  • Resource /attributes on method GET: the attribute_ids query parameter is allowed instead of the page, page_size, time__lt and time__gte query parameters. Thus you can get attributes by their IDs rather than by filters. The resource works with temporary attributes.
  • Resource /attributes/<attribute_id> on methods GET, HEAD, DELETE and resource /attributes/<attribute_id>/samples on method GET interact with temporary attributes and return attribute data if the attribute TTL has not expired. Otherwise, the "Not found" error is returned.
  • If you already used the attribute to create a face, use the face_id to receive the attribute data. In this case, the attribute_id from the request is equal to face_id.
  • Resource /faces enables you to create more than one face with the same attribute_id.
  • Resource /faces/<face_id> on method DELETE enables you to remove a face without removing its attribute.
  • Resource /faces/<face_id> on method PATCH enables you to patch the attribute of a face: the first request patches event_id, external_id, user_data, and avatar (if required), and the second request patches the attribute (if required).
  • If the face attribute_id is to be changed, the service tries to patch it with the temporary attribute data if the temporary attribute exists. Otherwise, the service tries to patch it with attribute data from the face with face_id = attribute_id.
  • The match policy of resource /handlers now has the default match limit that is configured using the MATCH_LIMIT setting from the Backport 4 config.py file.
  • Resource /events/stats on method POST: usage of attribute_id in the filters object is prohibited, as this field is no longer stored in the database. A response with the 403 status code is returned.
  • The attribute_id in events is not null and is equal to face_id for backward compatibility. The GC task is unavailable because all the attributes are temporary and are removed automatically; status code 400 is returned on a request to the /tasks/gc resource.
  • The column attribute_id is not added to the report of the Reporter task and this column is ignored if specified in the request. Columns top_similar_face_id, top_similar_face_list, top_similar_face_similarity are replaced by the top_match column in the report if any of these columns is passed in the reporter task request.
  • Linker task always creates new faces from events and ignores faces created during the event processing request.
  • Resource /matcher does not check the presence of the provided faces, thus the FacesNotFound error is never returned. If the user specifies a non-existent candidate of type "faces", no error is reported, and no actual matching against that face is performed.
  • Resource /matcher checks whether a reference of type attribute has the ID of a face attribute or the ID of a temporary attribute and performs type substitution. Hence references can be sent for matching in the same way as in the previous version.
  • Resource /matcher takes matching limits into account. By default, the maximum number of references or candidates is limited to 30. If you need to overcome these limits, configure REFERENCE_LIMIT and CANDIDATES_LIMIT.
  • Resource /ws has been added. There was no /ws resource in the LUNA PLATFORM 4 API as it was a separate resource of the Sender service. This added resource is similar to the Sender service resource, except that attribute_id of candidates faces is equal to face_id.
  • Resource /handlers returns the error "Invalid handler with id ", if the handler was created in the LUNA PLATFORM 5 API and is not supported in LUNA Backport 4.
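The matching-limit check described for the /matcher resource can be sketched as follows. This is an illustrative model: in the real service the defaults come from the REFERENCE_LIMIT and CANDIDATES_LIMIT settings.

```python
# Sketch of the /matcher limits: by default at most 30 references and
# 30 candidates are accepted per request (configurable in the settings).

REFERENCE_LIMIT = 30
CANDIDATES_LIMIT = 30

def check_matching_limits(references, candidates):
    """Return a list of limit violations for a /matcher request."""
    errors = []
    if len(references) > REFERENCE_LIMIT:
        errors.append(f"too many references: {len(references)} > {REFERENCE_LIMIT}")
    if len(candidates) > CANDIDATES_LIMIT:
        errors.append(f"too many candidates: {len(candidates)} > {CANDIDATES_LIMIT}")
    return errors
```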

Backport 4 User Interface#

The User Interface service is used for the visual representation of LP features. It does not include all the functionality available in LP. User Interface enables you to:

  • Upload photos and create faces from them;
  • Create lists;
  • Match existing faces;
  • Show existing events;
  • Show existing handlers.

All the information in User Interface is displayed according to the account data specified in the configuration file of the User Interface service (./luna-ui/browser/env.js).

User Interface works with only one account at a time, which must be of "user" type.

Open your browser and enter the User Interface address. The default port is 4200.

You can select a page on the left side of the window.

Lists/faces page#

The starting page of User Interface is Lists/Faces. It includes all the faces and lists created using the account specified in the configuration.

Lists/Faces Page

The left column of the workspace displays existing lists. You can create a new list by pressing the Add list button. In the window that appears, you can specify the user data for the list.

The right column shows all the created faces with pagination.

Use the Add new faces button to create new faces.

On the first step, you should select photos to create faces from. You can select one or several images with one or several faces in them.

After you select images, all the found faces will be shown in a new dialog window.

All the correctly preprocessed images will be marked as "Done". If the image does not correspond to any of the requirements, an error will be displayed for it.

Press the Next step button.

Select images

On the next step, you should select the attributes to extract for the faces.

Press the Next step button.

Select attributes

On the next step, you can specify user data and external ID for each of the faces. You can also select lists to which each of the faces will be added. Press Add Faces to create faces.

Add user data, external ID and specify lists

You can change pages using arrow buttons.

You can change the display of faces and filter them using the buttons in the top right corner:

  • Filters icon: filter faces by ID, external ID, or list ID.
  • View icons: change the view of the existing faces.

Handlers page#

The Handlers page displays all the handlers created using the account specified in the configuration.

All the information about the specified handler policies is displayed when you select a handler.

You can edit or delete a handler using the Edit and Delete icons.

Handlers page

Events page#

The events page displays all the events created using the account specified in the configuration.

Events Page

It also includes filters (the Filters icon) for displaying events.

Common information#

You can edit or delete an item (face, list, or handler) using the Edit and Delete icons. The icons appear when you hover the cursor over an item.

Icons for element

Matching dialog#

The Matching button in the bottom left corner of the window enables you to perform matching.

After pressing the button, you can select the number of results returned for each of the references.

Select number of results

On the first step, you should select references for matching. You can select faces and/or events as references.

Select references

On the second step, you should select candidates for matching. You can select faces or lists as candidates.

Select candidates

On the last step, you should press the Start matching button to receive results.

Start matching