Glossary#
Term | Description |
---|---|
Age category (group) | A group of people that is within a specified range of ages. In accordance with the age periodization of the World Health Organization, groups 18–44 (young adults), 45–60 (middle-aged adults), 61–75 (older adults) are distinguished |
Attributes | Age, gender, automatically determined by the system |
Authorization | A security mechanism to determine access levels or user privileges related to system resources |
Avatar | A visual representation of the face that can be used in the user interface |
Best shot | A frame of the video stream, in which the face is captured in the optimal angle for further use in the face recognition system |
Biometric sample (sample) | Analog or digital representation of biometric characteristics prior to biometric feature extraction and descriptor generation |
Descriptor | A binary data set prepared by the system based on the analyzed characteristics. It is a composite vector of a person's face attributes |
Candidate | An object whose similarity to the reference is evaluated |
Cross-matching | Many-to-many comparison (M:N). In the context of this document, comparing multiple lists |
Task | A task that is created by a user and runs in the background. In LUNA PLATFORM 5 UI, tasks include cross-matching, export, batch processing, batch import, and batch identification |
Detection | FaceStream entity that contains the coordinates of face or body and the estimated value of the object that determines the best shot |
Event | Detection recorded by the system with the extraction of attributes by the handler |
Exchangeable image file format (EXIF) | A standard for embedding technical metadata in image files that many camera manufacturers use and many image-processing programs support |
Extraction | A descriptor extraction procedure |
Face recognition | A set of methods for collecting, processing, and storing data from images of a person's face for identity recognition or identity confirmation using mathematical methods |
Faces | Changeable LUNA PLATFORM 5 objects that contain information about one person |
Handler | An image processing entry point that characterizes the image processing procedure and defines the LUNA PLATFORM 5 algorithms used for it |
Handling policy | A set of rules (policies) for image processing |
Identification | Search for the most suitable descriptor by comparing the vectors of face features with a list of similar descriptors in the database (one to many) |
List | A set of faces in the LUNA PLATFORM 5 system, combined automatically or manually according to a certain criterion |
Liveness | A software method to confirm the vitality of a person by one or several images in order to prevent spoofing attacks |
LUNA PLATFORM 5 (LP5) | VisionLabs automated facial recognition system designed to process, collect, analyze, store, and compare biometric data obtained from facial images |
Matching | A procedure of comparing descriptors |
Physical access control system (PACS) | A set of hardware and software tools aimed at controlling the entrance and exit in order to ensure safety and regulate visits to a particular facility |
Reference | Object (attribute, face, body, face and event external IDs, event track ID, descriptor) that is compared/verified with the candidate. |
Similarity | Probability characteristic in the range from 0 to 1, characterizing the level of similarity of subjects of biometric data |
Spoofing attack | Substitution of a fake image (for example, a photograph) for a real person in order to deceive the system |
Track | Information about object’s position (face or body of a person) in a sequence of frames |
Introduction#
This document describes the purpose and functions of the LUNA PLATFORM 5 UI user interface (hereinafter referred to as the Interface), version 5.67.0.
All information provided in the documentation is for informational purposes only. The use of the product may vary significantly depending on factors such as the use case, legality of use, and compliance with legal and regulatory requirements, and depends on individual circumstances.
Overview#
LUNA PLATFORM 5 UI is a user interface that provides user interaction with LUNA PLATFORM 5 for operating with events and lists.
LUNA PLATFORM 5 UI allows the user to capture and view events according to a customized policy. For example, when identifying persons using control lists, the user can search among events for a certain period of time by various attributes and a photographic image of a person.
The main functions of LUNA PLATFORM 5 UI are presented below:
- display detection and object recognition events (faces, bodies);
- display information about the temperature of a person, filter events by temperature;
- search through the archive of events;
- create, view and edit face cards containing information about a person’s face;
- create, view and edit lists;
- identify faces, bodies and uploaded photo images by lists;
- verify faces and bodies;
- verify a person's identity;
- create and configure handling policies;
- verify photo compliance with the requirements of biometric standards;
- create tasks (cross-matching of lists, export of faces, bodies and events, batch processing of photo images, batch import of photo images, batch identification of photo images, batch deleting faces from the list);
- evaluate existing photo images for compliance with the ISO/IEC 19794-5:2011 standard;
- show information about user accounts;
- show information about the status of connected components and systems;
- show information about available licenses;
- show information about plugins imported into LUNA PLATFORM 5.
System requirements#
Hardware requirements#
To get started with LUNA PLATFORM 5 UI, make sure you can meet the following hardware requirements.
Resource | Minimum | Recommended |
---|---|---|
CPU | Intel Core i3, 2nd Generation; AMD Athlon X4 860K | Intel Core i3, 4th Generation and above; AMD Ryzen 3 and above |
RAM | 2 GB | 4 GB and above |
Display resolution | 1024 px (for example, 1024x768), 1920 px (for example, 1920x1080) | - |
Software requirements#
To get started with LUNA PLATFORM 5 UI, make sure you can meet the following software and Internet connection requirements.
Resource | Recommended |
---|---|
Supported web browser | Google Chrome (version 109.0 and above); Microsoft Edge (version 109.0 and above); Mozilla Firefox (version 109.0 and above) |
It is recommended to update your browser to the latest version.
Installing and configuring the above software is beyond the scope of this document.
Working with interface#
Authorization in interface#
Create an account using the "create account" POST request to the API service, or using the Admin service. When creating the account, you must specify the following data: login (email), password, and account type.
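A minimal sketch of such a request in Python is shown below. The host, port, endpoint path, and field names are assumptions for illustration only; consult the API Reference Manual of LUNA PLATFORM 5 for the exact format of the "create account" request.

```python
# Hypothetical example of creating an account through the LUNA PLATFORM 5 API service.
# The URL and request body fields below are assumptions; check the API Reference
# Manual ("create account" request) for the exact endpoint and schema.
import requests

API_URL = "http://127.0.0.1:5000/6/accounts"  # assumed address of the API service

payload = {
    "login": "operator@example.com",  # login (email)
    "password": "Str0ngPassw0rd!",    # password
    "account_type": "user",           # account type
}

response = requests.post(API_URL, json=payload, timeout=10)
response.raise_for_status()           # raise an error if the account was not created
print("Created account:", response.json())
```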
The Interface is accessed by logging in to the website at <host:5000/ui> in a web browser. The authorization form opens when you log in to LUNA PLATFORM 5 UI for the first time (Figure 1). To authorize in the Interface, enter your credentials (email and password) in the appropriate fields and click the “Login” button.
When logged in, the user is taken to the “Last events” section (Figure 2).
Switching the interface theme#
The Interface allows you to customize the color theme. To do this, click on the icon in the top main menu:
- to activate night mode or dark theme;
- to activate day mode or light theme.
Sign out of account#
To log out of your account, click the arrow on the right of the user’s name. Click the “Exit” button (Figure 3).
After clicking on the “Exit” button, the user is returned to the authorization form.
Interface sections#
The Interface is divided into sections; switching between them is carried out in the main menu bar and in the drop-down menu (Figure 4).
The main menu consists of the “Last events”, “Events archive”, “Search”, "Faces" and “Lists” sections.
The drop-down menu consists of the following sections: “Handling policies”, “Verification”, “Tasks”, “ISO/IEC 19794-5:2011 check”, “Users”, “Monitoring”, “Licenses” and “Plugins”.
To expand the drop-down menu, click the arrow on the right of the user’s avatar.
Purpose of the sections of the main menu:
- “Last events” displays the last 30 events, and it is possible to filter events by various parameters.
- “Events archive” displays all events recorded by the Interface, and it is possible to filter events by various parameters.
- “Search” allows user to search faces, bodies and events by the following parameters:
- by external face ID;
- by face image;
- by body image;
- by Face ID from LP5;
- by event ID from LP5.
- “Faces” allows users to create, edit, and delete faces.
- “Lists” allows users to create, edit, and delete lists.
Purpose of the sections of the drop-down menu:
- “Handling policies” allows user to create, delete, and edit policies (handlers);
- “Verification” allows user to create, delete, edit, and test verifiers. Verifiers are used to quickly compare two faces (by face photo, face ID, external ID, attribute, or event) and display the test result;
- “Tasks” allows user to create, delete, and view tasks: cross-matching (comparison of two lists of faces), export of faces or events, batch processing of a photo archive according to a specific policy, batch import of an archive with photo images of faces into the list, and batch identification of an archive with photo images by faces or events.
- “ISO/IEC 19794-5:2011 check” allows checking a photo image for compliance with the ISO/IEC 19794-5:2011 standard.
- “Users” shows the list of user accounts created in LUNA PLATFORM 5.
- “Monitoring” shows status of the connected services, modules, components, and systems.
- “Licenses” shows status of the available licenses;
- “Plugins” shows status of plugins imported into LUNA PLATFORM 5.
Last events section#
The “Last events” section displays detection and object (faces, bodies) recognition events, and records identification events using lists.
The general view of the “Last events” section is shown below (Figure 5).
The section displays the last 30 events according to the handling policy settings for processing incoming images from video streams, terminals, REST requests, etc. Receiving and displaying events is performed with minimal delays in near real time.
At the bottom of the screen, there is a “View events archive” button which leads to the “Events archive” section.
The filter icon (1), which is located on the right, hides the block with filtering settings. The page shows the following event data (2):
- "Event image"
- a photo image of the face from the video stream;
- a photo image of the body from the video stream;
- "Top Match"—the column is shown if the “Display top match” checkbox is active (3). If no matches are found for a photo from an event, then the column with the top match for this event will remain empty. The "Top match" includes:
- reference photo images of the face and/or body;
- value of similarity of the identified face with the reference (in percentage terms and with the color coding of similarity thresholds);
- "Match type"—the type of object (face or event), according to which the similarity of the identified face/body with the reference was found;
- "External ID"—external identifier of the face, the field is shown if such an ID is available (for "Face" in "Match type");
- "User data"—information from the database, linked to a person from the control (for "Face" in "Match type");
- "List"—the name of the list to which the person is attached (for "Face" in "Match type");
- "Date created"—date and time of fixing the event (for "Event" in "Match type");
- "Source"—the name of the event source that recorded the event (for "Event" in "Match type");
- "Handling policy"—the name of the handler, according to which the reference photo image of the body was processed (for "Event" in "Match type").
- "Event details" shows the available event data:
- "Date of created"—date and time of event registration;
- "Source"—the name of the event source that recorded the event;
- "Handling policy"—the name of the handler, according to which the reference photo images of the face/body were processed
- "Metadata"*—button for uploading arbitrary user data in JSON format, the filed is shown if such data was added to the event (for "Event" in "Match type").
- Face attribute, if found:
- "Gender"—gender based on face image;
- “Age category”—the age of the detected person;
- Body attributes, if found:
- "Upper body colors"—an indication of the color of the clothes of the human body upper part;
- "Lower body colors"—indicating the color of the human body upper part;
- "Headwear"—the presence or absence of a headdress, if it is defined.
- "Backpack"—the presence or absence of a backpack, if it is defined.
*All detailed capabilities and limitations of the "Metadata" field are specified in the "Administrator Manual" of LUNA PLATFORM 5, paragraph 6.9.4 "Events meta-information".
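Since the "Metadata" field accepts arbitrary user data in JSON format, its structure is defined by the integrator. A purely illustrative example of such a payload (all keys are hypothetical) is shown below.

```python
# Illustrative example of arbitrary user data ("meta") attached to an event.
# The keys below are hypothetical; LUNA PLATFORM 5 does not prescribe the structure.
import json

event_meta = {
    "ticket_number": "A-1042",
    "entrance": "north gate",
    "operator_comment": "VIP visitor",
}

print(json.dumps(event_meta, indent=2))  # the JSON that could be uploaded via the "Metadata" button
```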
Last events filtering#
The Interface allows you to filter last events (2 in Figure 5) to find and display necessary events (Figure 6).
User can quickly find an event among the last 30, as well as set a limit for displaying new events on the screen.
When a user clicks on the icon (1 in Figure 5) on the “Last events” page, a menu with settings and filters opens. The number next to the icon shows the number of applied filters. A short description of the elements and parameters of the filter block on the “Last events” page is presented below (Table 1).
Table 1. Filters available to the user to search for last events
Name | Description |
---|---|
Event details | |
"Sound notification" toggle and "Similarity threshold" parameter | Allows the user to configure sound alerts about detection of an object whose similarity is not below the value specified in the "Similarity threshold" field |
General | |
Source | Select one or more sources from the list of available ones |
Handling policies | Handling policy names, according to which the face or body in the image was processed. One or several handling policies can be selected for searching |
Tags | Selection of one or more tags. For example, the "Temperature"* tag is intended for displaying information about human body temperature and filtering events by temperature |
Labels | |
Label | Label name (labels are rules by which the comparison is made) |
Similarity, % | Lower and/or upper limits of similarity for displaying faces identified by the lists |
Face attributes and properties | |
Gender | Gender of a person to be detected, determined by the image of a face |
Age category | Age range of a person to be detected, determined by the image of a face |
Emotion | Emotion of a person to be detected. A combination of several values is possible |
Mask | Indication of the presence of a mask. A combination of several values is possible |
Liveness | Liveness status selection. A combination of several values is possible |
Deepfake** | Deepfake status selection. A combination of several values is possible |
Body attributes and properties | |
Upper body colors | Top clothing color specification. A combination of several values is possible |
Lower body type | Bottom clothing type specification. A combination of several values is possible |
Lower body colors | Bottom clothing color specification. A combination of several values is possible |
Shoes color | Shoe color specification. A combination of several values is possible |
Headwear | Headdress specification. A combination of several values is possible |
Headwear colors | Headdress color specification. A combination of several values is possible |
Backpack | Backpack presence specification. A combination of several values is possible |
Sleeve | Sleeve length specification. A combination of several values is possible |
Gender by body | Gender of a person to be detected, determined by the image of a body. A combination of several values is possible |
Age category by body | Age range of a person to be detected, determined by the image of a body |
Location | |
City; Area; District; Street; House number; Longitude (-180…180) with accuracy (0…90); Latitude (-90…90) with accuracy (0…90) | Event location |
Other | |
Comma-separated track IDs | Specifying track IDs |
Add filter by meta*** | Allows you to fill in a set of blocks to create a filter by the "meta" field. The number of meta filters is unlimited |
*Color coding of temperature values:
- increased temperature values will be marked in yellow;
- abnormal temperature values will be marked in red;
- normal temperature values will be marked in green.
See the LUNA Access documentation for more information on setting of temperature ranges.
**Deepfake license required
***For advanced users
The user selects one filter or a combination of filters and clicks the “Filter” button to apply them.
To reset the applied filters, click on the “Reset” button.
The applied filters will affect the appearance of new events on the screen.
To collapse “Filters”, click on the filter icon on the right side of the screen.
Event details#
Click on an arrow button on the face or body from the event image (Figure 5) to open a page with detailed event data (Figure 7).
When an event contains data on the detection of both a face and a body, you can switch between these data on the page with event details. If an event contains detection data for only one object, such as a face, then there will be no detection data for another object.
The page with event details consists of four blocks. The description of the elements of the page is presented below (Table 2).
Table 2. Elements and parameters of the "Event details" page
Name | Description |
---|---|
Event details | |
Event information | |
Date created | Date and time of event recording |
Event | “Event ID” — when clicked on … |
Track | “Track ID” — when clicked on … |
Handling policy | Name of the policy by which the image processing in the video stream is performed. Clicking on the name of the policy opens the form for editing its parameters |
Source | Name of the source that recorded the event with the face. Clicking on the name of the source opens a real-time image of the stream from the source |
Tags | Names of tags by which the event is filtered |
Liveness | Result of the Liveness check for person identification purposes (KYC) |
Deepfake | Result of the Deepfake check to detect face replacement |
Metadata | Uploading arbitrary user data in JSON format, if available |
Location | Event location data: “City”, “Area”, “District”, “Street”, “House number”, “Coordinates (latitude)”, “Coordinates (longitude)” |
Detection | If available: face and/or body detection |
Find similar: events | Clicking on … |
Find similar: faces | Clicking on … For face detection only |
Photo image of a face and/or body from a video stream | Normalized image. When clicked on: … |
Attributes | Face attributes: … Body attributes: … |
Additional properties | Face properties: … |
Best match (type — "Event" or "Person") | Similarity value of the identified face/body with the face/body from the control list/event (in percentage) |
Find similar: events | Clicking on … |
Find similar: faces | Clicking on … For face detection only |
Photo image of a face and/or body | Reference photo image of the face or body (sample) or no photo. When clicked on … |
Face attributes | Face attributes: … Body attributes: … |
Additional properties | Face properties: … |
"Matches" | List of matches with a detected face and/or body |
Event photo | “Face” type—an avatar, sample, or no photo image; similarity value of the identified face with the face from the control list (in percentage). “Event” type—detection (photo image of a face from a video stream); similarity value of the identified face or body with the face or body from the event (in percentage) |
Type | |
Date created | Date and time of creation of the biometric sample of the face or body from the identification event |
Label | Label name. Labels are the rules by which the comparison is made |
Additional information | “Face” type — “Information”, “Lists”, “External ID”. “Event” type — “Handling policy”, “Source” with the ability to go to the handling policy editing page and view the stream from the source in real time |
 | When clicked, the face or event details open |
The external ID is used to integrate LUNA PLATFORM 5 UI with external systems, to transfer data to other systems in order to analyze and quickly respond to an event.
Events archive section#
The “Events archive” section is designed to display all face and body detection and recognition events, as well as to search for events in the archive.
Receiving and displaying new events in the event archive is performed with minimal delays in near real-time.
General view of the “Events archive” section is shown below (Figure 8).
If there are no filters applied (1), the Interface displays the latest detection and identification events identical to those presented in the “Last events” section, as well as events created earlier.
The number of events displayed on the page is set by the switch in the lower right corner of the page. There can be 10, 25, 50 or 100 events in total on one page (2).
The displayed data is identical to the data in the “Last events” section.
If no filters are set, only events from the last 30 days are displayed.
Click on a line to open a page with event details.
Click on a reference photo of a face from the control list to open a page with face details.
Archived events filtering#
The Interface allows you to filter archived events (1 in Figure 8) to find and display necessary events.
With filters (Figure 9), the user can quickly find an event among the archived events, as well as set a limit for displaying new events on the screen.
A short description of the elements and parameters of the block with filters in the "Events archive" section is presented in the table (Table 3).
Table 3. Filters available to the user to search for archived events:
Name | Description |
---|---|
General | |
Date from | Start of the search period by date and time of the event |
Date to | End of the search period by date and time of the event |
Source | Event source that recorded the event. Select one or more sources from the list of available ones |
Handling policies | Handling policy names, according to which the face or body in the image was processed. One or several handling policies can be selected for searching |
Tags | Selection of one or more tags. For example, the "Temperature"* tag is intended for displaying information about human body temperature and filtering events by temperature |
Event ID | Identifiers of detection and attribute extraction events. Values are separated by commas; for the correct search, they must be specified in full |
External events ID | External identifiers of events. Values are separated by commas; for the correct search, they must be specified in full |
Labels | |
Label | Label name (labels are rules by which the comparison is made) |
Similarity, % | Lower and/or upper limits of similarity for displaying faces identified by the lists |
ID of objects with maximum match result | The ID of the top similar object (event or face) from matching results (match policy). Values are separated by commas; for the correct search, they must be specified in full |
Face attributes and properties | |
Gender | Gender of a person to be detected |
Age category | Lower and/or upper limits of age of a person to be detected |
Emotion | Emotion of a person to be detected. A combination of several values is possible |
Mask | Indication of the presence of a mask. A combination of several values is possible |
Liveness | Liveness status selection. A combination of several values is possible |
Deepfake** | Deepfake status selection. A combination of several values is possible |
Face IDs from events | Face IDs of persons that are created in the LUNA PLATFORM 5 system as a result of a detection event and extraction of attributes. Values are separated by commas; for the correct search, they must be specified in full |
Body attributes and properties | |
Upper body colors | Top clothing color specification. A combination of several values is possible |
Lower body type | Bottom clothing type specification. A combination of several values is possible |
Lower body colors | Bottom clothing color specification. A combination of several values is possible |
Shoes color | Shoe color specification. A combination of several values is possible |
Headwear | Headdress specification. A combination of several values is possible |
Headwear colors | Headdress color specification. A combination of several values is possible |
Backpack | Backpack presence specification. A combination of several values is possible |
Sleeve | Sleeve length specification. A combination of several values is possible |
Gender by body | Gender of a person to be detected, determined by the image of a body. A combination of several values is possible |
Age category by body | Age range of a person to be detected, determined by the image of a body |
Location | |
City; Area; District; Street; House number; Longitude (-180…180) with accuracy (0…90); Latitude (-90…90) with accuracy (0…90) | Event location |
Other | |
Comma-separated track IDs | Specifying track IDs |
Add filter by meta*** | Allows you to fill in a set of blocks to create a filter by the "meta" field. The number of meta filters is unlimited |
*Color coding of temperature values:
- increased temperature values will be marked in yellow;
- abnormal temperature values will be marked in red;
- normal temperature values will be marked in green.
See the LUNA Access documentation for more information on setting of temperature ranges.
**Deepfake license required
***For advanced users
The user selects one filter or a combination of filters and clicks the “Filter” button to apply them.
To reset the applied filters, click on the “Reset” button. To collapse Filters, click on the filter icon on the right side of the page.
Search section#
The “Search” section is designed to search by photo, event (event ID), and face (face ID: “External ID”, “Face ID”). This section displays all face and body detection and recognition events that match the search conditions. General view of the “Search” section is shown below (Figure 10).
The “Search” section contains the following blocks:
- Search options:
- “Photo” — search by uploaded photo image:
- field for uploading a photo image;
- “Event” — search by registered event in the system:
- “Event ID” — identifier of the event of detection and attribute extraction;
- “Face” — search by registered face in the system:
- “External ID” — external face identifier;
- “Face ID” — face identifier that is created in the LUNA PLATFORM 5 system as a result of a detection event and attribute extraction;
- Searching results:
- “Events”:
- “Display search image” checkbox — disable if you need to hide the column with the original photo;
- “Search result”;
- “Search image”;
- “Details”;
- “Faces”:
- “Display search image” checkbox — disable if you need to hide the column with the original photo;
- “Search result”;
- “Search image”;
- “Details”;
- Filters.
A short description of the elements and parameters of the filter block in the "Search" section is presented in the table (Table 4).
Table 4. “Filters” block elements description
Name | Description |
---|---|
Search events | |
Date from | Start of the search period by date and time of the event |
Date to | End of the search period by date and time of the event |
Source | Event source that recorded the event. Select one or more sources from the list of available ones |
Handling policies | Handling policy names, according to which the face or body in the image was processed. One or several handling policies can be selected for searching |
Liveness | Liveness status selection. A combination of several values is possible |
Tags | Selection of one or more tags. For example, the "Temperature"* tag is intended for displaying information about human body temperature and filtering events by temperature |
Event ID | Identifiers of detection and attribute extraction events. Values are separated by commas; for the correct search, they must be specified in full |
External events ID | External identifiers of events. Values are separated by commas; for the correct search, they must be specified in full |
Similarity is not less than, % | The similarity value is not lower than the specified one, in percent |
Labels | |
Label | Label name (labels are rules by which the comparison is made) |
ID of objects with maximum match result | The ID of the top similar object (event or face) from matching results (match policy). Values are separated by commas; for the correct search, they must be specified in full |
Face attributes and properties | |
Gender | Gender of a person to be detected |
Age category | Lower and/or upper limits of age of a person to be detected |
Emotion | Emotion of a person to be detected. A combination of several values is possible |
Mask | Indication of the presence of a mask. A combination of several values is possible |
Liveness | Liveness status selection. A combination of several values is possible |
Deepfake** | Deepfake status selection. A combination of several values is possible |
Face IDs from events | Face IDs of persons that are created in the LUNA PLATFORM 5 system as a result of a detection event and extraction of attributes. Values are separated by commas; for the correct search, they must be specified in full |
Body attributes and properties | |
Upper body colors | Top clothing color specification. A combination of several values is possible |
Lower body type | Bottom clothing type specification. A combination of several values is possible |
Lower body colors | Bottom clothing color specification. A combination of several values is possible |
Shoes color | Shoe color specification. A combination of several values is possible |
Headwear | Headdress specification. A combination of several values is possible |
Headwear colors | Headdress color specification. A combination of several values is possible |
Backpack | Backpack presence specification. A combination of several values is possible |
Sleeve | Sleeve length specification. A combination of several values is possible |
Gender by body | Gender of a person to be detected, determined by the image of a body. A combination of several values is possible |
Age category by body | Age range of a person to be detected, determined by the image of a body |
Location | |
City; Area; District; Street; House number; Longitude (-180…180) with accuracy (0…90); Latitude (-90…90) with accuracy (0…90) | Event location |
Other | |
Comma-separated track IDs | Specifying track IDs |
Add filter by meta*** | Allows you to fill in a set of blocks to create a filter by the "meta" field. The number of meta filters is unlimited |
Search faces | |
Date from | Beginning of the search period by date and time of face creation |
Date before | End of the search period by date and time of face creation |
External events ID | External event identifiers. Values are separated by commas; for the correct search, they must be specified in full |
Similarity is not less than, % | The similarity value is not lower than the specified value, in percent |
Lists | Selecting a list in which to search for a face |
User data | Information about the person from the database (if available) |
*Color coding of temperature values:
- increased temperature values will be marked in yellow;
- abnormal temperature values will be marked in red;
- normal temperature values will be marked in green.
See the LUNA Access documentation for more information on setting of temperature ranges.
**Deepfake license required
***For advanced users
To search by face or body image, select the "Photo" section, click on the field to upload an image from your computer, or drag and drop a photo into this field.
Image file requirements:
- *.jpeg, *.png or *.bmp format;
- image size no less than 320x250 and no more than 3840x2160 pixels;
- image may contain one or more people;
- image must have a person's face or body.
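As an aid, the sketch below shows a client-side pre-check of these requirements before uploading. It relies on the third-party Pillow library and is purely illustrative; the Interface performs its own validation on upload.

```python
# Illustrative pre-check of an image against the upload requirements listed above.
# Requires the Pillow library (pip install Pillow); the Interface still validates
# the file on upload, so this check is optional.
from PIL import Image

ALLOWED_FORMATS = {"JPEG", "PNG", "BMP"}
MIN_WIDTH, MIN_HEIGHT = 320, 250      # minimum allowed resolution
MAX_WIDTH, MAX_HEIGHT = 3840, 2160    # maximum allowed resolution

def check_image(path: str) -> list:
    """Return a list of problems; an empty list means the file meets the requirements."""
    problems = []
    with Image.open(path) as img:
        if img.format not in ALLOWED_FORMATS:
            problems.append(f"unsupported format: {img.format}")
        width, height = img.size
        if width < MIN_WIDTH or height < MIN_HEIGHT:
            problems.append(f"image too small: {width}x{height}")
        if width > MAX_WIDTH or height > MAX_HEIGHT:
            problems.append(f"image too large: {width}x{height}")
    return problems

print(check_image("photo.jpg"))
```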
When loading a photo image containing multiple faces and/or bodies, the Interface detects all faces and/or bodies in the image, displays them to the right of the loaded photo image, and shows the number of detected faces and/or bodies (Figure 11). To reset the image, click the reset icon.
To find similar faces or events with the face or body, select one face or body by clicking on it on the uploaded photo image. Then, in the filter block, select the necessary search options and click "Filter". To reset the parameter values, click the "Reset" button. The search results will be displayed at the bottom of the page (Figure 12).
The description of the elements of the search results block is presented in the tables (Table 5 and 6).
Table 5. Elements and parameters of the search results, if "Search events" is selected in the block with filters:
Name | Description |
---|---|
Search result | The face and/or body of a person from the event that is the most similar to the one selected in the uploaded photo |
Search image | Normalized photo image of a person's face or body from the uploaded photo. Displayed if the “Display search image” checkbox is enabled |
Details | |
Face attributes, if a face is selected as the search object | |
Body attributes, if a body is selected as the search object | |
 | Go to the “Event details” page |
Table 6. Elements and parameters of the search results, if "Search faces" is selected in the block with filters:
Name | Description |
---|---|
Search result | The face of a person from the event that is the most similar to the one selected in the uploaded photo |
Search image | Normalized photo image of a person's face from the uploaded photo. Displayed if the “Display search image” checkbox is enabled |
Details | |
 | Go to the “Face details” page |
Faces section#
The “Faces” section is intended for viewing, creating and deleting faces. The general view of the “Faces” section is presented below (Figure 13).
The “Faces” section contains the following elements:
- List of faces (1):
- Photo of the face;
- Information, such as the temperature of the person whose face is in the photo;
- Date and time of creation of the face;
- Lists that contain the face;
- Face ID;
- Filters (2);
- Button to open the face creation form (3).
Click on the filter icon to find events by (Figure 14):
- Lists;
- Date of creation;
- External ID;
- Face ID;
- Information.
Click on a face from the list of faces to go to the face details.
Creating face#
To add a new face, click on the “Create face” button in the upper right corner of the section page. The general view of the window for creating a face is presented below (Figure 15).
Enter the required information:
- Field for uploading a photo of a person — avatar (required). The photo may contain one or several faces. If there is more than one face in the photo, then after uploading you have to select which face to add to the list/lists — the selected face will be highlighted with a green frame. You can add more than one person via batch import;
- “Information” — information about the person;
- “External ID” — external identifier of the person;
- “Lists” — the name of the list to which the person will be added (multiple lists can be selected);
- “Check photo image quality for compliance with the ISO/IEC 19794-5:2011 standard” — the photo will be added to the list only after passing the ISO/IEC 19794-5:2011 verification.
— button for resetting the uploaded photo image.
Image file requirements:
- *.jpeg, *.png or *.bmp format;
- image size no more than 15 MB and no more than 3840x2160 pixels;
- image may contain one or more people;
- image must have a person's face.
Fill in the fields and click the “Save” button. A message about the successful face creation will appear on the screen.
Face details#
To open the page with face details click on the arrow button on the face image from the "Top match" column in the "Last events" section or in the event details. The "Face details" page consists of two blocks (Figure 16).
Descriptions of page elements are presented below (Table 7).
Table 7. Elements and parameters of the face details page
Name | Description |
---|---|
Photo image of a face | Avatar — a biometric sample that is created when uploading a photo image to the list (to the LUNA PLATFORM 5 system); when clicked on … |
 | When clicked, a search for faces by face ID is performed in a new tab |
Update photo | Opening the form to upload a new face photo image |
Edit face user data | Opening a form for editing face data (“Information”, “External ID”, “Lists”) |
Delete face | Removal of the face biometric sample, face photo image, and face details |
Date created | Date and time of creation of the biometric sample |
Information | User data from the database, linked to a face (upon availability) |
External ID | External identifier of the face |
Lists (N) | The list and number of lists to which the person is attached. Clicking on the name opens the list |
Additional information | “Face ID” — when clicked on … |
Face attributes | “Attributes”: “Gender” — gender of a person (male/female); “Age category” — indication of the age category. Hover the cursor over the card to find out the exact age of the person determined from the face image |
Additional properties | Face properties: … |
Faces with the same external ID | Could be empty (“No external ID”) |
Photo image of a face | Avatar, sample, or no photo |
Date created | Date and time of creation of the biometric sample |
Information | User data from the database, linked to a face (upon availability) |
Lists (N) | The list and number of lists to which the person is attached. Clicking on a name opens a list |
 | When clicked, face details open |
View all faces with the same external ID | When clicked, the search for faces by external ID is performed, and a list of all faces whose external ID matches the one of the reference photo opens |
Last events with this face | Could be empty (“No events with this face”) |
Photo image of a face from a video stream | Normalized image. Similarity value of the identified face with the face from the control list (in percentage) |
Date created | Date and time of recording the event with a face |
Event source | The name of the event source that recorded the event with a face. When clicked on the name of the event source, a preview of the video stream in real time opens |
Handling policy | Name of the policy that processed the image in the video stream. Clicking on the name of the policy opens the form for editing its parameters |
 | When clicked, the event details open |
View all events with this face | When clicked, an events archive page with the maximum match result for this person opens |
Editing and deleting face#
Click the “Update photo” button to update the photo image on the page with face details. The general view of the photo image update form is shown below (Figure 17).
Image file requirements:
- *.png , *.jpeg, or *.bmp format;
- image size no more than 15 MB and no more than 3840x2160 pixels;
- image may contain one or more people;
- image must have a person's face.
Click the “Edit face user data” button to edit the face user data. The general view of the face user data editing form is shown below (Figure 18).
Face user data editing form contains:
- “User data” — information from the database, linked to a face (upon availability);
- “External ID” — external identifier of the face;
- “Lists” — lists to which the face is attached;
- “Save” button — button for saving changes.
If you need to go back to the page with face details during editing, press the Esc key on your keyboard.
Click the “Delete face” button to delete the face along with its data. Confirm the action in the pop-up window—click the “Delete” button or cancel the action using the “Cancel” button (Esc key on the keyboard). After successful removal, a corresponding notification will appear.
Lists section#
The “Lists” section is intended for creating, deleting, editing, and viewing lists.
The general view of the “Lists” section is shown below (Figure 19).
“Lists” section contains the following elements:
- table of lists:
- checkbox — selection of a list or lists;
- “Name” — name of the list;
- “Date created” — date and time when the list was created;
- “Date modified” — date and time when the list was last modified;
- button for counting the number of faces in the list (1);
- button for editing the list name (2);
- button for detaching all faces from the selected list and deleting the selected list (3);
- button for deleting all faces in the selected list (4);
- button for deleting the list with faces (5);
- “Add” button — button for creating a list;
- “Delete without faces” button — button for removing all faces from the list and deleting the list;
- “Delete with faces” button — button for deleting the list with the faces in it;
- the number of lists displayed on the page is set by the switch in the lower right corner of the page. There can be 10, 25, 50 or 100 lists in total on one page (5).
In the table with lists, it is possible to sort by the columns “Name”, “Date created” and “Date modified”. To sort a column in the table, click on the column name.
The sorting arrow icon indicates the current sorting by one of the parameters: alphabetically, ascending, or descending.
List creation#
To create a list, click on the “Add” button in the lower left corner of the page.
The general view of the form for creating a list is shown below (Figure 20).
Enter a name for the list and click on the “Save” button. A message about the successful list creation will appear on the screen as well as the new list will appear in the table of lists.
Adding faces to the list#
To add a face to the list, click on the line with the name of the list to which you want to add the face. The form for editing the list will open (Figure 21).
To add a face to the list, click on the “Add” button. A form for adding a face will open (Figure 22).
Enter the required information:
- Field for uploading a photo of a person — avatar (required). The photo may contain one or several faces. If there is more than one face in the photo, then after uploading you have to select which face to add to the list/lists — the selected face will be highlighted with a green frame. You can add more than one person via batch import;
- “Information” — information about the person;
- “External ID” — external identifier of the person;
- “Lists” — the name of the list to which the person will be added (multiple lists can be selected);
- “Check photo image quality for compliance with the ISO/IEC 19794-5:2011 standard” — the photo will be added to the list only after passing the ISO/IEC 19794-5:2011 verification.
— button for resetting the uploaded photo image.
Image file requirements:
- *.jpeg, *.png or *.bmp format;
- image size no more than 15 MB and no more than 3840x2160 pixels;
- image may contain one or more people;
- image must have a person's face.
Fill in the fields and click on the “Save” button. A message about the successful face adding will appear on the screen.
The form for list editing allows searching for faces by user data (the search is performed among faces containing the specified information), external ID, or creation date using the quick search line.
The added faces will be displayed in the form for list editing (Figure 23).
The number of faces in this list is displayed next to the list name.
In the table with faces, it is possible to sort by the columns “User data”, “External ID” and “Date created”. To sort a column in the table, click on the column name.
The sorting arrow icon indicates the current sorting by one of the parameters: alphabetically, ascending, or descending. Click on a line to open the page with face details.
The number of faces displayed on the page is set by the switch in the lower right corner of the page. There can be 10, 25, 50 or 100 faces in total on one page.
To edit a face in the list, click on the button in the line with that face.
To detach a face from the list, click on the button in the line with that face.
To delete a face from the list, click on the button in the line with that face. To delete multiple faces from the list, select those faces and click on the “Delete” button. In the pop-up window (Figure 24), confirm the action—click on the “Delete” button or cancel the action by clicking on the “Cancel” button. Once a face or faces were successfully deleted from the list, a corresponding notification appears.
You can also delete multiple faces by creating a task for deleting faces from the list.
List editing#
Editing the name of the list is performed by clicking on the button in the line (Figure 19). The general view of the form for editing the list name is shown below (Figure 25).
Change the name of the list and click on the “Save” button. A notification about successful list editing appears.
List deleting#
Deleting the list with faces is performed by clicking on the button (Figure 19).
To delete multiple lists, select those lists. Then click on the “Delete with faces” button, if you need to delete both the list and the faces in it, or click the “Delete without faces” button if you want to delete only the list. To detach all faces from the selected lists and delete the lists, check the boxes for the names of these lists, and click the button (Figure 26).
In the pop-up window (Figure 27), confirm the action — click on the “Delete” button or cancel the action by clicking on the “Cancel” button. A corresponding notification appears after successful list deletion.
Handling policies section#
The “Handling policies” section is intended for creating, deleting, viewing policies, and editing their parameters.
Handling policies (handlers) can be static or dynamic.
If the handler is static, its parameters are specified when creating the handler.
If the handler is dynamic, then you can change its parameters when generating an event. To do this, create a “generate events” request with a specific content type (see the API Reference Manual of the LUNA PLATFORM 5 documentation). In a dynamic handler, the administrator can allow users to specify parameters that change with each request, while other technical parameters can be set separately and left hidden from the user. With a static handler, the administrator would have to create a new handler for each new task.
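A hypothetical sketch of such a request is shown below. The endpoint path, multipart part names, and policy fields are assumptions made for illustration; the exact content type and request layout are defined in the API Reference Manual of the LUNA PLATFORM 5 documentation.

```python
# Hypothetical "generate events" request to a dynamic handler, passing processing
# policies together with the image. The URL, part names, and policy fields are
# assumptions; see the API Reference Manual for the actual request format.
import json
import requests

API_URL = "http://127.0.0.1:5000/6/handlers/<handler_id>/events"  # assumed path

policies = {
    # parameters that would otherwise be fixed in a static handler
    "detect_policy": {"detect_face": 1, "estimate_emotions": 1},
    "extract_policy": {"extract_descriptor": 1},
}

with open("frame.jpg", "rb") as image_file:
    files = {
        "image": ("frame.jpg", image_file, "image/jpeg"),
        "policies": (None, json.dumps(policies), "application/json"),
    }
    response = requests.post(API_URL, files=files, timeout=30)

response.raise_for_status()
print(response.json())  # the generated event(s)
```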
The general view of the “Handling policies” section is presented below (Figure 28).
“Handling policies” section contains the following elements:
- table of policies:
- “Description”—policy name;
- “Handling policy ID”—policy identifier;
- “Handler type”—static or dynamic policy;
- button for editing policy parameters (1);
- button for deleting the policy (2);
- “Add static” button—button for creating a static handling policy;
- “Add dynamic” button—button for creating a dynamic handling policy;
- the number of policies displayed on the page is set by the switch in the lower right corner of the page. There can be 10, 25, 50 or 100 policies in total on one page (3).
Policy creation#
Static policy creation#
To create a static policy, click on the “Add static” button (Figure 28). A form will open to select how to create the static policy (Figure 29):
- preconfigured typical policy templates (policies 1–6);
- step by step custom policy (“Other”).
To quickly create simple static policies, use one of the typical policy templates.
Six standard templates are available:
- “Policy 1. Registration of a reference descriptor (with saving to a list)”—allows user to detect a face on the frame, check Liveness, and save the face to a specified list;
- “Policy 2. Biometric identification of faces (without saving to a list)”—allows user to detect all faces in the frame and compare them with all faces in the specified list;
- “Policy 3. Saving the faces identified in the list to the database”—allows user to detect all faces on the frame, check Liveness, compare detected faces with all faces in the specified list, and if the comparison is successful, save the face to the specified list;
- “Policy 4. Determination of attributes and properties of a face without identification (gender, age, emotions, etc.)”—allows user to detect all faces in the frame, perform all possible checks, and save the event;
- “Policy 5. Saving events for unique faces for later counting”—allows user to detect all faces in the frame, check Liveness, compare the detected faces with all faces in the list of unique faces, and if this face is not in the list, save the face to this list of unique faces;
- “Policy 6. Registration of a reference descriptor with verification of compliance of the photo with the requirements of biometric standards”*—allows user to save the reference descriptor in a specific list only for those photos that have been verified in accordance with the biometric standards.
* The following checks are missing in the beta version:
- it is not allowed to use retouching and image editing;
- image cropping is allowed;
- compression code: JPEG (0 x 00), PNG (0 x 03).
When user clicks on a line with a standard template (policies 1–6), a window opens for entering the main parameters of a preconfigured policy (Figure 30).
Fill in all the required parameters and click on the “Create” button. A window will open with a message about the successful creation of the policy (Figure 31).
Click anywhere outside the successful static policy generation message to navigate to the “Select the type of policy you want to create” form (Figure 29).
To create a unique static policy that requires detailed parameter settings, use the step-by-step custom policy.
When user clicks on the line with a step-by-step custom policy (“Other”), a form for step-by-step static policy creation will open (Figure 32).
Fill in all the required parameters and click on the “Next” button to proceed to the next step. After setting all the parameters, a window with a message about the successful creation of the policy will open.
Dynamic policy creation#
To create a dynamic policy, click on the “Add dynamic” button on the page with the list of policies (Figure 28). In the opened window, enter the name of the new dynamic policy and click “Save” (Figure 33). If you need to go back to the page with the list of handlers during creating a handler, press the Esc key on your keyboard.
Policy editing#
Static policy editing#
The general view of the static policy editing form is shown below (Figure 34).
Description of the parameters of the static policy editing form is given in the tables below (Tables 8–15).
Table 8. Parameters of the policy editing form: general parameters and determined attributes
Parameter | Description | Default value |
---|---|---|
General | | |
Policy name* | Specifies the name that will be displayed in the list of other policies | - |
Determined attributes | | |
Detect face | Face detection in photo images. When enabled, the "Face descriptor" and "Basic attributes (gender, age)" options become available | Off |
Estimate people count | Counts the number of people in the frame | Off |
Face descriptor | Image processing and creation of a data set in a closed, binary format using a special extraction algorithm. When the attribute is enabled, the options “Labels”, “Save descriptor in database”, “Save face to database”, “Attach face to list”, “Save event in cases where a face was found”, and “Display event in cases where a face was found” become available | On |
Basic attributes (gender, age) | Assessment of the basic attributes of a person in the image. When the attribute is enabled, the “Save if” and "Call only in cases" options become available | On |
Head position | Assessment of the head position (angles of inclination and rotation of the head left/right and up/down). When the attribute is enabled, the options “Discard face images with head rotation/tilt angle above” become available | On |
Emotion | Determination of the dominant emotion (anger, disgust, fear, happiness, neutral, sadness, surprise) | Off |
Mask | Assessment of the presence or absence of a medical mask or mouth covering. When the attribute is enabled, the filter “Process images only if detected” becomes available | Off |
Image quality | Determination of quality (the presence of overexposure, blurring, underexposure, the presence of glare on the face, uneven lighting) | On |
Eye direction | Assessment of the direction of a person's gaze in the image | Off |
Eye status | Evaluating whether a person's eyes are open or closed in the image, as well as determining key points of the irises of the eyes | Off |
Mouth status | Closed or occluded mouth detection and smile detection | Off |
Perform Liveness check | Enabling the Liveness check | Off |
Position of 68 feature points of the face | Determination of 68 feature points of the face (requires additional time for calculations; used to determine emotions, eye direction, or the Liveness check) | Off |
EXIF metadata | Defining image metadata | Off |
Detect body | Body detection in photo images | Off |
Body descriptor | Image processing and creation of a data set in a closed, binary format using a special extraction algorithm. When the attribute is enabled, the “Labels” option becomes available | On |
Body basic attributes | Gender and age estimation based on body silhouette | |
Upper body attributes based on body silhouette | Estimation of headwear, upper body clothing color, and sleeve length | |
Lower body attributes based on body silhouette | Estimation of lower body clothing type and shoe color | |
Accessories | Estimation of the presence or absence of a backpack | |
Table 9. Parameters of the policy editing form: Deepfake check **
Parameter | Description | Default value |
---|---|---|
Perform Deepfake check | Determination of digital manipulations that convincingly replace one person's likeness with that of another | Off |
Discard images of faces with a Deepfake score below the specified value | Ignoring images with a Deepfake score below the specified value. Possible values: from 0 to 1, where 1 is a real person and 0 is a fake | 0.5 |
Use specified Deepfake mode | Possible values: … The choice of mode determines which set of neural networks performs photo processing for the Deepfake check. For more information about the neural networks used in the Deepfake verification modes, contact VisionLabs technical support | Mode 2 |
** Deepfake license required. Deepfake check is not performed on normalized (centered and cropped) images after face detection.
Table 10. Parameters of the policy editing form: image quality check
Parameter |
Description |
Default value |
---|---|---|
Perform face image quality check |
||
Image format |
Must be saved in .jpeg or .png format (correct verification). Possible values:
|
JPEG; PNG JPEG2000; |
Image size in Mb |
This assessment determines the size of the image in bytes. It also compares the estimated value with the specified threshold |
5120: 2097152 |
Image width in pixels |
This assessment determines the width of the image in pixels. It also compares the estimated values with thresholds (according to ISO or custom thresholds) |
180:1920 |
Image height in pixels |
This assessment determines the width of the image in pixels. It also compares the estimated values with thresholds (according to ISO or custom thresholds) |
180:1080 |
Image aspect ratio |
This assessment determines the proportional ratio of the image width to height. It also compares the estimated value with the specified threshold |
0.74:0.8 |
Degree of illumination uniformity |
It is possible to evaluate the uniformity of illumination according to the requirements specified in the ICAO standard. It also compares the estimated value with the specified threshold (correct verification) |
0.3:1 |
Degree of image specularity |
Bright light artifacts and flash reflection from glasses are not allowed (indirect verification) |
0.3:1 |
Degree of image blurriness |
The pixel colors of front-type photo images must be represented in the 24-bit RGB color space, in which each pixel has 8 bits for each color component: red, green, and blue (indirect verification) |
0.61:1 |
Degree of absence of underexposure in the photo |
An underexposure assessment is available. It also compares the estimated value with the specified threshold |
0.5:1 |
Degree of absence of overexposure in the photo |
An overexposure assessment is available. It also compares the estimated value with the specified threshold |
0.57:1 |
Face illumination uniformity |
It is possible to evaluate the uniformity of illumination according to the requirements specified in the ICAO standard. The face should be evenly lit so that there are no shadows or glare on the face image. It also compares the estimated value with the specified threshold (correct verification) |
0.5:1 |
Skin tone dynamic range |
This assessment is a determination of the ratio of the brightness of the lightest and darkest areas of the face according to the requirements specified in the ICAO standard. It also compares the estimated value with the specified threshold (correct verification) |
0.5:1 |
Degree of uniformity of the background |
This assessment determines the degree of background uniformity from 0 to 1, where:
|
0.5:1 |
Degree of lightness of the background |
This rating determines the degree of background brightness from 0 to 1, where:
|
0.5:1 |
Presence of radial distortion (Fisheye effect) |
Possible values: No—the Fisheye effect is not present in the image; Yes—the Fisheye effect is present in the image |
No |
Type of image color based on face |
Possible values: Color; Grayscale; Infrared—near-infrared |
Color |
Shoulders position |
This assessment determines the position of the shoulders if they are in the frame: Parallel; Non-parallel; Hidden |
Parallel |
Face width in pixels |
This assessment determines the width of the face in pixels. It also compares the estimated value with the specified threshold |
180:1920 |
Face height in pixels |
This assessment determines the height of the face in pixels. It also compares the estimated value with the specified threshold |
180:1080 |
Face offset from the top edge of the image in pixels |
The image must contain a full front view of the person's head, including the left and right ear (if person has any), the top point of the forehead area and the chin (correct verification) |
20:50 |
Face offset from the bottom edge of the image in pixels |
The image must contain a full front view of the person's head, including the left and right ear (if person has any), the top point of the forehead area and the chin (correct verification) |
20:50 |
Face offset from the left edge of the image in pixels |
The image must contain a full front view of the person's head, including the left and right ear (if person has any), the top point of the forehead area and the chin (correct verification) |
20:50 |
Face offset from the right edge of the image in pixels |
The image must contain a full front view of the person's head, including the left and right ear (if person has any), the top point of the forehead area and the chin (correct verification) |
20:50 |
Head yaw angle |
Head rotation should be no more than 5° from the frontal position (correct verification) |
-5:5 |
Head pitch angle |
The image must contain a full front view of the person's head, including the left and right ear (if person has any), the top point of the forehead area and the chin (correct verification). The tilt of the head should be no more than 5° from the frontal position (correct verification) |
-5:5 |
Head roll angle |
The image must contain a full front view of the person's head, including the left and right ear (if person has any), the top point of the forehead area and the chin (correct verification). The inclination of the head should be no more than 8° from the frontal position (correct verification) |
-8:8 |
Gaze yaw angle |
This assessment determines the direction of gaze (yaw) |
-5:5 |
Gaze pitch angle |
This assessment determines the direction of gaze (pitch) |
-5:5 |
Probability of smile presence |
The facial expression must be neutral (indirect verification). |
0:0.5 |
Probability of mouth occlusion |
It is not allowed to cover the face with hair or foreign objects along the entire width, from the eyebrows to the lower lip (indirect verification) |
0:0.5 |
Probability of open mouth presence |
This assessment determines the state of the mouth. The mouth must be closed (correct verification) |
0:0.5 |
Smile properties |
This assessment determines the state of the mouth. The facial expression must be neutral (indirect verification). Possible values: None—smile is not found; Smile with closed mouth; Smile with teeth |
None |
Glasses |
Sunglasses are not allowed (correct verification). Possible values: Sunglasses; Eyeglasses; No glasses |
No glasses |
Left eye state |
Both eyes are open normally for the respective subject (considering behavioral factors and/or medical conditions, correct verification). It is not allowed to cover the face with hair or foreign objects along the entire width, from the eyebrows to the lower lip (indirect verification). Possible values: Open; Closed; Occluded |
Open |
Right eye state |
Both eyes are open normally for the respective subject (considering behavioral factors and/or medical conditions, correct verification). It is not allowed to cover the face with hair or foreign objects along the entire width, from the eyebrows to the lower lip (indirect verification). Possible values: Open; Closed; Occluded |
Open |
Red eyes effect presence |
Possible values: No—there is no red-eye effect; Yes—there is a red-eye effect |
No |
Distance between eye centers in pixels |
The image must contain a full front view of the person's head, including the left and right ear (if person has any), the top point of the forehead area and the chin (correct verification). The distance between the centers of the eyes must be at least 120 pixels or at least 45 pixels in accordance with paragraph 12 of the procedure for placing and updating biometric personal data in a unified biometric system (correct verification) |
90:100 |
Horizontal head size relative to image size |
This assessment determines the horizontal head size relative to the image size. It also compares the estimated values with thresholds (according to ISO or custom thresholds) |
0.5:0.75 |
Vertical head size relative to image size |
This assessment determines the vertical head size relative to the image size. It also compares the estimated values with thresholds (according to ISO or custom thresholds) |
0.6:0.9 |
The position of the center point of the face horizontally relative to the image |
This assessment determines the horizontal position of the center point relative to the image. It also compares the estimated values with thresholds (according to ISO or custom thresholds) |
0.45:0.55 |
The position of the center point of the face vertically relative to the image |
This assessment determines the vertical position of the center point relative to the image. It also compares the estimated values with thresholds (according to ISO or custom thresholds) |
0.3:0.5 |
Eyebrows state |
The facial expression must be neutral (indirect verification). Possible values: Neutral; Raised; Squinting; Frowning |
Neutral |
Headwear type |
Possible values: None; Baseball_cap; Beanie; Peaked_cap; Shawl; Hat with earflaps; Helmet; Hood; Hat; Other |
None |
Presence of natural lighting |
The face should be evenly lit so that there are no shadows or glare on the face image (correct verification). Possible values: No—the lighting is unnatural; Yes—the lighting is natural |
Yes |
Table 11. Parameters of the policy editing form: filters
Parameter |
Description |
Default value |
---|---|---|
Perform face image quality assessment |
||
Filters |
||
Discard images with multiple faces |
Determination of images containing multiple faces. Possible values: Select only one face of the best quality—process an image containing several faces, but detect only a face of the best quality; Do not discard—detect all faces in the image; Discard—ignore an image containing multiple faces |
Do not discard |
Reject descriptors with quality below the specified threshold |
Ignoring low-quality images. To use this filter, descriptor extraction must be enabled in the determined attributes |
0.5 |
Process images only if detected |
Possible values: Missing—the event is created when there is no overlap of the face by the medical mask (no mask); Occluded—the event is created in case of detection of face overlapping; Medical mask—the event is created when a medical mask is detected on the face. Several filter values can be specified. Available only when defining the “Medical mask” attribute |
- |
Discard face images with head rotation angle (to the left or right, yaw) above |
Ignoring images in which the person's head is turned to the left or right at too large an angle—no information will be extracted when detecting a face and evaluating the angle of head rotation. Available only if the “Head position” attribute is enabled |
30 |
Discard face images with head tilt angle (to the left or right, roll) above |
Ignoring images in which a person's head is tilted to the left or right at too large an angle—no information will be extracted during face detection and head tilt evaluation. Available only if the “Head position” attribute is enabled |
40 |
Discard face images with head tilt angle (up or down, pitch) above |
Ignoring images in which the person's head is tilted up or down at too large an angle—no information will be extracted during face detection and head tilt evaluation. Available only if the “Head position” attribute is enabled |
30 |
Discard images of faces with a Liveness score below the specified |
Ignoring images with a Liveness score below the specified value. Possible values: from 0 to 1. Available only if the “Perform Liveness check” attribute is enabled |
0.5 |
Discard images of faces with the Liveness quality lower than the specified |
Ignoring images with a Liveness quality lower than the specified. Possible values: from 0 to 1. Available only if the “Perform Liveness check” attribute is enabled |
0.5 |
Process images of faces only with Liveness states |
Processing images with Liveness status: Spoof—the absence of a “live” person in the frame; Real—the presence of a “live” person in the frame; Unknown. Available only if the “Perform Liveness check” attribute is enabled |
- |
Process images of faces only with Deepfake states |
Processing images with Deepfake status: Fake—the absence of a “live” person in the frame; Real—the presence of a “live” person in the frame. Available only if the “Perform Deepfake check” attribute is enabled |
- |
Filter images based on face image quality assessment results |
Filter images according to the parameters set in the "Perform face image quality assessment" setting that comply with ISO/IEC 19794-5:2011 and ICAO. Available only when the parameter “Perform face image quality assessment*” is enabled |
Off |
Table 12. Parameters of the policy editing form: labels
Parameter |
Description |
Default value |
---|---|---|
Labels |
||
Label name |
Specify the name that will be displayed in the policy settings, including the parameters for creating and saving an image/descriptor/event/face, adding a tag |
- |
Identify among |
Searching for a detected person for identification among those created in the database:
|
Faces |
Search for a descriptor |
Among the events created in the database, search for a descriptor:
Only for "Identify among events" |
Faces |
Perform search by |
|
- |
Each filled field imposes a search restriction—the comparison will be successful only if all the search conditions are met |
||
Location (only for “Identify among events”) |
“District”; “Area”; “City”; “Street”; “House number”; “Longitude (-180…180)”; “Accuracy (0…90)”; “Latitude (-90…90)”; “Accuracy (0…90)” |
- |
Filter search result by |
“Gender”—specifies the gender for which the face comparison is performed; “Age category”—specifies the lower and/or upper age limits of the face for comparison; “Liveness”—specifies the Liveness state (Spoof, Real, or Unknown) |
- |
Additional search parameters |
“The maximum number of similar ones in the search results”; “Accuracy threshold”—a value from 0 to 1 |
- |
Table 13. Parameters of the policy editing form: save parameters
Parameter |
Description |
Default value |
---|---|---|
Save parameters |
||
Save face sample |
Saving the face sample in the LUNA PLATFORM 5 database without creating a face. If enabled, images are saved unconditionally in the database. For selective saving, you must specify “Save if”:
— “Save face image in cases where a face was found”:
parameters specified in the comparison (from 0 to 1) |
On |
Save body sample |
Saving the body sample in the LUNA PLATFORM 5 database without creating a face. If enabled, images are saved unconditionally in the database. For selective saving, you must specify “Save if”:
— “Save body image in cases where a body was found”:
|
On |
Save descriptor in database |
Saving the created descriptor in the LUNA PLATFORM 5 database. If enabled, the unconditional saving of the descriptor in the database is performed. For selective saving, specify the parameters (for more information, see the description of the “Save face sample” parameter) |
Off |
— “FaceAttributes storage time”—indicates the time in seconds after which the descriptor will be deleted from the database |
- |
|
Save original image in database |
Saving the original image in the LUNA PLATFORM 5 database. |
Off |
— “Use external link as original image URL”—if enabled, the link to the external image is stored in the address of the original image, thus avoiding image duplication in the database. If a biometric sample was sent in the request and it was stored in the Image Store, then the link to it will be indicated in the address of the original image. For selective saving, specify the parameters (for more information, see the description of the “Save face sample” parameter) |
Off |
|
Save face in database |
Saving the face detected in the image in the LUNA PLATFORM 5 database, with the creation of a face object in the database. Saving is possible only when the option “Save descriptor in database” is enabled. If enabled, the unconditional saving of the descriptor in the database is performed. For selective saving, specify the parameters (for more information, see the description of the “Save face sample” parameter) |
Off |
— "Attach face to list"—adds the saved face to the control list or lists in LUNA PLATFORM 5. Possible only if the option “Save descriptor in the database” is enabled. For selective saving, specify the parameters (for more information see description of “Save face sample ” parameter) |
Off |
|
Save event in database |
Saving the detection/identification event in the LUNA PLATFORM 5 database. If enabled, all events are stored unconditionally in the database. For selective saving, specify the parameters (for more information, see the description of the “Save face sample” parameter) |
On |
Receive and display an event in the “Last events” section |
Displaying an event in the “Last events” section. For selective displaying of events, specify the parameters (for more information see description of “Save image in database” parameter) |
On |
Table 14. Parameters of the policy editing form: tagging parameters
Parameter |
Description |
---|---|
Tagging parameters |
|
Tag name* |
Assigning a tag of the given name when conditions are met. In the absence of parameter specifications, the assignment is unconditional. |
Save if |
“Gender”—the gender of the face in the image matches the specified; “Age category”—the age of the face in the image matches the specified limits; “Liveness”—specifies Liveness state (Spoof, Real or Unknown); “Deepfake”—specifies Deepfake check (Fake or Real) |
Add a tag for each case where a face was found |
“Labels”—the list of labels, specifies the names of labels; “With precision”—the lower and/or upper limit of the satisfaction of the comparison result with the parameters specified in the comparison (from 0 to 1) |
* Required field
Table 15. Parameters of the policy editing form: callbacks
Callbacks allow you to send generated events (notifications) to a third-party system at the specified URL. The notification mechanism is based on the principle of HTTP webhooks: it provides asynchronous interaction between systems, allowing external services to react to events as they occur. A minimal receiver sketch is given after the table.
Parameter |
Description |
Default value |
---|---|---|
Add callback |
||
Type |
Protocol type when creating a notification |
HTTP |
URL |
Address of the external system where the notification will be sent |
- |
Authorization type |
Selecting the type of authorization for the external system and setting up authorization data. The Basic authorization type requires specifying a login and password for the external system |
Basic |
Timeout (seconds) |
Maximum time to wait for a request to complete |
60 |
Request body format |
Data interchange format: JSON or MessagePack |
application/json |
HTTP Headers |
HTTP Request Headers |
- |
Call only in cases where |
Conditions for sending a notification. Activated when determination of basic attributes (gender, age) is enabled (see Table 9) — Gender:
— Age category:
Activated when Liveness check is enabled, see the Table 9 — Liveness:
Activated when Deepfake check is enabled, see the Table 10 — Deepfake:
|
|
Call only in cases where a person or body has been found |
|
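The table above only describes how a notification is sent; the receiving side simply needs an HTTP endpoint that accepts the request body (JSON by default, MessagePack if selected) and, if configured, Basic authorization. Below is a minimal receiver sketch, assuming Python with the Flask and msgpack packages installed; the endpoint path, credentials, and port are illustrative assumptions and are not part of LUNA PLATFORM 5.

```python
# Minimal callback receiver sketch (assumptions: Flask and msgpack are installed;
# the endpoint path, credentials, and port are chosen by the external system itself).
import base64
import json

import msgpack
from flask import Flask, request, abort

app = Flask(__name__)

# Hypothetical credentials matching the "Basic" authorization settings of the callback.
EXPECTED_BASIC = base64.b64encode(b"login:password").decode()


@app.route("/luna-callbacks", methods=["POST"])
def receive_callback():
    # Check Basic authorization if it was configured for the callback.
    auth = request.headers.get("Authorization", "")
    if auth != f"Basic {EXPECTED_BASIC}":
        abort(401)

    # The request body format is set in the callback parameters:
    # "application/json" (default) or MessagePack.
    content_type = request.headers.get("Content-Type", "")
    if "msgpack" in content_type:
        payload = msgpack.unpackb(request.get_data(), raw=False)
    else:
        payload = request.get_json(force=True)

    # React to the event here (log it, forward it to a PACS, and so on).
    print(json.dumps(payload, ensure_ascii=False, indent=2))
    return "", 200


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

The URL of this endpoint is what would be entered in the “URL” field of the callback form.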
Adding a label#
To create a label, click on the corresponding button in the policy editing form (Figure 35).
If you need to identify faces among other faces in the label, then select “Faces” for “Identify among” field in the window for the parameter adding (Figure 36). If you need to identify among events, select “Events” for “Identify among” field (Figure 37).
Fill in all the required parameters and click on the “Add” button at the bottom of the form.
Label editing#
Editing of the label is performed by clicking on the button in the line (Figure 35).
A general view of the form for editing the label is shown below (Figure 38).
Edit the parameter values and click on the “Change” button.
Label deleting#
Deletion of the label is performed by clicking on the button in the line (Figure 35).
Tag adding#
To create a tag, click on the corresponding button in the policy editing form (Figure 39).
A general view of the form for creating a tag is shown below (Figure 40).
Fill in all the required parameters and click on the “Add” button at the bottom of the form.
Tag editing#
Tag editing is performed by clicking on the button in the line (Figure 39).
A general view of the tag editing form is shown below (Figure 41).
Edit the values of the tag parameters and click on the “Change” button.
Tag deleting#
Deletion of the tag is performed by clicking on the button in the line (Figure 39).
After finishing editing the policy, click on the “Save” button in the upper right corner (Figure 34).
Dynamic policy editing#
To edit a dynamic policy, first click on the button on the page with a list of policies (1 in Figure 28). Then in the editing form change the name of the policy and click “Save” (Figure 42).
Policy deleting#
Deleting a policy is performed by clicking on the button in the line (2 in the Figure 28).
Confirm the action in the pop-up window—click on the “Delete” button or cancel the action by clicking on the “Cancel” button (Figure 43). After successful deletion, a corresponding notification will appear.
Verification section#
The “Verification” section is intended for creating, deleting, and testing verifiers, and for editing their parameters. Verifiers are used to quickly compare two faces using a face photo image and a Face ID, external ID, attribute, or event, and to display the result of the test. The general view of the “Verification” section is shown below (Figure 44).
“Verification” section contains the following elements:
- table of verifiers:
- “Description”—verifier name;
- “Verifier ID”—verifier identifier;
—button for testing the verifier (1);
—button for editing verifier parameters (2);
—button for deleting the verifier (3);
- “Add” button—button for creating a verifier;
- the number of verifiers displayed on the page is set by the switch in the lower right corner of the page. There can be 10, 25, 50 or 100 verifiers in total on one page (4).
Verifier creation#
To create a verifier, click on the “Add” button (Figure 44). A form for step by step verifier creation will open (Figure 45).
Fill in all the required parameters and click on the “Next” button to proceed to the next step. The description of the parameters is presented below, in the "Verifier editing" section.
After setting all the parameters, a window with a message about the successful verifier creation appears (Figure 46). Click anywhere outside the successful verifier creation message to navigate to the “Verification” section.
Verifier testing#
Testing a verifier is performed by clicking on the button in the line (1 in Figure 44). The general view of the “Verifier testing” form is shown below (Figure 47).
“Verifier testing” form contains the following blocks:
- “Search by”—search options:
- “Face”—search by registered face in the system:
- “Face ID”—face identifier that is created in the LUNA PLATFORM 5 system as a result of a detection event and attribute extraction;
- “External ID”—search by external face identifier:
- “External ID”—external face identifier;
- “Attribute”—search by face attribute:
- “Attribute ID”—attribute (descriptor) identifier;
- “Event”—search by registered event in the system:
- “Event ID”—identifier of the event of detection and attribute extraction;
- Photo image—search by uploaded photo image:
- field for uploading a photo image;
- Searching results:
- “Photo”—sample of detected face (candidate);
- “Similarity, %”—similarity value, in percent;
- “Status”—verification result:
—successful verification;
—unsuccessful verification;
- “Link to the reference”
—go to the page of the reference face;
—go to the “Event details” page;
—button for downloading the result of the verification (Figure 48).
To test the verifier by face, enter the Face ID in the “Search by” block and select a photo image: click on the upload button or “Select file” and specify the path to the image file.
Image file requirements:
- *.jpeg, *.png or *.bmp format;
- image size no less than 320x250 and no more than 3840x2160 pixels;
- image may contain one or more people;
- image must have a person's face.
When loading a photo image containing many faces, the Interface verifies all faces in the image.
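If many test images are rejected, it may help to pre-check files against the requirements above before uploading them. Below is a minimal sketch, assuming Python with the Pillow package installed; the file name is a placeholder and the checks only mirror the requirements listed above.

```python
# Pre-check sketch for verifier test images (assumption: Pillow is installed).
from PIL import Image

ALLOWED_FORMATS = {"JPEG", "PNG", "BMP"}
MIN_SIZE = (320, 250)    # minimum width x height from the requirements above
MAX_SIZE = (3840, 2160)  # maximum width x height from the requirements above


def check_image(path: str) -> list[str]:
    """Return a list of requirement violations for the given image file."""
    problems = []
    with Image.open(path) as im:
        if im.format not in ALLOWED_FORMATS:
            problems.append(f"unsupported format: {im.format}")
        width, height = im.size
        if width < MIN_SIZE[0] or height < MIN_SIZE[1]:
            problems.append(f"image too small: {width}x{height}")
        if width > MAX_SIZE[0] or height > MAX_SIZE[1]:
            problems.append(f"image too large: {width}x{height}")
    return problems


print(check_image("candidate.jpg"))  # hypothetical file name
```

Whether the image actually contains a detectable face is still determined by LUNA PLATFORM 5 during verification.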
To reset the image, click on the corresponding button.
Verifier editing#
Editing of verifier parameters is performed by clicking the button in the line (2 in the Figure 44). The general view of the verifier editing form is shown below (Figure 49).
Description of the parameters of the verifier editing form is given in the tables (Table 16-20).
Table 16. Parameters of the verifier editing form: general parameters and determined attributes
Parameter |
Description |
Default value |
---|---|---|
General |
||
Verifier name |
Specifies the name that will be displayed in the list of verifiers |
- |
Similarity threshold |
Specifies a similarity score, which will consider that the reference and the candidate contain the face of the same person |
0.93 |
Determined attributes |
||
Basic attributes (gender, age) |
Assessment of the basic attributes of a person in the image. When the attribute is enabled, the “Save if” and “Call only in cases” options become available |
On |
Head position |
Assessment of the head position (angles of inclination and rotation of the head left/right and up/down). When the attribute is enabled, the options “Discard face images with head rotation/tilt angle above” become available |
On |
Emotion |
Determination of the dominant emotion (anger, disgust, fear, happiness, neutral, sadness, surprise) |
Off |
Mask |
Assessment of the presence or absence of a medical mask or mouth covering. When the attribute is enabled, the filter “Process images only if detected” becomes available |
Off |
Image quality |
Determination of quality (the presence of overexposure, blurring, underexposure, the presence of glare on the face, uneven lighting) |
On |
Eye direction |
Assessment of the direction of a person's gaze in the image |
Off |
Eye status |
Evaluating whether a person's eyes are open or closed in the image, as well as determining key points of the irises of the eyes |
Off |
Mouth status |
Closed or occluded mouth detection and smile detection |
Off |
Perform Liveness check |
Enabling Liveness check |
Off |
Position of 68 feature points of the face |
Determination of 68 feature points of the face (requires additional time for calculations, it is used to determine emotions, eye direction or Liveness check) |
Off |
EXIF metadata |
Defining image metadata |
Off |
Table 17. Parameters of the verifier editing form: image quality check
Parameter |
Description |
Default value |
---|---|---|
Perform face image quality check |
||
Image format |
Must be saved in .jpeg or .png format (correct verification). Possible values:
|
JPEG; PNG; JPEG2000 |
Image size in bytes |
This assessment determines the size of the image in bytes. It also compares the estimated value with the specified threshold |
5120:2097152 |
Image width in pixels |
This assessment determines the width of the image in pixels. It also compares the estimated values with thresholds (according to ISO or custom thresholds) |
180:1920 |
Image height in pixels |
This assessment determines the height of the image in pixels. It also compares the estimated values with thresholds (according to ISO or custom thresholds) |
180:1080 |
Image aspect ratio |
This assessment determines the proportional ratio of the image width to height. It also compares the estimated value with the specified threshold |
0.74:0.8 |
Degree of illumination uniformity |
It is possible to evaluate the uniformity of illumination according to the requirements specified in the ICAO standard. It also compares the estimated value with the specified threshold (correct verification) |
0.3:1 |
Degree of image specularity |
Bright light artifacts and flash reflection from glasses are not allowed (indirect verification) |
0.3:1 |
Degree of image blurriness |
The pixel colors of front-type photo images must be represented in the 24-bit RGB color space, in which each pixel has 8 bits for each color component: red, green, and blue (indirect verification) |
0.61:1 |
Degree of absence of underexposure in the photo |
An underexposure assessment is available. It also compares the estimated value with the specified threshold |
0.5:1 |
Degree of absence of overexposure in the photo |
An overexposure assessment is available. It also compares the estimated value with the specified threshold |
0.57:1 |
Face illumination uniformity |
It is possible to evaluate the uniformity of illumination according to the requirements specified in the ICAO standard. The face should be evenly lit so that there are no shadows or glare on the face image. It also compares the estimated value with the specified threshold (correct verification) |
0.5:1 |
Skin tone dynamic range |
This assessment is a determination of the ratio of the brightness of the lightest and darkest areas of the face according to the requirements specified in the ICAO standard. It also compares the estimated value with the specified threshold (correct verification) |
0.5:1 |
Degree of uniformity of the background |
This assessment determines the degree of background uniformity from 0 to 1, where:
|
0.5:1 |
Degree of lightness of the background |
This rating determines the degree of background brightness from 0 to 1, where:
|
0.5:1 |
Presence of radial distortion (Fisheye effect) |
Possible values: No—the Fisheye effect is not present in the image; Yes—the Fisheye effect is present in the image |
No |
Type of image color based on face |
Possible values: Color; Grayscale; Infrared—near-infrared |
Color |
Shoulders position |
This assessment determines the position of the shoulders if they are in the frame: Parallel; Non-parallel; Hidden |
Parallel |
Face width in pixels |
This assessment determines the width of the face in pixels. It also compares the estimated value with the specified threshold |
180:1920 |
Face height in pixels |
This assessment determines the height of the face in pixels. It also compares the estimated value with the specified threshold |
180:1080 |
Face offset from the top edge of the image in pixels |
The image must contain a full front view of the person's head, including the left and right ear (if person has any), the top point of the forehead area and the chin (correct verification) |
20:50 |
Face offset from the bottom edge of the image in pixels |
The image must contain a full front view of the person's head, including the left and right ear (if person has any), the top point of the forehead area and the chin (correct verification) |
20:50 |
Face offset from the left edge of the image in pixels |
The image must contain a full front view of the person's head, including the left and right ear (if person has any), the top point of the forehead area and the chin (correct verification) |
20:50 |
Face offset from the right edge of the image in pixels |
The image must contain a full front view of the person's head, including the left and right ear (if person has any), the top point of the forehead area and the chin (correct verification) |
20:50 |
Head yaw angle |
Head rotation should be no more than 5° from the frontal position (correct verification) |
-5:5 |
Head pitch angle |
The image must contain a full front view of the person's head, including the left and right ear (if person has any), the top point of the forehead area and the chin (correct verification). The tilt of the head should be no more than 5° from the frontal position (correct verification) |
-5:5 |
Head roll angle |
The image must contain a full front view of the person's head, including the left and right ear (if person has any), the top point of the forehead area and the chin (correct verification). The inclination of the head should be no more than 8° from the frontal position (correct verification) |
-8:8 |
Gaze yaw angle |
This assessment determines the direction of gaze (yaw) |
-5:5 |
Gaze pitch angle |
This assessment determines the direction of gaze (pitch) |
-5:5 |
Probability of smile presence |
The facial expression must be neutral (indirect verification). |
0:0.5 |
Probability of mouth occlusion |
It is not allowed to cover the face with hair or foreign objects along the entire width, from the eyebrows to the lower lip (indirect verification) |
0:0.5 |
Probability of open mouth presence |
This assessment determines the state of the mouth. The mouth must be closed (correct verification) |
0:0.5 |
Smile properties |
This assessment determines the state of the mouth. The facial expression must be neutral (indirect verification). Possible values: None—smile is not found; Smile with closed mouth; Smile with teeth |
None |
Glasses |
Sunglasses are not allowed (correct verification). Possible values: Sunglasses; Eyeglasses; No glasses |
No glasses |
Left eye state |
Both eyes are open normally for the respective subject (considering behavioral factors and/or medical conditions, correct verification). It is not allowed to cover the face with hair or foreign objects along the entire width, from the eyebrows to the lower lip (indirect verification). Possible values: Open; Closed; Occluded |
Open |
Right eye state |
Both eyes are open normally for the respective subject (considering behavioral factors and/or medical conditions, correct verification). It is not allowed to cover the face with hair or foreign objects along the entire width, from the eyebrows to the lower lip (indirect verification). Possible values: Open; Closed; Occluded |
Open |
Red eyes effect presence |
Possible values: No—there is no red-eye effect; Yes—there is a red-eye effect |
No |
Distance between eye centers in pixels |
The image must contain a full front view of the person's head, including the left and right ear (if person has any), the top point of the forehead area and the chin (correct verification). The distance between the centers of the eyes must be at least 120 pixels or at least 45 pixels in accordance with paragraph 12 of the procedure for placing and updating biometric personal data in a unified biometric system (correct verification) |
90:100 |
Horizontal head size relative to image size |
This assessment determines the horizontal head size relative to the image size. It also compares the estimated values with thresholds (according to ISO or custom thresholds) |
0.5:0.75 |
Vertical head size relative to image size |
This assessment determines the vertical head size relative to the image size. It also compares the estimated values with thresholds (according to ISO or custom thresholds) |
0.6:0.9 |
The position of the center point of the face horizontally relative to the image |
This assessment determines the horizontal position of the center point relative to the image. It also compares the estimated values with thresholds (according to ISO or custom thresholds) |
0.45:0.55 |
The position of the center point of the face vertically relative to the image |
This assessment determines the vertical position of the center point relative to the image. It also compares the estimated values with thresholds (according to ISO or custom thresholds) |
0.3:0.5 |
Eyebrows state |
The facial expression must be neutral (indirect verification). Possible values: Neutral; Raised; Squinting; Frowning |
Neutral |
Headwear type |
Possible values: None; Baseball_cap; Beanie; Peaked_cap; Shawl; Hat with earflaps; Helmet; Hood; Hat; Other |
None |
Presence of natural lighting |
The face should be evenly lit so that there are no shadows or glare on the face image (correct verification). Possible values: No—the lighting is unnatural; Yes—the lighting is natural |
Yes |
Table 18. Parameters of the verifier editing form: Deepfake check **
Parameter |
Description |
Default value |
---|---|---|
Perform Deepfake check |
Determination of digital manipulations that convincingly replace one person's likeness with that of another |
Off |
Discard images of faces with a Deepfake score below the specified value |
Ignoring images with a Deepfake score below the specified value. Possible values: from 0 to 1, where 1 is a real person and 0 is a fake |
0.5 |
Use specified Deepfake mode |
Possible values:
The choice of mode determines what set of neural networks perform photo processing for deepfake checking. For more information about the neural networks used in deepfake verification modes, contact VisionLabs technical support. |
Mode 2 |
** Deepfake license required. Deepfake check is not performed on normalized (centered and cropped) images after face detection.
Table 19. Parameters of the verifier editing form: save parameters
Parameter |
Description |
Default value |
---|---|---|
Save parameters |
||
Save image in database |
Saving the image in the LUNA PLATFORM 5 database. If enabled, the unconditional saving of images in the database is performed. |
Off |
Save biometric template in database |
Saving the created biometric template in the LUNA PLATFORM 5 database. If enabled, the unconditional saving of biometric templates in the database is performed. |
Off |
Table 20. Parameters of the verifier editing form: filters
Parameter |
Description |
Default value |
---|---|---|
Perform face image quality assessment |
||
Filters |
||
Discard images with multiple faces |
Determination of images containing multiple faces. Possible values: Select only one face of the best quality—process an image containing several faces, but detect only a face of the best quality; Do not discard—detect all faces in the image; Discard—ignore an image containing multiple faces |
Do not discard |
Reject descriptors with quality below the specified threshold |
Ignoring low-quality images. To use this filter, descriptor extraction must be enabled in the determined attributes |
0.5 |
Process images only if detected |
Possible values: Missing—the event is created when there is no overlap of the face by the medical mask (no mask); Occluded—the event is created in case of detection of face overlapping; Medical mask—the event is created when a medical mask is detected on the face. Several filter values can be specified. Available only when defining the “Medical mask” attribute |
- |
Discard face images with head rotation angle (to the left or right, yaw) above |
Ignoring images in which the person's head is turned to the left or right at too large an angle—no information will be extracted when detecting a face and evaluating the angle of head rotation. Available only if the “Head position” attribute is enabled |
30 |
Discard face images with head tilt angle (to the left or right, roll) above |
Ignoring images in which a person's head is tilted to the left or right at too large an angle—no information will be extracted during face detection and head tilt evaluation. Available only if the “Head position” attribute is enabled |
40 |
Discard face images with head tilt angle (up or down, pitch) above |
Ignoring images in which the person's head is tilted up or down at too large an angle—no information will be extracted during face detection and head tilt evaluation. Available only if the “Head position” attribute is enabled |
30 |
Discard images of faces with a Liveness score below the specified |
Ignoring images with a Liveness score below the specified value. Possible values: from 0 to 1. Available only if the “Perform Liveness check” attribute is enabled |
0.5 |
Discard images of faces with the Liveness quality lower than the specified |
Ignoring images with a Liveness quality lower than the specified. Possible values: from 0 to 1. Available only if the “Perform Liveness check” attribute is enabled |
0.5 |
Process images of faces only with Liveness states |
Processing images with Liveness status: Spoof—the absence of a “live” person in the frame; Real—the presence of a “live” person in the frame; Unknown. Available only if the “Perform Liveness check” attribute is enabled |
- |
Process images of faces only with Deepfake states |
Processing images with Deepfake status: Fake—the absence of a “live” person in the frame; Real—the presence of a “live” person in the frame. Available only if the “Perform Deepfake check” attribute is enabled |
- |
Filter images based on face image quality assessment results |
Filter images according to the parameters set in the "Perform face image quality assessment" setting that comply with ISO/IEC 19794-5:2011 and ICAO. Available only when the parameter “Perform face image quality assessment*” is enabled |
Off |
After finishing editing the verifier, click on the “Save” button in the upper right corner.
Verifier deleting#
Deleting a verifier is performed by clicking on the button in the line (3 in the Figure 44).
In the pop-up window (Figure 50), you must confirm the action — click on the “Delete” button or cancel the action by clicking on the “Cancel” button.
Tasks section#
The “Tasks” section is intended for creating, deleting, and displaying tasks, and for downloading search results by events and persons. Export to a file is implemented in the Interface in the form of a task.
General view of the “Tasks” section is shown below (Figure 51).
The “LUNA PLATFORM. Deferred tasks” tab contains the following elements:
- task counter (1);
- “Cross-matching” button — button for creating a task for cross-matching lists of faces;
- “Export faces” button — button for creating a task to export faces and information on them;
- “Export events” button — button for creating a task to export events and information on them;
- “Batch processing” button — button for creating a task for batch processing of photo image archives according to a specific policy;
- “Batch import” button — button for creating a task for batch import of photo image archive into the list;
- “Batch identification” — button for creating a task for identifying an archive of photo images of references with candidates (faces or events with faces);
- “Deleting faces from the list” — button for creating a task for removing persons from the selected list;
- table of tasks:
- “ID” — task identifier;
- “Description” — user who created the task;
- “Type” — task type (cross-matching, export, batch processing, batch import, batch identification);
- “Date created” — date and time of task creation;
- “Expiration date” — date and time of completion of the task;
- “Status” — task progress state;
— button to stop the task (appears if the task status is “In progress”);
— button for downloading the result of the task (2);
— button for deleting a task (3);
- the number of tasks displayed on the page is set by the switch in the lower right corner of the page. There can be 10, 25, 50 or 100 tasks in total on one page (4).
The status changes during the task execution. In total, 4 statuses are applied to tasks in the Interface:
— task is being performed;
- “Collecting results” — collecting the results of the task;
— task completed;
— an error occurred while executing a task.
The process of creating tasks and the values of the specified parameters are described below. If you need to go back to the task list page during creating a task, press the Esc key on your keyboard.
Configure notifications about task status using the "callbacks" functionality. Notifications will be sent to the external system at the specified URL. The notification settings block opens after filling in the required fields to create a task (Table 21).
Table 21. Notification settings in the task creation form
Parameter |
Description |
Default value |
---|---|---|
Add callback |
||
Type |
Protocol type when creating a notification |
HTTP |
URL |
Address of the external system where the notification will be sent |
- |
Authorization type |
Selecting the type of authorization for the external system and setting up authorization data. The Basic authorization type requires specifying a login and password for the external system |
Basic |
Timeout (seconds) |
Maximum time to wait for a request to complete |
60 |
Request body format |
Data interchange format: JSON or MessagePack |
application/json |
HTTP Headers |
HTTP Request Headers |
- |
Creating a cross-matching Task#
To create a task for cross-matching lists of faces, click on the “Cross-matching” button (Figure 51). A general view of the form for creating a cross-matching task is shown below (Figure 52).
The “Cross-matching” form contains the following elements:
- “List”* — selection of a list for comparison;
- “Find matches in”* — selection of the list in which matches will be searched;
- “The maximum number of similar ones” — maximum number of similar candidates (the default is 3);
- “Minimum similarity threshold, %” — the lowest score of similarity in percentage between candidates that the Interface accepts as a possible match (the default is 50).
* Required field
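To illustrate how these two parameters shape the task result, the sketch below filters hypothetical comparison results: for each reference it keeps at most “The maximum number of similar ones” candidates whose similarity is not below the “Minimum similarity threshold, %”. The identifiers and similarity values are placeholders; the actual matching is performed by LUNA PLATFORM 5.

```python
# Illustration of the cross-matching task parameters (all data is hypothetical).
raw_matches = {
    "reference-1": [("candidate-a", 0.91), ("candidate-b", 0.62), ("candidate-c", 0.41)],
    "reference-2": [("candidate-d", 0.55), ("candidate-e", 0.48)],
}

MAX_SIMILAR = 3            # "The maximum number of similar ones" (default 3)
MIN_SIMILARITY = 50 / 100  # "Minimum similarity threshold, %" (default 50)

result = {
    reference: sorted(
        [match for match in candidates if match[1] >= MIN_SIMILARITY],
        key=lambda match: match[1],
        reverse=True,
    )[:MAX_SIMILAR]
    for reference, candidates in raw_matches.items()
}

# candidate-c and candidate-e are dropped as being below the 50% threshold.
print(result)
```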
Fill in all the required parameters and click on the “Create task” button (or press the Enter key on your keyboard).
Resource-intensive tasks can take a while. In the pop-up window (Figure 53), you must confirm the action — click on the “Ok” button or cancel the action by clicking on the “Cancel” button (Esc key on keyboard).
After successfully creating a cross-matching task, the message “Task for cross-matching has been created” will appear in the upper right corner of the screen (Figure 54).
Creating an export faces task#
To create a task to export faces and information on them, click on the “Export faces” button (Figure 51). A general view of the form for creating an export task is shown below (Figure 55).
Description of the parameters of the “Export faces” form is given in Table 22.
Table 22. Export faces task parameters
Parameter |
Description |
Default value |
---|---|---|
Data upload parameters |
||
List |
Specifies the list for export |
- |
User data |
Indicates face data (up to 128 characters) |
- |
Face IDs |
Specifies the values of identifiers of faces in LUNA PLATFORM 5 in UUID format |
- |
Face external IDs |
Specifies the values of third-party external identifiers |
- |
Date from |
Specifies the lower limit of the period of creation of faces or events in LUNA PLATFORM 5 |
- |
Date to |
Specifies the upper limit of the period of creation of faces or events in LUNA PLATFORM 5 |
- |
ID of the first face |
Specify the values of the identifier of the first face from the exported faces |
- |
ID of the last face |
Specify the values of the identifier of the last face from the exported faces |
- |
Additional parameters |
||
Columns in the report—selecting table columns to be included in the file upon export, and indicating the order in which they are located |
Face ID; User data; External ID; Time; Avatar; Event ID; Lists |
On; On; On; On; On; On; Off |
Save face images |
Enabling this option allows you to upload face images into the archive with the .csv report |
Off |
Type of biometric template |
Specifies the biometric template of which objects will be exported—faces or bodies |
Faces |
Delimiter for .csv |
A special character that will be used in the file with export results to divide text into columns |
, |
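The export result is a .csv report that uses the delimiter chosen above. Below is a minimal reading sketch, assuming Python's standard csv module; the file name and the exact column headers are assumptions based on the “Columns in the report” selection.

```python
# Reading an exported faces report (assumptions: the file name and the exact
# column headers are illustrative; the delimiter must match the task settings).
import csv

with open("export_faces.csv", newline="", encoding="utf-8") as report:
    reader = csv.DictReader(report, delimiter=",")
    for row in reader:
        # Columns follow the "Columns in the report" selection, for example
        # Face ID, User data, External ID, Time, Avatar, Event ID.
        print(row.get("Face ID"), row.get("External ID"))
```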
Fill in all the required parameters and click on the “Create task” button.
Resource-intensive tasks can take a while. In the pop-up window (Figure 56), you must confirm the action — click on the “Ok” button or cancel the action by clicking on the “Cancel” button (Esc key on keyboard).
After successfully creating an export task, the message “Export task has been successfully created” will appear in the upper right corner of the screen (Figure 57).
Creating an export events task#
To create a task to export events and information on them, click on the “Export events” button (Figure 51). A general view of the form for creating an export events task is shown below (Figure 58).
Description of the parameters of the “Export events” form is given in Table 23.
Table 23. Export events task parameters
Parameter |
Description |
Default value |
---|---|---|
Data upload parameters |
||
General data about the event |
||
Source |
Specifies the name of the event source |
- |
User data |
Indicates face data (up to 128 characters) |
- |
Event IDs |
Specifies the values of the event identifiers in UUID format for performing an accurate search separated by commas |
- |
Event external IDs |
Specifies the values of third-party external identifiers separated by commas |
- |
Face IDs |
Specifies the values of identifiers of faces in UUID format separated by commas |
- |
Similarity |
A value from 0 to 1 is specified |
- |
Tags |
Specifies a tag or tags separated by commas |
- |
Handling policies |
Specifies policy name, it is possible to specify several values |
- |
Date from |
Specifies the lower limit of the period of creation of events |
- |
Date to |
Specifies the upper limit of the period of creation of events |
- |
Advanced event filters |
||
End date from |
Indicates the lower limit of the event end period |
- |
End date to |
Indicates the upper limit of the event end period |
- |
ID of the first event |
Indicate the value of the identifier of the first event from the exported events |
- |
ID of the last event |
Indicates the value of the identifier of the last event from the exported events |
- |
Track IDs |
Specifies the values of the track identifiers in the UUID format separated by commas |
- |
Face attributes |
||
Gender |
Specifies male/female gender |
- |
Age category |
Specifies the age range |
- |
Emotions |
Specifies emotions |
- |
Medical mask |
Detection of the presence/absence of a medical mask, mouth occlusion |
- |
Liveness |
Specifies the result of checking for the presence of a living person in the frame |
|
Body attributes |
||
Gender by body |
Specifies the female, male, undefined gender |
- |
Age category by body |
Specifies the age range by body image |
- |
Headwear |
Specifies headdress |
- |
Upper body colors |
Specifies top clothing color |
- |
Sleeve |
Specifies sleeve length |
- |
Headwear color |
Specifies headdress color |
- |
Lower body colors |
Specifies bottom clothing color |
- |
Lower body type |
Specifies bottom clothing type |
- |
Shoes color |
Specifies shoe color (only for “Identify among events”) |
|
Backpack |
Specifies backpack presence |
- |
Best match data |
||
Face IDs |
Specifies the values of identifiers of faces in LUNA PLATFORM 5 in UUID format |
- |
Face external IDs |
Specifies the values of third-party external identifiers |
- |
Label |
Name of the label parameter (the rule by which comparison occurs); |
- |
Geoposition |
||
“District”; “Area”; “City”; “Street”; “House number”; |
||
Advanced geoposition filters |
||
“Longitude (-180…180)”; “Accuracy (0…90)”; “Latitude (-90…90)”; “Accuracy (0…90)” |
||
Other |
||
Add filter by meta** |
||
Additional parameters |
||
Columns in the report—selecting table columns to be included in the file upon export, and indicating the order in which the parameters are located in the report |
Face ID; User data; Time; External ID; Event ID; Source; Handling Policy ID; Tags; Track ID; Metadata; Geo position |
On; On; On; On; On; On; On; Off; On; Off; On |
Save face images |
Enabling this option allows you to upload face images into the archive with the .csv report |
Off |
Type of biometric template |
Specifies the biometric template of which objects will be exported—faces or bodies |
Faces |
Delimiter for .csv |
A special character that will be used in the file with export results to divide text into columns |
, |
Fill in all the required parameters and click on the “Create task” button or the Enter key on your keyboard.
Resource-intensive tasks can take a while. In the pop-up window, confirm the action — click on the “Ok” button or cancel the action by clicking on the “Cancel” button (Esc key on keyboard).
After successfully creating an export task, the message “Export task has been successfully created” will appear in the upper right corner of the screen (Figure 55).
Creating a batch processing task#
The batch processing task allows the user to process several photos using a specified policy.
To create a task for batch processing of photo image archives according to a specific policy, click on the “Batch processing” button (Figure 51). The general view of the form for creating a batch processing task is shown below (Figure 59).
By default the “Batch Processing” form contains the following elements:
- “Data source type” — selection of the source type of the loaded data;
- “Description” — description of the task;
- “Handling policy”* — selection of a policy (required);
The resource can accept five types of sources with images for processing:
- File;
- ZIP;
- S3;
- Network disk;
- FTP;
- Samba.
Additional options appear depending on the selected data source type.
To quickly download a ZIP archive from your local machine without additional options, select “File” as the data source type. Then upload or drag-and-drop the archive with photo images in the field for uploading data.
Download file requirements:
- *.zip file format;
- there can be one or more people on the image (depends on policy settings);
- the image must contain a person's face or body;
- images must be located immediately inside the archive, and not in a folder inside the archive;
- the archive size is set using the ARCHIVE_MAX_SIZE parameter in the config.py configuration file of the Tasks component, the default size is 100 GB (for details, see “VisionLabs LUNA PLATFORM 5. Administrator manual”).
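Because images must sit at the archive root rather than in subfolders, it can be convenient to build the archive programmatically before uploading it. Below is a minimal packaging sketch, assuming Python's standard zipfile module and a local folder of .jpg files; all paths and names are placeholders.

```python
# Packing images into a flat ZIP archive for batch processing
# (assumption: ./photos is a local folder containing only image files).
import zipfile
from pathlib import Path

with zipfile.ZipFile("batch.zip", "w", compression=zipfile.ZIP_DEFLATED) as archive:
    for image_path in Path("photos").glob("*.jpg"):
        # arcname keeps only the file name so images land at the archive root,
        # not inside a "photos/" folder.
        archive.write(image_path, arcname=image_path.name)
```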
When choosing a ZIP archive as image source for the batch processing task, the following parameters can be set:
- “File URL”* — URL address of the archive with images; the default archive size is 100 GB;
- “Archive password” — a password for the transferred archive protection;
- “File key prefix” — a file key prefix that can be used to load images from a specific directory, for example, "2022/January";
- “File key postfix” — file key postfix that can be used to upload images with a specific extension;
- “Whether to estimate images from ZIP archive subdirectories recursively?” switch — allows you to recursively receive images from subdirectories.
- “Input image type” — selection of the type of image that is input in the batch processing task — "Raw image", "Face warped image", "Body warped image".
When choosing an S3-like storage as an image source for the batch processing task, the following parameters can be set:
- “Storage endpoint” — only when specifying the bucket name;
- “Bucket name”* — Access Point ARN / Outpost ARN;
- “File key prefix” — file key prefix. It can be used to load images from a specific folder, such as "2022/January";
- “Bucket region” — only when specifying the bucket name;
- “Public access key”* — public key for setting up authorization;
- “Secret access key”* — secret key for setting up authorization;
- “Signature version” — signature "s3v2" / "s3v4";
- “Whether to estimate images from bucket subdirectories recursively?” — possibility to recursively download images from nested bucket folders;
- “Whether to save image origin?” — saving original images in the LUNA PLATFORM 5 database.
It is also possible to select the type of transferred images. For more information about working with S3-like repositories, see AWS User Guide.
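Before creating the task, it can be useful to verify outside the Interface that the specified bucket, prefix, and keys actually give access to the images. Below is a minimal pre-check sketch, assuming Python with the boto3 package installed; every value is a placeholder and should be replaced with the parameters you intend to enter in the form.

```python
# S3 access pre-check sketch (assumption: boto3 is installed; all values are placeholders).
import boto3
from botocore.config import Config

client = boto3.client(
    "s3",
    endpoint_url="https://storage.example.com",  # corresponds to "Storage endpoint"
    region_name="us-east-1",                     # corresponds to "Bucket region"
    aws_access_key_id="PUBLIC_ACCESS_KEY",       # corresponds to "Public access key"
    aws_secret_access_key="SECRET_ACCESS_KEY",   # corresponds to "Secret access key"
    config=Config(signature_version="s3v4"),     # "Signature version" option (s3v4 here)
)

# List objects under the same prefix that will be used as "File key prefix".
response = client.list_objects_v2(Bucket="images-bucket", Prefix="2022/January")
for item in response.get("Contents", []):
    print(item["Key"])
```

If the listing succeeds and shows the expected image keys, the same values can be entered in the batch processing form.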
When choosing a network disk as an image source for the batch processing task, the following parameters can be set:
- “Path to directory with images”* — absolute path to the directory with images in the container (required);
- “File key prefix” — a file key prefix that can be used to load images from a specific directory;
- “File key postfix” — file key postfix that can be used to upload images with a specific extension;
- “Whether follow file system links?” — enable/disable of symbolic links processing.
As in the batch processing task using S3-like storage as image source, it is possible to recursively receive images from nested directories, and to select the type of transferred images.
When choosing a FTP server as an image source for the batch processing task, the following parameters can be set:
- “Server host”* — FTP server IP address or hostname;
- “Port” — FTP server port;
- “FTP sessions” — maximum number of allowed sessions on the FTP server;
- “Server user” and “Server password” — authorization parameters.
* Required field
As in the batch processing task using network disk as image source, it is possible to set the path to the directory with images, recursively receive images from nested directories, select the type of transferred images, and specify the prefix and postfix.
When choosing a Samba as an image source for the batch processing task, the parameters are similar to those of an FTP server, except for the "max_sessions" parameter. Also, if authorization data is not specified, the connection to Samba will be performed as a guest.
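As with S3-like storage, the FTP (and, by analogy, Samba) connection details can be checked outside the Interface before the task is created. Below is a minimal sketch, assuming Python's standard ftplib module; the host, port, credentials, and directory are placeholders.

```python
# FTP connectivity pre-check sketch (all connection values are placeholders).
from ftplib import FTP

ftp = FTP()
ftp.connect(host="ftp.example.com", port=21)              # "Server host" and "Port"
ftp.login(user="server_user", passwd="server_password")   # "Server user" / "Server password"

# List the directory that will be used as the image source.
for name in ftp.nlst("/images"):
    print(name)

ftp.quit()
```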
Fill in all the required parameters and click on the “Create task” button or the Enter key on your keyboard. Resource-intensive tasks can take a while. In the pop-up window (Figure 60), you must confirm the action — click on the “Ok” button or cancel the action by clicking on the “Cancel” button.
After successfully creating a batch processing task, the message “Batch processing task has been successfully created” will appear in the upper right corner of the screen (Figure 61).
Creating a batch import task#
The batch import task allows you to batch import faces from photos into a specified list.
To create a task for batch import of photo image archive into the list, click on the “Batch import” button (Figure 51). The general view of the form for creating a batch import task is shown below (Figure 62).
The “Batch import” form contains the following elements:
- field for uploading an archive with photographs — it is possible to upload archives in *.ZIP format (required);
- “List” — selection of a list (required);
- “Add a photo to the list only if it complies with the ISO/IEC standard” — the photo will be added to the list only after passing the ISO/IEC 19794-5:2011 verification.
- button for deleting the loaded archive.
Download file requirements:
- *.ZIP file format;
- there can be one or more people in the image (depends on policy settings);
- the image must contain a person's face;
- images must be located immediately inside the archive, and not in a folder inside the archive (see the packaging sketch after this list);
- the archive size is set using the ARCHIVE_MAX_SIZE parameter in the config.py configuration file of the Tasks component, the default size is 100 GB (for details, see “VisionLabs LUNA PLATFORM 5. Administrator manual”).
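The following minimal sketch, assuming a hypothetical ./photos folder with *.jpg files, packs images directly into the root of a *.zip archive and checks the result against the default 100 GB limit:

```python
import zipfile
from pathlib import Path

SOURCE_DIR = Path("./photos")   # hypothetical folder with images
ARCHIVE = Path("faces.zip")
MAX_SIZE = 100 * 1024 ** 3      # default ARCHIVE_MAX_SIZE (100 GB)

# Write every image directly into the archive root, without nested folders.
with zipfile.ZipFile(ARCHIVE, "w", zipfile.ZIP_DEFLATED) as archive:
    for image in SOURCE_DIR.glob("*.jpg"):
        archive.write(image, arcname=image.name)

size = ARCHIVE.stat().st_size
assert size <= MAX_SIZE, "Archive exceeds the configured ARCHIVE_MAX_SIZE"
print(f"{ARCHIVE} created, {size} bytes")
```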
Fill in all the required parameters and click the “Create task” button or press the Enter key on your keyboard.
Resource-intensive tasks can take a while. In the pop-up window (Figure 63), confirm the action by clicking the “Ok” button or cancel it by clicking the “Cancel” button.
After successfully creating a batch import task, the message “Batch import task has been successfully created” will appear in the upper right corner of the screen (Figure 64).
Creating a batch identification task#
To create a task for identifying an archive of photo images (faces or events with faces), click on the “Batch identification” button (Figure 51). The general view of the form for creating a batch identification task is shown below (Figure 65).
The “Batch Identification” form contains the following elements:
- field for uploading an archive with photographs in *.ZIP format (required);
- button for deleting the loaded archive;
- “Identify among” — look for matches among “Faces” or “Events”;
- “Filters” block — settings for user identification. A description of the parameters of the “Filters” block, depending on the selected object for identification, is presented in the tables (Table 24 and Table 25);
- “Additional filter parameters” block — general parameters for identification among faces and events:
- “Similarity threshold, %” — the lowest similarity score, in percent, between the reference and a candidate that the interface accepts as a possible match (default: 80);
- “Number of records (from 1 to 100)” — the maximum number of rows with matches, limited to 100 rows (default: 3).
Table 24. “Filters” block parameters of the batch identification task when searching for a match among faces
Name | Description |
---|---|
List | List name |
Comma-separated Face IDs | Face ID from the list |
User data | Information linked to the person from the database |
Comma-separated external face IDs | External ID of the person's face |
Date from | Specifies the lower limit of the period of creation of faces or events in LUNA PLATFORM 5 |
Date to | Specifies the upper limit of the period of creation of faces or events in LUNA PLATFORM 5 |
Table 25. “Filters” block parameters of the batch identification task when searching for a match among events
Name | Description |
---|---|
Source | List of available event sources |
User data | Information linked to the person from the database |
Age category | Age group: below 18; from 18 to 44; from 45 to 60; above 60 |
Gender | Female; Male |
Emotion | Anger; Sadness; Neutral; Disgust; Fear; Happiness; Surprise. It is possible to select multiple emotions. |
Medical mask | Detection of the presence/absence of a medical mask or mouth occlusion. Missing; Medical mask; Occluded. It is possible to select multiple variants. |
Creation date from | Specifies the lower limit of the period of creation of faces or events in LUNA PLATFORM 5 |
Creation date to | Specifies the upper limit of the period of creation of faces or events in LUNA PLATFORM 5 |
Comma-separated event IDs | Event ID of detection and attribute retrieval |
Comma-separated external event IDs | External event ID |
Comma-separated Face IDs | Face ID from events that are created in LUNA PLATFORM 5 as a result of detection and attribute extraction |
Similarity | The lower threshold of similarity if the person was identified |
Handling policies | Handling policy ID |
Comma-separated track IDs | Specifies the values of the track identifiers in LUNA PLATFORM 5 in the UUID format |
Comma-separated tags | Specifies a tag or tags |
Gender by body | Specifies female, male, or undefined gender |
Age category by body | Specifies the age range |
Headwear | Specifies headdress |
Upper body colors | Specifies top clothing color |
Sleeve | Specifies sleeve length |
Headwear color | Specifies headdress color |
Lower body colors | Specifies bottom clothing color |
Lower body type | Specifies bottom clothing type |
Shoes color | Specifies shoe color (only for “Identify among events”) |
Backpack | Specifies backpack presence |
Location | “District”; “Area”; “City”; “Street”; “House number”; “Longitude (-180…180)”; “Accuracy (0…90)”; “Latitude (-90…90)”; “Accuracy (0…90)” |
To upload an archive with photo images of faces to be identified, click the upload field in the “References” section and specify the path to the archive on the local computer.
Download file requirements:
- *.ZIP file format;
- there can be one or more people in the image (depends on policy settings);
- the image must contain a person's face;
- images must be located immediately inside the archive, and not in a folder inside the archive;
- the archive size is set using the ARCHIVE_MAX_SIZE parameter in the config.py configuration file of the Tasks component, the default size is 100 GB (for details, see “VisionLabs LUNA PLATFORM 5. Administrator manual”).
Fill in all the necessary parameters and click the “Create task” button or press the Enter key on your keyboard.
Resource-intensive tasks can take a while. In the pop-up window (Figure 66), confirm the action by clicking the “Ok” button or cancel it by clicking the “Cancel” button.
After successfully creating a batch identification task, the message “Batch identification task has been successfully created” will appear in the upper right corner of the screen (Figure 67).
Creating a task for deleting faces from the list#
The task for deleting faces from the list (Cleanup task) allows you to select faces based on specific parameters and then remove them from the selected list.
To create a Cleanup task, click on the “Deleting faces from the list” button (Figure 51). The general view of the window for creating a Cleanup task is shown below (Figure 68).
The “Deleting faces from the list” window contains the following elements:
- “Description” — a field for adding an explanatory note to the task;
- “Store results” checkbox — if enabled, the results of the task will be saved in the Image Store service storage;
- “Delete samples” checkbox — if enabled, warped images (samples) obtained after detecting the faces from the list will be deleted;
- “List”* — select a list from which faces will be removed;
- “Information” — a field for specifying information about the faces from the list. It allows you to remove only some of the faces from the list, for example, those for which the same information is specified;
- “Delete data after” — the lower (inclusive) boundary of the face creation time;
- “Delete data before” — the upper (exclusive) boundary of the face creation time.
* Required field
Fill in all the required parameters and click the “Create task” button or press the Enter key on your keyboard.
Resource-intensive tasks may take some time to complete. Confirm the action in the pop-up window (Figure 69) by clicking the “Ok” button or cancel it using the “Cancel” button.
After successfully creating a task for removing persons from the list, the message “Cleanup task has been successfully created” appears on the screen (Figure 70).
Viewing the results of a task#
To view the results of a task, click the button in the task line (3 in Figure 51). The results are downloaded as a *.ZIP archive for export tasks, a *.csv file for cross-matching tasks, or a *.json file for batch processing, batch import, and batch identification tasks (where * is the task ID).
The downloaded *.csv file contains a table with the export parameters selected in the "Creating an export task" section (Figure 71) or with the results of cross-matching (Figure 72).
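If the downloaded results need further processing outside the interface, they can be opened with standard tools. The sketch below reads a *.csv and a *.json result file; the file names are placeholders (real names contain the task ID), and the actual content depends on the task type.

```python
import csv
import json

# Hypothetical file names; real names contain the task ID.
with open("123.csv", newline="", encoding="utf-8") as f:
    for row in csv.reader(f):
        print(row)

with open("456.json", encoding="utf-8") as f:
    results = json.load(f)
print(results)
```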
Deleting a task#
Deleting a task is performed by clicking the button in the line (4 in Figure 51).
In the pop-up window (Figure 73), confirm the action by clicking the “Delete” button or cancel it by clicking the “Cancel” button. After successful deletion, a corresponding notification appears.
ISO/IEC 19794-5:2011 check section#
The "ISO Verification" section is shown in the interface only if there is the corresponding ISO license.
The “ISO/IEC 19794-5:2011 check” section is intended to evaluate existing faces and uploaded photo images for compliance with the ISO/IEC 19794-5:2011 standard. The section with the results of checking the downloaded image is shown below (Figure 74).
The “ISO/IEC 19794-5:2011 check” section contains the following items:
- Photo image upload window (1) — allows you to upload a photo image by selecting a file or using drag and drop;
- Form with verification results (2):
- Final assessment of whether the photo passed the ISO check;
- Results for each verification: if the verification is passed, the font color is green; if the verification is failed, the font color is red. A photo must successfully pass all verifications in order to comply with the ISO/IEC 19794-5:2011 standard;
- The number of failed verifications, if the result of the ISO check is negative;
- Button for downloading the check results in JSON format;
- Button for resetting the uploaded photo (3) — allows you to start a new ISO/IEC 19794-5:2011 check for another photo.
To start checking your photo for compliance with the ISO standard, upload or drag and drop your file, then click the “Check” button.
Image file requirements:
- *.jpeg, *.png or *.bmp format;
- image size no more than 15 MB and resolution no more than 3840×2160 pixels;
- the image may contain one or more people;
- the image must contain a person's face.
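These file requirements can be pre-checked locally, for example with the Pillow library. The file name in the sketch is a placeholder, and the check only mirrors the limits listed above; face presence is verified by the platform itself.

```python
from pathlib import Path
from PIL import Image

IMAGE = Path("portrait.jpg")              # hypothetical file
ALLOWED_FORMATS = {"JPEG", "PNG", "BMP"}
MAX_BYTES = 15 * 1024 * 1024              # 15 MB
MAX_WIDTH, MAX_HEIGHT = 3840, 2160

# Check file size, format, and resolution against the listed requirements.
assert IMAGE.stat().st_size <= MAX_BYTES, "Image is larger than 15 MB"

with Image.open(IMAGE) as img:
    assert img.format in ALLOWED_FORMATS, f"Unsupported format: {img.format}"
    width, height = img.size
    assert width <= MAX_WIDTH and height <= MAX_HEIGHT, "Image exceeds 3840×2160"

print("Basic file requirements are met")
```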
Users section#
The “Users” section is intended for showing user accounts created in LUNA PLATFORM 5. General view of the “Users” section is shown below (Figure 75).
“Users” section contains the following elements:
- Table of existing user accounts containing columns:
- "Login—account login;
- "Description"—account description
- "Account type"
- user — the type of account with which you can create objects and use only your account data.
- advanced_user — allows to interact with its own data and view other accounts data
- admin — the type of account for which rights similar to "advanced_user" are available, and there is also access to the Admin service.
- "Create time" — date and time of account creation;
- "Last update time" — date and time of the last account update.
To sort the table by a column, click on the column name. The sorting arrow icon indicates the current sorting by one of the parameters: alphabetical, ascending, or descending.
You can work with all types of accounts in the API service, but only the "user" and "advanced_user" account types can be created there; in the Admin service, you can create all three types.
Creating a user account#
Create a user account using a POST request "create account" to the API service, or using the Admin service. When creating the account, you must specify the following data: login (email), password and account type.
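A minimal sketch of such a request is shown below. The URL, port, endpoint path, field names, and any required authorization are assumptions for illustration only; the exact request format is described in the LUNA PLATFORM 5 API reference for the "create account" request.

```python
import requests

# Hypothetical address and path; take the exact values from the API reference.
API_URL = "http://luna-api.example.com:5000/accounts"

payload = {
    "login": "user@example.com",    # login (email)
    "password": "StrongPassword1",  # password
    "account_type": "user",         # "user", "advanced_user", or "admin"
}

# Authorization headers, if required, are omitted here for brevity.
response = requests.post(API_URL, json=payload)
response.raise_for_status()
print(response.json())
```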
Monitoring section#
The “Monitoring” section is intended for viewing information and status of connected services, modules, components, and systems. General view of the “Monitoring” section is shown below (Figure 76).
“Monitoring” section contains the following elements:
- List of connected services, modules, components, and systems:
- “Name” — component/service/system name;
- “Version” — component/service/system version;
- “State” — current state (status) of a component/service/system;
- “Documentation” — links to the documentation if it is present in the component/service/system.
The following colors are used to indicate the current status of a service, module, component, or system:
- green — the component/service/system is up and running;
- blue — the component/service/system is loading;
- red — the component/service/system is temporarily unavailable.
Licenses section#
The “Licenses” section provides information about available licenses. The section contains the following tabs:
- “General information” — shows whether the license has expired (Figure 77);
- “Estimations” — shows the status (enabled/disabled) of licenses for (Figure 78):
- checking the quality of a facial image (PlatformISO);
- estimation of body attributes (PlatformBodyAttributes);
- Liveness check (PlatformLiveness);
- estimating the number of people in an image (PlatformPeopleCounter);
- “Modules” — shows the status (enabled/disabled) of service licenses for (Figure 79):
- storing data about events in the database (Events Service);
- completing tasks (Tasks Service);
- sending event notifications via a web socket (Sender Service);
- storing samples, reports on task execution, created clusters and additional metadata (Image Store Service);
- creating and storing handlers (Handlers Service).
Each tab allows you to go to the page for buying or renewing a license.
Plugins section#
The “Plugins” section provides the information about plugins imported into LUNA PLATFORM 5. Plugins are used to perform secondary actions for the user’s needs. For example, you can expand the standard functionality of the product using them. The general view of the “Plugins” section is presented below (Figure 80).
“Plugins” section contains the following elements:
- table of plugins:
- “Name” — name of the plugin;
- “Status” — shows the current status (running/not running) of the plugin.
For more information on getting a list of imported plugins and their status, see the LUNA PLATFORM 5 documentation.