"Events archive" section#
The “Events archive” section displays all face and body detection events and allows searching for events in the archive (Figure 13).
New events are received and displayed in the event archive with minimal delay, in near real time.
If there are no filters applied (1), the Service displays the latest detection and identification events for the last 30 days.
To upload events according to the specified parameters, click the "Export events" button (2): fill in the fields, click "Save", then go to the "Tasks" section and download the results.
The number of events displayed on the page is set by the switch in the lower right corner of the page. There can be 10, 25, 50 or 100 events in total on one page (2).
The following data from archived events is displayed on the page:
- "Event image":
  - a photo image of the face from the video stream;
  - a photo image of the body from the video stream;
- "Top match"—the column is shown if the "Display top match" checkbox is active (3). The "Top match" includes:
  - reference photo images of the face and/or body;
  - the similarity of the identified face to the reference, as a percentage, with color coding of similarity thresholds. Color coding of similarity thresholds is configured in the config.json configuration file (see LUNA POINT. Administration Manual). By default:
    - similarity values below "low" are marked in red;
    - similarity values between "low" and "medium" are marked in yellow;
    - similarity values above "medium" are marked in green.
  - "Match type"—the type of object (face or event) by which the similarity of the identified face/body with the reference was determined;
  - "Event type"—"Detection", when a face or body is detected in the frame, and "Match", when a match is found for the detected face or body in the database;
  - "External ID"—the external identifier of the face; the field is shown if such an ID is available (for "Face" in "Match type"). The external ID is used to integrate LUNA POINT with external systems, as well as to transfer data to other systems in order to analyze and quickly respond to an event;
  - "User data"—information from the database, linked to a person from the control list (for "Face" in "Match type");
  - "List"—the name of the list to which the person is attached (for "Face" in "Match type");
  - "Date created"—the date and time the event was recorded (for "Event" in "Match type");
  - "Source"—the name of the source that recorded the event at the time the event was created. Users can change the source name; in that case the "Video stream" field shows the new name, and the "Source" field shows the original one (for "Event" in "Match type");
  - "Video stream"—the current name of the source that recorded the event, with a link to the real-time image of the stream (for "Event" in "Match type");
  - "Handling policy"—the name of the handler according to which the reference photo image of the body was processed (for "Event" in "Match type").
- "Event details" shows the available event data:
  - "Date created"—the date and time of event registration;
  - "Source"—the name of the source that recorded the event at the time the event was created. Users can change the source name; in that case the "Video stream" field shows the new name, and the "Source" field shows the original one;
  - "Video stream"—the current name of the source that recorded the event, with a link to the real-time image of the stream;
  - "Handling policy"—the name of the handler according to which the reference photo images of the face/body were processed;
  - "Metadata" [^1]—a button for uploading arbitrary user data in JSON format; the field is shown if such data was added to the event (for "Event" in "Match type").
- Face attributes, if found:
  - "Gender"—gender based on the face image;
  - "Age category"—the age of the detected person.
- Body attributes, if found:
  - "Upper body colors"—the color of the clothes on the upper part of the body;
  - "Lower body colors"—the color of the clothes on the lower part of the body;
  - "Headwear"—the presence or absence of a headdress, if it is determined;
  - "Backpack"—the presence or absence of a backpack, if it is determined.
- Map with event geotag.
[^1]: All detailed capabilities and limitations of the "Metadata" field are specified in the "Administrator Manual" of LUNA PLATFORM 5 in the paragraph "Events meta-information".
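As an illustration of the similarity color coding described above, the mapping from a similarity value to a display color can be sketched in Python. The threshold names follow the description; the dictionary layout and the numeric values used here are assumptions for illustration only, not the real config.json schema (see LUNA POINT. Administration Manual for the actual format).

```python
# Sketch of the default similarity color coding. The threshold values
# below are illustrative assumptions, not the product defaults.
DEFAULT_THRESHOLDS = {"low": 0.50, "medium": 0.75}

def similarity_color(similarity: float, thresholds: dict = DEFAULT_THRESHOLDS) -> str:
    """Return the display color for a similarity value in the range 0..1."""
    if similarity < thresholds["low"]:
        return "red"      # below "low"
    if similarity < thresholds["medium"]:
        return "yellow"   # between "low" and "medium"
    return "green"        # above "medium"
```

For example, with the assumed thresholds a 60% match would be shown in yellow, while a 90% match would be shown in green.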
Click on the arrow on the event photo to open the event card (see the "Event details" section).
Click the arrow on the reference face image from the top match (the column is visible if the "Display top match" checkbox is ticked) to open a face card (see the "Face details" section).
Archived events filtering#
The Service allows you to filter archived events (1 in Figure 15) to find and display necessary events.
With filters, the user can quickly find an event among the most recent ones, as well as set a limit for displaying new events on the page. A short description of the elements and parameters of the filter block in the "Events archive" section is presented in the table (Table 8).
Table 8. Filters available to the user to search for archived events:
| Parameter | Description | Default value |
|---|---|---|
| General stream parameters | ||
| Account ID | The parameter is used to bind the received data to a specific user. Required field | Account ID in LP5 |
| Stream name | The stream name displayed in the Service. Used to identify the source of sent frames. | |
| Description | User information about the video stream | |
| Group | The name of the group to which the stream is attached | |
| Status | The current state of the stream. Possible statuses: Pause; Pending | Pending |
| Stream data | ||
| Type | Video stream transmission type: UDP; TCP; Videofile; Images. The TCP protocol implements an error control mechanism that minimizes information loss and missing key frames at the cost of increased network latency. The UDP protocol does not implement an error control mechanism, so the stream is not protected from damage; its use is recommended only if a high-quality network infrastructure is available. For a large number of streams (10 or more), the UDP protocol is recommended, since problems with reading streams may occur over TCP | UDP |
| Full path to the source | The path to the video stream source. Required field. For example, for the TCP/UDP type: rtsp://some_stream_address. USB device number for the TCP/UDP type: /dev/video0. To use a USB device, you must specify the --device flag with the address of the USB device when starting the FaceStream Docker container (see the “Launching keys” section in the FaceStream Installation manual). Full path to a video file for the Videofile type: https://127.0.0.1:0000/super_server/. Full path to the directory with images for the Images type: /example1/path/to/images/. To use video files and images, you must first transfer them to the Docker container | |
| ROI coordinates | A limited frame area where a face or body is detected and tracked (for example, in a dense flow of people). Specify the ROI value in one of two formats: px or %. The first two values specify the coordinates of the top left point of the area; the next two specify the width and height of the area of interest in px, or the width and height relative to the current frame size in %. For example: 0,0,1920,1080 px or 0,0,100,100%. The parameter can also be set visually on the preview image when editing the event source: click the gear button, then grab the border of the detection area in the opened window and move it. The width, height and coordinates of the detection area will take on new values. To detect over the entire frame, click the "Full frame" button. Save your changes | |
| DROI coordinates | A limited area within the ROI zone. Face detection is performed in the ROI region, but the best shot is selected only in the DROI region; the face detection must lie completely within the DROI for the frame to be considered the best one. Specify the DROI value in one of two formats: px or %. The first two values specify the coordinates of the top left point of the area; the next two specify the width and height of the area of interest in px, or the width and height relative to the current frame size in %. For example: 0,0,1920,1080 px or 0,0,100,100%. DROI is recommended when working with access control systems. This parameter is used only for working with faces. The parameter can also be set visually on the preview image when editing the event source: click the gear button, then grab the border of the best shot selection area in the opened window and move it. The width, height and coordinates of the area will take on new values. To select the best shot over the entire frame, click the "Full frame" button. Save your changes | |
| Rotation angle of the image from the source | Used when the incoming video stream is rotated (for example, if the event source is installed on the ceiling) | 0 |
| Frame width | The parameter is used only for the TCP and UDP types and is designed for protocols that provide several channels with different bit rates and resolutions (e.g., HLS). If the stream has several such channels, this parameter selects the channel whose frame width is closest to the specified value | 800 |
| Endless | Controls how the stream is restarted after a network error is received. The parameter is available only for the TCP and UDP types. With the Enabled value, stream processing continues after an error and a successful reconnection; if all reconnection attempts fail, the stream takes the "failure" status. With the Disabled value, stream processing does not continue, and the stream takes the "done" status. When broadcasting a video file, the Disabled value is assumed; this avoids re-processing an already processed fragment of the video file after an error. If the value is Enabled when broadcasting a video file, the file is processed from the beginning again after processing completes | Enabled |
| Stream handler parameters | This group of parameters defines the parameters of the policy (handler) created in LP5, which will be used to process streams. Different handlers should be used for faces and bodies. The handler must be created in LP5 beforehand | |
| Handler URL | The full network path to the deployed LP5 API service, including the LUNA Handlers and LUNA Events services required to generate an event by handler: http:// Where Required field | |
| API version | API version for event generation in LP5. API version 6 is currently supported. Required field | |
| Handler ID for best shots (static) | The parameter allows using an external static handler_id of the LP5 policy for processing biometric samples of faces or bodies according to the specified rules. When using this policy, LP5 generates an event that contains all the information received from FaceStream and processes it according to the processing rules. For example: aaba1111-2111-4111-a7a7-5caf86621b5a. Required field | |
| URL to save original frames | Specifies the URL for saving original frames of faces or bodies in LP5. The URL can be either the address of the LUNA Image Store service container or the address of the /images resource of the LUNA API service. When specifying the /images resource, the original frame is saved under the image_id identifier. To send a frame, the send_source_frame parameter must be enabled. Example address of the LUNA Image Store service container: http://127.0.0.1:5020/1/buckets/, where 127.0.0.1 is the IP address where the LUNA Image Store service is deployed, 5020 is the default port of the LUNA Image Store service, and 1 is the API version of the LUNA Image Store service. Example address of the /images resource of the LUNA API service: http://127.0.0.1:5000/6/images, where 127.0.0.1 is the IP address where the LUNA API service is deployed, 6 is the API version of the LUNA API service, and 5000 is the default API service port | |
| Authorization (Token) | Specifies either a token or an LP5 account ID for making requests to the LUNA API service. If the authorization field is not filled in, the LP5 account ID set when creating the stream is used | |
| Geoposition | This group of parameters includes information about the location of the video stream source | |
| City, Area, District, Street, House number, Longitude, Latitude | Event source geographical location | |
| Autorestart | This group of parameters allows configuring the automatic restart of the stream | |
| Attempt count | Number of attempts to automatically restart the stream | 10 |
| Autorestart delay (in seconds) | Stream auto restart delay | 60 |
| Sending parameters | This group of parameters defines the period during which frames are analyzed to select the best shot, as well as all parameters associated with compiling a collection of the best shots | |
| Frame analysis period after which the best shot will be sent | The period starts from the moment a person appears in the frame (the first detection). Decreasing this parameter allows the person to be determined more quickly, but with a greater error. Possible values: number of frames; number of seconds; -1 — frames are analyzed until the end of the track, and at the end of the track (when the object leaves the frame) the best shot is sent to LP5 | -1 |
| Wait duration between track analysis periods | Specifies the timeout between two consecutive tracks. Possible values: number of frames; number of seconds; 0 — there is no timeout; -1 — the timeout lasts indefinitely | 0 |
| Track analysis and waiting period duration measure | Specifies the unit of measurement for the frame analysis period and the timeout period: Seconds; Frames. The choice depends on the business task | Seconds |
| Number of frames that the user sets to receive from the track or certain periods of this track | Creates a collection of the best shots of the track, or of the time interval of the track specified in the "Frame analysis period after which the best shot will be sent" parameter. This collection is sent to LP5. Increasing the value increases the probability of correct recognition of the object, but increases the network load. Possible values: 1 and more | 1 |
| Send only full set | Allows sending data (best shots and detections) only if the user has the required number of best shots ("Number of frames that the user sets to receive from the track or certain periods of this track") and the required track length ("Minimum detection size for Primary Track mode") | Enabled |
| Delete bestshot and detection data | Allows deleting the best shots and detections after the data is sent. If disabled, the data remains in memory | Disabled |
| Use Primary Track | This group of parameters is designed to work with access control systems (ACS, turnstiles at entrances) to simplify access control based on face recognition technology at the entrance to a protected area. This group of parameters is used only for working with faces and is not used for the Images type | |
| Use Primary Track | If this parameter is set to Enabled, Primary Track mode is turned on. Of all the detections in the frame, the detection with the maximum size is selected and its track becomes the main one. Further analysis is performed based on this track, and the best shot from it is sent to LP5. When the parameter is used at a checkpoint, the best shots of only the person closest to the turnstile are sent (the condition of the largest detection is met) | Disabled |
| Minimum detection size for Primary Track mode | Sets the minimum detection size (vertically, in pixels) at which the analysis of stream frames and the determination of the best shot begin | 70 |
| Size of detection for the main track | Sets the detection size in pixels for the Primary Track. When the detection size reaches the specified value, the track immediately sends the best shot to the server | 140 |
| Healthcheck parameters | This parameter group is used only when working with streams (TCP, UDP) and video files. In this group, the user can set the parameters for reconnecting to the stream in case of stream playback errors | |
| Maximum number of stream errors to reconnect to the stream | The maximum number of errors during stream playback. The parameter works in conjunction with the "Error count period duration (in seconds)" and "Time between reconnection attempts (in seconds)" parameters. After the first error is received, the timeout specified in "Time between reconnection attempts (in seconds)" elapses, and then the connection to the stream is retried. If, during the time specified in "Error count period duration (in seconds)", the number of errors is greater than or equal to the number specified in this parameter, processing of the stream is terminated and its status changes to "failure". Errors can be caused by problems with the network or video availability | 10 |
| Error count period duration (in seconds) | Parameter-criterion of the time to reconnect to the video stream. If the maximum number of errors occurs within the specified time, an attempt is made to reconnect to the video stream | 3600 |
| Time between reconnection attempts (in seconds) | After receiving the first error, the timeout specified in the parameter is performed, then the connection to the stream is retried | 5 |
| Filtering parameters | The parameter group describes objects for image filtering and sending the resulting best shots | |
| Threshold value to filter detections | Also called Approximate Garbage Score (AGS) for faces and Detector score for bodies. The threshold for filtering face or body detections sent to the server. All detections with a score above the parameter value can be sent to the server as an HTTP request; otherwise, the detections are not considered acceptable for further work. The recommended threshold value was identified through research and analysis of detections on various images of faces and bodies | 0.5187 |
| Head rotation angle threshold (to the left or right, yaw) | The maximum angle of head rotation to the left or right relative to the stream source (in degrees). If the head rotation angle in the frame is greater than the specified value, the frame is considered unacceptable for further processing. This parameter is used only for working with faces | 40 |
| Head tilt angle threshold (up or down, pitch) | The maximum angle of head tilt up or down relative to the stream source (in degrees). If the head tilt angle in the frame is greater than the specified value, the frame is considered unacceptable for further processing. This parameter is used only for working with faces | 40 |
| Head tilt angle threshold (to the left or right, roll) | The maximum angle of head tilt to the left or right relative to the stream source (in degrees). If the head tilt angle in the frame is greater than the specified value, the frame is considered unacceptable for further processing. This parameter is used only for working with faces | 30 |
| Number of frames used to filter photo images by the angle of rotation of the head | Filtering cuts off images with faces strongly turned away from the stream source. Specifies the number of frames for analyzing head rotation angles on each of these frames. If the angle differs drastically from the group average, the frame is not considered the best shot. This parameter is used only for working with faces. With a value of 1, the parameter is disabled. Recommended value: 7 | 1 |
| Number of frames the system must collect to analyze head yaw angle | Indicates that the system must collect the number of frames specified in the "Number of frames used to filter photo images by the angle of rotation of the head" parameter before analyzing the head rotation angle. If the parameter is disabled, the Service analyzes incoming frames sequentially: first two frames are analyzed, then three, and so on, up to the maximum set in "Number of frames used to filter photo images by the angle of rotation of the head". This parameter is used only for working with faces | Disabled |
| Mouth overlap threshold (minimum mouth visibility) | If the received value exceeds the specified threshold, the image is considered unacceptable for further processing. For example, with a parameter value of 0.5, up to 50% of the mouth area is allowed to be covered. This parameter is used only for working with faces | 0 |
| Minimum body detection size | Specifies the body detection size below which a detection is not sent for processing. If the value is 0, body detections are not filtered by size | 0 |
| Liveness parameters | Liveness checks whether there is a live person in the frame and prevents a printed photo or a photo from a phone from being used to pass the check. This group of parameters is used only for working with faces and is not used for the Images type | |
| Check RGB ACS Liveness | Enables the mode of checking for the presence of a live person in the frame, based on working with the background. The check execution speed depends on the frame size of the video stream. If the processing speed drops when this parameter is enabled, reduce the video resolution in the event source settings | Disabled |
| Check FlyingFaces Liveness | Enables the mode of checking for the presence of a live person in the frame, based on working with the environment of the face | Disabled |
| Track frames to run liveness check on | Specifies for which frames of the track the Liveness check is performed. Frame selection options: First N shots; Last N shots before best shot sending; All shots of track. The value "N" is specified in the "Number of frames in the track for Liveness check when liveness-mode is enabled" parameter | First N shots |
| Number of frames in the track for Liveness check when liveness-mode is enabled | The number of frames in a track for checking Liveness when using the parameter “Track frames to run liveness check on” | 0 |
| Threshold value at which the system will consider that there is a real person in the frame | The threshold value at which the Service considers that there is a live person in the frame. The Service's verdict on the presence of a real person in the frame follows only if Liveness returns a value higher than the specified threshold | 0 |
| Liveness weights (RGB ACS, FlyingFaces) | The coefficient of influence of each type of Liveness check on the final estimate of the presence of a live person in the frame. Three values are indicated, referring to the different types of Liveness. The values are specified in fractions of one. The ratio is scaled based on the given numbers, regardless of whether they sum to one and which Liveness methods are enabled | 0, 0, 0 |
| Number of background frames that are used for the corresponding checks | Sets the number of background frames in the track for the Liveness check. Recommended value: 300. It is not recommended to change this parameter | 0 |
| Additional parameters | ||
| Frame processing | Used only for the TCP, UDP and Videofile types. Possible values: Auto; Full frame; Scale frame. The parameter is set for a specific instance of FaceStream. With the Full frame value, the frame is converted to an RGB image of the required size immediately after decoding; this gives better image quality but reduces the frame rate. With the Scale frame value, the image is scaled based on the TrackEngine settings. With the default Auto value, one of the two modes is selected automatically | Auto |
| Number of threads for video decoding | Sets the number of threads for video decoding with FFmpeg. Increasing the number of threads increases the number of processor cores involved in decoding; this is recommended when processing high-definition video (4K and above) | 0 |
| Maximum FPS for video processing | Used only for the Videofile type. The video is processed at the specified FPS; it cannot be processed at an FPS higher than specified in this parameter. If the video has a high FPS value and FaceStream cannot operate at the specified FPS, frames are skipped. The video file thus imitates a stream from a real video camera, which is useful for performance tuning: the video is played at the selected speed, convenient for load testing and further analysis. The parameter is not used if the value is 0 | 0 |
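The ROI and DROI parameters in Table 8 accept the same "x,y,w,h" value in either px or % format. A minimal Python sketch of interpreting such a value is shown below; the function name and the parsing details are illustrative assumptions, not FaceStream's actual parser.

```python
def roi_to_pixels(roi: str, frame_w: int, frame_h: int) -> tuple:
    """Convert an ROI/DROI string ("x,y,w,h px" or "x,y,w,h%") to pixel coordinates.

    Illustrative sketch only: the exact string format accepted by
    FaceStream may differ.
    """
    s = roi.strip()
    if s.endswith("px"):
        unit, s = "px", s[:-2]
    elif s.endswith("%"):
        unit, s = "%", s[:-1]
    else:
        raise ValueError("ROI value must end with 'px' or '%'")
    x, y, w, h = (float(v) for v in s.split(","))
    if unit == "%":
        # Width/height are relative to the current frame size.
        x, w = x / 100 * frame_w, w / 100 * frame_w
        y, h = y / 100 * frame_h, h / 100 * frame_h
    return int(x), int(y), int(w), int(h)
```

For a 1920×1080 frame, both "0,0,1920,1080 px" and "0,0,100,100%" describe the full frame, which matches the examples given in the table.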
The user selects one filter or a combination of filters and clicks the "Filter" button to apply them. To reset the applied filters, click the "Reset" button.
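The interplay of the Healthcheck parameters in Table 8 (maximum error count, error count period, reconnection delay) can be sketched as follows. This is a sketch under the assumption that errors are counted in a sliding time window; the class and method names are illustrative, not part of FaceStream.

```python
import time

class HealthcheckPolicy:
    """Sketch of the Healthcheck logic described in Table 8.

    Assumes errors are counted within a sliding window; the real
    FaceStream implementation may count them differently.
    """
    def __init__(self, max_errors=10, period_s=3600, retry_delay_s=5):
        self.max_errors = max_errors        # "Maximum number of stream errors..."
        self.period_s = period_s            # "Error count period duration (in seconds)"
        self.retry_delay_s = retry_delay_s  # "Time between reconnection attempts (in seconds)"
        self._errors = []                   # timestamps of recent errors

    def on_error(self, now=None) -> str:
        """Register a playback error; return 'retry' or 'failure'."""
        now = time.monotonic() if now is None else now
        self._errors.append(now)
        # Keep only the errors that fall inside the counting period.
        self._errors = [t for t in self._errors if now - t < self.period_s]
        if len(self._errors) >= self.max_errors:
            return "failure"  # stream status changes to "failure"
        return "retry"        # wait retry_delay_s, then reconnect
```

With the defaults from the table, the stream would be marked "failure" only after 10 errors within one hour, retrying 5 seconds after each error until then.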