
Overview#

FaceStream performs several functions:

  • Stream reading

Web-cameras, USB and IP-cameras (via RTSP protocol), video files and images can act as data sources.

  • Stream processing

It searches for faces and bodies in the stream and tracks them until they leave the frame or are blocked.

  • Sending face images as HTTP requests to an external service

VisionLabs Software LUNA PLATFORM 5 acts as an external service.

The FaceStream workflow depends on the settings in four configurations.

  • FaceStream configuration

    This configuration contains general FaceStream settings, such as logging, sending images from FaceStream to external services, debugging, etc.

  • Streams management configuration

    This configuration contains settings concerning stream sources, such as the source type, source address, filtering settings, etc. The settings are set by sending requests with a JSON body to the LUNA Streams service. FaceStream takes the settings from LUNA Streams for further processing. A detailed description of how FaceStream works with LUNA Streams is given in the "Interaction of FaceStream with LUNA Streams" section.

  • Trackengine configuration

    This configuration contains settings regarding face or body detection and tracking.

  • Faceengine configuration

    This configuration contains the face recognition settings. It is recommended to change the parameters in this configuration only in consultation with VisionLabs employees.

The following features are also available when working with FaceStream:

  • Using the LUNA Configurator service, which stores FaceStream startup parameters and enables you to continue processing the current video even after restarting FaceStream in case of an emergency shutdown
  • Dynamic creation, editing, and deletion of stream sources via API requests
  • Real-time preview of video streams in a browser for streams with the appropriate parameters specified
  • Stream metrics (number of streams, number of errors, number of faces, number of skipped frames, FPS)

FaceStream can be configured to work with either faces or bodies. Simultaneous processing of faces and bodies is not possible.

FaceStream workflow with faces and bodies#

FaceStream can handle both faces and bodies. Each object has its own scheme of operation and its own set of parameters described below.

The required minimum parameters for working with both objects can be found in the section "Settings for sending images to LUNA PLATFORM".

FaceStream workflow with faces#

FaceStream application workflow with faces is shown in the image below:

FaceStream workflow with faces
  1. FaceStream receives video from a source (IP or USB camera, web-camera, video file) or images. FaceStream can work with several sources of video streams (the number is set by the license). Sources are set by sending requests with the necessary parameters to the LUNA Streams service;

  2. FaceStream decodes video frames;

  3. The ROI area is cut out from the frame if the "roi" parameter is specified;

  4. The received image is scaled according to the "scale-result-size" parameter if "detector-scaling" is enabled in the Trackengine configuration;

  5. Faces are detected in the frame;

  6. Face redetection is performed instead of detection if the "detector-step" parameter (Trackengine configuration) is set;

  7. A track is created for each new face in the stream; then it is reinforced with new detections of this face from the subsequent frames.

The track is interrupted if the face disappears from the frame. You can set the "skip-frames" parameter (trackengine configuration) so the track will not be interrupted immediately, and the system will wait for the face to appear in the area for several frames;

  8. FaceStream filters out low-quality frames and selects bestshots. There are several algorithms for choosing the best detection(s) in the track. See the "Filtering" section;

  9. If the frame is a bestshot, it is added to the collection of bestshots.

Depending on the "number_of_bestshots_to_send" setting, one or several best detections are collected from each track;

  10. Optional. If the "warp" type is set in the "portrait_type" parameter, the bestshots are normalized to the LUNA PLATFORM standard, and normalized images are created. A normalized image is better suited for processing in LUNA PLATFORM;

  11. The bestshots are sent to an external service via HTTP request. The image may be sent as is or transformed into a normalized image.

    The frequency of image sending is specified in the "sending" section (streams management configuration).

    The sending parameters and the external service address are specified in the "data" section (streams management configuration) and the "sending" section (FaceStream configuration). A sketch combining some of these parameters is given below.
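The sketch below combines several of the parameters mentioned in the steps above into a single stream description. Only the section and parameter names ("data", "roi", "sending", "portrait_type", "number_of_bestshots_to_send") come from this document; the nesting and values are illustrative assumptions, and the actual schema is defined in "StreamsReferenceManual.html".

```python
# Hedged sketch of stream parameters relevant to the face workflow above.
# The nesting and values are assumptions; see "StreamsReferenceManual.html"
# for the actual schema.
face_stream_sketch = {
    "data": {
        "roi": [0, 0, 1920, 1080],         # assumed [x, y, width, height] layout
    },
    "sending": {
        "portrait_type": "warp",           # send normalized images (step 10)
        "number_of_bestshots_to_send": 2,  # collect two bestshots per track (step 9)
    },
}
```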

FaceStream workflow with bodies#

FaceStream application workflow with bodies is shown in the image below:

FaceStream workflow with bodies
  1. FaceStream receives video from a source (IP or USB camera, web-camera, video file) or images. FaceStream can work with several sources of video streams (the number is set by the license). Sources are set by sending requests with the necessary parameters to the LUNA Streams service;

  2. FaceStream decodes video frames;

  3. The received image is scaled according to the "scale-result-size" parameter if "detector-scaling" is enabled in the Trackengine configuration;

  4. Bodies are detected in the frame;

  5. Body redetection is performed instead of detection if the "detector-step" parameter (Trackengine configuration) is set;

  6. A track is created for each new body in the stream; then it is reinforced with new detections of this body from the subsequent frames.

The track is interrupted if the body disappears from the frame. You can set the "skip-frames" parameter (trackengine configuration) so the track will not be interrupted immediately, and the system will wait for the body to appear in the area for several frames;

  7. FaceStream filters out low-quality frames and selects the bestshots. See "Min-score";

  8. If the frame is a bestshot, it is added to the collection of bestshots.

Depending on the "number_of_bestshots_to_send" setting, one or several best detections are collected from each track;

  9. The bestshots are normalized to the LUNA PLATFORM standard, and normalized images are created. A normalized image is better suited for processing in LUNA PLATFORM;

  10. The bestshots are sent to an external service via HTTP request. Events can be generated in the external service according to the specified handler (see the description of events in the LUNA PLATFORM administrator manual). The bestshots are transformed into warps. Along with the bestshots, the coordinates of the human body can be sent if the "send_detection_path" parameter is enabled (see the sketch below).

    The frequency of image sending is specified in the "sending" section (streams management configuration).

    The sending parameters and the external service address are specified in the "data" section (streams management configuration) and the "sending" section (FaceStream configuration).
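A similar hedged sketch for the body workflow is shown below. Only "number_of_bestshots_to_send" and "send_detection_path" are parameter names taken from this section; the nesting and value types are assumptions.

```python
# Hedged sketch of body-specific sending options; nesting and value types are assumed.
body_sending_sketch = {
    "sending": {
        "number_of_bestshots_to_send": 1,
        "send_detection_path": 1,  # also send the coordinates of the human body
    }
}
```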

Interaction of FaceStream with LUNA Streams#

To work with FaceStream, you should first launch an additional service, LUNA Streams. The service stores the stream management settings and passes them to FaceStream for further processing. The default service port is 5160.

The LUNA Streams service requires the LUNA PLATFORM 5 services LUNA Licenses and LUNA Configurator, as well as PostgreSQL or Oracle and InfluxDB.

The InfluxDB database is needed for monitoring the status of LUNA PLATFORM services. If necessary, monitoring can be disabled.

The FaceStream documentation does not describe the use of an Oracle database.

If necessary, you can launch LUNA Streams without LUNA Configurator. This method is not described in the documentation.

FaceStream is licensed using the LUNA PLATFORM 5 key, which contains information about the maximum number of streams that LUNA Streams can process. The license is regulated by the LUNA Licenses service.

See the FaceStream installation manual for detailed information on activating the LUNA Streams license.

The PostgreSQL/Oracle database stores all the data of LUNA Streams.

The general process of interaction between FaceStream and LUNA Streams is presented below:

Interaction between FaceStream and LUNA Streams

After an HTTP request with the specified parameters is sent to the LUNA Streams service (1), the LUNA PLATFORM key is checked via the LUNA Licenses service (2) for the parameter that regulates the number of streams available to LUNA Streams. The number of streams already being processed at the time of the request is also checked using the FaceStream report (7) (see below).

If the key parameter is missing, a license error will be issued.

If the maximum number of available streams is not yet being processed at the time of stream creation, the parameters are added to the LUNA Streams database (3) under the unique identifier "stream_id". The stream with its parameters enters the queue (4), where it remains in the "pending" status until a FaceStream worker picks it up from the queue for subsequent processing.

If the maximum number of streams is already being processed at the time of stream creation, LUNA Streams will not be able to add the parameters to the database, and a license error will be issued.

If FaceStream is not running at the time of stream creation, only as many streams with the "pending" status as the license allows can be created. After FaceStream is launched, the created streams are accepted for processing in queue order.

Streams can be created with the status "pause". In this case, they will be added to the database and will wait for a manual status update to "pending".

The queue is implemented in the LUNA Streams service itself and is not external.

Next, FaceStream workers take the parameters of the stream(s) with the "pending" status from the queue (5) and begin processing. The status of such streams is changed to "in_progress", and they are removed from the queue.

During processing, data is regularly sent to the main services of LUNA PLATFORM 5 for further processing of frames according to the specified handler_id and for creating events (6), and a report on stream processing is regularly sent to LUNA Streams (7).

The time of sending reports is fixed and cannot be changed.

If the report says that some stream has been processed, the FaceStream handler takes the parameters of the next stream with the "pending" status from the LUNA Streams queue (5), and the service changes the status of that stream from "pending" to "in_progress", removing it from the queue. If, for unknown reasons, the report was not transferred, the streams are re-queued.

For more detailed description of LUNA Streams stream processing, see "Stream processing pipeline".

Settings for stream management are set using a POST request to the "/streams" resource.
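A minimal sketch of such a request is shown below, assuming the default LUNA Streams port 5160 mentioned above. The request body fields are illustrative assumptions (an API version prefix may also be required); the actual schema is given in "StreamsReferenceManual.html".

```python
# Hedged sketch: creating a stream in LUNA Streams.
# The "/streams" resource and the default port 5160 come from this document;
# the body fields below are illustrative assumptions.
import requests

LUNA_STREAMS_URL = "http://127.0.0.1:5160"  # assumed host, default port

body = {
    "name": "entrance-camera",  # hypothetical field
    "data": {
        "reference": "rtsp://camera.local/stream",  # hypothetical source address field
    },
}

response = requests.post(f"{LUNA_STREAMS_URL}/streams", json=body, timeout=10)
print(response.status_code, response.json())  # the answer is expected to contain a "stream_id"
```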

In addition, the following actions are available for a stream:

  • getting existing streams by their "stream_id" with a description of the data of each stream ("get streams" request)
  • getting all information about a stream by its "stream_id", incl. sizes and frame rate, bitrate, group of frames (gop), creation time, stream processing start time, last processing error, etc. ("get stream" request)
  • deleting existing streams by their "stream_id" ("delete streams" request)
  • deleting stream by its "stream_id" ("remove stream" request)
  • getting the number of streams created ("count streams" request)
  • updating the "description" and "status" fields of a stream by its "stream_id" ("update stream" request)
  • replacing all stream data with new data by the "stream_id" ("put stream" request)

A detailed description of requests and example requests can be found in the Open API document "StreamsReferenceManual.html".
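As an illustration, a hedged sketch of the "update stream" request from the list above is given below. The HTTP method and path layout are assumptions; the "status" values come from the "Stream distribution in LUNA Streams" section.

```python
# Hedged sketch: pausing and resuming a stream via the "update stream" request.
# The HTTP method and path layout are assumptions.
import requests

LUNA_STREAMS_URL = "http://127.0.0.1:5160"
stream_id = "00000000-0000-0000-0000-000000000000"  # placeholder stream_id

# Pause processing (not applicable to video files).
requests.patch(f"{LUNA_STREAMS_URL}/streams/{stream_id}",
               json={"status": "pause"}, timeout=10)

# Return the stream to the processing queue.
requests.patch(f"{LUNA_STREAMS_URL}/streams/{stream_id}",
               json={"status": "pending"}, timeout=10)
```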

Stream distribution in LUNA Streams#

As mentioned earlier, multiple streams can be processed at the same time.

Each stream has a current status:

  • pending - stream is waiting for handler
  • in_progress - stream processing is in progress
  • done - stream processing is completed (relevant for video files)
  • pause - stream processing is paused by user (not applicable for video files)
  • restart - stream processing is restarted by server
  • cancel - stream processing is cancelled by user (relevant for video files, but it can also be used for other sources)
  • failure - stream processing is failed by handler
  • handler_lost - stream processing handler is lost, needs to be passed to another handler (not applicable for video files)
  • not_found - stream was removed during the processing
  • deleted - stream was removed intentionally

Statuses "restart", "handler_lost" are transient. With these statuses, it is impossible to receive a stream, however, the transition through these statuses is logged as usual.

The "not_found" status is internal and will be sent back for feedback if the stream was removed during processing. With this status, it is impossible to receive a stream.

The "deleted" status is virtual. Stream with this status cannot exist, but this status can be seen in the stream logs.

Statuses transition table#

The following table shows statuses that may be received after each listed status.

The "+" symbol means that the status listed in the first row may occur after the status in the first column. An empty field means that there are no cases when the status may occur.

The "-" symbol means that there is no stream in the system (it was not created or it was already deleted).

- pending in_progress done restart pause cancel failure handler_lost
- + +
pending + + + + +
in_progress + + + + +* + + +
done + + + +
restart + +
pause + + + +
cancel + + + +
failure + + + +
handler_lost +

* not supported for video files

Stream processing pipeline#

By default, a new stream is created with the "pending" status and immediately enters the processing queue. Stream processing can be postponed by specifying the "pause" status at creation.

As soon as a free stream handler requests streams from the queue, the stream is accepted for processing and assigned the "in_progress" status.

After the stream has been processed by the handler, it is assigned the "done" status in case of success, or "failure" if any errors have occurred. However, the stream status may be downgraded from "in_progress" for the following reasons:

  • no feedback from the stream handler: the stream is downgraded by the server, and a record with the "handler_lost" status is added to the stream logs

  • the stream is replaced by the user: a record with the "restart" status is added to the stream logs

During the processing routine, any change in the stream status is logged. Thus, you can restore the stream processing pipeline from the logs.

For streams with a "failure" status that allows automatic restart, several restart attempts will be made:

  • the stream status will be automatically changed to "restart" and then to "pending"

  • "current_attempt" parameter will be increased by 1

  • "last_attempt_time" parameter will be actualized

The possibility of autorestart, the maximum number of restart attempts, and the delay between attempts are specified by the user for each stream in the "autorestart" section.

A stream is considered successfully restarted if, after the specified period of time (delay), its status is other than "failure".
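A hedged sketch of such an "autorestart" section is shown below. Only the section name and the notions of a maximum number of attempts and a delay between them come from this document; the field names and values are illustrative assumptions.

```python
# Hedged sketch of an "autorestart" section; field names and values are assumed.
autorestart_sketch = {
    "autorestart": {
        "restart": 1,         # assumed flag enabling automatic restart
        "attempt_count": 10,  # assumed maximum number of restart attempts
        "delay": 60,          # assumed delay between attempts
    }
}
```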

The number of simultaneously processed streams (the "pending" and "in_progress" statuses) is regulated by the license, but the LUNA Streams database can store an unlimited number of streams with other statuses, for example, "pause".

LUNA Streams database description#

The general schema of the LUNA Streams database is shown below.

LUNA Streams database

See "Streams management configuration" for a description of the database data.

General recommendations for FaceStream configuration#

This section provides general guidelines for setting up FaceStream.

The name of the configuration in which each parameter is set is mentioned alongside the parameter in this section.

Before starting configuration#

You should configure FaceStream separately for each camera used. FaceStream should work with the stream of a camera located in its standard operating conditions. These requirements are due to the following reasons:

  • Frames from different cameras may differ by:
    • noise level,
    • frame size,
    • light,
    • blurring,
    • etc.;
  • FaceStream settings depend on the lighting conditions and therefore will be different for cameras placed in a dark room and in a light one;
  • FaceStream performance depends on the number of faces or bodies in the frame. Therefore, the settings for a camera that detects one face every 10 seconds will differ from the settings for a camera detecting 10 faces per second;
  • The number of detected faces and bodies and the quality of these detections depend on the correct location of the camera. When the camera is placed at a wrong angle, faces are not detected in frames. Moreover, head angles can exceed the acceptable degree, so a frame with a detected face cannot be used for further processing;
  • Faces and bodies in the camera field of view can be partially or completely blocked by other objects. There can be background objects that prevent the proper functioning of recognition algorithms.

The camera can be positioned so that the lighting or shooting conditions change throughout the day. It is recommended to test FaceStream under different conditions and choose the mode that provides reliable FaceStream operation under any of them.

You can specify the FPS for video processing using the "real_time_mode_fps" parameter.

The video cameras tested with FaceStream are listed in section "Appendix A: Cameras Compatibility".

FaceStream performance configuration#

The parameters described below have the greatest impact on FaceStream performance.

Reduction of face search area#

Not all the areas of the frame contain faces. Besides, not all the faces in the frame have the required size and quality. For example, the sizes of faces in the background may be too small, and the faces near the edge of the frame may have unacceptable pitch, roll, or yaw angles.

The "roi" parameter (stream management configuration, section "data"), enables you to specify a rectangular area to search for faces.

Source frame with DROI area specified

The specified rectangular area is cut out from the frame and FaceStream performs further processing using this image.

Cropped image processed by FaceStream

The smaller the search area, the less resources are required for processing each frame.

Correct use of the "roi" parameter significantly improves FaceStream performance.

The parameter should be used only when working with faces.
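A minimal sketch of the "roi" setting in the "data" section is shown below. The coordinate layout (left offset, top offset, width, height in pixels) is an assumption; check "StreamsReferenceManual.html" for the actual format.

```python
# Hedged sketch: restricting the face search area with "roi" in the "data" section.
# The coordinate layout is an assumption.
stream_with_roi = {
    "data": {
        "roi": [640, 200, 800, 600],  # assumed: left offset, top offset, width, height
    }
}
```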

Frame scaling#

The "detector-scaling" option (Trackengine configuration) enables you to scale the frame before processing.

The appropriate frame size should be selected using the "scale-result-size" parameter (Trackengine configuration). This parameter sets the maximum size of the largest frame side after scaling. If the source frame had a size of 1920x1080 and the "scale-result-size" value is equal to 640, then FaceStream will process a frame of 640x360 size.

If the frame was cut out using the "roi" parameter, the scaling will be applied to this cropped frame. In this case, you should specify the "scale-result-size" parameter value according to the greater ROI side.

To select the optimal "scale-result-size" value, gradually scale the frame down and check whether faces or bodies are still detected in it. Set the minimum image size at which all objects in the area of interest are detected.

Further extending our example, the images below show a video frame without resizing (at the original 1920x1080 resolution) and after resizing to 960x640, with face detections visualized as bounding boxes.

Six faces can be detected when the source image resolution is 1920x1080.

Detections in image 1920X1080

Three faces are detected after the image is scaled to the 960x640 resolution. The faces in the background are smaller in size and are of poor quality.

Detections in image 960X640

The smaller the frame resolution, the less resources are consumed.

When working with bodies, this parameter works the same way.
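The helper below reproduces the scaling rule described above: the largest frame side is reduced to "scale-result-size" while the aspect ratio is preserved. The behavior for frames that are already smaller than the target size is an assumption.

```python
def scaled_size(width: int, height: int, scale_result_size: int) -> tuple:
    """Frame size after "detector-scaling", assuming the largest side is reduced
    to "scale-result-size" and smaller frames are left unchanged (assumption)."""
    factor = scale_result_size / max(width, height)
    if factor >= 1:
        return width, height
    return round(width * factor), round(height * factor)

print(scaled_size(1920, 1080, 640))  # (640, 360), as in the example above
```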

Defining area with movement#

|  | frg-subtractor | frg-regions-alignment | frg-regions-square-alignment |
|---|---|---|---|
| Recommended value when utilizing CPU | 1 | 0 | 0 |
| Recommended value when utilizing GPU | 1 | 360 | 0 |

When the "frg-subtractor" parameter (Trackengine configuration) is enabled, motion in the frame is considered. The following face and body detection will be performed in the area with motion, not in the entire frame.

The areas with motion are determined after the frame is scaled.

When the "frg-subtractor" is enabled, the performance of FaceStream is increased.

The "frg-regions-alignment" parameter (trackengine.conf) enables you to set the alignment for the area with motion.

When the "frg-regions-square-alignment" parameter (Trackengine configuration) is enabled, the width and height of the area with motion will always be equal.

Batch processing of frames#

The following parameters configure batch processing of frames. The parameters are set in the Trackengine configuration.

The "batched-processing" enables batch processing of frames.

When working with several video cameras, a frame is collected from each camera. Then the batch of frames is processed.

When the parameter is disabled, the frames are processed one by one.

When using batch processing mode, the delay before processing increases, but the processing itself is faster.

It is recommended to enable the parameter both when using the GPU and when using the CPU.

The "min-frames-batch-size" parameter sets the minimal number of frames collected from all the cameras before processing.

It is recommended to set the "min-frames-batch-size" parameter value equal to the number of streams when using the GPU.

It is recommended to set the "min-frames-batch-size" parameter value equal to "2" when using the CPU.

The "max-frames-batch-gather-timeout" parameter specifies the time between processing of the batches.

If a single frame is processed within the specified time and there is an additional time margin, FaceStream waits for additional frames to increase GPU utilization.

If the "max-frames-batch-gather-timeout" parameter is set to "20", this time is used to process the previous batch and collect a new one. After 20 seconds, the processing begins even if the number of frames equal to "min-frames-batch-size" was not collected. Processing of the next batch cannot begin before the processing of the previous one is finished.

If the parameter is set to "0", no timeout is used for collecting frames into the batch, and "min-frames-batch-size" is ignored.

It is recommended to set the "max-frames-batch-gather-timeout" parameter value equal to "0" both when using the GPU and when using the CPU.

Minimal face size#

You should configure the "minFaceSize" parameter in the Faceengine configuration file to specify the minimal face size for detection.

You should set the largest acceptable value: the larger the minimum face size, the fewer resources are required to perform detection.

Note that the face size will depend on the actual frame size set by the "scale-result-size" parameter (Trackengine configuration). A face with a size equal to 100 pixels on a 1280x760 frame will have a size equal to 50 pixels on a 640x480 frame.

General configuration information#

Working with track#

A new track is created for each detected face or body. Bestshots are defined and sent for each track.

In general, the track is interrupted when the face can no longer be found in the frame. If a track was interrupted and the same person appears in the frame, a new track is created.

There can be a situation when two faces or bodies overlap in the frame (one person is behind the other). In this case, the tracks for both persons are interrupted, and new tracks are created.

There can be a situation when a person turns away, or a face or body is temporarily blocked. In this case, you can specify the "skip-frames" parameter (Trackengine configuration) instead of interrupting the track immediately. The parameter sets the number of frames during which the system will wait for the face to reappear in the area where it disappeared.

The "detector-step" parameter in "trackengine.conf" enables you to specify the number of frames on which face redetection will be performed in the specified area before face detection is performed. Redetection requires fewer resources, but the face may be lost if you set a large number of frames for redetection.

Bestshot sending#

The "sending" parameters group (stream management configuration) enables you to set parameters for the bestshot sending. FaceStream sends the received bestshots to LUNA PLATFORM (see "Settings for sending images to LUNA PLATFORM").

You can send several bestshots for the same face or body to increase the recognition accuracy. In this case, you should set the "number_of_bestshots_to_send" parameter (streams management configuration).

LUNA PLATFORM enables you to aggregate the bestshots and create a single descriptor of a better quality using them.

If the required number of bestshots has not been collected by the time the specified period ends or the track is interrupted, the bestshots collected so far are sent.

The "time_period_of_searching" and "silent_period" parameters can be specified in seconds or in frames. Use the "type" parameter to choose the type.

The general options for configuring the "time_period_of_searching" and "silent_period" parameters of the "sending" group from streams management configuration are listed below.

  1. The bestshot is sent after the track is interrupted and the person left the video camera zone of view.

All the frames with the person's face or body are processed and the bestshot is selected.

time_period_of_searching = -1 silent_period = 0

  2. It is required to quickly receive the bestshot and then send bestshots with the specified frequency.

For example, it is required to send a bestshot soon after an intruder entered the shop. The intruder will be identified by the blacklist.

The mode is also used for the demonstration of FaceStream capabilities in real-time.

The bestshot will be sent after the track is interrupted even if the specified period has not elapsed.

time_period_of_searching = 3 silent_period = 0

  3. It is required to quickly send the bestshot and then send the bestshot only if the person is in the frame for a long time.

time_period_of_searching = 3 silent_period = 20

  4. It is required to quickly send the bestshot and never send the bestshot from this track again.

time_period_of_searching = 3 silent_period = -1
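The four scenarios above can be summarized as hedged "sending" sketches. Only "time_period_of_searching", "silent_period", and "type" are parameter names from this document; the surrounding structure and the "type" value shown are assumptions.

```python
# The four "sending" scenarios above as hedged sketches.
sending_scenarios = {
    "send only when the track ends":          {"time_period_of_searching": -1, "silent_period": 0},
    "send quickly, then at a set frequency":  {"time_period_of_searching": 3,  "silent_period": 0},
    "send quickly, then only for long stays": {"time_period_of_searching": 3,  "silent_period": 20},
    "send quickly, then never again":         {"time_period_of_searching": 3,  "silent_period": -1},
}

# "type" selects the units (seconds or frames) for both parameters; the value below is assumed.
sending_section = {"type": "sec", "time_period_of_searching": 3, "silent_period": 20}
```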

Frames filtration#

The filtration of face frames is performed by three main criteria, all of which are set in the streams management configuration.

The "yaw_number" and "yaw_collection_mode" parameters are additionally set for the yaw angle. The parameters reduce the possibility of the error occurrence when the "0" angle is returned instead of a large angle.

If a frame did not pass at least one of the specified filters, it cannot be selected as a bestshot.

If the "number_of_bestshots_to_send" parameter is set, the frame is added to the array of bestshots to send. If the required number of bestshots to send was already collected, the one with the lowest frame quality score is replaced with the new bestshot if its quality is higher.

The filtration of body frames is performed only by one criterion - "min_score".

Working with ACMS#

Work with an ACMS is performed only with faces.

Use the "primary-track-policy" settings when working with ACMS. The settings enables you to activate the mode for working with a single face, which has the largest size. It is considered, that the face of interest is close to the camera.

The track of the largest face in the frame becomes primary. Other faces in the frame are detected but they are not processed. Bestshots are not sent for these faces.

As soon as another face reaches a larger size than the face from the primary track, this face track becomes primary and the processing is performed for it.

The mode is enabled using the "use_primary_track_policy" parameter.

Bestshots are determined only after the vertical size of the face reaches the value specified in the "best_shot_min_size" parameter. Frames with smaller faces cannot be bestshots. When the vertical size of the face detection reaches the value set in the "best_shot_proper_size" parameter, the frame is sent as a bestshot at once.

The "best_shot_min_size" and "best_shot_proper_size" are set depending on the video camera used and its location.

The examples below show configuration of the "sending" group parameters from streams management configuration for working with ACMS.

  1. The turnstile will only open once. To re-open the turnstile you should interrupt the track (move away from the video camera zone of view).

time_period_of_searching = -1 silent_period = 0

  2. The turnstile will open at certain intervals (in this case, every three seconds) if a person stands directly in front of it.

time_period_of_searching = 3 silent_period = 0

If the "use_primary_track_policy" parameter is enabled, the bestshot is never sent when the track is interrupted.

Additional information#

Formats, video compression standards, and protocols#

FaceStream utilizes the FFMPEG library to convert videos and get a stream using various protocols. All the main formats, video compression standards, and protocols that were tested when working with FaceStream are listed in this section.

FFMPEG supports more formats and video compression standards. They are not listed in this section, because they are rarely used when working with FaceStream.

Video formats#

Video formats that are processed using FaceStream:

  • AVI,
  • MP4,
  • MOV,
  • MKV,
  • FLV.

Encodings#

Basic video compression standards that FaceStream works with:

  • MPEG4,
  • MS MPEG4,
  • MS MPEG4v2,
  • MJPEG,
  • H.264,
  • H.265.

Protocols#

Basic protocols used by FaceStream for data receiving:

  • HTTP,
  • RTP,
  • RTSP,
  • TCP,
  • HLS,
  • UDP.

Memory consumption when running FaceStream#

This section lists the reasons for increasing RAM consumption when running FaceStream.

  1. Each stream increases memory consumption. The amount of consumed memory depends on the FaceStream settings:

     • the number of FFmpeg threads in the "ffmpeg_threads_number" parameter (streams management configuration),

     • the image cache size in the "stream_images_buffer_max_size" parameter (FaceStream configuration),

     • the buffer sizes set in the "frames-buffer-size" parameter (Trackengine configuration).

  2. If the number of threads specified in the "ffmpeg_threads_number" parameter (streams management configuration) is greater than "1", memory consumption increases significantly. At the same time, the increase in consumption is extremely slow and can be noticed only after several hours of operation.

For RTSP streams, you can set the "ffmpeg_threads_number" parameter to "0" or "1" (stream management configuration). In this case, memory growth is not noticed.

  3. Memory consumption increases after FaceStream starts. Growth occurs within 1-2 hours and is related to cache filling (see point 1). If no new streams are created and point 2 does not apply, memory consumption stops growing.

  4. Memory consumption increases when settings in the Debug section are enabled (FaceStream and Trackengine configurations).

Monitoring#

Monitoring is implemented as sending data to the "InfluxDB OSS 2". Monitoring is enabled in LUNA PLATFORM services by default, but can be disabled.

Monitoring is performed only for LUNA PLATFORM services. Monitoring is not used for FaceStream itself.

There are two types of events that are monitored: request (all requests) and error (failed requests only).

Every event is a point in the time series. The point is represented using the following data:

  • series name (requests or errors)
  • timestamp of the request start
  • tags
  • fields

A tag is indexed data in the storage. It is represented as a dictionary, where

  • keys - string tag names,
  • values - string, integer or float.

A field is non-indexed data in the storage. It is represented as a dictionary, where

  • keys - string field names,
  • values - string, integer or float.

See the LUNA PLATFORM administrator manual for more information.

InfluxDB OSS 2#

For InfluxDB OSS 2 usage, you should:

  • Install the DB. See the "InfluxDB OSS 2 container launch" in the installation manual.
  • Register in the DB. InfluxDB has a user interface where you can register. You should visit <server_ip>:<influx_port>.
  • Configure the display of monitoring information in the GUI. It is not described in this documentation.

InfluxDB configuration#

The settings for InfluxDB are described below.

InfluxDB settings

| Setting name | Type | Description |
|---|---|---|
| send_data_for_monitoring | integer | Enables monitoring for the service. |
| use_ssl | integer | Enables HTTPS protocol usage for the connection to InfluxDB (0 – do not use, 1 – use). |
| flushing_period | integer | The frequency of sending monitoring data to InfluxDB. |
| port | integer | InfluxDB port. |
| host | string | InfluxDB host. |
| organization | string | The organization name specified during registration. |
| token | string | Token received after registration. |
| bucket | string | Bucket name. |
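A hedged sketch of an InfluxDB monitoring section built from the settings in the table above is shown below; all values are illustrative assumptions.

```python
# Hedged sketch of InfluxDB monitoring settings; all values are assumptions.
influx_settings_sketch = {
    "send_data_for_monitoring": 1,
    "use_ssl": 0,
    "flushing_period": 1,
    "host": "127.0.0.1",
    "port": 8086,                    # assumed InfluxDB port
    "organization": "my-organization",
    "token": "<token-received-after-registration>",
    "bucket": "luna_monitoring",     # assumed bucket name
}
```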