Additional information#
This section provides the following additional information:
- Monitoring and logs visualization using Grafana
- Useful commands for working with Docker
- Description of the parameters for launching LUNA PLATFORM services and creating databases
- Actions to enable saving LP service logs to files
- Configuring Docker log rotation
- Setting custom InfluxDB settings
- Using Python Matcher service with Python Matcher Proxy service
- System scaling
- Compiling the VLMatch library for Oracle
Monitoring and logs visualization using Grafana#
Monitoring visualization is performed by the LUNA Dashboards service, which contains the Grafana monitoring data visualization platform with configured LUNA PLATFORM dashboards.
If necessary, you can install customized dashboards for Grafana separately. See the "LUNA Dashboards" section in the administrator manual for more information.
Together with Grafana, you can use the Grafana Loki log aggregation system, which enables you to flexibly work with LUNA PLATFORM logs. The Promtail agent is used to deliver LUNA PLATFORM logs to Grafana Loki (for more information, see the "Grafana Loki" section in the administrator manual).
LUNA Dashboards#
Note. To work with Grafana you need to use InfluxDB version 2.
Run LUNA Dashboards container#
Use the docker run command with these parameters to run Grafana:
docker run \
--restart=always \
--detach=true \
--network=host \
--name=grafana \
-v /etc/localtime:/etc/localtime:ro \
dockerhub.visionlabs.ru/luna/luna-dashboards:v.0.0.7
Use "http://IP_ADDRESS:3000" to go to the Grafana web interface when the LUNA Dashboards and InfluxDB containers are running.
Grafana Loki#
Note. Grafana Loki requires LUNA Dashboards to be running.
Run Grafana Loki container#
Use the docker run command with these parameters to run Grafana Loki:
docker run \
--name=loki \
--restart=always \
--detach=true \
--network=host \
-v /etc/localtime:/etc/localtime:ro \
dockerhub.visionlabs.ru/luna/loki:2.7.1
Run Promtail container#
Use the docker run command with these parameters to run Promtail:
docker run \
-v /var/lib/luna/current/example-docker/logging/promtail.yml:/etc/promtail/luna.yml \
-v /var/lib/docker/containers:/var/lib/docker/containers \
-v /etc/localtime:/etc/localtime:ro \
--name=promtail \
--restart=always \
--detach=true \
--network=host \
dockerhub.visionlabs.ru/luna/promtail:2.7.1 \
-config.file=/etc/promtail/luna.yml -client.url=http://127.0.0.1:3100/loki/api/v1/push -client.external-labels=job=containerlogs,pipeline_id=,job_id=,version=
-v /var/lib/luna/current/example-docker/logging/promtail.yml:/etc/promtail/luna.yml - mounts the Promtail configuration file into the container.
-config.file=/etc/promtail/luna.yml - flag with the path to the configuration file inside the container.
-client.url=http://127.0.0.1:3100/loki/api/v1/push - flag with the address of the deployed Grafana Loki instance.
-client.external-labels=job=containerlogs,pipeline_id=,job_id=,version= - static labels added to all logs sent to Grafana Loki.
Docker commands#
Show containers#
To show the list of running Docker containers, use the command:
docker ps
To show all the existing Docker containers use the command:
docker ps -a
Copy files to container#
You can transfer files into a container using the docker cp command:
docker cp <file_location> <container_name>:<folder_inside_container>
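For example, the command used later in this document to copy the logging settings file into the Configurator container looks like this:
docker cp /var/lib/luna/current/extras/conf/logging.json luna-configurator:/srv/luna_configurator/used_dumps/logging.json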
Enter container#
You can enter individual containers using the following command:
docker exec -it <container_name> bash
To exit the container, use the command:
exit
Image names#
You can see the names of all the images using the command:
docker images
Delete image#
If you need to delete an image:
- run the docker images command
- find the required image, for example, dockerhub.visionlabs.ru/luna/luna-image-store
- copy the corresponding image ID from the IMAGE ID column, for example, "61860d036d8c"
- specify it in the deletion command:
docker rmi -f 61860d036d8c
Delete all the existing images:
docker rmi -f $(docker images -q)
Stop container#
You can stop the container using the command:
docker stop <container_name>
Stop all the containers:
docker stop $(docker ps -a -q)
Delete container#
If you need to delete a container:
- run the "docker ps" command
- stop the container (see Stop container)
- find the required image, for example dockerhub.visionlabs.ru/luna/luna-image-store
- copy the corresponding container ID from the CONTAINER ID column, for example, "23f555be8f3a"
- specify it in the deletion command:
docker container rm -f 23f555be8f3a
Delete all the containers:
docker container rm -f $(docker container ls -aq)
Check service logs#
You can use the following command to show logs for the service:
docker logs <container_name>
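For example, to follow the last 100 lines of the logs of the API service container (the container name is taken from the launch examples in this document):
docker logs --tail 100 -f luna-api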
Launching parameters description#
When launching a Docker container for a LUNA PLATFORM service, you should specify additional parameters required for launching the service.
The parameters specific to a particular container are described in the section about launching that container.
All the parameters given in the service launch examples are required for proper launching and operation of the service.
Launching services parameters#
Example command for launching LP service containers:
docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=<Port_of_the_launched_service> \
--env=WORKER_COUNT=1 \
--env=RELOAD_CONFIG=1 \
--env=RELOAD_CONFIG_INTERVAL=10 \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/<service>:/srv/logs/ \
--name=<service_container_name> \
--restart=always \
--detach=true \
--network=host \
dockerhub.visionlabs.ru/luna/<service-name>:<version>
The following parameters are used when launching LP service containers:
docker run - the command for running the selected image as a new container.
dockerhub.visionlabs.ru/luna/<service-name>:<version> - the parameter specifies the image required for launching the container. Links to the required container images are given in the description of launching the corresponding container.
--network=host - the parameter specifies that a network is not simulated and the server network is used. If you need to change the port for third-party containers, you should change this string to -p 5440:5432, where the first port 5440 is the local port and 5432 is the port used inside the container. The example is given for PostgreSQL (a sketch is given after this list).
--env= - the parameter specifies the environment variables required to run a container. The following general values are specified:
- CONFIGURATOR_HOST=127.0.0.1 - the host where the Configurator service is running. Localhost is set when the container is launched on the same server as the Configurator service.
- CONFIGURATOR_PORT=5070 - the port on which the Configurator service is listening. The 5070 port is used by default.
- PORT=<Port_of_the_service> - the port on which the service will listen.
- WORKER_COUNT - specifies the number of worker processes for the service.
- RELOAD_CONFIG - enables automatic reload of the service configurations when set to "1". See the "Automatic configurations reload" section in the LUNA PLATFORM 5 administrator manual.
- RELOAD_CONFIG_INTERVAL - specifies the configuration check period (10 seconds by default). See the "Automatic configurations reload" section in the LUNA PLATFORM 5 administrator manual.
--name=<service_container_name> - the parameter specifies the name of the launched container. The name must be unique. If there is a container with the same name, an error will occur.
--restart=always - the parameter specifies a restart policy. The daemon will always restart the container regardless of the exit status.
--detach=true - run the container in the background mode.
-v - the volume parameter enables you to mount the contents of a server folder into a volume in the container, so that their contents are synchronized. The following general data is mounted:
- /etc/localtime:/etc/localtime:ro - sets the current time zone of the server as the time zone used by the system in the container.
- /tmp/logs/<service>:/srv/logs/ - enables saving the service logs to the /tmp/logs/<service> directory on your server. You can change the directory where the logs will be saved according to your needs.
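A minimal sketch of the PostgreSQL port-mapping case mentioned above for --network=host; the image name and its other parameters are placeholders, only the -p 5440:5432 mapping is the point:
docker run \
--name=postgres \
--restart=always \
--detach=true \
-p 5440:5432 \
<postgres_image>:<version>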
Creating DB parameters#
Example command for launching containers for database migration or database creation:
docker run \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/<service>:/srv/logs/ \
--rm \
--network=host \
dockerhub.visionlabs.ru/luna/<service-name>:<version> \
python3 ./base_scripts/db_create.py --luna-config http://localhost:5070/1
The following parameters are used when launching containers for database migration or database creation:
--rm - the parameter specifies that the container is deleted after all the specified scripts finish processing.
python3 ./base_scripts/db_create.py - the parameter specifies the Python version and the db_create.py script launched in the container. The script is used for creating the database structure.
--luna-config http://localhost:5070/1 - the parameter specifies where the launched script should receive configurations from. By default, the script requests configurations from the Configurator service.
Logging to server#
To enable saving logs to the server, you should:
- create directories for logs on the server;
- activate log recording and set the location of log storage inside LP service containers;
- configure synchronization of log directories in the container with logs on the server using the volume argument at the start of each container.
Create logs directory#
Below are examples of commands for creating directories for saving logs and assigning rights to them for all LUNA PLATFORM services.
mkdir -p /tmp/logs/configurator /tmp/logs/image-store /tmp/logs/accounts /tmp/logs/faces /tmp/logs/licenses /tmp/logs/events /tmp/logs/python-matcher /tmp/logs/handlers /tmp/logs/remote-sdk /tmp/logs/tasks /tmp/logs/tasks-worker /tmp/logs/sender /tmp/logs/api /tmp/logs/admin /tmp/logs/backport3 /tmp/logs/backport4
chown -R 1001:0 /tmp/logs/configurator /tmp/logs/image-store /tmp/logs/accounts /tmp/logs/faces /tmp/logs/licenses /tmp/logs/events /tmp/logs/python-matcher /tmp/logs/handlers /tmp/logs/remote-sdk /tmp/logs/tasks /tmp/logs/tasks-worker /tmp/logs/sender /tmp/logs/api /tmp/logs/admin /tmp/logs/backport3 /tmp/logs/backport4
If you need to use the Python Matcher Proxy service, then you need to additionally create the /tmp/logs/python-matcher-proxy directory and set its permissions.
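For example, following the same pattern and ownership as the commands above:
mkdir -p /tmp/logs/python-matcher-proxy
chown -R 1001:0 /tmp/logs/python-matcher-proxy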
Logging activation#
LP services logging activation#
To enable logging to file, you need to set the log_to_file and folder_with_logs settings in the <SERVICE_NAME>_LOGGER section of the settings for each service.
Automatic method (before/after starting Configurator)
To update the logging settings, you can use the logging.json settings file provided with the distribution package.
Run the following command after starting the Configurator service:
docker cp /var/lib/luna/current/extras/conf/logging.json luna-configurator:/srv/luna_configurator/used_dumps/logging.json
Update your logging settings with the copied file.
docker exec -it luna-configurator python3 ./base_scripts/db_create.py --dump-file /srv/luna_configurator/used_dumps/logging.json
Manual method (after starting Configurator)
Go to the Configurator service interface (127.0.0.1:5070) and set the logs path in the container in the folder_with_logs parameter for all services whose logs need to be saved. For example, you can use the path /srv/logs.
Set the log_to_file option to true to enable logging to a file.
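As an illustration only, the two settings mentioned above might look as follows in the <SERVICE_NAME>_LOGGER section of a service (the real section contains additional logging parameters that are not shown here):
{
    "folder_with_logs": "/srv/logs",
    "log_to_file": true
}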
Configurator service logging activation (before/after Configurator start)#
The Configurator service settings are not available in the Configurator user interface; they are located in the following file:
/var/lib/luna/current/example-docker/luna_configurator/configs/luna_configurator_postgres.conf
You should change the logging parameters in this file before starting the Configurator service or restart it after making changes.
Set the path to the logs location in the container in the FOLDER_WITH_LOGS = ./ parameter of the file. For example, FOLDER_WITH_LOGS = /srv/logs.
Set the log_to_file option to true to enable logging to a file.
Mounting directories with logs when starting services#
The log directory is mounted with the following argument when starting the container:
-v <server_logs_folder>:<container_logs_folder> \
where <server_logs_folder> is the directory created at the "Create logs directory" step, and <container_logs_folder> is the directory specified at the "Logging activation" step.
Example of a command to launch the API service with a mounted log directory:
docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=5000 \
--env=WORKER_COUNT=1 \
--env=RELOAD_CONFIG=1 \
--env=RELOAD_CONFIG_INTERVAL=10 \
--name=luna-api \
--restart=always \
--detach=true \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/api:/srv/logs \
--network=host \
dockerhub.visionlabs.ru/luna/luna-api:v.6.16.10
The example container launch commands in this documentation contain these arguments.
Docker log rotation#
To limit the size of logs generated by Docker, you can set up automatic log rotation. To do this, add the following data to the /etc/docker/daemon.json file:
{
"log-driver": "json-file",
"log-opts": {
"max-size": "100m",
"max-file": "5"
}
}
This will allow Docker to store up to 5 log files per container, with each file being limited to 100MB.
After changing the file, you need to restart Docker:
systemctl restart docker
The above changes become the default for any newly created container; they do not apply to already existing containers.
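To check which log settings a newly created container has received, you can inspect it; the container name here is illustrative:
docker inspect -f '{{ .HostConfig.LogConfig }}' luna-api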
Set custom InfluxDB settings#
If you are going to use InfluxDB OSS 2, then you need to update the monitoring settings in the Configurator service.
There are the following settings for InfluxDB OSS 2:
"send_data_for_monitoring": 1,
"use_ssl": 0,
"flushing_period": 1,
"host": "127.0.0.1",
"port": 8086,
"organization": "<ORGANIZATION_NAME>",
"token": "<TOKEN>",
"bucket": "<BUCKET_NAME>",
"version": <DB_VERSION>
You can update InfluxDB settings in the Configurator service by following these steps:
- open the following file:
vi /var/lib/luna/current/extras/conf/influx2.json
- set required data;
- save changes;
- copy the file to the Configurator container:
docker cp /var/lib/luna/current/extras/conf/influx2.json luna-configurator:/srv/
- update the settings in the Configurator:
docker exec -it luna-configurator python3 ./base_scripts/db_create.py --dump-file /srv/influx2.json
You can also manually update settings in the Configurator service user interface.
The Configurator service's own configurations are set separately:
- open the file with the Configurator configurations:
vi /var/lib/luna/current/example-docker/luna_configurator/configs/luna_configurator_postgres.conf
- set required data;
- save changes;
- restart Configurator:
docker restart luna-configurator
Use Python Matcher with Python Matcher Proxy#
As mentioned earlier, along with the Python Matcher service, you can additionally use the Python Matcher Proxy service, which redirects matching requests either to the Python Matcher service or to the matching plugins. Plugins may significantly improve matching performance. For example, plugins make it possible to store the data required for matching operations and additional object fields in a separate storage, which speeds up access to the data compared to using the standard LUNA PLATFORM database.
To use the Python Matcher service with Python Matcher Proxy, you should additionally launch the appropriate container and then enable the corresponding setting in the Configurator service. Follow the steps below only if you are going to use matching plugins.
See the description and usage of matching plugins in the administrator manual.
Python Matcher Proxy container launch#
Use the following command to launch the service. After starting the container, you will also need to set the "luna_matcher_proxy" parameter to true in the "ADDITIONAL_SERVICES_USAGE" section of the Configurator service, as described below.
docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=5110 \
--env=WORKER_COUNT=1 \
--env=RELOAD_CONFIG=1 \
--env=RELOAD_CONFIG_INTERVAL=10 \
--env=SERVICE_TYPE="proxy" \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/python-matcher-proxy:/srv/logs \
--name=luna-python-matcher-proxy \
--restart=always \
--detach=true \
--network=host \
dockerhub.visionlabs.ru/luna/luna-python-matcher:v.1.6.8
After launching the container, you need to set the following value in the Configurator service.
ADDITIONAL_SERVICES_USAGE = "luna_matcher_proxy":true
System scaling#
All LP services are linearly scalable and can be distributed across several servers.
You can run additional containers with LP services to improve performance and fail-safety. The number of service instances and the characteristics of the servers depend on your tasks.
To increase performance, you may either improve the performance of a single server or increase the number of servers used by distributing most resource-intensive components of the system.
Balancers are used for the distribution of requests among the launched service instances. This approach provides the necessary processing speed and the required fail-safety level for specific customer's tasks. In the case of a node failure, the system will not stop: requests will be redirected to another node.
The image below shows two instances of the Faces service balanced by Nginx. Nginx receives requests on port 5030 and routes them to the Faces instances, which are launched on ports 5031 and 5032.
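A minimal sketch of such a balancing configuration for Nginx, using the ports from the example above (this is an illustration only, placed inside the http context; the upstream name is arbitrary, and the actual example file shipped with the platform is referenced later in this section):
upstream luna-faces {
    # two Faces instances from the example above
    server 127.0.0.1:5031;
    server 127.0.0.1:5032;
}
server {
    # port on which Nginx receives requests for the Faces service
    listen 5030;
    location / {
        proxy_pass http://luna-faces;
    }
}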
It is strongly recommended to regularly back up databases to a separate server regardless of the fail-safety level of the system. It allows you not to lose data in case of unforeseen circumstances.
MQs, databases, and balancers used by LUNA PLATFORM are products of third-party developers. You should configure them according to the recommendations of the corresponding vendors.
The Remote SDK service and the Python Matcher service perform the most resource-intensive operations.
The Remote SDK service performs mathematical image transformations and descriptors extraction. The operations require significant computational resources. Both CPU and GPU can be used for computations.
GPU usage is preferable since it improves the processing of requests. However, not all types of video cards are supported.
The Python Matcher service performs matching with lists. Matching requires CPU resources; however, you should also allocate as much RAM as possible for each Python Matcher instance. The RAM is used to store descriptors received from the database, so the matcher does not need to request each descriptor from the database.
When distributing instances on several servers, you should consider the performance of each server. For example, if a large task is executed by several Python Matcher instances, and one of the instances is on the server with low performance, this can slow down the execution of the entire task.
For each instance of the service, you can set the number of workers. The greater the number of workers, the more resources and memory are consumed by the service instance. See the detailed information in the "Worker processes" section of the LUNA PLATFORM administrator manual.
Launching several containers#
There are two steps required for launching several instances of the same LP service:
- Run several containers of the service.
You must launch the required number of service instances by using the corresponding command for the service.
For example, for the API service you must run the following command with updated parameters.
docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=<port> \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/<folder_name>:/srv/logs \
--name=<name> \
--restart=always \
--detach=true \
--network=host \
dockerhub.visionlabs.ru/luna/luna-api:v.6.16.10
When running several similar containers, the following parameters of the containers must differ (a sketch with two API instances is given after this list):
--env=PORT=<port> - the specified port for similar containers must differ. You must specify an available port for the instance. For example, "5001", "5002". The "5000" port will be specified for the Nginx balancer.
/tmp/logs/<folder_name>:/srv/logs - the specified folder name for logs must differ to distinguish logs of different service instances.
--name=<container_name> - the name of the launched container must differ, as it is prohibited to launch two containers with the same name. For example, "api_1", "api_2".
--gpus device=0 - CORE services usually utilize different GPU devices. Thus you should specify different device numbers.
- Configure your balancer (e.g., Nginx) for routing requests to the services.
For each scaled LP service, you must set a port on which Nginx will listen for service requests and the real ports of each service instance to which Nginx will redirect the requests.
An example of the Nginx configuration file can be found here: "/var/lib/luna/current/extras/conf/nginx.conf".
You can use another balancer, but its utilization is not described in this documentation.
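A sketch of running two API instances that differ only in the parameters listed above (ports 5001 and 5002, container names api_1 and api_2, and separate hypothetical log directories /tmp/logs/api_1 and /tmp/logs/api_2, which must be created beforehand); all other parameters repeat the API launch example given earlier:
docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=5001 \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/api_1:/srv/logs \
--name=api_1 \
--restart=always \
--detach=true \
--network=host \
dockerhub.visionlabs.ru/luna/luna-api:v.6.16.10
docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=5002 \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/api_2:/srv/logs \
--name=api_2 \
--restart=always \
--detach=true \
--network=host \
dockerhub.visionlabs.ru/luna/luna-api:v.6.16.10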
VLMatch library compilation for Oracle#
This section describes building the VLMatch library and adding the new function to the Oracle database.
For VLMatch UDx function compilation one needs to:
- Install the required environment, see the requirements:
- install gcc/g++ 4.8 or higher:
yum -y install gcc-c++.x86_64
- change the SDK_HOME variable (the Oracle SDK root, default is $ORACLE_HOME/bin; check that the $ORACLE_HOME environment variable is set) in the makefile.
- Go to the directory and run the "make.sh" file:
cd /var/lib/luna/current/extras/VLMatch/oracle/
chmod +x make.sh
./make.sh
- Define the library and the function inside the database (from the database console):
CREATE OR REPLACE LIBRARY VLMatchSource AS '$ORACLE_HOME/bin/VLMatchSource.so';
CREATE OR REPLACE FUNCTION VLMatch(descriptorFst IN RAW, descriptorSnd IN RAW, length IN BINARY_INTEGER)
RETURN BINARY_FLOAT
AS
LANGUAGE C
LIBRARY VLMatchSource
NAME "VLMatch"
PARAMETERS (descriptorFst BY REFERENCE, descriptorSnd BY REFERENCE, length UNSIGNED SHORT, RETURN FLOAT);
Test the function with a call (from the database console):
SELECT VLMatch(HEXTORAW('1234567890123456789012345678901234567890123456789012345678901234'), HEXTORAW('0123456789012345678901234567890123456789012345678901234567890123'), 32) FROM DUAL;
The result should be equal to "0.4765625".