
Migration from LUNA PLATFORM 3.3.8 to v.5.56.0

Default ports for services

Default ports of services
Service name                       Port
LUNA PLATFORM API                  5000
LUNA PLATFORM Admin                5010
LUNA PLATFORM Image Store          5020
LUNA PLATFORM Faces                5030
LUNA PLATFORM Events               5040
LUNA PLATFORM Tasks                5050
LUNA PLATFORM Tasks Worker         5051
LUNA PLATFORM Configurator         5070
LUNA PLATFORM Sender               5080
LUNA PLATFORM Handlers             5090
LUNA PLATFORM Python Matcher       5100
LUNA PLATFORM Licenses             5120
LUNA PLATFORM Backport 4           5130
LUNA PLATFORM Backport 3           5140
LUNA PLATFORM Accounts             5170
LUNA PLATFORM Lambda               5210
LUNA PLATFORM Remote SDK           5220
LUNA PLATFORM 3 User Interface     4100
LUNA PLATFORM 4 User Interface     4200
Oracle DB                          1521
PostgreSQL                         5432
Redis DB                           6379
InfluxDB                           8086
Grafana                            3000
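When several products share a host, it can be useful to check that these default ports are free before launching the services. A minimal bash sketch (the ports checked below are a sample from the table above):

```shell
# Check whether anything is already listening on a given local port.
# Uses bash's /dev/tcp pseudo-device, so no extra tools are required.
port_free() {
  ! (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

for p in 5000 5070 5432 6379; do
  if port_free "$p"; then
    echo "port $p: free"
  else
    echo "port $p: in use"
  fi
done
```

A port reported as in use before launch usually means another service must be stopped or the container must be started with a different port mapping.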

Configuration names for services

The table below lists the service names used in the Configurator service. Use these names when configuring your services.

Service names in the Configurator service in the “Service name” field
Service            Service name in Configurator
API                luna-api
Licenses           luna-licenses
Faces              luna-faces
Image Store        luna-image-store
Accounts           luna-accounts
Tasks              luna-tasks
Events             luna-events
Sender             luna-sender
Admin              luna-admin
Handlers           luna-handlers
Lambda             luna-lambda
Python Matcher     luna-python-matcher
Backport 3         luna-backport3
Backport 4         luna-backport4

Settings for the Configurator service are set in its configuration file.

System requirements

LUNA PLATFORM is delivered as Docker containers and can run on CPU or GPU. Docker images of the LP containers are required for the installation. The server needs an Internet connection to download the Docker images; alternatively, the images can be downloaded on another device and transferred to the server. A login and password must be specified manually to download the Docker images.

LUNA PLATFORM can be launched with a Docker Compose script.

The following Docker and Docker Compose versions are recommended for running LP:

Launching LUNA PLATFORM containers is officially supported on CentOS 7/8. Correct operation on other systems is not guaranteed. All the procedures in the installation manual are described for CentOS 7.

LUNA PLATFORM service containers use the CentOS Linux 8.3.2011 operating system.

Processors

The configuration below guarantees only minimal operation of the software package and cannot be used for a production system. System requirements for a production system are calculated based on the intended system load.

CPU

The following minimum system requirements should be met for the LUNA PLATFORM software package installation:

It is recommended to use an SSD for the databases and the Image Store service.

GPU

For GPU acceleration, an NVIDIA GPU is required. The following architectures are supported:

Compute Capability 6.1 or higher is required.

A minimum of 6 GB of dedicated video RAM is required; 8 GB or more VRAM is recommended.

CUDA version 11.4 must be installed on the server running the Remote SDK service. The recommended NVIDIA driver version is r470.

Third-party applications

The following third-party services are used by default with LUNA PLATFORM 5.

You can also use the Oracle database instead of PostgreSQL for all services except the Events service. The installation and configuration of Oracle are not described in this manual.

Load balancers and other software can be used when scaling the system to provide fault tolerance. The installation guide provides recommendations on launching an Nginx container with a configuration file that balances requests to the API, Faces, Image Store, and Events services.

The following versions of third-party applications are recommended for launching LP:

These versions were tested by VisionLabs specialists. Newer versions can be used if needed, but they are not guaranteed to work.

It is recommended to use the unzip package to unpack the distribution. The command to download the package is given in the installation manual.

If you need to use an external database and the VLMatch function, you need to download additional dependencies described in the “External DB” section of the installation manual.

PostgreSQL, Redis, InfluxDB, Grafana, and Nginx Docker containers can be downloaded from the VisionLabs registry.

Introduction

This document describes the general steps for upgrading from the LUNA PLATFORM 3 distribution (version 3.3.8) to LUNA PLATFORM 5 with the Backport 3 service. See the “Backports” section in the administrator manual for information about the Backport 3 service.

The database migration procedures are performed using a script. The script was tested on LUNA PLATFORM 3 version 3.3.8 and has not been tested on other LUNA PLATFORM 3 versions. See “Migration from LUNA PLATFORM 3 to Backport 3”.

If you have an earlier version, you should first update LUNA PLATFORM to version 3.3.8.

This instruction describes migration from the Aerospike and PostgreSQL databases (LUNA PLATFORM 3) to the PostgreSQL databases (LUNA PLATFORM 5) and a full installation of LUNA PLATFORM 5. The instruction provides example commands for migrating the PostgreSQL database from version 9.6 running on the server to version 16 running in a Docker container. If necessary, you can migrate to version 16 running on the server as a service (not described in this documentation).

This document describes migration from LUNA PLATFORM 3.3.8 installed in the default configuration. Note that your LUNA PLATFORM configuration and scaling may differ. In this case, use this manual as an example of the general approach to LUNA PLATFORM migration.

A network license is required to use LUNA PLATFORM in Docker containers. The license is provided by VisionLabs on request, separately from the delivery. The license key is created using a fingerprint of the system, which is based on the hardware characteristics of the server. Thus, the received license key will work only on the server from which the system fingerprint was obtained. LUNA PLATFORM can be activated using one of two utilities - HASP or Guardant. The “Activate license” section provides instructions for activating the license key with each method.

The document describes the installation of all the services on a single server.

You should install InfluxDB if monitoring is required.

For a successful upgrade, you need to perform the actions from the “Before upgrade” and “Services launch” sections. The “Additional information” section provides useful details, including descriptions of service launch parameters, Docker commands, and instructions for launching the Python Matcher Proxy service for using matching plugins.

This document includes an example of LUNA PLATFORM deployment. It provides only minimal LUNA PLATFORM operation for demonstration purposes and cannot be used for a production system.

All the provided commands should be executed in the Bash shell (when you launch commands directly on the server) or in a remote terminal client (when you connect to the server remotely), for example, PuTTY.

This document does not include a tutorial for Docker usage. Please refer to the Docker documentation to find more information about Docker:

https://docs.docker.com

A license file is required for LUNA PLATFORM activation. The file is provided by VisionLabs separately upon request.

All actions described in this manual must be performed by the root user. This document does not describe creating a user with administrator privileges and performing the installation as that user.

Before upgrade

Make sure that you are the root user before upgrade!

Before launching the LUNA PLATFORM, you must perform the following actions:

  1. Create backups.
  2. Delete old symbolic link.
  3. Unpack the distribution of the new version of LUNA PLATFORM.
  4. Create new symbolic link.
  5. Change group and owner for new directories.
  6. Move Image Store buckets.
  7. Configure SELinux and Firewall if not previously configured.
  8. Create log directories for new services, if logging to a file was previously used.
  9. Activate license.
  10. Install Docker.
  11. Set up GPU computing if you plan to use GPU.
  12. Login to VisionLabs registry if authorization was not previously performed.

Backups creation

Create backups for all the databases used with LUNA PLATFORM before performing the migration procedures. You can restore your data if any problems occur during the migration.

It is recommended to create backups for Image Store buckets.

Backups creation for databases and buckets is not described in this document.
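As an illustration only, a bucket directory can be archived with tar before the migration. The paths below are scratch paths created for the sketch, not the real bucket location; in production, point the archive at the actual bucket directory:

```shell
# Stand-in for the real bucket directory (e.g. the Image Store local storage).
src=$(mktemp -d)
echo sample > "$src/file.jpg"

# Archive the directory contents; in production, pass the bucket path to -C.
tar -czf /tmp/buckets_backup.tar.gz -C "$src" .

# Verify the archive is readable and contains the expected entry.
tar -tzf /tmp/buckets_backup.tar.gz | grep -q 'file.jpg' && echo "backup ok"
```

Keeping the archive on a separate disk or host protects the backup if the migration damages the original storage.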

Go to the “luna” directory.

cd /var/lib/luna

Delete the “current” symbolic link.

rm -f current

Distribution unpacking

The distribution package is an archive, luna_v.5.56.0.zip, where v.5.56.0 is a version identifier describing the current LUNA PLATFORM version.

The archive includes the configuration files required for installation and operation. It does not include Docker images for the services; they should be downloaded from the Internet.

Move the distribution package to a directory on your server before the installation, for example, the /root/ directory. The directory should not contain any other distribution or license files except the target ones.

Move the distribution to the created directory.

mv /root/luna_v.5.56.0.zip /var/lib/luna

Install the unzip archiver if necessary.

yum install -y unzip

Go to the folder with the distribution.

cd /var/lib/luna

Unzip the files.

unzip luna_v.5.56.0.zip

Create a symbolic link.

The link indicates which distribution version is currently used to run LUNA PLATFORM.

ln -s luna_v.5.56.0 current

Changing group and owner for directories

LP services are launched inside the containers by the “luna” user. Therefore, you must grant this user permissions to use the mounted volumes.

Go to the LP “example-docker” directory.

cd /var/lib/luna/current/example-docker/

Create a directory to store settings.

mkdir luna_configurator/used_dumps

Set permissions for the user with UID 1001 and group 0 to use the mounted directories.

chown -R 1001:0 luna_configurator/used_dumps

Move Image Store buckets

LUNA PLATFORM 5 stores buckets in the /var/lib/luna/ root directory to simplify subsequent updates.

Create a directory to store Image Store buckets.

mkdir -p /var/lib/luna/image_store

Move the contents of the Image Store bucket directory to the new bucket storage directory.

mv /var/lib/luna/luna_v.3.3.8/luna-image-store/luna_image_store/local_storage/* /var/lib/luna/image_store

Set permissions for the user with UID 1001 and group 0 to use the mounted directories.

chown -R 1001:0 /var/lib/luna/image_store

SELinux and Firewall

You must configure SELinux and Firewall so that they do not block LUNA PLATFORM services.

SELinux and Firewall configurations are not described in this guide.

If SELinux and Firewall are not configured, the installation cannot be performed.

Create log directory for new services

Skip this section if no logs were previously stored on the server.

LUNA PLATFORM 5 introduces new services for which you need to create log directories.

See “Logging to server” section if you have not previously used logging to a file, but want to enable it.

The following commands create directories for all existing services. They will create and assign permissions only to the missing directories.

mkdir -p /tmp/logs/configurator /tmp/logs/image-store /tmp/logs/accounts /tmp/logs/faces /tmp/logs/licenses /tmp/logs/events /tmp/logs/python-matcher /tmp/logs/handlers /tmp/logs/remote-sdk /tmp/logs/tasks /tmp/logs/tasks-worker /tmp/logs/sender /tmp/logs/api /tmp/logs/admin /tmp/logs/backport3 /tmp/logs/backport4
chown -R 1001:0 /tmp/logs/configurator /tmp/logs/image-store /tmp/logs/accounts /tmp/logs/faces /tmp/logs/licenses /tmp/logs/events /tmp/logs/python-matcher /tmp/logs/handlers /tmp/logs/remote-sdk /tmp/logs/tasks /tmp/logs/tasks-worker /tmp/logs/sender /tmp/logs/api /tmp/logs/admin /tmp/logs/backport3 /tmp/logs/backport4

If you need to use the Python Matcher Proxy service, then you need to additionally create the /tmp/logs/python-matcher-proxy directory and set its permissions.
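For instance, the extra directory can be created the same way as the others. A sketch is shown below; the chown step requires root, so it is shown as a comment here:

```shell
# Create the additional log directory for the Python Matcher Proxy service.
mkdir -p /tmp/logs/python-matcher-proxy

# In production, also hand the directory to the in-container "luna" user
# (requires root):
#   chown -R 1001:0 /tmp/logs/python-matcher-proxy
```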

License activation

To activate/upgrade the license, follow these steps:

Actions from License activation manual

Open the license activation manual and follow the necessary steps.

Note: This action is mandatory. The license will not work unless you follow the activation steps from the corresponding manual.

Docker installation

The Docker installation is described in the official documentation.

You do not need to install Docker if Docker 20.10.8 is already installed on your server. Operation with higher versions of Docker is not guaranteed.

Quick installation commands are listed below.

Check the official documentation for updates if you have any problems with the installation.

Install dependencies.

yum install -y yum-utils device-mapper-persistent-data lvm2

Add repository.

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Install Docker.

yum -y install docker-ce docker-ce-cli containerd.io

Launch Docker.

systemctl start docker
systemctl enable docker

Check Docker status.

systemctl status docker

Calculations using GPU

You can use a GPU for the general calculations performed by the Remote SDK service.

Skip this section if you are not going to utilize GPU for your calculations.

You need to install the NVIDIA Container Toolkit to use GPU with Docker containers. An example of the installation is given below.

distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | tee /etc/yum.repos.d/nvidia-docker.repo
yum install -y nvidia-container-toolkit
systemctl restart docker

Check that the NVIDIA Container Toolkit is operating by running a base CUDA container (this container is not provided in the LP distribution and should be downloaded from the Internet):

docker run --rm --gpus all nvidia/cuda:11.4.3-base-centos7 nvidia-smi

See the NVIDIA documentation for additional information.

Attribute extraction on the GPU is engineered for maximum throughput. The input images are processed in batches, which reduces the computation cost per image but does not provide the shortest latency per image.

GPU acceleration is designed for high-load applications where request counts per second consistently reach thousands. GPU acceleration is not beneficial in lightly loaded scenarios where latency matters.

Login to registry

When launching containers, you specify a link to the required image, which is downloaded from the VisionLabs registry. Before that, you should log in to the registry.

Login and password can be requested from the VisionLabs representative.

Enter your login in place of <username>.

docker login dockerhub.visionlabs.ru --username <username>

After running the command, you will be prompted for a password. Enter the password.

In the docker login command, you can pass the login and password at the same time, but this is not secure because the password remains visible in the command history. To avoid this, docker login can read the password from standard input via the --password-stdin option.

Services launch

This section gives examples for:

LUNA PLATFORM services must be launched in the following sequence:

The Lambda service (disabled by default) can be launched after Licenses and Configurator services.

Next, you need to launch the Backport 3 service and its user interface:

It is recommended to launch containers one by one and wait for the container status to become “up” (use the docker ps command).
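Waiting for the “up” status can be scripted. Below is a generic bash polling sketch; it is demonstrated with a placeholder check instead of docker ps so that it is self-contained, and the production check is shown as a comment:

```shell
# Poll a check command until it succeeds or the retry budget is exhausted.
# In production the check for a container named "$name" would be:
#   docker ps --filter "name=$name" --format '{{.Status}}' | grep -q '^Up'
wait_until() {
  attempts=$1; shift
  for i in $(seq 1 "$attempts"); do
    "$@" && return 0
    sleep 1
  done
  return 1
}

# Demonstration with a file that "comes up" on creation.
touch /tmp/container_up.flag
wait_until 5 test -f /tmp/container_up.flag && echo "service is up"
```

Such a helper lets a launch script start the next container only after the previous one is ready, matching the one-by-one launch recommendation above.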

Some of these services are optional and you can disable their use. It is recommended to use Events, Tasks, Sender and Admin services by default. See the “Optional services usage” section for details.

When launching each service, certain parameters are used, for example, --detach, --network, etc. See the section “Launching parameters description” for more detailed information about all launch parameters of LUNA PLATFORM services and databases.

See the “Docker commands” section for details about working with containers.

Monitoring configuration

Monitoring LUNA PLATFORM services requires a running InfluxDB 2.0.8-alpine database. Below are the commands to launch the InfluxDB container.

For more information, see the “Monitoring” section in the administrator manual.

If necessary, you can configure the visualization of monitoring data using the LUNA Dashboards service, which includes a configured Grafana data visualization system. In addition, you can launch the Grafana Loki tool for advanced work with logs. See the instructions for launching LUNA Dashboards and Grafana Loki in the “Monitoring and logs visualization using Grafana” section.

Migration from version 1

If necessary, you can upgrade from the InfluxDB OSS 1 version.

The process of migrating InfluxDB from version 1 is not described in this documentation. InfluxDB provides built-in tools for migrating from version 1 to version 2. See the documentation:

https://docs.influxdata.com/influxdb/v2.0/upgrade/v1-to-v2/docker/

InfluxDB OSS 2

You can use InfluxDB OSS 2 as a service, or run it in a Docker container.

If you plan to use InfluxDB OSS 2 as a service, skip this step and make sure you have migrated from InfluxDB OSS 1.

To run InfluxDB OSS 2 in a Docker container, follow the steps below:

systemctl stop influxdb.service

Use the docker run command with these parameters:

docker run \
-e DOCKER_INFLUXDB_INIT_MODE=setup \
-e DOCKER_INFLUXDB_INIT_BUCKET=luna_monitoring \
-e DOCKER_INFLUXDB_INIT_USERNAME=luna \
-e DOCKER_INFLUXDB_INIT_PASSWORD=password \
-e DOCKER_INFLUXDB_INIT_ORG=luna \
-e DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=kofqt4Pfqjn6o0RBtMDQqVoJLgHoxxDUmmhiAZ7JS6VmEnrqZXQhxDhad8AX9tmiJH6CjM7Y1U8p5eSEocGzIA== \
-v /etc/localtime:/etc/localtime:ro \
-v /var/lib/luna/influx:/var/lib/influxdb2 \
--restart=always \
--detach=true \
--network=host \
--name influxdb \
dockerhub.visionlabs.ru/luna/influxdb:2.0.8-alpine

If you need to set the custom settings of the InfluxDB (for example, set the IP address and port when launching InfluxDB on separate server), then you need to change them in the configurations of each LUNA PLATFORM service. See the section “Set custom InfluxDB settings” for more information.

Run third-party services

This section describes launching the databases and message queues in Docker containers. They must be launched before the LP services.

PostgreSQL

Migrate PostgreSQL 9.6 to PostgreSQL 16

In LUNA PLATFORM 5, the VisionLabs image for PostgreSQL has been updated from version 9.6 to version 16.

If this image was previously used, then you need to perform the migration yourself according to the official documentation. If necessary, you can continue using PostgreSQL 9.6.

Mounting PostgreSQL 9.6 data into a container for PostgreSQL 16 will result in an error.

Launch PostgreSQL

Note: Make sure that the old PostgreSQL is deleted.

Use the following command to launch PostgreSQL.

docker run \
--env=POSTGRES_USER=luna \
--env=POSTGRES_PASSWORD=luna \
--shm-size=1g \
-v /var/lib/luna/postgresql/data/:/var/lib/postgresql/data/ \
-v /var/lib/luna/current/example-docker/postgresql/entrypoint-initdb.d/:/docker-entrypoint-initdb.d/ \
-v /etc/localtime:/etc/localtime:ro \
--name=postgres \
--restart=always \
--detach=true \
--network=host \
dockerhub.visionlabs.ru/luna/postgis-vlmatch:16

-v /var/lib/luna/current/example-docker/postgresql/entrypoint-initdb.d/:/docker-entrypoint-initdb.d/ - The “docker-entrypoint-initdb.d” script includes the commands for creating the services' databases. During database creation, a default username and password are used automatically.

-v /var/lib/luna/postgresql/data/:/var/lib/postgresql/data/ - This volume option mounts the “data” folder into the PostgreSQL container. The folder on the server and the folder in the container will be synchronized, and the PostgreSQL data from the container will be saved to this directory.

--network=host - If you need to change the port for PostgreSQL, you should change this string to -p 5440:5432, where the first port (5440) is the local port and 5432 is the port used inside the container.

You should create all the databases for LP services manually if you are going to use an already installed PostgreSQL.

Redis

If you already have Redis installed, skip this step.

Use the following command to launch Redis.

docker run \
-v /etc/localtime:/etc/localtime:ro \
--name=redis \
--restart=always \
--detach=true \
--network=host \
dockerhub.visionlabs.ru/luna/redis:7.2

Configurator

Optional services usage

The services listed below are not mandatory for LP:

You can disable them if their functionality is not required for your tasks.

Use the “ADDITIONAL_SERVICES_USAGE” section in the API service settings in the Configurator service to disable unnecessary services.

You can use the dump file provided in the distribution package to enable/disable services before Configurator launch.

vi /var/lib/luna/current/extras/conf/platform_settings.json
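For orientation, a fragment of such a section might look like the sketch below. The exact service keys and the 0/1 convention here are assumptions, so check the actual contents of the dump file:

```json
"ADDITIONAL_SERVICES_USAGE": {
    "luna-events": 1,
    "luna-tasks": 1,
    "luna-sender": 1,
    "luna-lambda": 0
}
```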

Disabling any of the services has certain consequences. For more information, see the “Disableable services” section of the administrator manual.

Configurator DB tables creation

Use the docker run command with these parameters to create the Configurator database tables.

docker run \
-v /etc/localtime:/etc/localtime:ro \
-v /var/lib/luna/current/example-docker/luna_configurator/configs/luna_configurator_postgres.conf:/srv/luna_configurator/configs/config.conf \
-v /var/lib/luna/current/extras/conf/platform_settings.json:/srv/luna_configurator/used_dumps/platform_settings.json \
--network=host \
-v /tmp/logs/configurator:/srv/logs \
--rm \
--entrypoint bash \
dockerhub.visionlabs.ru/luna/luna-configurator:v.2.1.80 \
-c "python3 ./base_scripts/db_create.py; cd /srv/luna_configurator/configs/configs/; python3 -m configs.migrate --config /srv/luna_configurator/configs/config.conf head; cd /srv; python3 ./base_scripts/db_create.py --dump-file /srv/luna_configurator/used_dumps/platform_settings.json"

Here:

Run Configurator container

Use the docker run command with these parameters to launch Configurator:

docker run \
--env=PORT=5070 \
--env=WORKER_COUNT=1 \
--env=RELOAD_CONFIG=1 \
--env=RELOAD_CONFIG_INTERVAL=10 \
-v /etc/localtime:/etc/localtime:ro \
-v /var/lib/luna/current/example-docker/luna_configurator/configs/luna_configurator_postgres.conf:/srv/luna_configurator/configs/config.conf \
-v /tmp/logs/configurator:/srv/logs \
--name=luna-configurator \
--restart=always \
--detach=true \
--network=host \
dockerhub.visionlabs.ru/luna/luna-configurator:v.2.1.80 

At this stage, you can activate logging to files if you need to save logs on the server (see the “Logging to server” section).

Migration from LUNA PLATFORM 3 to Backport 3

This section describes accounts, descriptors, and persons migration from LUNA PLATFORM 3 databases to LUNA PLATFORM 5 databases.

Edit configuration file

You need to set up the following configuration file before starting the migration:

vi /var/lib/luna/current/extras/conf/migration_config.conf

Enter the following information:

Before migration

  1. The server where the migration script is launched should have a connection to all the specified databases and the Broker service.

  2. Make sure that the Broker service (LUNA PLATFORM 3) is launched. It is used to get descriptors from the database.

It is not necessary to launch the LUNA PLATFORM 5 services to perform migrations.

The Faces (LUNA PLATFORM 3) and Faces (LUNA PLATFORM 5) database names are the same by default (luna_faces). You should resolve this in one of the following ways:

The second method is described in this manual below.

Make sure that the databases for the Backport 3 service (LUNA PLATFORM 5) and the Faces service (LUNA PLATFORM 5) are empty (there are no entries) before starting the migration.

Do not launch the creation of database tables for the Faces service before changing the database name/PostgreSQL address in the “LUNA_FACES_DB” section of the Faces service configuration file/Configurator. Otherwise, you can lose the data stored in the Faces database of LUNA PLATFORM 3.

Faces DB creation for LUNA PLATFORM 5

Create a new database.

docker exec -it postgres psql -U luna -c "CREATE DATABASE luna_faces_5;"

Grant privileges to the database user.

docker exec -it postgres psql -U luna -c "GRANT ALL PRIVILEGES ON DATABASE luna_faces_5 TO luna;"

Allow user to authorize in the DB.

docker exec -it postgres psql -U luna -c "ALTER ROLE luna WITH LOGIN;"

Add VLMatch function to perform matching.

docker exec -it postgres psql -U luna -d luna_faces_5 -c "CREATE OR REPLACE FUNCTION VLMatch(bytea, bytea, int) RETURNS float8 AS '/srv/VLMatchSource.so', 'VLMatch' LANGUAGE C PARALLEL SAFE;"

Change the utilized DB

Now you should specify the “luna_faces_5” DB name in the settings of the Faces service.
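As an illustration, the database section in the Configurator could then reference the new name. The field names in this sketch are assumptions, so match them to the actual “LUNA_FACES_DB” section in your Configurator:

```json
"LUNA_FACES_DB": {
    "db_user": "luna",
    "db_password": "luna",
    "db_host": "127.0.0.1",
    "db_port": 5432,
    "db_name": "luna_faces_5"
}
```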

Faces DB tables creation

Use the following command to create the Faces DB tables:

docker run \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/faces:/srv/logs \
--rm \
--network=host \
dockerhub.visionlabs.ru/luna/luna-faces:v.4.10.2 \
python3 ./base_scripts/db_create.py --luna-config http://localhost:5070/1

Backport 3 DB tables creation

Use the following command to create DB tables for Backport 3:

docker run \
-v /etc/localtime:/etc/localtime:ro  \
-v /tmp/logs/backport3:/srv/logs \
--rm \
--network=host \
dockerhub.visionlabs.ru/luna/luna-backport3:v.0.10.2 \
python3 ./base_scripts/db_create.py --luna-config http://localhost:5070/1

Accounts DB tables creation

Use the following command to create Accounts DB tables:

docker run \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/accounts:/srv/logs \
--rm \
--network=host \
dockerhub.visionlabs.ru/luna/luna-accounts:v.0.2.2 \
python3 ./base_scripts/db_create.py --luna-config http://localhost:5070/1

Migration launch

All the migration procedures are performed using the “start_migration.py” script. See the “Migration script description” section below for additional information about the script.

Run the script using the Backport 3 container:

docker run \
--rm -t \
-v /tmp/logs/backport3:/srv/logs \
-v /var/lib/luna/current/extras/conf/migration_config.conf:/srv/base_scripts/migrate_backport3/config/config.conf \
-v /var/lib/luna/image_store:/local_storage \
--network=host \
--entrypoint bash dockerhub.visionlabs.ru/luna/luna-backport3:v.0.10.2 -c "cd ./base_scripts/migrate_backport3 && pip3 install -r requirements.txt && python3 ./start_migration.py"

Here:

The configuration file /var/lib/luna/current/extras/conf/migration_config.conf is added to the container for the script launching.

You can optionally use the --skip_missing_descriptors parameter, which will enable you to ignore missing descriptors in the LP 3 database.

Migration script description

Data transfer will be performed in the following order:

When migrating the API and Faces service databases, all LP 3 accounts will be migrated and stored in the Accounts service database. All migrated accounts will be of type “user”. The fields “password”, “email”, and “organization_name” will be transferred to the “account” table of the Accounts database under the new names “password”, “login”, and “description”, respectively. Tokens will be stored in the Backport 3 database, but their identifiers will also be entered in the Accounts database, where the necessary permissions will be set automatically.

When the script is launched, the log files “luna-backport3_ERROR_migration.txt” and “luna-backport3_WARNING_migration.txt” are created. The files include information about all the errors and warnings that occurred during the migration process.

The files are saved to “/srv/logs/” directory of the container.

Run the following script to get help:

docker run --rm -t \
--network=host \
--entrypoint bash dockerhub.visionlabs.ru/luna/luna-backport3:v.0.10.2 -c "cd ./base_scripts/migrate_backport3 && pip3 install -r requirements.txt && python3 ./start_migration.py --help"

To start individual migration steps, pass the --migrate command line argument.

The argument takes the following parameters:

Code example:

docker run \
--rm -t \
-v /tmp/logs/backport3:/srv/logs \
-v /var/lib/luna/current/extras/conf/migration_config.conf:/srv/base_scripts/migrate_backport3/config/config.conf \
-v /var/lib/luna/image_store:/local_storage \
--network=host \
--entrypoint bash dockerhub.visionlabs.ru/luna/luna-backport3:v.0.10.2 -c "cd ./base_scripts/migrate_backport3 && pip3 install -r requirements.txt && python3 ./start_migration.py --migrate stage_1"

If you are migrating from stage_3, check that the “face” and “attribute” tables in the Faces (LUNA PLATFORM 5) database have entries.

If something went wrong during stage_3, use the --lower_boundary argument to specify the ID of the last failed face and continue the migration from it. The face ID can be found in the migration logs.

For example:

docker run --rm -t -v \
/tmp/logs/backport3:/srv/logs \
-v /var/lib/luna/current/extras/conf/migration_config.conf:/srv/base_scripts/migrate_backport3/config/config.conf \
--network=host \
--entrypoint bash dockerhub.visionlabs.ru/luna/luna-backport3:v.0.10.2 -c "cd ./base_scripts/migrate_backport3 && pip3 install -r requirements.txt && python3 ./start_migration.py --lower_boundary 02e7b0db-b3c3-4446-bbdd-0f0d9a566058"

Stop LUNA PLATFORM 3 services

Stop and disable all the LUNA PLATFORM 3 services.

systemctl stop luna-image-store luna-faces luna-broker luna-extractor@1 luna-matcher@1 luna-stat-lpse.service luna-stat-sm.service luna-api luna-admin_back luna-admin_tasks aerospike
systemctl disable luna-image-store luna-faces luna-broker luna-extractor@1 luna-matcher@1 luna-stat-lpse.service luna-stat-sm.service luna-api luna-admin_back luna-admin_tasks aerospike
systemctl status luna-image-store luna-faces luna-broker luna-extractor@1 luna-matcher@1 luna-stat-lpse.service luna-stat-sm.service luna-api luna-admin_back luna-admin_tasks aerospike

Image Store

Samples migration

Samples migration is required to add an account for each sample.

Create a backup of all the samples buckets before launching the following script.

It is implied that the Image Store storage from LUNA PLATFORM 3 will be used with LUNA PLATFORM 5. Storage transfer is not performed during the migration.

You should use existing buckets during the LUNA PLATFORM 5 Image Store launching.

Change the default bucket used for the samples storage to “visionlabs-warps”.

By default, the samples bucket in LUNA PLATFORM 3 was called “visionlabs-warps”.

"bucket": "visionlabs-warps"

Portraits migration

Backport 3 uses samples as portraits by default, so it is not required to store portraits and samples simultaneously.

If samples will not be used as portraits, it is required to migrate portraits (run the migration with the --migrate_portraits flag).

If portraits are required, you should turn off the USE_SAMPLES_AS_PORTRAITS setting of Backport 3.

You should follow one of these steps if you are going to use portraits:

You can configure all the listed settings in the Configurator service of LUNA PLATFORM 5 or configuration files of the corresponding services (if the Configurator service is not utilized).

Image Store container launch

Note: If you are not going to use the Image Store service, do not launch this container and disable the service utilization in Configurator. See section “Optional services usage”.

Use the following command to launch the Image Store service:

docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=5020 \
--env=WORKER_COUNT=1 \
--env=RELOAD_CONFIG=1 \
--env=RELOAD_CONFIG_INTERVAL=10 \
-v /var/lib/luna/image_store/:/srv/local_storage/ \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/image-store:/srv/logs \
--name=luna-image-store \
--restart=always \
--detach=true \
--network=host \
dockerhub.visionlabs.ru/luna/luna-image-store:v.3.10.2

Here -v /var/lib/luna/image_store/:/srv/local_storage/ mounts the specified host directory into the Docker container at launch. All the data from the corresponding Docker container folder is saved to this host directory.

If you already have a directory with LP buckets, specify it instead of /var/lib/luna/image_store/.

Buckets creation

Buckets are required to store data in Image Store. The Image Store service should be launched before executing the commands below.

When upgrading from a previous version, it is recommended to run the bucket creation commands again to make sure that all the required buckets exist.

If an error with code 13006 appears when running these commands, the bucket already exists.

There are two ways to create buckets in LP: using the scripts below or using direct requests to the Image Store service.

Run this script to create general buckets:

docker run \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/api:/srv/logs \
--rm \
--network=host \
dockerhub.visionlabs.ru/luna/luna-api:v.6.23.0 \
python3 ./base_scripts/lis_bucket_create.py -ii --luna-config http://localhost:5070/1

If you are going to use the Tasks service, use the following command to additionally create the “task-result” bucket in the Image Store service:

docker run \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/tasks:/srv/logs \
--rm \
--network=host \
dockerhub.visionlabs.ru/luna/luna-tasks:v.3.19.2 \
python3 ./base_scripts/lis_bucket_create.py -ii --luna-config http://localhost:5070/1

If you are going to use portraits, use the following command to additionally create the “portraits” bucket:

docker run \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/api:/srv/logs \
--rm \
--network=host \
dockerhub.visionlabs.ru/luna/luna-backport3:v.0.10.2 \
python3 ./base_scripts/lis_bucket_create.py -ii --luna-config http://localhost:5070/1

Alternatively, use direct requests to create the required buckets.

The curl utility is required for the following requests.

The “visionlabs-samples” bucket is used for face samples storage. The bucket is required for LP utilization.

curl -X POST http://127.0.0.1:5020/1/buckets?bucket=visionlabs-samples

The “portraits” bucket is used for portraits storage. The bucket is required for Backport 3 utilization.

curl -X POST http://127.0.0.1:5020/1/buckets?bucket=portraits

The “visionlabs-bodies-samples” bucket is used for human bodies samples storage. The bucket is required for LP utilization.

curl -X POST http://127.0.0.1:5020/1/buckets?bucket=visionlabs-bodies-samples

The “visionlabs-image-origin” bucket is used for source images storage. The bucket is required for LP utilization.

curl -X POST http://127.0.0.1:5020/1/buckets?bucket=visionlabs-image-origin

The “visionlabs-objects” bucket is used for objects storage. The bucket is required for LP utilization.

curl -X POST http://127.0.0.1:5020/1/buckets?bucket=visionlabs-objects

The “task-result” bucket for the Tasks service. Do not use it if you are not going to use the Tasks service.

curl -X POST http://127.0.0.1:5020/1/buckets?bucket=task-result

Accounts

Accounts container launch

Use the following command to launch the service:

docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=5170 \
--env=WORKER_COUNT=1 \
--env=RELOAD_CONFIG=1 \
--env=RELOAD_CONFIG_INTERVAL=10 \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/accounts:/srv/logs \
--name=luna-accounts \
--restart=always \
--detach=true \
--network=host \
dockerhub.visionlabs.ru/luna/luna-accounts:v.0.2.2

Licenses

Note: To use a trial license, the Licenses service must be launched on the same server where the trial license is used.

Specify license settings using Configurator

Follow the steps below to set the settings for HASP-key or Guardant-key.

Specify HASP license settings

Note: Perform these actions only if the HASP key is used. See the “Specify Guardant license settings” section if the Guardant key is used.

To set the license server address, follow these steps:

If the license is activated using the HASP key, then two parameters “vendor” and “server_address” must be specified. If you want to change the HASP protection to Guardant, then you need to add the “license_id” field.

Specify Guardant license settings

Note: Perform these actions only if the Guardant key is used. See the “Specify HASP license settings” section if the HASP key is used.

To set the license server address, follow these steps:

If the license is activated using the Guardant key, then three parameters “vendor”, “server_address” and “license_id” must be specified. If you want to change the Guardant protection to HASP, then you need to delete the “license_id” field.
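For reference, a settings fragment for a Guardant key might look like the following (all values are placeholders; substitute the actual vendor name, license server address, and license ID for your key):

```
{
    "vendor": "guardant",
    "server_address": "127.0.0.1",
    "license_id": "0x92A37F01"
}
```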

Licenses container launch

Use the following command to launch the service:

docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=5120 \
--env=WORKER_COUNT=1 \
--env=RELOAD_CONFIG=1 \
--env=RELOAD_CONFIG_INTERVAL=10 \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/licenses:/srv/logs \
--name=luna-licenses \
--restart=always \
--detach=true \
--network=host \
dockerhub.visionlabs.ru/luna/luna-licenses:v.0.9.5

Faces

Faces container launch

Use the following command to launch the service:

docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=5030 \
--env=WORKER_COUNT=2 \
--env=RELOAD_CONFIG=1 \
--env=RELOAD_CONFIG_INTERVAL=10 \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/faces:/srv/logs \
--name=luna-faces \
--restart=always \
--detach=true \
--network=host \
dockerhub.visionlabs.ru/luna/luna-faces:v.4.10.2

Events

Events DB tables creation

Note: If you are not going to use the Events service, do not launch this container and disable the service utilization in Configurator. See section “Optional services usage”.

Use the following command to create the Events DB tables:

docker run \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/events:/srv/logs \
--rm \
--network=host \
dockerhub.visionlabs.ru/luna/luna-events:v.4.11.3 \
python3 ./base_scripts/db_create.py --luna-config http://localhost:5070/1

Events container launch

Note: If you are not going to use the Events service, do not launch this container and disable the service utilization in Configurator. See section “Optional services usage”.

Use the following command to launch the service:

docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=5040 \
--env=WORKER_COUNT=1 \
--env=RELOAD_CONFIG=1 \
--env=RELOAD_CONFIG_INTERVAL=10 \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/events:/srv/logs \
--name=luna-events \
--restart=always \
--detach=true \
--network=host \
dockerhub.visionlabs.ru/luna/luna-events:v.4.11.3

Python Matcher services

For matching tasks, you can use either only the Python Matcher service, or additionally use the Python Matcher Proxy service, which redirects matching requests to either the Python Matcher service or matching plugins. This section describes how to use Python Matcher without Python Matcher Proxy.

You need to use the Python Matcher Proxy service only if you are going to use matching plugins. Using Python Matcher Proxy and running the corresponding docker container are described in the “Use Python Matcher with Python Matcher Proxy” section.

See the description and usage of matching plugins in the administrator manual.

Use Python Matcher without Python Matcher Proxy

Matching against the Faces DB is enabled in the Python Matcher service by default after launching.

Matching against the Events DB is also enabled by default. You can disable it by specifying “USE_LUNA_EVENTS = 0” in the “ADDITIONAL_SERVICES_USAGE” settings of Configurator (see the “Optional services usage” section). In this case, the Events service will not be used by LUNA PLATFORM.

The Python Matcher that matches using the matcher library is enabled when “CACHE_ENABLED” is set to “true” in the “DESCRIPTORS_CACHE” setting.
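As a sketch, the relevant fragment of the Python Matcher settings might look like this (key casing and any surrounding fields are assumptions; check the setting in Configurator):

```
"DESCRIPTORS_CACHE": {
    "CACHE_ENABLED": true
}
```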

A single Docker image is used for both the Python Matcher service and the Python Matcher Proxy service.

Python Matcher container launch

Use the following command to launch the service:

docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=5100 \
--env=WORKER_COUNT=1 \
--env=RELOAD_CONFIG=1 \
--env=RELOAD_CONFIG_INTERVAL=10 \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/python-matcher:/srv/logs \
--name=luna-python-matcher \
--restart=always \
--detach=true \
--network=host \
dockerhub.visionlabs.ru/luna/luna-python-matcher:v.1.8.2

Remote SDK

Remote SDK container launch

You can run the Remote SDK service utilizing CPU (set by default) or GPU.

By default, the Remote SDK service is launched with all estimators and detectors enabled. If necessary, you can disable the use of some estimators or detectors when launching the Remote SDK container. Disabling unnecessary estimators enables you to save RAM or GPU memory, since when the Remote SDK service launches, the possibility of performing these estimates is checked and neural networks are loaded into memory. If you disable the estimator or detector, you can also remove its neural network from the Remote SDK container. See the “Enable/disable several estimators and detectors” section of the administrator manual for more information.

Run the Remote SDK service using one of the following commands according to the utilized processing unit.

Run Remote SDK utilizing CPU

Use the following command to launch the Remote SDK service using CPU:

docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=5220 \
--env=WORKER_COUNT=1 \
--env=RELOAD_CONFIG=1 \
--env=RELOAD_CONFIG_INTERVAL=10 \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/remote-sdk:/srv/logs \
--network=host \
--name=luna-remote-sdk \
--restart=always \
--detach=true \
dockerhub.visionlabs.ru/luna/luna-remote-sdk:v.0.4.0

Run Remote SDK utilizing GPU

The Remote SDK service does not utilize GPU by default. If you are going to use the GPU, then you should enable its use for the Remote SDK service in the Configurator service.

If you need to use the GPU for all estimators and detectors at once, then you need to use the “global_device_class” parameter in the “LUNA_REMOTE_SDK_RUNTIME_SETTINGS” section. All estimators and detectors will use the value of this parameter if the “device_class” parameter of their settings like "LUNA_REMOTE_SDK_<estimator-or-detector-name>_SETTINGS.runtime_settings" is set to “global” (by default for all estimators and detectors).

If you need to use the GPU for a specific estimator or detector, then you need to use the “device_class” parameter in sections like "LUNA_REMOTE_SDK_<estimator/detector-name>_SETTINGS.runtime_settings".
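As an illustration, switching all estimators and detectors to GPU at once might then be a single change in the “LUNA_REMOTE_SDK_RUNTIME_SETTINGS” section (the “gpu” value is an assumption; check the available values in Configurator):

```
"LUNA_REMOTE_SDK_RUNTIME_SETTINGS": {
    "global_device_class": "gpu"
}
```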

See section “Calculations using GPU” for additional requirements for GPU utilization.

Use the following command to launch the Remote SDK service using GPU:

docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=5220 \
--env=WORKER_COUNT=1 \
--env=RELOAD_CONFIG=1 \
--env=RELOAD_CONFIG_INTERVAL=10 \
--gpus device=0 \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/remote-sdk:/srv/logs \
--network=host \
--name=luna-remote-sdk \
--restart=always \
--detach=true \
dockerhub.visionlabs.ru/luna/luna-remote-sdk:v.0.4.0

Here --gpus device=0 specifies the GPU device to use and enables GPU utilization. A single GPU can be utilized per Remote SDK instance; multiple GPUs per instance are not supported.

Run slim version of Remote SDK

You can run a slim version of the Remote SDK service that contains only configuration files without neural networks. It is assumed that users will add the neural networks they need to the container themselves.

The launch of the slim version of the Remote SDK service is intended for advanced users.

To successfully launch the Remote SDK container with a custom set of neural networks, you need to perform the following actions:

Using the “--enable-all-estimators-by-default” flag in the “EXTEND_CMD” variable, you can disable all neural networks (estimators) by default and then explicitly enable specific ones with dedicated flags. If you do not specify this flag, or set “--enable-all-estimators-by-default=1”, the Remote SDK service will try to find all neural networks in the container; if any neural network is missing, the service will not start.

List of available estimators:

Argument Description
--enable-all-estimators-by-default Enable all estimators by default.
--enable-human-detector Simultaneous detector of faces and bodies.
--enable-face-detector Face detector.
--enable-body-detector Body detector.
--enable-face-landmarks5-estimator Face landmarks5 estimator.
--enable-face-landmarks68-estimator Face landmarks68 estimator.
--enable-head-pose-estimator Head pose estimator.
--enable-liveness-estimator Liveness estimator.
--enable-fisheye-estimator FishEye effect estimator.
--enable-face-detection-background-estimator Image background estimator.
--enable-face-warp-estimator Face sample estimator.
--enable-body-warp-estimator Body sample estimator.
--enable-quality-estimator Image quality estimator.
--enable-image-color-type-estimator Image color type estimator.
--enable-face-natural-light-estimator Natural light estimator.
--enable-eyes-estimator Eyes estimator.
--enable-gaze-estimator Gaze estimator.
--enable-mouth-attributes-estimator Mouth attributes estimator.
--enable-emotions-estimator Emotions estimator.
--enable-mask-estimator Mask estimator.
--enable-glasses-estimator Glasses estimator.
--enable-eyebrow-expression-estimator Eyebrow expression estimator.
--enable-red-eyes-estimator Red eyes estimator.
--enable-headwear-estimator Headwear estimator.
--enable-basic-attributes-estimator Basic attributes estimator.
--enable-face-descriptor-estimator Face descriptor extraction estimator.
--enable-body-descriptor-estimator Body descriptor extraction estimator.
--enable-body-attributes-estimator Body attributes estimator.
--enable-people-count-estimator People count estimator.
--enable-deepfake-estimator Deepfake estimator.

See the detailed information on enabling and disabling certain estimators in the section “Enable/disable several estimators and detectors” of the administrator manual.

Below is an example of a command to assign rights to a neural network file:

chown -R 1001:0 /var/lib/luna/current/<neural_network_name>.plan

Example of a command to run Remote SDK container with mounting neural networks for face detection and face descriptor extraction:

docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=5220 \
--env=WORKER_COUNT=1 \
--env=RELOAD_CONFIG=1 \
--env=RELOAD_CONFIG_INTERVAL=10 \
--env=EXTEND_CMD="--enable-all-estimators-by-default=0 --enable-face-detector=1 --enable-face-descriptor-estimator=1" \
-v /var/lib/luna/current/cnn59b_cpu-avx2.plan:/srv/fsdk/data/cnn59b_cpu-avx2.plan \
-v /var/lib/luna/current/FaceDet_v3_a1_cpu-avx2.plan:/srv/fsdk/data/FaceDet_v3_a1_cpu-avx2.plan \
-v /var/lib/luna/current/FaceDet_v3_redetect_v3_cpu-avx2.plan:/srv/fsdk/data/FaceDet_v3_redetect_v3_cpu-avx2.plan \
-v /var/lib/luna/current/slnet_v3_cpu-avx2.plan:/srv/fsdk/data/slnet_v3_cpu-avx2.plan \
-v /var/lib/luna/current/LNet_precise_v2_cpu-avx2.plan:/srv/fsdk/data/LNet_precise_v2_cpu-avx2.plan \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/remote-sdk:/srv/logs \
--network=host \
--name=luna-remote-sdk \
--restart=always \
--detach=true \
dockerhub.visionlabs.ru/luna/luna-remote-sdk:v.0.4.0

Handlers

Note: If you are not going to use the Handlers service, do not launch this container and disable the service utilization in Configurator. See section “Optional services usage”.

Handlers DB tables creation

Use the following command to create the Handlers DB tables:

docker run \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/handlers:/srv/logs \
--rm \
--network=host \
dockerhub.visionlabs.ru/luna/luna-handlers:v.3.4.2 \
python3 ./base_scripts/db_create.py --luna-config http://localhost:5070/1

Handlers container launch

Use the following command to launch the service:

docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=5090 \
--env=WORKER_COUNT=1 \
--env=RELOAD_CONFIG=1 \
--env=RELOAD_CONFIG_INTERVAL=10 \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/handlers:/srv/logs \
--name=luna-handlers \
--restart=always \
--detach=true \
--network=host \
dockerhub.visionlabs.ru/luna/luna-handlers:v.3.4.2

Tasks

Note: If you are not going to use the Tasks service, do not launch the Tasks container and the Tasks Worker container. Disable the service utilization in Configurator. See section “Optional services usage”.

Tasks DB tables creation

Use the following command to create Tasks DB tables:

docker run \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/tasks:/srv/logs \
--rm \
--network=host \
dockerhub.visionlabs.ru/luna/luna-tasks:v.3.19.2 \
python3 ./base_scripts/db_create.py --luna-config http://localhost:5070/1

Tasks and Tasks Worker containers launch

The Tasks service image includes both the Tasks service and the Tasks Worker. Both must be launched.

The “task-result” bucket should be created before the Tasks service launch. Bucket creation is described in the “Buckets creation” section.

If it is necessary to use the Estimator task using a network disk, then you should first mount the directory with images from the network disk into special directories of Tasks and Tasks Worker containers. See the “Estimator task” section in the administrator manual for details.

Tasks Worker launch

Use the following command to launch the service:

docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=5051 \
--env=WORKER_COUNT=1 \
--env=RELOAD_CONFIG=1 \
--env=RELOAD_CONFIG_INTERVAL=10 \
--env=SERVICE_TYPE="tasks_worker" \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/tasks-worker:/srv/logs \
--name=luna-tasks-worker \
--restart=always \
--detach=true \
--network=host \
dockerhub.visionlabs.ru/luna/luna-tasks:v.3.19.2

Tasks launch

Use the following command to launch the service:

docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=5050 \
--env=WORKER_COUNT=1 \
--env=RELOAD_CONFIG=1 \
--env=RELOAD_CONFIG_INTERVAL=10 \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/tasks:/srv/logs \
--name=luna-tasks \
--restart=always \
--detach=true \
--network=host \
dockerhub.visionlabs.ru/luna/luna-tasks:v.3.19.2

Sender

Sender container launch

Note: If you are not going to use the Sender service, do not launch this container and disable the service utilization in Configurator. See section “Optional services usage”.

Use the following command to launch the service:

docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=5080 \
--env=WORKER_COUNT=1 \
--env=RELOAD_CONFIG=1 \
--env=RELOAD_CONFIG_INTERVAL=10 \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/sender:/srv/logs \
--name=luna-sender \
--restart=always \
--detach=true \
--network=host \
dockerhub.visionlabs.ru/luna/luna-sender:v.2.10.2

API

API container launch

Use the following command to launch the service:

docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=5000 \
--env=WORKER_COUNT=1 \
--env=RELOAD_CONFIG=1 \
--env=RELOAD_CONFIG_INTERVAL=10 \
--name=luna-api \
--restart=always \
--detach=true \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/api:/srv/logs \
--network=host \
dockerhub.visionlabs.ru/luna/luna-api:v.6.23.0

Admin

Admin container launch

Note: If you are not going to use the Admin service, do not launch this container.

Use the following command to launch the service:

docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=5010 \
--env=WORKER_COUNT=1 \
--env=RELOAD_CONFIG=1 \
--env=RELOAD_CONFIG_INTERVAL=10 \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/admin:/srv/logs \
--name=luna-admin \
--restart=always \
--detach=true \
--network=host \
dockerhub.visionlabs.ru/luna/luna-admin:v.5.5.2 

Monitoring data about the number of executed requests is saved in the luna-admin bucket of InfluxDB. To enable data saving, use the following command:

docker exec -it luna-admin python3 ./base_scripts/influx2_cli.py create_usage_task --luna-config http://127.0.0.1:5070/1

Backport 3

This section describes the launch of the Backport 3 service.

The service is not mandatory for LUNA PLATFORM 5 and is required only to emulate the LUNA PLATFORM 3 API.

Backport 3 container launch

Use the following command to launch the service:

docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=5140 \
--env=WORKER_COUNT=1 \
--env=RELOAD_CONFIG=1 \
--env=RELOAD_CONFIG_INTERVAL=10 \
--name=luna-backport3 \
--restart=always \
--detach=true \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/backport3:/srv/logs \
--network=host \
dockerhub.visionlabs.ru/luna/luna-backport3:v.0.10.2

User Interface 3

The User Interface 3 is used with the Backport 3 service only.

User Interface 3 container launch

Use the following command to launch the service:

docker run \
--env=PORT=4100 \
--env=LUNA_API_URL=http://127.0.0.1:5140 \
--name=luna-ui-3 \
--restart=always \
--detach=true \
--network=host \
-v /etc/localtime:/etc/localtime:ro \
dockerhub.visionlabs.ru/luna/luna3-ui:v.0.5.10

Here, PORT is the port on which User Interface 3 listens, and LUNA_API_URL is the address of the Backport 3 service.

Lambda

Working with the Lambda service is possible only when deploying LUNA PLATFORM services in Kubernetes. To use it, you need to deploy LUNA PLATFORM services in Kubernetes yourself or consult VisionLabs specialists. Use the commands below as reference information.

Note: If you are not going to use the Lambda service, do not run this container.

Enable the use of the Lambda service (see the section “Using optional services”).

Prepare Docker registry

It is necessary to prepare a registry for storing Lambda docker images. Transfer the base images and Kaniko executor image to your registry using the following commands.

Pull the images from the remote repository to the local image storage:

docker pull dockerhub.visionlabs.ru/luna/lpa-lambda-base-fsdk:v.0.0.45
docker pull dockerhub.visionlabs.ru/luna/lpa-lambda-base:v.0.0.45
docker pull dockerhub.visionlabs.ru/luna/kaniko-executor:latest

Tag the images with new names, replacing new-registry with your own registry address. The names of the base images in your registry must be the same as in the dockerhub.visionlabs.ru/luna registry.

docker tag dockerhub.visionlabs.ru/luna/lpa-lambda-base-fsdk:v.0.0.45 new-registry/lpa-lambda-base-fsdk:v.0.0.45
docker tag dockerhub.visionlabs.ru/luna/lpa-lambda-base:v.0.0.45 new-registry/lpa-lambda-base:v.0.0.45
docker tag dockerhub.visionlabs.ru/luna/kaniko-executor:latest new-registry/kaniko-executor:latest

Push the local images to your remote repository, replacing new-registry with your own registry address.

docker push new-registry/lpa-lambda-base-fsdk:v.0.0.45
docker push new-registry/lpa-lambda-base:v.0.0.45
docker push new-registry/kaniko-executor:latest

Create Lambda database

Use the following command to create a Lambda database in PostgreSQL:

docker exec -it postgres psql -U luna -c "CREATE DATABASE luna_lambda;"

Lambda DB tables creation

Use the following command to create the Lambda DB tables:

docker run \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/lambda:/srv/logs \
--rm \
--network=host \
dockerhub.visionlabs.ru/luna/luna-lambda:v.0.2.0 \
python3 ./base_scripts/db_create.py --luna-config http://localhost:5070/1

Lambda container launch

Use the following command to start the service:

docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=5210 \
--env=WORKER_COUNT=1 \
--env=RELOAD_CONFIG=1 \
--env=RELOAD_CONFIG_INTERVAL=10 \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/lambda:/srv/logs \
--name=luna-lambda \
--restart=always \
--detach=true \
--network=host \
dockerhub.visionlabs.ru/luna/luna-lambda:v.0.2.0

Additional information

This section provides the following additional information:

Monitoring and logs visualization using Grafana

Monitoring visualization is performed by the LUNA Dashboards service, which contains the Grafana monitoring data visualization platform with configured LUNA PLATFORM dashboards.

If necessary, you can install customized dashboards for Grafana separately. See the “LUNA Dashboards” section in the administrator manual for more information.

Together with Grafana, you can use the Grafana Loki log aggregation system, which enables you to flexibly work with LUNA PLATFORM logs. The Promtail agent is used to deliver LUNA PLATFORM logs to Grafana Loki (for more information, see the “Grafana Loki” section in the administrator manual).

LUNA Dashboards

Note: To work with Grafana you need to use InfluxDB version 2.

Note: Before updating, make sure that the old LUNA Dashboards container is deleted.

Run LUNA Dashboards container

Use the docker run command with these parameters to run Grafana:

docker run \
--restart=always \
--detach=true \
--network=host \
--name=grafana \
-v /etc/localtime:/etc/localtime:ro \
dockerhub.visionlabs.ru/luna/luna-dashboards:v.0.0.9

Use “http://IP_ADDRESS:3000” to go to the Grafana web interface when the LUNA Dashboards and InfluxDB containers are running.

Grafana Loki

Note: Grafana Loki requires LUNA Dashboards to be running.

Note: Before updating, make sure that the old Grafana Loki and Promtail containers are removed.

Run Grafana Loki container

Use the docker run command with these parameters to run Grafana Loki:

docker run \
--name=loki \
--restart=always \
--detach=true \
--network=host \
-v /etc/localtime:/etc/localtime:ro \
dockerhub.visionlabs.ru/luna/loki:2.7.1

Run Promtail container

Use the docker run command with these parameters to run Promtail:

docker run \
-v /var/lib/luna/current/example-docker/logging/promtail.yml:/etc/promtail/luna.yml \
-v /var/lib/docker/containers:/var/lib/docker/containers \
-v /etc/localtime:/etc/localtime:ro \
--name=promtail \
--restart=always \
--detach=true \
--network=host \
dockerhub.visionlabs.ru/luna/promtail:2.7.1 \
-config.file=/etc/promtail/luna.yml -client.url=http://127.0.0.1:3100/loki/api/v1/push -client.external-labels=job=containerlogs,pipeline_id=,job_id=,version=

Here, -config.file sets the path to the Promtail configuration file inside the container, -client.url sets the Grafana Loki endpoint for pushing logs, and -client.external-labels sets additional labels attached to each log entry.

Docker commands

Show containers

To show the list of launched Docker containers use the command:

docker ps

To show all the existing Docker containers use the command:

docker ps -a 

Copy files to container

You can transfer files into a container using the docker cp command:

docker cp <file_location> <container_name>:<folder_inside_container>
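For example, copying a neural network file into the Remote SDK container might look like this (the file name and the destination path are taken from the Remote SDK launch examples above and may differ in your deployment):

```
docker cp /var/lib/luna/current/cnn59b_cpu-avx2.plan luna-remote-sdk:/srv/fsdk/data/
```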

Enter container

You can enter individual containers using the following command:

docker exec -it <container_name> bash

To exit the container, use the command:

exit

Images names

You can see all the names of the images using the command:

docker images

Delete image

If you need to delete an image:

docker rmi -f 61860d036d8c

To delete all the existing images:

docker rmi -f $(docker images -q)

Stop container

You can stop the container using the command:

docker stop <container_name>

Stop all the containers:

docker stop $(docker ps -a -q)

Delete container

If you need to delete a container:

docker container rm -f 23f555be8f3a

To delete all the containers:

docker container rm -f $(docker container ls -aq)

Check service logs

You can use the following command to show logs for the service:

docker logs <container_name>

Launching parameters description

When launching a Docker container for a LUNA PLATFORM service, you should specify additional parameters required to launch the service.

Parameters specific to a particular container are described in the section about launching that container.

All the parameters given in the service launching examples are required for proper service launching and utilization.

Launching services parameters

Example command of launching LP services containers:

docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=<Port_of_the_launched_service> \
--env=WORKER_COUNT=1 \
--env=RELOAD_CONFIG=1 \
--env=RELOAD_CONFIG_INTERVAL=10 \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/<service>:/srv/logs/ \
--name=<service_container_name> \
--restart=always \
--detach=true \
--network=host \
dockerhub.visionlabs.ru/luna/<service-name>:<version>

The following parameters are used when launching LP services containers:

Links to download the container images you need are available in the description of the corresponding container launching.

Service arguments

Each service in LUNA PLATFORM has its own launch arguments. These arguments can be passed through:

Some arguments can only be passed by setting a flag. For the Handlers and Remote SDK services, it is possible to use the environment variable “EXTEND_CMD” to explicitly pass flags. See the example of using the “EXTEND_CMD” variable in the “Run slim version of Remote SDK” section.

For example, using the --help flag you can get a list of all available arguments. An example of passing an argument to the API service:

docker run --rm dockerhub.visionlabs.ru/luna/luna-api:v.6.23.0 python3 /srv/luna_api/run.py --help

List of main arguments:

Launch flag Environment variable Description
--port PORT Port on which the service will listen for connections.
--workers WORKER_COUNT Number of workers for the service.
--log_suffix LOG_SUFFIX Suffix added to log file names (when writing logs to a file is enabled).
--config-reload RELOAD_CONFIG Enable automatic configuration reload. See “Automatic configurations reload” in the LUNA PLATFORM 5 administrator manual.
--pulling-time RELOAD_CONFIG_INTERVAL Configuration checking period (default 10 seconds). See “Automatic configurations reload” in the LUNA PLATFORM 5 administrator manual.
--luna-config CONFIGURATOR_HOST, CONFIGURATOR_PORT Address of the Configurator service for downloading settings. For --luna-config, the address is passed in the format http://localhost:5070/1. For the environment variables, the host and port are set separately. If the argument is not given, the default configuration file is used.
--config None Path to the file with service configurations.
--<config_name> None

Tag of the specified configuration in the Configurator. When setting this configuration, the value of the tagged configuration will be used. Example: --INFLUX_MONITORING TAG_1

Note: You must pre-tag the appropriate configuration in the Configurator.

Note: Only works with the --luna-config flag.

The list of arguments may vary depending on the service.

It is also possible to override the settings of services at their start using environment variables.

The VL_SETTINGS prefix is used to redefine the settings. Examples:
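For illustration only, such an override could be added as an extra --env parameter to a service launch command; the example below uses the “DESCRIPTORS_CACHE” setting mentioned earlier, and the exact variable naming convention is an assumption that should be checked against the administrator manual:

```
--env=VL_SETTINGS.DESCRIPTORS_CACHE.CACHE_ENABLED=true
```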

Creating DB parameters

Example command of launching containers for database migration or database creation:

docker run \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/<service>:/srv/logs/ \
--rm \
--network=host \
dockerhub.visionlabs.ru/luna/<service-name>:<version> \
python3 ./base_scripts/db_create.py --luna-config http://localhost:5070/1

The following parameters are used when launching containers for database migration or database creation:

Here:

Logging to server

To enable saving logs to the server, you should:

Create logs directory

Below are examples of commands for creating directories for saving logs and assigning rights to them for all LUNA PLATFORM services.

mkdir -p /tmp/logs/configurator /tmp/logs/image-store /tmp/logs/accounts /tmp/logs/faces /tmp/logs/licenses /tmp/logs/events /tmp/logs/python-matcher /tmp/logs/handlers /tmp/logs/remote-sdk /tmp/logs/tasks /tmp/logs/tasks-worker /tmp/logs/sender /tmp/logs/api /tmp/logs/admin /tmp/logs/backport3 /tmp/logs/backport4
chown -R 1001:0 /tmp/logs/configurator /tmp/logs/image-store /tmp/logs/accounts /tmp/logs/faces /tmp/logs/licenses /tmp/logs/events /tmp/logs/python-matcher /tmp/logs/handlers /tmp/logs/remote-sdk /tmp/logs/tasks /tmp/logs/tasks-worker /tmp/logs/sender /tmp/logs/api /tmp/logs/admin /tmp/logs/backport3 /tmp/logs/backport4

If you need to use the Python Matcher Proxy service, then you need to additionally create the /tmp/logs/python-matcher-proxy directory and set its permissions.

Logging activation

LP services logging activation

To enable logging to file, you need to set the log_to_file and folder_with_logs settings in the <SERVICE_NAME>_LOGGER section of the settings for each service.
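For example, for the Faces service the relevant part of the logger section might look like this (a sketch; the LUNA_FACES_LOGGER section name follows the <SERVICE_NAME>_LOGGER pattern described above, and any other fields of the section are omitted here):

```json
"LUNA_FACES_LOGGER": {
    "log_to_file": true,
    "folder_with_logs": "/srv/logs"
}
```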

Automatic method (before/after starting Configurator)

To update logging settings, you can use the logging.json settings file provided with the distribution package.

Run the following command after starting the Configurator service:

docker cp /var/lib/luna/current/extras/conf/logging.json luna-configurator:/srv/luna_configurator/used_dumps/logging.json

Update your logging settings with the copied file.

docker exec -it luna-configurator python3 ./base_scripts/db_create.py --dump-file /srv/luna_configurator/used_dumps/logging.json

Manual method (after starting Configurator)

Go to the Configurator service interface (127.0.0.1:5070) and set the logs path in the container in the folder_with_logs parameter for all services whose logs need to be saved. For example, you can use the path /srv/logs.

Set the log_to_file option to true to enable logging to a file.

Configurator service logging activation (before/after Configurator start)

The Configurator service settings are not available in the Configurator user interface; they are located in the following file:

/var/lib/luna/current/example-docker/luna_configurator/configs/luna_configurator_postgres.conf

You should change the logging parameters in this file before starting the Configurator service or restart it after making changes.

Set the path to the logs location in the container in the FOLDER_WITH_LOGS parameter of the file (the default value is FOLDER_WITH_LOGS = ./). For example, FOLDER_WITH_LOGS = /srv/logs.

Set the log_to_file option to true to enable logging to a file.

Mounting directories with logs when starting services

The log directory is mounted with the following argument when starting the container:

-v <server_logs_folder>:<container_logs_folder> \

where <server_logs_folder> is the directory created in the “Create logs directory” step, and <container_logs_folder> is the path specified in the folder_with_logs parameter in the logging activation step.

Example of command to launch the API service with mounting a directory with logs:

docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=5000 \
--env=WORKER_COUNT=1 \
--env=RELOAD_CONFIG=1 \
--env=RELOAD_CONFIG_INTERVAL=10 \
--name=luna-api \
--restart=always \
--detach=true \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/api:/srv/logs \
--network=host \
dockerhub.visionlabs.ru/luna/luna-api:v.6.23.0

The example container launch commands in this documentation contain these arguments.

Docker log rotation

To limit the size of logs generated by Docker, you can set up automatic log rotation. To do this, add the following data to the /etc/docker/daemon.json file:

{
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m",
        "max-file": "5"
    }
}

This will allow Docker to store up to 5 log files per container, with each file being limited to 100MB.

After changing the file, you need to restart Docker:

systemctl restart docker

The above settings become the default for any newly created container; they do not apply to already existing containers.

Set custom InfluxDB settings

If you are going to use InfluxDB OSS 2, then you need to update the monitoring settings in the Configurator service.

There are the following settings for InfluxDB OSS 2:

"send_data_for_monitoring": 1,
"use_ssl": 0,
"flushing_period": 1,
"host": "127.0.0.1",
"port": 8086,
"organization": "<ORGANIZATION_NAME>",
"token": "<TOKEN>",
"bucket": "<BUCKET_NAME>",
"version": <DB_VERSION>

You can update InfluxDB settings in the Configurator service by following these steps:

vi /var/lib/luna/current/extras/conf/influx2.json
docker cp /var/lib/luna/current/extras/conf/influx2.json luna-configurator:/srv/
docker exec -it luna-configurator python3 ./base_scripts/db_create.py --dump-file /srv/influx2.json

You can also manually update settings in the Configurator service user interface.

The Configurator service configurations are set separately.

vi /var/lib/luna/current/example-docker/luna_configurator/configs/luna_configurator_postgres.conf
docker restart luna-configurator

Use Python Matcher with Python Matcher Proxy

As mentioned earlier, along with the Python Matcher service, you can additionally use the Python Matcher Proxy service, which redirects matching requests either to the Python Matcher service or to the matching plugins. Plugins may significantly improve matching performance. For example, plugins make it possible to store the data required for matching operations and additional object fields in separate storage, which speeds up access to the data compared to using the standard LUNA PLATFORM database.

To use the Python Matcher service with Python Matcher Proxy, you should additionally launch the appropriate container and then set the corresponding setting in the Configurator service. Follow the steps below only if you are going to use matching plugins.

See the description and usage of matching plugins in the administrator manual.

Python Matcher Proxy container launch

Use the following command to launch the service:


docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=5110 \
--env=WORKER_COUNT=1 \
--env=RELOAD_CONFIG=1 \
--env=RELOAD_CONFIG_INTERVAL=10 \
--env=SERVICE_TYPE="proxy" \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/python-matcher-proxy:/srv/logs \
--name=luna-python-matcher-proxy \
--restart=always \
--detach=true \
--network=host \
dockerhub.visionlabs.ru/luna/luna-python-matcher:v.1.8.2

After launching the container, you need to set the following value in the Configurator service.

ADDITIONAL_SERVICES_USAGE = "luna_matcher_proxy":true
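In the Configurator user interface this corresponds to the ADDITIONAL_SERVICES_USAGE setting, which is a JSON object. A sketch of the relevant fragment is shown below; any other keys the object may contain should be left unchanged:

```json
{
    "luna_matcher_proxy": true
}
```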

System scaling

All LP services are linearly scalable and can be located on several servers.

You can run additional containers with LP services to improve performance and fail-safety. The number of services and the characteristics of servers depend on your tasks.

To increase performance, you may either improve the performance of a single server or increase the number of servers used by distributing most resource-intensive components of the system.

Balancers are used for the distribution of requests among the launched service instances. This approach provides the necessary processing speed and the required fail-safety level for specific customer’s tasks. In the case of a node failure, the system will not stop: requests will be redirected to another node.

The image below shows two instances of the Faces service balanced by Nginx. Nginx receives requests on port 5030 and routes them to the Faces instances, which are launched on ports 5031 and 5032.

Faces service balancing
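This balancing scheme can be sketched as a minimal Nginx configuration fragment (an illustration only; the upstream name and addresses are assumptions based on the ports above; see the example file /var/lib/luna/current/extras/conf/nginx.conf shipped with the distribution):

```nginx
# Two Faces instances behind one balanced port.
upstream luna-faces {
    server 127.0.0.1:5031;   # first Faces instance
    server 127.0.0.1:5032;   # second Faces instance
}

server {
    listen 5030;             # port that clients use for the Faces service

    location / {
        proxy_pass http://luna-faces;
    }
}
```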

It is strongly recommended to regularly back up databases to a separate server regardless of the fail-safety level of the system. It allows you not to lose data in case of unforeseen circumstances.

MQs, databases, and balancers used by LUNA PLATFORM are products of third-party developers. You should configure them according to the recommendations of the corresponding vendors.

The Remote SDK service and the Python Matcher service perform the most resource-intensive operations.

The Remote SDK service performs mathematical image transformations and descriptors extraction. The operations require significant computational resources. Both CPU and GPU can be used for computations.

GPU usage is preferable since it improves the processing of requests. However, not all types of video cards are supported.

The Python Matcher service performs matching with lists. Matching requires CPU resources; however, you should also allocate as much RAM as possible for each Python Matcher instance. The RAM is used to store descriptors received from the database, so the matcher does not need to request each descriptor from the database separately.
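A rough back-of-the-envelope sketch of this RAM sizing (the descriptor size and overhead factor below are assumptions for illustration only; check the size of the descriptor version you actually use):

```python
# Back-of-the-envelope RAM estimate for one Python Matcher instance
# that keeps all descriptors in memory.

def estimate_ram_gb(num_descriptors: int,
                    descriptor_size_bytes: int = 512,
                    overhead_factor: float = 1.3) -> float:
    """Rough RAM (GiB) needed to hold all descriptors in memory.

    descriptor_size_bytes and overhead_factor are illustrative
    assumptions, not LUNA PLATFORM constants.
    """
    total_bytes = num_descriptors * descriptor_size_bytes * overhead_factor
    return total_bytes / 1024 ** 3

# For example, 10 million descriptors at 512 bytes each:
print(f"{estimate_ram_gb(10_000_000):.1f} GiB")  # roughly 6.2 GiB
```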

When distributing instances on several servers, you should consider the performance of each server. For example, if a large task is executed by several Python Matcher instances, and one of the instances is on the server with low performance, this can slow down the execution of the entire task.

For each instance of the service, you can set the number of workers. The greater the number of workers, the more resources and memory are consumed by the service instance. See the detailed information in the “Worker processes” section of the LUNA PLATFORM administrator manual.

Launching several containers

Two steps are required for launching several instances of the same LP service:

  1. Run several containers of the service.

You must launch the required number of service instances by using the corresponding command for the service.

For example, for the API service you must run the following command with updated parameters.

docker run \
--env=CONFIGURATOR_HOST=127.0.0.1 \
--env=CONFIGURATOR_PORT=5070 \
--env=PORT=<port> \
-v /etc/localtime:/etc/localtime:ro \
-v /tmp/logs/<folder_name>:/srv/logs \
--name=<name> \
--restart=always \
--detach=true \
--network=host \
dockerhub.visionlabs.ru/luna/luna-api:v.6.23.0

When running several similar containers, the following parameters of the containers must differ:

  - the service port (the --env=PORT parameter);
  - the container name (the --name parameter);
  - the directory with logs (the mounted /tmp/logs/<folder_name> directory).

  2. Configure your balancer (e.g., Nginx) for routing requests to the services.

For each scaled LP service, you must set a port where Nginx will listen to service requests and real ports of each service instance where Nginx will redirect the requests.

An example of Nginx configuration file can be found here:

“/var/lib/luna/current/extras/conf/nginx.conf”.

You can use another balancer, but its utilization is not described in this documentation.

VLMatch library compilation for Oracle

Note: The following instruction describes installation for Oracle 21c.

You can find all the required files for the VLMatch user-defined extension (UDx) compilation in the following directory:

/var/lib/luna/current/extras/VLMatch/oracle

For VLMatch UDx function compilation, you need to:

  1. Install the required build environment:
sudo yum install gcc g++
  2. Set the SDK_HOME variable (the Oracle SDK root; the default is $ORACLE_HOME/bin, so check that the $ORACLE_HOME environment variable is set) in the makefile:
vi /var/lib/luna/current/extras/VLMatch/oracle/make.sh
  3. Open the directory and run the “make.sh” file:
cd /var/lib/luna/current/extras/VLMatch/oracle
chmod +x make.sh
./make.sh
  4. Define the library and the function inside the database (from the database console):
CREATE OR REPLACE LIBRARY VLMatchSource AS '$ORACLE_HOME/bin/VLMatchSource.so';
CREATE OR REPLACE FUNCTION VLMatch(descriptorFst IN RAW, descriptorSnd IN RAW, length IN BINARY_INTEGER)
   RETURN BINARY_FLOAT
AS
   LANGUAGE C
   LIBRARY VLMatchSource
   NAME "VLMatch"
   PARAMETERS (descriptorFst BY REFERENCE, descriptorSnd BY REFERENCE, length UNSIGNED SHORT, RETURN FLOAT);
  5. Test the function with a call (from the database console):
SELECT VLMatch(HEXTORAW('1234567890123456789012345678901234567890123456789012345678901234'), HEXTORAW('0123456789012345678901234567890123456789012345678901234567890123'), 32) FROM DUAL;

The result returned by the database must be “0.4765625”.