
Before installation#

Make sure that you are the root user before starting the installation!
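
For example, you can check the current user and, if necessary, switch to root:

whoami
sudo -i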

Distribution unpacking#

The distribution package is an archive luna_v.5.24.0, where v.5.24.0 is a numerical identifier of the current LUNA PLATFORM version.

The archive includes the configuration files required for installation and operation. It does not include Docker images for the services; they should be downloaded from the Internet.

Move the distribution package to a directory on your server before the installation, for example, to the /root/ directory. The directory should not contain any other distribution or license files except the target ones.

Create a directory for unpacking the distribution file.

mkdir -p /var/lib/luna

Move the distribution to the created directory.

mv /root/luna_v.5.24.0.zip /var/lib/luna

Install the unzip utility if necessary.

yum install -y unzip

Go to the directory with the distribution.

cd /var/lib/luna

Unzip the files.

unzip luna_v.5.24.0.zip

Create a symbolic link. The link indicates that the current version of the distribution file is used to run LUNA PLATFORM.

ln -s luna_v.5.24.0 current
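
You can check that the link points to the unpacked directory:

ls -l /var/lib/luna/current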

Create logs directory#

Skip this section if you do not need to save logs on the server.

To save logs on the server, you need to create an appropriate directory if it has not been created yet.

All the service logs will be copied to this directory.

mkdir -p /tmp/logs
chown -R 1001:0 /tmp/logs

If the necessary per-service log directories have not been created yet, create them manually and set their permissions.

mkdir -p /tmp/logs/configurator /tmp/logs/image-store /tmp/logs/faces /tmp/logs/licenses \
    /tmp/logs/events /tmp/logs/python-matcher /tmp/logs/handlers /tmp/logs/tasks \
    /tmp/logs/tasks-worker /tmp/logs/sender /tmp/logs/api /tmp/logs/admin \
    /tmp/logs/backport3 /tmp/logs/backport4
chown -R 1001:0 /tmp/logs/configurator /tmp/logs/image-store /tmp/logs/faces /tmp/logs/licenses \
    /tmp/logs/events /tmp/logs/python-matcher /tmp/logs/handlers /tmp/logs/tasks \
    /tmp/logs/tasks-worker /tmp/logs/sender /tmp/logs/api /tmp/logs/admin \
    /tmp/logs/backport3 /tmp/logs/backport4

If you need to use the Python Matcher Proxy service, you additionally need to create the /tmp/logs/python-matcher-proxy directory and set its permissions.
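
For example (following the same ownership pattern as above):

mkdir -p /tmp/logs/python-matcher-proxy
chown -R 1001:0 /tmp/logs/python-matcher-proxy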

SELinux and Firewall#

You must configure SELinux and Firewall so that they do not block LUNA PLATFORM services.

SELinux and Firewall configurations are not described in this guide.

If SELinux and Firewall are not configured, the installation cannot be performed.
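
For reference, a minimal sketch of temporarily switching SELinux to permissive mode and opening the HASP port (1947) with firewalld; adjust it to your security policy:

setenforce 0
firewall-cmd --permanent --add-port=1947/tcp
firewall-cmd --reload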

License key activation#

The HASP service is used for LUNA PLATFORM licensing. Without a license, you will be unable to run and use LUNA services.

There are two keys for LUNA PLATFORM:

  • a general HASP key that enables you to use LUNA PLATFORM. It uses the haspvlib_x86_64_111186.so vendor library;
  • an optional HASP key for the Liveness V1 service, if you need to use this service. It uses the haspvlib_107506.so vendor library.

You can find the vendor libraries in the "/var/hasplm/" directory.

License keys are provided by VisionLabs separately upon request. The utilized Liveness version is specified in the LUNA PLATFORM license key.

A network license is required to use LUNA PLATFORM in Docker containers.

The license key is created using a fingerprint. The fingerprint is based on the hardware characteristics of the server, so the received license key will only work on the server where the fingerprint was obtained. A new license key may be required if you make changes to the license server.

Follow these steps:

  • Install the HASP utility on your license server. The HASP utility is usually installed on a separate server;
  • Start the HASP utility;
  • Create the fingerprint of your server and send it to VisionLabs;
  • Activate the key received from VisionLabs;
  • Specify your HASP server address in a special file.

The Sentinel Keys tab of the user interface (<server_host_address>:1947) shows the activated keys.

Liveness V2#

If you are going to use Liveness V2:

  • the Liveness feature should be set to "2" in your LUNA PLATFORM HASP key. This feature is set by VisionLabs engineers when the license is requested;
  • the "liveness" parameter should be set to "false" in the "ADDITIONAL_SERVICES_USAGE" setting in the Configurator (see the sketch below). It is set to "false" by default, so Liveness V2 is activated by default when the Liveness feature is set to "2".
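
A minimal sketch of the relevant fragment of the "ADDITIONAL_SERVICES_USAGE" setting in the Configurator (the exact structure of the setting may differ in your LP version):

{
    "liveness": false
}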

Liveness V2 does not require an additional license key. It is not required to launch any additional services to use Liveness V2. It is a part of the Handlers service.

See the "Liveness description" section in LP_Administrator_Manual.pdf for additional information about Liveness.

LUNA PLATFORM license#

LP uses the HASP utility of a specific version. If an older version of the HASP utility is installed, you must delete it before installing the new version. See "Delete LP HASP utility".

Install HASP utility for LP#

Go to the HASP directory.

cd /var/lib/luna/current/extras/hasp/

Install the HASP utility on your server.

yum -y install /var/lib/luna/current/extras/hasp/aksusbd-*.rpm

Launch the HASP utility.

systemctl daemon-reload
systemctl start aksusbd
systemctl enable aksusbd
systemctl status aksusbd

Configure HASP utility#

You can configure the HASP utility using the "/etc/hasplm/hasplm.ini" file.

Note! You do not need to perform this action if you already have the configured INI file for the HASP utility.

Delete the old file if necessary.

rm -rf /etc/hasplm/hasplm.ini

Copy the INI file with configurations. Its parameters are not described in this document.

cp /var/lib/luna/current/extras/hasp/hasplm.ini /etc/hasplm/

Add LP vendor library#

Copy the LP vendor libraries (32-bit and 64-bit). These libraries are required for using the LP license key.

cp /var/lib/luna/current/extras/hasp/haspvlib_111186.so /var/hasplm/
cp /var/lib/luna/current/extras/hasp/haspvlib_x86_64_111186.so /var/hasplm/

Restart the utility.

systemctl restart aksusbd

Create fingerprint for LUNA PLATFORM#

Go to the HASP directory.

cd /var/lib/luna/current/extras/hasp/

Add permissions to the script.

chmod +x LicenseAssist

Run the script.

./LicenseAssist fingerprint fingerprint_111186.c2v

The fingerprint is saved to the "fingerprint_111186.c2v" file.

Send the file to VisionLabs. Your license key will be created using this fingerprint.

Add a license file manually using user interface#

  • Go to <server_host_address>:1947 in your browser. If access is denied, check your Firewall/SELinux settings (the procedure is not described in this document);

  • Select "Update/Attach" in the left pane;

  • Press the "Select File..." button and select the license file(s) in the window that appears;

  • Press the "Apply File" button.

License file is added manually

Specify license server address for LP#

Specify your license server IP address in the configuration file located in the "/var/lib/luna/current/example-docker/hasp_redirect/" directory. Open the file:

vi /var/lib/luna/current/example-docker/hasp_redirect/hasp_111186.ini

Change the server address in the "hasp_111186.ini" file:

serveraddr = <HASP_server_address>

The "hasp_111186.ini" file is used by the Licenses service upon its container launch. It is required to restart the launched container when the server is changed.

<HASP_server_address> is the IP address of the server with your HASP key. You must use an IP address, not a server name.
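
As an alternative to editing the file manually, you can set the address with sed (192.168.1.10 below is a hypothetical address):

sed -i 's/^serveraddr = .*/serveraddr = 192.168.1.10/' /var/lib/luna/current/example-docker/hasp_redirect/hasp_111186.ini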

Liveness V1 license#

Perform the following actions if you are going to use the Liveness V1 service.

Liveness V2 does not require an additional key.

Note! Do not perform the actions below before LP license key activation!

Install HASP utility for Liveness V1#

This action is performed on a server different from the LP license server.

Go to the HASP directory.

cd /var/lib/luna/current/extras/hasp/

Install the HASP utility on your server.

tar -xvf aksusbd-7.103.1.tar
cd aksusbd-7.103.1
./dinst
systemctl status aksusbd

Add configuration file to the utility#

Note! You do not need to perform this action if you already have the configured INI file for the HASP utility.

Copy the INI file with configurations.

rm -rf /etc/hasplm/hasplm.ini
cp /var/lib/luna/current/extras/hasp/hasplm.ini /etc/hasplm/

Add LP vendor library#

Copy the LP vendor libraries (32-bit and 64-bit). These libraries are required for using the LP license key.

Note! This action is performed if you install both keys for LP and Liveness V1 on a single server.

cp /var/lib/luna/current/extras/hasp/haspvlib_111186.so /var/hasplm/
cp /var/lib/luna/current/extras/hasp/haspvlib_x86_64_111186.so /var/hasplm/

Restart the utility.

systemctl restart aksusbd

Create fingerprint for Liveness V1#

Go to the HASP directory.

cd /var/lib/luna/current/extras/hasp/

Add permissions to the script.

chmod +x Liveness_FP_tool.dms

Run the script to create a fingerprint.

./Liveness_FP_tool.dms f fingerprint_107506.c2v

The fingerprint is saved to the "fingerprint_107506.c2v" file.

Send the file to VisionLabs. Your license key will be created using this fingerprint.

Add a license file manually using user interface#

  • Go to <server_host_address>:1947 in your browser. If access is denied, check your Firewall/SELinux settings (the procedure is not described in this document);

  • Select "Update/Attach" in the left pane;

  • Press the "Select File..." button and select the license file(s) in the window that appears;

  • Press the "Apply File" button.

License file is added manually

Specify license server address for Liveness V1#

Specify your license server IP address in the configuration file located in the "/var/lib/luna/current/example-docker/hasp_redirect/" directory. Open the file:

vi /var/lib/luna/current/example-docker/hasp_redirect/hasp_107506.ini

Change the server address in the "hasp_107506.ini" file:

serveraddr = <HASP_server_address>

The "hasp_107506.ini" file is used by the Liveness V1 service upon its container launch.

<HASP_server_address> is the IP address of the server with your HASP key. You must use an IP address, not a server name.

Delete LP HASP utility#

Perform the following actions to delete the HASP utility.

Stop and disable the utility.

systemctl stop aksusbd
systemctl disable aksusbd
systemctl daemon-reload
yum -y remove aksusbd haspd
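
Optionally, verify that the packages have been removed:

rpm -q aksusbd haspd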

Docker installation#

The Docker installation is described in the official documentation:

https://docs.docker.com/engine/install/centos/.

You do not need to install Docker if the latest version is already installed on your server.

Quick installation commands are listed below.

Check the official documentation for updates if you have any problems with the installation.

Install dependencies.

yum install -y yum-utils device-mapper-persistent-data lvm2

Add repository.

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Install Docker.

yum -y install docker-ce docker-ce-cli containerd.io

Launch Docker.

systemctl start docker
systemctl enable docker

Check Docker status.

systemctl status docker
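
You can additionally check the installed Docker version:

docker --version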

Docker Compose installation#

Install Docker Compose.

curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
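
Check that Docker Compose is installed correctly:

docker-compose --version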

See the official documentation for details:

https://docs.docker.com/compose/install/

Calculations using GPU#

You can use GPU for the general calculations performed by Handlers.

Skip this section if you are not going to utilize GPU for your calculations.

CUDA version 11.2.1 is already installed in the Handlers Docker container.

Docker Compose v1.28.0+ is required to use the GPU.

You need to install the NVIDIA Container Toolkit to use GPUs with Docker containers. An example of the installation is given below.

distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | tee /etc/yum.repos.d/nvidia-docker.repo

Install the nvidia-docker2 package (and dependencies) after updating the package listing:

yum clean expire-cache
yum install -y nvidia-docker2
systemctl restart docker

Check that the NVIDIA Container Toolkit operates correctly by running a base CUDA container (this container is not provided in the LP distribution and should be downloaded from the Internet):

docker run --rm --gpus all nvidia/cuda:11.2.1-base nvidia-smi

Next, you should additionally add a deploy section to the handlers service in the docker-compose.yml file.

vi /var/lib/luna/current/example-docker/docker-compose.yml
 handlers:
    image: $/luna-handlers:$
    deploy:
      resources:
        reservations:
          devices:
          - driver: nvidia
            count: all
            capabilities: [gpu]
    restart: always
    network_mode: host
    environment:
      WORKER_COUNT: 1
      RELOAD_CONFIG: 1
      CONFIGURATOR_HOST: 127.0.0.1
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /tmp/logs/handlers:/srv/logs
    healthcheck:
      test: [ "CMD", "curl", "--fail", "127.0.0.1:5090/version" ]
      start_period: 10s
      interval: 5s
      timeout: 10s
      retries: 10

driver - this field specifies the driver for the reserved device(s);

count - this field specifies the number of GPU devices that should be reserved (provided that the host holds that number of GPUs);

capabilities - this field expresses both generic and driver-specific capabilities. It must be set; otherwise, an error will be returned when deploying the service.
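
For example, to reserve a single GPU instead of all available ones, a sketch of the same deploy section with count set to 1 (based on the Compose GPU syntax above):

    deploy:
      resources:
        reservations:
          devices:
          - driver: nvidia
            count: 1
            capabilities: [gpu]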

See the documentation for additional information:

https://docs.docker.com/compose/gpu-support/#enabling-gpu-access-to-service-containers.

Attribute extraction on the GPU is engineered for maximum throughput. The input images are processed in batches. This reduces the computation cost per image but does not provide the shortest latency per image.

GPU acceleration is designed for high-load applications where request counts per second consistently reach thousands. GPU acceleration will not be beneficial in lightly loaded scenarios where latency matters.

Login to registry#

When launching containers, you should specify a link to the image required for launching the container. This image will be downloaded from the VisionLabs registry. Before that, you should log in to the registry.

Run the docker login command with your login.

After running the command, you will be prompted for a password. Enter the password.

The login and password are received from VisionLabs.

In the docker login command, you can enter the login and password at the same time, but this does not guarantee security, because the password can be seen in the command history.
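
A generic sketch of the login command; <registry_address> and <your_login> stand for the values provided by VisionLabs:

docker login <registry_address> --username <your_login>

To avoid exposing the password in the command history, you can pass it via stdin using the --password-stdin option:

cat password.txt | docker login <registry_address> --username <your_login> --password-stdin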