
Before installation#

Make sure that you are the root user before starting installation!

Distribution unpacking#

The distribution package is an archive named luna_v.5.32.0, where v.5.32.0 is the numerical identifier of the current LUNA PLATFORM version.

The archive includes the configuration files required for installation and operation. It does not include Docker images for the services; they must be downloaded from the Internet.

Move the distribution package to a directory on your server before installation. For example, move the files to the /root/ directory. The directory should not contain any distribution or license files other than the target ones.

Create a directory for unpacking the distribution file.

mkdir -p /var/lib/luna

Move the distribution to the created directory

mv /root/luna_v.5.32.0.zip /var/lib/luna

Install the unzip archiver if necessary.

yum install -y unzip

Go to the folder with the distribution.

cd /var/lib/luna

Unzip files

unzip luna_v.5.32.0.zip

Create a symbolic link. The link indicates that the current version of the distribution file is used to run LUNA PLATFORM.

ln -s luna_v.5.32.0 current
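The unpack-and-link steps above can be sketched end-to-end. The sketch below is illustrative only: it uses a scratch directory instead of /var/lib/luna and only shows that the relative symbolic link resolves to the versioned directory.

```shell
# Illustrative only: a scratch directory stands in for /var/lib/luna.
base=$(mktemp -d)
mkdir -p "$base/luna_v.5.32.0"       # stands in for the unzipped distribution
ln -s luna_v.5.32.0 "$base/current"  # relative link, as in the step above
target=$(readlink "$base/current")
echo "$target"                       # prints the link target: luna_v.5.32.0
```

On the real server you can verify the link the same way with readlink /var/lib/luna/current.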

Create logs directory#

Skip this section if you do not need to save logs on the server.

To save logs on the server, create the appropriate directory if it has not been created yet.

All the service logs will be copied to this directory.

mkdir -p /tmp/logs
chown -R 1001:0 /tmp/logs

If the log directories for the individual services have not been created yet, create them manually and set their permissions.

mkdir -p /tmp/logs/configurator /tmp/logs/image-store /tmp/logs/accounts /tmp/logs/faces /tmp/logs/licenses /tmp/logs/events /tmp/logs/python-matcher /tmp/logs/handlers /tmp/logs/tasks /tmp/logs/tasks-worker /tmp/logs/sender /tmp/logs/api /tmp/logs/admin /tmp/logs/backport3 /tmp/logs/backport4
chown -R 1001:0 /tmp/logs/configurator /tmp/logs/image-store /tmp/logs/accounts /tmp/logs/faces /tmp/logs/licenses /tmp/logs/events /tmp/logs/python-matcher /tmp/logs/handlers /tmp/logs/tasks /tmp/logs/tasks-worker /tmp/logs/sender /tmp/logs/api /tmp/logs/admin /tmp/logs/backport3 /tmp/logs/backport4

If you need to use the Python Matcher Proxy service, additionally create the /tmp/logs/python-matcher-proxy directory and set its permissions.
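A sketch of those commands, mirroring the owner and permissions used for the other log directories:

```shell
# Create the Python Matcher Proxy log directory and set its permissions.
mkdir -p /tmp/logs/python-matcher-proxy
chown -R 1001:0 /tmp/logs/python-matcher-proxy
```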

SELinux and Firewall#

You must configure SELinux and Firewall so that they do not block LUNA PLATFORM services.

SELinux and Firewall configurations are not described in this guide.

If SELinux and Firewall are not configured, the installation cannot be performed.

License key activation#

The HASP service is used for LUNA PLATFORM licensing. Without a license, you will be unable to run and use LUNA services.

The LP license includes the following features:

  • License expiration date.
  • The maximum number of faces with descriptors available.
  • Availability of the functionality for determining whether the person in the photo is real or a presentation attack is taking place (see the "Liveness description" section in the administrator manual).
  • Availability of the functionality for checking an image against the ISO/IEC 19794-5 standard or with manually set thresholds (see the "Image Check" section in the administrator manual).
  • Availability of the functionality for estimating body parameters (see the "Body parameters" section in the administrator manual).

There are two keys for LUNA PLATFORM:

  • The general HASP key, which enables you to use LUNA PLATFORM. It uses the haspvlib_x86_64_30147.so vendor library.
  • The optional HASP key for the Liveness V1 service, required only if you need to use Liveness V1. It uses the haspvlib_107506.so vendor library.

You can find the vendor libraries in the "/var/hasplm/" directory.

License keys are provided by VisionLabs separately upon request.

A network license is required to use LUNA PLATFORM in Docker containers.

The license key is created using the fingerprint. The fingerprint is created based on the information about hardware characteristics of the server. Therefore, the received license key will only work on the same server where the fingerprint was obtained.

A new license key may be required if you make any changes on the license server.

Follow these steps:

  • Install the HASP utility on your server (the HASP utility is usually installed on a separate server).
  • Start the HASP utility.
  • Create the fingerprint of your server and send it to VisionLabs.
  • Activate the key received from VisionLabs.
  • Specify your HASP server address in a special file.

The Sentinel Keys tab of the user interface (<server_host_address>:1947) shows activated keys.

To use the Liveness V1 service, in addition to the actions described below, you must perform the actions from the "Use Liveness V1" section.

Install HASP utility for LP#

LP uses a specific version of the HASP utility. If an older version of the HASP utility is installed, delete it before installing the new version. See "Delete LP hasp utility".

Go to the HASP directory.

cd /var/lib/luna/current/extras/hasp/

Install the HASP utility on your server.

yum -y install /var/lib/luna/current/extras/hasp/aksusbd-*.rpm

Launch HASP utility.

systemctl daemon-reload
systemctl start aksusbd
systemctl enable aksusbd
systemctl status aksusbd

Configure HASP utility#

You can configure the HASP utility using the "/etc/hasplm/hasplm.ini" file.

Note! You do not need to perform this action if you already have the configured INI file for the HASP utility.

Delete the old file if necessary.

rm -rf /etc/hasplm/hasplm.ini

Copy the INI file with configurations. Its parameters are not described in this document.

cp /var/lib/luna/current/extras/hasp/hasplm.ini /etc/hasplm/

Add LP vendor library#

Copy the LP vendor libraries (32-bit and 64-bit). These libraries are required for using the LP license key.

cp /var/lib/luna/current/extras/hasp/haspvlib_30147.so /var/hasplm/
cp /var/lib/luna/current/extras/hasp/haspvlib_x86_64_30147.so /var/hasplm/

Restart the utility

systemctl restart aksusbd

Create fingerprint for LUNA PLATFORM#

Go to the HASP directory.

cd /var/lib/luna/current/extras/hasp/licenseassist

Run the script

./LicenseAssist fingerprint > fingerprint_30147.c2v

The fingerprint is saved to file "fingerprint_30147.c2v".

Send the file to VisionLabs. Your license key will be created using this fingerprint.

You can also save the system fingerprint from the user interface at <server_host_address>:1947 by clicking the "Fingerprint" button on the "Sentinel Keys" tab.

Add license file manually using user interface#

  • Go to <server_host_address>:1947. If access is denied, check your Firewall/SELinux settings (the procedure is not described in this document).

  • Select "Update/Attach" in the left pane.

  • Press the "Select File..." button and select the license file(s) in the appeared window.

  • Press the "Apply File" button.

Adding a license file

Specify license server address for LP#

Specify your license server IP address in the configuration file in the directory "/var/lib/luna/current/example-docker/hasp_redirect/". Change the HASP server address in the following file:

vi /var/lib/luna/current/example-docker/hasp_redirect/hasp_30147.ini

Change the server address in "hasp_30147.ini" file.

serveraddr = <HASP_server_address>

The "hasp_30147.ini" file is used by the Licenses service upon its container launch. It is required to restart the launched container when the server is changed.

HASP_server_address - the IP address of the server with your HASP key. You must use an IP address, not a server name.
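Since the "hasp_30147.ini" file is read at container launch, restart the Licenses container after changing the address. A sketch, assuming the container is named luna-licenses (the actual name depends on your deployment):

```shell
# "luna-licenses" is an assumed container name; check yours with "docker ps".
docker restart luna-licenses
```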

Delete LP hasp utility#

Note. Delete the HASP utility only if you need to install a newer version. Otherwise, skip this step.

Stop and disable the utility.

systemctl stop aksusbd
systemctl disable aksusbd
systemctl daemon-reload
yum -y remove aksusbd haspd

Docker installation#

The Docker installation is described in the official documentation:

https://docs.docker.com/engine/install/centos/.

You do not need to install Docker if the latest version of Docker is already installed on your server.

Quick installation commands are listed below.

Check the official documentation for updates if you have any problems with the installation.

Install dependencies.

yum install -y yum-utils device-mapper-persistent-data lvm2

Add repository.

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Install Docker.

yum -y install docker-ce docker-ce-cli containerd.io

Launch Docker.

systemctl start docker
systemctl enable docker

Check Docker status.

systemctl status docker
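To additionally confirm that the daemon can pull and run containers, you can run Docker's standard test image (downloaded from the Internet; skip this check on an offline server):

```shell
# Pulls and runs the official test image, then removes the container.
docker run --rm hello-world
```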

Docker Compose installation#

Install Docker Compose.

curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
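You can verify the installation by printing the version; it should report 1.29.2, matching the release downloaded above:

```shell
docker-compose --version
```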

Calculations using GPU#

You can use GPU for the general calculations performed by Handlers.

Skip this section if you are not going to utilize GPU for your calculations.

CUDA version 11.4 is already installed in the Handlers Docker container.

Docker Compose v1.28.0+ is required to use the GPU.

You need to install the NVIDIA Container Toolkit to use GPU with Docker containers. An example of the installation is given below.

distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | tee /etc/yum.repos.d/nvidia-docker.repo

Install the nvidia-docker2 package (and dependencies) after updating the package listing:

yum clean expire-cache
yum install -y nvidia-docker2
systemctl restart docker

Check that the NVIDIA Container Toolkit operates correctly by running a base CUDA container (this container is not provided in the LP distribution and should be downloaded from the Internet):

docker run --rm --gpus all nvidia/cuda:11.4-base nvidia-smi

Next, you should additionally add a deploy section to the handlers service in the docker-compose.yml file.

vi /var/lib/luna/current/example-docker/docker-compose.yml
  handlers:
    image: $/luna-handlers:$
    deploy:
      resources:
        reservations:
          devices:
          - driver: nvidia
            count: all
            capabilities: [gpu]
    restart: always
    network_mode: host
    environment:
      WORKER_COUNT: 1
      RELOAD_CONFIG: 1
      CONFIGURATOR_HOST: 127.0.0.1
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /tmp/logs/handlers:/srv/logs
    healthcheck:
      test: [ "CMD", "curl", "--fail", "127.0.0.1:5090/version" ]
      start_period: 10s
      interval: 5s
      timeout: 10s
      retries: 10

driver - this field specifies the driver for the reserved device(s);

count - this field specifies the number of GPU devices that should be reserved (provided the host has that number of GPUs);

capabilities - this field expresses both generic and driver-specific capabilities. It must be set; otherwise, an error will be returned when deploying the service.

See the documentation for additional information:

https://docs.docker.com/compose/gpu-support/#enabling-gpu-access-to-service-containers.

Attribute extraction on the GPU is engineered for maximum throughput. The input images are processed in batches, which reduces the computation cost per image but does not provide the shortest latency per image.

GPU acceleration is designed for high-load applications where request counts per second consistently reach thousands. GPU acceleration is not beneficial in lightly loaded scenarios where latency matters.

Login to registry#

When launching containers, you should specify a link to the image required for the container launch. This image will be downloaded from the VisionLabs registry. Before that, you should log in to the registry.

Run the "docker login" command with the VisionLabs registry address and enter your login.

After running the command, you will be prompted for a password. Enter the password.

The login and password are received from VisionLabs.

In the docker login command, you can pass the login and password at the same time, but this is not secure because the password remains visible in the command history.
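A safer pattern is to pass the password on stdin with the standard --password-stdin flag of docker login, so it does not appear in the command line or shell history. The registry address, login, and credentials file below are placeholders:

```shell
# Keep the password out of the command history by piping it on stdin.
cat password.txt | docker login -u <your_login> --password-stdin <registry_address>
```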