
Before launch#

Make sure you are the root user before launch!

Before launching FaceStream, you need to do the following:

  1. Unpack the FaceStream distribution.
  2. Create a symbolic link.
  3. Install Docker.
  4. Install Docker Compose if you plan to start FaceStream using the Docker Compose script.
  5. Choose a logging method.
  6. Set up GPU computing if you plan to use GPU.
  7. Log in to the VisionLabs registry.
  8. Make sure that the license contains a parameter that determines the number of streams to be processed by the LUNA Streams service.

After these steps have been performed, you can launch LUNA Streams and FaceStream manually or automatically.

Unpack distribution#

It is recommended to move the archive to a pre-created directory for FaceStream and unpack the archive there.

The following commands should be performed under the root user.

Create a directory for FaceStream.

mkdir -p /var/lib/fs

Move the archive to the created directory. It is assumed that the archive has been saved to the "/root" directory.

mv /root/facestream_docker_v.5.1.46.zip /var/lib/fs/

Go to the directory.

cd /var/lib/fs/

Install the unzip utility if it is not installed.

yum install unzip

Unpack the archive.

unzip facestream_docker_v.5.1.46.zip

Create a symbolic link. The link indicates which version of the distribution is currently used to run the software package.

ln -s facestream_docker_v.5.1.46 fs-current
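
If necessary, check that the link points to the unpacked distribution directory (an optional check, the listing simply shows the link target).

ls -l /var/lib/fs/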

Install Docker#

Docker is required to launch the FaceStream container.

The Docker installation is described in the official documentation:

https://docs.docker.com/engine/install/centos/.

You do not need to install Docker if Docker 20.10.8 is already installed on your server. Operation with higher Docker versions is not guaranteed.

Quick installation commands are listed below.

Check the official documentation for updates if you have any problems with the installation.

Install dependencies.

yum install -y yum-utils device-mapper-persistent-data lvm2

Add repository.

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Install Docker.

yum -y install docker-ce docker-ce-cli containerd.io

Launch Docker.

systemctl start docker
systemctl enable docker

Check Docker status.

systemctl status docker

Install Docker Compose#

Note. Install Docker Compose only if you are going to use the automatic FaceStream launch script.

Install Docker Compose.

curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
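
You can verify the installation by printing the Docker Compose version. The exact output depends on the installed release.

docker-compose --version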

See the official documentation for details:

https://docs.docker.com/compose/install/

Choose logging method#

There are two methods of log output:

  • Standard log output (stdout).
  • Log output to a file.

Log output settings for the LUNA PLATFORM services and the LUNA Streams service are set in the <SERVICE_NAME>_LOGGER section of the LUNA Configurator service.

Log output settings for FaceStream are set in the "logging" settings of the FACE_STREAM_CONFIG section of the LUNA Configurator service.

If necessary, you can use both methods of displaying logs.

Logging to stdout#

This method is used by default and requires no further action.

It is recommended to configure Docker log rotation to limit log sizes (see "Docker log rotation").
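
Docker log rotation is configured in the "/etc/docker/daemon.json" file. A minimal sketch for the json-file logging driver is given below; the size and file count values are examples, adjust them to your needs. If the file already contains other settings, add these keys to it instead of overwriting the file.

cat > /etc/docker/daemon.json <<EOF
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  }
}
EOF

Restart Docker to apply the settings.

systemctl restart docker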

Logging to file#

Note. When you enable saving logs to a file, keep in mind that logs take up storage space and that logging to a file negatively affects system performance.

To use this method, you need to perform the following additional actions:

  • Before launching the services: create directories for logs on the server.
  • After launching the services: activate log recording and set the location of log storage inside the LP service containers.
  • During the launch of services: configure synchronization of the log directories in the containers with the logs on the server using the volume argument at the start of each container.

The Docker Compose script does not configure this directory synchronization. You need to manually add the folder mounting to the docker-compose.yml file, as shown in the sketch below.
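
A minimal sketch of such a mount for the facestream service is given below. The host directory "/var/lib/fs/logs/facestream" and the in-container path "/srv/logs" are placeholders used for illustration; use the directories described in the "Logging to server" section.

Create the directory for logs on the server.

mkdir -p /var/lib/fs/logs/facestream

Add the volumes key to the existing facestream service in the Docker Compose file; the other settings of the service remain unchanged.

vi /var/lib/fs/fs-current/example-docker/docker-compose.yml
  facestream:
    volumes:
      - /var/lib/fs/logs/facestream:/srv/logs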

See the instructions for enabling logging to files in the "Logging to server" section.

Install GPU dependencies#

Skip this section if you are not going to utilize GPU for your calculations.

You need to install NVIDIA Container Toolkit to use GPU with Docker containers.

An example of the installation is given below.

distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | tee /etc/yum.repos.d/nvidia-docker.repo
yum install -y nvidia-container-toolkit
systemctl restart docker

Check that the NVIDIA Container Toolkit is working by running a base CUDA container (this container is not provided in the FaceStream distribution and will be downloaded from the Internet):

docker run --rm --gpus all nvidia/cuda:11.4.3-base-centos7 nvidia-smi

See the documentation for additional information:

https://github.com/NVIDIA/nvidia-docker#centos-7x8x-docker-ce-rhel-7x8x-docker-ce-amazon-linux-12.

Attribute extraction on the GPU is engineered for maximum throughput. The input images are processed in batches. This reduces the computation cost per image but does not provide the shortest latency per image.

GPU acceleration is designed for high-load applications where request counts per second consistently reach thousands. It is not beneficial to use GPU acceleration in lightly loaded scenarios where latency matters.

Actions to launch FaceStream with GPU through Docker Compose#

To launch FaceStream with GPU through Docker Compose, it is necessary, in addition to the above actions, to add the deploy section to the facestream service in the docker-compose.yml file.

Before starting the FaceStream container with GPU, it is required to enable GPU for calculations in the FaceStream settings using the "enable_gpu_processing" parameter (see the "FaceStream configuration" section in the administrator manual).

vi /var/lib/fs/fs-current/example-docker/docker-compose.yml
  facestream:
    image: ${REGISTRY_ADDRESS}:${DOCKER_REGISTRY_PORT}/facestream:${FS_VER}
    deploy:
      resources:
        reservations:
          devices:
          - driver: nvidia
            count: all
            capabilities: [gpu]
    restart: always
    environment:
      CONFIGURATOR_HOST: ${HOST_CONFIGURATOR}
      CONFIGURATOR_PORT: 5070

driver - this field specifies the driver for the reserved device(s);

count - this field specifies the number of GPU devices that should be reserved (provided the host has that number of GPUs);

capabilities - this field expresses both generic and driver-specific capabilities. It must be set; otherwise, an error will be returned when deploying the service.

See the documentation for additional information:

https://docs.docker.com/compose/gpu-support/#enabling-gpu-access-to-service-containers.

Login to registry#

When launching containers, you should specify a link to the image required to launch the container. This image will be downloaded from the VisionLabs registry. Before that, you should log in to the registry.

Login and password can be requested from the VisionLabs representative.

Enter the login.
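
A typical login command is shown below, where <registry_address> and <username> are placeholders for the registry address and login provided by VisionLabs.

docker login <registry_address> --username <username>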

After running the command, you will be prompted for a password. Enter the password.

In the docker login command, you can pass the login and password at the same time, but this is not secure because the password can be seen in the command history.
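
If a non-interactive login is required (for example, in a script), a safer option is to pass the password via stdin so that it does not appear in the command history. The file name below is only an example.

cat password.txt | docker login <registry_address> --username <username> --password-stdin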

Check for license#

If the LUNA PLATFORM services are already launched and a license with the parameter that determines the number of streams for LUNA Streams operation is already activated, then you need to make sure that the current LUNA PLATFORM key contains this parameter. This information can be provided by VisionLabs specialists.

If this parameter is not contained in the key, then you need to request a new key and contact VisionLabs specialists for advice on updating the license key.