LUNA PLATFORM v.5.84.0

LP changes

  • SDK was updated to version 5.23.0.

    • Deepfake estimator was updated to version 5.
    • OneShotLiveness estimator was updated to version 8.
    • The face descriptor neural network of version 52 is no longer supported.
    • The body descriptor of version 110 was removed from the distribution package. Version 116 is now used by default (the "DEFAULT_HUMAN_DESCRIPTOR_VERSION" setting of Remote SDK).

    Important: if version 110 was used as the default ("DEFAULT_HUMAN_DESCRIPTOR_VERSION"), the Remote SDK service will not start without additional actions.

    You should perform one of the following actions:

    • Request the corresponding version of the neural network from VisionLabs and add it to the Remote SDK container.
    • Perform the "Additional extraction" task before the update and then specify the required version in the "DEFAULT_HUMAN_DESCRIPTOR_VERSION" before the Remote SDK launch. Thus, you can use the previous neural network version.
    • Specify the 116 version in "DEFAULT_HUMAN_DESCRIPTOR_VERSION" before the Remote SDK launch. The setting is not updated during the Configurator settings migration. Already created descriptors can no longer be used if the "Additional extraction" task was not performed.
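    Whichever option is chosen, the practical step is pinning a descriptor version before Remote SDK starts. A minimal sketch, assuming the "VL_SETTINGS.*" environment-override pattern shown in the Configurator example below also applies to this setting (the exact section path is an assumption for illustration):

```python
import os

# Hypothetical sketch: pin the body descriptor version via an environment
# override before launching Remote SDK. The "VL_SETTINGS.*" key shape follows
# the Configurator example in these notes; the exact path is an assumption.
def descriptor_version_override(version: int) -> dict:
    """Build the environment override that pins DEFAULT_HUMAN_DESCRIPTOR_VERSION."""
    return {"VL_SETTINGS.DEFAULT_HUMAN_DESCRIPTOR_VERSION": str(version)}

# Merge the override into the launch environment (version 116 is the new default).
env = {**os.environ, **descriptor_version_override(116)}
```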
  • A new estimator for face occlusion estimation was added.

    The estimator returns information about occlusion of the whole face or its parts.

    Occlusion is reported for the following zones:

    • Overall face zone
    • Forehead zone
    • Nose zone
    • Eyes zone (for both eyes)
    • Mouth zone
    • Lower face zone

    Resources where the estimation is performed:

    • "/handlers"

      Estimation name — "estimate_face_occlusion".

    • "/verifiers"

      Estimation name — "estimate_face_occlusion".

    Occlusion thresholds

    The threshold for the acceptable occlusion percentage of the whole face or each of its zones can be specified in the Configurator service. The thresholds can also be set during handler or verifier creation.

    In addition, there is a threshold for the hair occlusion estimation. It specifies the acceptable percentage of the face occluded by hair. Any hair occlusion above the specified threshold is counted toward the overall face occlusion and the occlusion of each zone.

    • If the hair threshold is set to 0, hair of any length is considered an occlusion.
    • If the hair threshold is set to 1, hair occlusion is not considered.

    The thresholds can be adjusted according to your business cases.

    Note: A moustache and beard are never considered a face occlusion.

    Filtration

    Filtration is available: you can specify a list of zones that must not be occluded.

    The detection is filtered if the occlusion of any of the specified zones exceeds the threshold.

    The verdict for the occlusion check for the whole face and each of its parts can be received in "face_quality".
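    The filtration rule above can be pictured with a small sketch. The dict shape and zone names here are hypothetical illustrations, not fields from the OpenAPI specification:

```python
# Hypothetical occlusion-check sketch. Occlusion values and thresholds are
# fractions of the zone area; a detection is filtered when the occlusion of
# any watched zone exceeds its threshold.
def is_filtered(occlusion: dict, thresholds: dict, zones: list) -> bool:
    """Return True when occlusion of any watched zone exceeds its threshold."""
    return any(occlusion.get(z, 0.0) > thresholds.get(z, 0.0) for z in zones)

occlusion = {"overall": 0.1, "mouth": 0.4, "eyes": 0.0}
thresholds = {"overall": 0.3, "mouth": 0.3, "eyes": 0.2}

# The mouth zone (0.4) exceeds its threshold (0.3), so a detection watched
# for ["mouth", "eyes"] would be filtered; one watched for ["eyes"] would not.
```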

  • The human tracking analytics was added to Video Agent.

    A stream or a video file can be specified as a source for the analytics.

    AGS and head pose are used for filtration.

  • BasicAuth authorization was added to the Configurator service. It is used for requests that logically require authorization.

    Note: The functionality is in beta test.

    You should specify the following settings in the "LUNA_CONFIGURATOR_AUTHORIZATION" section of Configurator:

    • "USE_AUTHORIZATION" - Enable/disable authorization/
    • "LUNA_CONFIGURATOR_USER" - Login, which is used by other services for authorization.
    • "LUNA_CONFIGURATOR_PASS" - Password, which is used by other services for authorization.

    There are two ways to specify the settings:

    • Use environment settings VL_SETTINGS during service startup. Example:

    env "VL_SETTINGS.LUNA_CONFIGURATOR.USE_AUTHORIZATION=1" env "VL_SETTINGS.LUNA_CONFIGURATOR.LUNA_CONFIGURATOR_USER=luna" env "VL_SETTINGS.LUNA_CONFIGURATOR.LUNA_CONFIGURATOR_PASS=root"

    • Specify the settings in the configuration file of Configurator. The file is located in the distribution package: example-docker/luna_configurator/configs/luna_configurator_postgres.conf

    You should specify the login and password for other services so they can perform requests to the Configurator service:

    • Specify "LUNA_CONFIGURATOR_USER" and "LUNA_CONFIGURATOR" settings using the environment setting VL_SETTINGS during the service startup (see the example above).
    • Specify "LUNA_CONFIGURATOR_USER" and "LUNA_CONFIGURATOR" settings in the service configuration file.

    Using authorization allows you to provide an additional level of security and protection from unauthorized access. To ensure encryption, you must use SSL certificates or proxying via Nginx.
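    For a service (or script) calling the Configurator with authorization enabled, the request only needs a standard Basic header. A sketch using the credentials from the example above; the URL is a placeholder, not a documented endpoint:

```python
import base64
import urllib.request

# Sketch: a request to the Configurator with BasicAuth enabled. The host,
# port and path below are placeholder assumptions; the credentials match
# the "luna"/"root" example in these notes.
def basic_auth_header(user: str, password: str) -> str:
    """Build an RFC 7617 Basic authorization header value."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

request = urllib.request.Request(
    "http://127.0.0.1:5070/version",  # placeholder URL
    headers={"Authorization": basic_auth_header("luna", "root")},
)
```

    Note that Basic credentials are only base64-encoded, not encrypted, which is why the notes recommend SSL certificates or Nginx proxying for transport security.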

  • The rate parameter was added to the videosdk resource for human tracking video analytics. It is used to specify the frame processing frequency, for example, process every third frame or process frames every 3 seconds.

    The default value of the rate parameter is 1 frame.

    Previously the parameter was available for people counting analytics only.
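    As an illustration of the two modes (every N-th frame vs. every N seconds), here is a sketch; the "unit"/"value" field shape is an assumption, not the documented parameter schema:

```python
# Hypothetical "rate" parameter shape for human tracking analytics; the
# field names ("unit", "value") are assumptions for illustration.
rate = {"unit": "frame", "value": 3}  # process every third frame

def frames_processed(total_frames: int, rate_value: int) -> int:
    """Count frames processed when every rate_value-th frame is taken."""
    return total_frames // rate_value

# With rate value 3, a 90-frame clip yields 30 processed frames.
```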

  • The LUNA_REMOTE_SDK_HUMAN_TRACKER_SETTINGS section was added to the Remote SDK service. Its parameters enable specifying TrackEngine parameters.

    The following parameters are available:

    • runtime_settings > device_class — Specifies the device type ("cpu", "gpu" or "global").
    • runtime_settings > optimal_batch_size — Specifies the batch size for estimation.
    • estimator_settings > detector_step — Specifies the frequency of face or body detection. Redetection of faces or bodies is performed on the remaining frames.
    • estimator_settings > scale_result_size — Specifies the size in pixels to which the image is scaled before detection is performed. The image is scaled by the greater side (width or height).
    • estimator_settings > skip_frames — Specifies the number of frames during which the system waits for the object to reappear before ending the track. If the track was not extended by detection or redetection, the track is finished.
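    Put together, the section might look like the following sketch. The parameter names come from this list; the values are made up for the example and are not shipped defaults:

```python
# Illustrative LUNA_REMOTE_SDK_HUMAN_TRACKER_SETTINGS contents as a dict;
# parameter names are from the release notes, values are examples only.
human_tracker_settings = {
    "runtime_settings": {
        "device_class": "cpu",       # "cpu", "gpu" or "global"
        "optimal_batch_size": 8,     # batch size for estimation
    },
    "estimator_settings": {
        "detector_step": 7,          # full detection on every 7th frame
        "scale_result_size": 640,    # scale by the greater side before detection
        "skip_frames": 36,           # frames to wait before closing a track
    },
}
```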
  • The possibility to set the region of interest (DROI) in accordance with the source frame resolution was added to the human tracking analytics in the videosdk resource.

    See DROI description in the "droi" section.

  • The cross-matching requests were accelerated due to batching implementation.

  • Now the face descriptors are filtered by the AGS value rather than the detection score for human tracking analytics in the videosdk resource. Thus, low-quality detections can be filtered out.

    If the AGS estimator is not available, the filtration by the detection score is used.

  • The "video_analytics" parameter was added to the get platform features resource. The parameter provides a service status check (enabled/disabled).

LP fixed errors

  • The limit for the loaded faces was added to the OpenAPI specification of the API service for the "attach/detach faces to the list" request.

  • Fixed the error that appeared during the generate stream events request: the number of descriptors didn't match the number of provided detections.

  • Fixed the error when it was impossible to create streams with several similar analytics with different settings.

  • Fixed the error when the Video Manager service did not process streams with several analytics correctly.

  • Fixed the error in generate stream events and generate events requests, which led to creation of a new face when the detection was not performed.

  • Fixed the error due to which the values of the "RESPONSE_TIMEOUT" parameters of the "LUNA_MATCHER_PROXY_HTTP_SETTINGS" and "LUNA_PYTHON_MATCHER_HTTP_SETTINGS" sections were not updated properly, and the default value of 600 seconds was always applied.