SDK Loop

SDK Task

A task determines the set of biometric features to estimate from input images or samples. All features are divided into three groups:

  • face detection features: face detection, landmarks, eyes, gaze direction, AGS.

  • sample (warp) features: emotions, mouth state, warp quality.

  • face attributes: age, gender, ethnicity, descriptor.

Face detection features are estimated on source images; the other features are estimated on warped images.

Task pipeline

The SDK loop is a structure for task processing. There are three stages of task processing:

  • detector stage

  • warp estimator stage

  • extractor stage

Each feature is estimated in its corresponding stage. The loop determines a pipeline for every task, which is an ordered list of stages. Communication between stage handlers is done through queues. The loop sends a task to the queue that corresponds to the first stage of the task pipeline. After a stage is processed, the task is put into the queue that corresponds to the next stage in the pipeline. A task is put into a common results queue once it is done or has failed. The loop gets the task from the results queue and transfers it to the task customer.
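The sketch below illustrates this routing idea with asyncio queues; every name in it (Stage, stageQueues, processStage, and so on) is illustrative and is not part of the sdk_loop API.

    import asyncio
    from enum import Enum


    class Stage(Enum):
        DETECTOR = "detector"
        WARP_ESTIMATOR = "warp estimator"
        EXTRACTOR = "extractor"


    stageQueues = {stage: asyncio.Queue() for stage in Stage}  # one input queue per stage
    resultsQueue = asyncio.Queue()  # common results queue


    async def submit(task):
        # Loop side: send the task to the queue of the first stage of its pipeline.
        await stageQueues[task.pipeline[0]].put(task)


    async def stageWorker(stage):
        # Stage handler: process tasks, then forward them down the pipeline.
        while True:
            task = await stageQueues[stage].get()
            await task.processStage(stage)  # run this stage's estimations
            nextIndex = task.pipeline.index(stage) + 1
            if task.failed or nextIndex == len(task.pipeline):
                await resultsQueue.put(task)  # done or failed: hand back to the loop
            else:
                await stageQueues[task.pipeline[nextIndex]].put(task)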

Module implements the handlers task. The task logic is based on the luna-handlers service logic.

class luna_handlers.sdk.sdk_loop.task.HandlersTask(data, params=None, **kwargs)[source]

Handlers task; correlates with the luna-handlers API logic (multiface policy, filtration logic, …)
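A hypothetical usage sketch; it assumes the task class has already been initialized with an engine (see BaseTask.initialize below), and the ImageType member name is an assumption:

    async def detectFaces(rawBytes):
        # Assumes HandlersTask.initialize(engine) has already been called.
        params = TaskParams(
            targets={LoopEstimations.faceDetection, LoopEstimations.faceDescriptor},
            multifacePolicy=MultifacePolicy.getBest,
        )
        image = InputImage(body=rawBytes, imageType=ImageType.image)  # member name assumed
        task = HandlersTask(data=[image], params=params)
        return (await task.execute()).result  # TaskResult with images and an optional error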

aggregateBodyAttributes(aggregatedSample)[source]

Aggregate body attributes

aggregateFaceAttributes(aggregatedSample)[source]

Aggregate face attributes

async estimateAggregatedFaceDescriptor(aggregatedSample, version, gcFilter)[source]

Estimate aggregated face descriptor.

Parameters:
  • aggregatedSample – aggregated face sample

  • version – descriptor version

  • gcFilter – optional gc filter from task

Returns:

true if the sample is not filtered (after filtration); false if the sample is filtered, in which case all source samples are marked as filtered

Return type:

bool

estimateAggregatedLivenessV1(faceSamples, aggregatedSample, qualityThreshold, scoreThreshold, livenessFilter)[source]

Estimate aggregated livenessV1.

Parameters:
  • faceSamples – source face samples

  • aggregatedSample – aggregated sample

  • qualityThreshold – quality threshold

  • scoreThreshold – score threshold

  • livenessFilter – liveness filter

Returns:

true if the sample is not filtered (after filtration); false if the sample is filtered, in which case all source samples are marked as filtered

Return type:

bool

async estimateBodyAttributesFromImage(bodySamples)[source]

Run body attributes estimations on samples generated from detection.

Parameters:

bodySamples – body samples with detections

Returns:

same body samples with estimation inside

Return type:

list[BodySample]

async estimateBodyWarpAttributes(bodySamplesOnImage)[source]

Estimate body detection attributes

Parameters:

bodySamplesOnImage – body samples with detections

async estimateDetectionAttributes(image)[source]

Estimate detection attributes: everything except the face descriptor and basic attributes; this corresponds to the /detector logic of luna-handlers.

async estimateFaceAttributesFromImage(faceSamples)[source]

Run face attributes estimations on samples generated from detection.

Parameters:

faceSamples – face samples with detections

Returns:

same face samples with estimation inside

Return type:

list[FaceSample]

async estimateFaceDetectionAttributes(faceSamplesOnImage)[source]

Estimate face detection attributes

Parameters:

faceSamplesOnImage – face samples with detections

async estimateFaceWarpAttributes(faceSamples)[source]

Estimate face warp attributes in parallel.

async estimatePeople(image)[source]

Estimate people count on the image.

executeAggregatedMultiFacePolicy(images)[source]

Execute multiface policy logic for a task with aggregation.

Parameters:

images – list of images

async executeAggregatedTask(images)[source]

Execute task with aggregation

executeMultiFacePolicy(image)[source]

Execute multiface policy logic.

Parameters:

image – image

async executeOnImage(image)[source]

Execute task on one image (without aggregation)

luna_handlers.sdk.sdk_loop.task.generateMultiEntitiesError(samples, image, error)[source]

Generate an error for several detections on an image.

Return type:

LoopError

Module contains base task

class luna_handlers.sdk.sdk_loop.tasks.task.BaseTask(data, params=None, **kwargs)[source]

A base task class.

The task is an entity for customizing estimation business logic. A task determines what will be estimated and how. For example, a user can implement a task class with their own multiface policy processing logic, filtration, and so on.
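A minimal sketch of such a customization; the subclass and its policy are hypothetical, while needEstimate is documented below:

    class FaceOnlyTask(BaseTask):
        # Hypothetical task that never estimates body targets, whatever the params say.

        def needEstimate(self, target):
            bodyTargets = {
                LoopEstimations.bodyDetection,
                LoopEstimations.bodyWarp,
                LoopEstimations.bodyDescriptor,
            }
            if target in bodyTargets:
                return False
            return super().needEstimate(target)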

monitoringPoint

task execution monitoring point (execution time, error, and so on)

Type:

TaskMonitoringPoint

executionTime

task execution time

Type:

float

content

task content

Type:

TaskContent

result

task result

Type:

TaskResult

taskId

pseudo unique id for logging

Type:

task id

async classmethod close(closeEngine=True)[source]

Graceful shutdown.

Parameters:

closeEngine – close the external engine or not

final async execute()[source]

Task execution method.

Return type:

BaseTask

filterByHeadPosePitch(sample)[source]

Filter sample by pitch angle logic.

Parameters:

sample – face sample

Returns:

None if the filter is None or the head pose is not estimated (body sample), otherwise the filtration result

Return type:

Optional[FiltrationResult]

filterByHeadPoseRoll(sample)[source]

Filter sample by roll angle logic.

Parameters:

sample – face sample

Returns:

None if the filter is None or the head pose is not estimated (body sample), otherwise the filtration result

Return type:

Optional[FiltrationResult]

filterByHeadPoseYaw(sample)[source]

Filter sample by yaw angle logic.

Parameters:

sample – face sample

Returns:

None if the filter is None or the head pose is not estimated (body sample), otherwise the filtration result

Return type:

Optional[FiltrationResult]

filterFaceGS(sample)[source]

Filter sample by garbage score logic.

Parameters:

sample – face sample

Returns:

None if the filter is None or the garbage score is not estimated (body sample), otherwise the filtration result

Return type:

Optional[FiltrationResult]

filterSampleByLiveness(sample)[source]

Filter sample by livenessV1 logic.

Parameters:

sample – face sample

Returns:

None if the filter is None or liveness is not estimated (body sample), otherwise the filtration result

Return type:

Optional[FiltrationResult]

filterSampleByMask(sample)[source]

Filter sample by mask logic.

Parameters:

sample – face sample

Returns:

None if the filter is None or the mask is not estimated (body sample), otherwise the filtration result

Return type:

Optional[FiltrationResult]

classmethod initialize(engine, monitoringPointClass=None, **kwargs)[source]

Task initialization. Sets up globals: the engine and the monitoring point class.

Parameters:
  • engine – engine

  • monitoringPointClass – user defined monitoring class (for custom fields and tags)

  • **kwargs – additional args for the Task class implementation
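A sketch of the intended lifecycle under these assumptions (engine construction is out of scope here):

    async def runTasks(engine, batches, params):
        # Set up globals once per process: the engine and the monitoring point class.
        HandlersTask.initialize(engine)
        try:
            results = []
            for data in batches:
                task = HandlersTask(data, params=params)
                results.append((await task.execute()).result)
            return results
        finally:
            # Graceful shutdown; closeEngine=True also closes the external engine.
            await HandlersTask.close(closeEngine=True)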

monitoringPointClass

alias of TaskMonitoringPoint

needEstimate(target)[source]

Whether the target needs to be estimated or not.

Return type:

bool

async prepare()[source]

Prepare data for estimations.

property state: TaskState

Get current task state

Return type:

TaskState

class luna_handlers.sdk.sdk_loop.tasks.task.LivenessV1Params(qualityThreshold=0.5, scoreThreshold=None)[source]

Liveness v1 estimation params

class luna_handlers.sdk.sdk_loop.tasks.task.TaskContent(images, params)[source]

Task content.

params

task params

images

source images for estimations

class luna_handlers.sdk.sdk_loop.tasks.task.TaskEstimationParams(livenessv1=_Nothing.NOTHING, faceDescriptorVersion=None, bodyDescriptorVersion=None)[source]

Task estimation params

class luna_handlers.sdk.sdk_loop.tasks.task.TaskMonitoringPoint(eventTime=None)[source]

Task monitoring point.

executionTime

task execution time

Type:

float

eventTime

task start timestamp

Type:

float

imageCount

task image count

Type:

int

getFields()[source]

Get monitoring fields

Return type:

dict

getTags()[source]

Get monitoring tags

Return type:

dict

class luna_handlers.sdk.sdk_loop.tasks.task.TaskParams(targets=None, filters=None, estimatorsParams=None, multifacePolicy=MultifacePolicy.allowed, useExifInfo=False, autoRotation=False, aggregate=False)[source]

A container for user-defined task parameters. It is a single place for all user parameters except images. These parameters are constant across a task group.

We assume that the user defines a set of parameters of interest and creates tasks with them. The user can instantiate this structure once and avoid recalculating requiredEstimations on each request, as in the sketch below.
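A sketch of this reuse; the request handler is hypothetical:

    # Built once: requiredEstimations is predicted at construction time.
    sharedParams = TaskParams(
        targets={LoopEstimations.faceDetection, LoopEstimations.emotions},
        multifacePolicy=MultifacePolicy.allowed,
    )


    async def handleRequest(images):  # hypothetical request handler
        # Reuse the same params object; nothing is recalculated per task.
        task = HandlersTask(images, params=sharedParams)
        return (await task.execute()).result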

targets

user defined targets

Type:

set[LoopEstimations]

filters

user defined filters

Type:

Filters

useExifInfo

whether to use exif info for correct image loading

Type:

bool

estimatorsParams

params for estimations (thresholds, etc.)

Type:

TaskEstimationParams

autoRotation

whether to try to detect rotated images

Type:

bool

aggregate

whether to estimate aggregated attributes

Type:

bool

multifacePolicy

multiface policy

requiredEstimations

predicted required estimations for correct processing of all params

Type:

set[LoopEstimations]

class luna_handlers.sdk.sdk_loop.tasks.task.TaskResult(images, aggregatedSample)[source]

Task result container.

images

loaded images; None if the image load failed for an unexpected reason

Type:

Optional[list[Image]]

aggregatedSample

aggregated sample

Type:

Optional[AggregatedSample]

error

task processing error

Type:

Optional[LoopError]

class luna_handlers.sdk.sdk_loop.tasks.task.TaskState(value)[source]

Task state enum

luna_handlers.sdk.sdk_loop.tasks.task.getNotFilteredSamples(images)[source]

Helper: get non-filtered face and body samples from images.

Return type:

tuple[list[FaceSample], list[BodySample]]

luna_handlers.sdk.sdk_loop.tasks.task.predictRequiredEstimations(targets, filters=None, needExifInfo=False)[source]

Predict required estimations for correct filters work, auto-rotation, etc.

Parameters:
  • targets – user targets

  • filters – user filters

  • needExifInfo – whether exif info is needed

Returns:

the set of all required estimations

Return type:

set[LoopEstimations]
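An illustrative call; the exact contents of the returned set depend on internal estimator dependencies:

    required = predictRequiredEstimations(
        targets={LoopEstimations.emotions},
        filters=None,
        needExifInfo=False,
    )
    # Expected to contain the target plus its prerequisites, e.g. (illustrative):
    # {LoopEstimations.faceDetection, LoopEstimations.faceWarp, LoopEstimations.emotions}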

Module contains human sample models

class luna_handlers.sdk.sdk_loop.models.sample.AggregatedSample(face=_Nothing.NOTHING, body=_Nothing.NOTHING)[source]

An aggregated human sample: the union of an aggregated face sample and an aggregated body sample

class luna_handlers.sdk.sdk_loop.models.sample.Sample(face=None, body=None)[source]

A human sample: the union of a face sample and a body sample

Module contains face sample models

class luna_handlers.sdk.sdk_loop.models.face_sample.AggregateAttributesCounter(score, index, count=1, operator=ScoreOperator.concat)[source]

Container class for counting aggregate attributes.

update(score, index)[source]

Update attribute score and set top index.

Parameters:
  • score – next attribute score

  • index – attribute index

Return type:

None

class luna_handlers.sdk.sdk_loop.models.face_sample.AggregatedEmotions(emotions)[source]

Aggregated emotions

predominateEmotion

predominant aggregated emotion

Type:

Emotion

anger

aggregated anger score

Type:

float

disgust

aggregated disgust score

Type:

float

fear

aggregated fear score

Type:

float

happiness

aggregated happiness score

Type:

float

neutral

aggregated neutral score

Type:

float

sadness

aggregated sadness score

Type:

float

surprise

aggregated surprise score

Type:

float

asDict()[source]

Convert aggregated emotions to dict

Return type:

dict

class luna_handlers.sdk.sdk_loop.models.face_sample.AggregatedFaceSample(samples=_Nothing.NOTHING, liveness=None, descriptor=None, basicAttributes=None, filters=_Nothing.NOTHING, mask=None, emotions=None)[source]

Aggregated face sample container

class luna_handlers.sdk.sdk_loop.models.face_sample.AggregatedMask(masks)[source]

Aggregated mask

predominateMask

predominant aggregated mask

Type:

MaskState

medicalMask

aggregated mask score

Type:

float

occluded

aggregated occluded score

Type:

float

missing

aggregated missing score

Type:

float

asDict()[source]

Convert aggregated mask to dict

Return type:

dict

class luna_handlers.sdk.sdk_loop.models.face_sample.FaceSample(detection=None, eyes=None, gaze=None, emotions=None, mouthState=None, basicAttributes=None, warpQuality=None, mask=None, glasses=None, headPose=None, transformedLandmarks5=None, livenessV1=None, warp=None, descriptor=None, headwear=None, fisheye=None, redEyes=None, eyebrowExpression=None, naturalLight=None, detectionBackground=None, imageColorType=None, filters=_Nothing.NOTHING, dynamicRange=None)[source]

Face attributes container

class luna_handlers.sdk.sdk_loop.models.face_sample.ScoreOperator(value)[source]

Score operator enum.

Module contains body sample models

class luna_handlers.sdk.sdk_loop.models.body_sample.AggregatedBodySample(samples=_Nothing.NOTHING, descriptor=None, attributes=None, filters=_Nothing.NOTHING)[source]

Aggregated body sample container

class luna_handlers.sdk.sdk_loop.models.body_sample.BodySample(detection=None, descriptor=None, warp=None, attributes=None, filters=_Nothing.NOTHING)[source]

Body attributes container

Module contains models for working with images

class luna_handlers.sdk.sdk_loop.models.image.Image(origin, sdkImage=None, error=None, exif=None, faceBoxes=None, bodyBoxes=None, orientation=None, peopleCount=None, samples=_Nothing.NOTHING, pillowImage=None, ndarray=None, exifOrientation=None, rawImage=None)[source]

A structure for working with an image.

The image contains all temporary entities for estimation targets and provides conversion between them. The image also contains the estimation results.

classmethod exifTranspose(image)[source]

Based on the genuine function from Pillow: https://pillow.readthedocs.io/en/latest/_modules/PIL/ImageOps.html#exif_transpose

The only difference is that the manipulation is done in place, allowing us to leverage the internal caching of exif data.

NOTE: The transposed image might have incorrect values in particular tags, for example orientation, length, and width.

Return type:

Image
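For reference, the genuine Pillow call returns a new image instead of transposing in place; a minimal sketch:

    from PIL import Image, ImageOps

    with Image.open("photo.jpg") as img:
        # Pillow's original returns a new, correctly oriented copy, while
        # exifTranspose above applies the same transposition in place so the
        # internally cached exif data stays usable.
        upright = ImageOps.exif_transpose(img)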

extractExif()[source]

Extract exif from image.

Sets an error if the image load failed. If the image contains an error, an empty dict will be returned.

Returns:

extracted exif tags

Return type:

dict

getAsNumpy()[source]

Convert image to numpy array.

If an error occurs, the function will set it in the image.

Return type:

Optional[ndarray]

getAsPillow()[source]

Get image data as pillow object.

Returns:

PIL image

Return type:

Optional[Image]

property rawImage: bytes | bytearray | Image | ndarray

Get data for loading

Return type:

Union[bytes, bytearray, Image, ndarray]

class luna_handlers.sdk.sdk_loop.models.image.ImageType(value)[source]

Image type enum

class luna_handlers.sdk.sdk_loop.models.image.InputImage(body, imageType, filename='', faceBoxes=None, bodyBoxes=None, pillowImage=None)[source]

Container for input image

getAsPillow()[source]

Get image data as pillow object.

Returns:

PIL image if the image was loaded successfully, otherwise an error

Return type:

Union[Image, LoopError]

luna_handlers.sdk.sdk_loop.models.image.getAsPillow(data)[source]

Get image data as pillow object.

Returns:

PIL image if the image was loaded successfully, otherwise an error

Return type:

Union[Image, LoopError]

luna_handlers.sdk.sdk_loop.models.image.getTransposeParam(method)[source]

Get transpose params by exif orientation.

Parameters:

method – method from an image

Returns:

tuple (rotation, whether or not to flip the image)

Return type:

tuple[RotationAngle, bool]
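A sketch of how the method argument can be obtained from an image's EXIF orientation. The orientation-to-method table is Pillow's own (used by PIL.ImageOps.exif_transpose); the exact values returned by getTransposeParam are assumptions:

    from PIL import Image

    EXIF_ORIENTATION_TAG = 0x0112

    # Pillow's orientation -> transpose-method table, as used by ImageOps.exif_transpose.
    ORIENTATION_TO_METHOD = {
        2: Image.Transpose.FLIP_LEFT_RIGHT,
        3: Image.Transpose.ROTATE_180,
        4: Image.Transpose.FLIP_TOP_BOTTOM,
        5: Image.Transpose.TRANSPOSE,
        6: Image.Transpose.ROTATE_270,
        7: Image.Transpose.TRANSVERSE,
        8: Image.Transpose.ROTATE_90,
    }

    with Image.open("photo.jpg") as img:
        method = ORIENTATION_TO_METHOD.get(img.getexif().get(EXIF_ORIENTATION_TAG))
        if method is not None:
            rotation, needFlip = getTransposeParam(method)  # (RotationAngle, bool)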

Module contains monitoring point protocol

class luna_handlers.sdk.sdk_loop.monitoring_utils.monitoring.MonitoringPoint(*args, **kwargs)[source]

Protocol for a monitoring point.

abstract getFields()[source]

Get monitoring fields

Return type:

dict

abstract getTags()[source]

Get monitoring tags

Return type:

dict
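A minimal class satisfying this protocol; the concrete fields and tags here are hypothetical:

    class RequestMonitoringPoint:
        # Hypothetical point: times one request of a named route.

        def __init__(self, route, executionTime):
            self.route = route
            self.executionTime = executionTime

        def getFields(self):
            # Measured values of the point.
            return {"execution_time": self.executionTime}

        def getTags(self):
            # Labels used to group points into series.
            return {"route": self.route}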

Module contains base class for estimation monitoring points

class luna_handlers.sdk.sdk_loop.monitoring_utils.estimation_monitoring.EstimationPoint(eventTime=None, executionTime=0, batchSize=0)[source]

Estimation monitoring point; an implementation of the MonitoringPoint protocol.

executionTime

estimation execution time (batch)

Type:

float

batchSize

batch size for estimations

Type:

int

eventTime

event time, timestamp

Type:

float

getFields()[source]

Get monitoring fields

Return type:

dict

getTags()[source]

Get monitoring tags

Return type:

dict

Module contains global monitoring point storages

class luna_handlers.sdk.sdk_loop.monitoring_utils.storages.Storage(name, enable=True)[source]

Container for monitoring points

name

storage name

enable

whether monitoring point storing is enabled

_values

map: keys are series names, values are lists of monitoring points

append(point)[source]

Add point to the storage

monitoring(point)[source]

Collect a monitoring point to the storage (an event duration measurement).

popValues()[source]

Pop points from the storage

Return type:

ContextManager[dict[str, list[MonitoringPoint]]]
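A usage sketch assuming a periodic flush loop; the backend sender is hypothetical:

    storage = Storage(name="estimations", enable=True)
    storage.append(point)  # collect points as estimations complete

    # Periodic flush: popValues hands over and clears the collected points.
    with storage.popValues() as series:
        for seriesName, points in series.items():
            sendToMonitoringBackend(seriesName, points)  # hypothetical sender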

Helper enums.

class luna_handlers.sdk.sdk_loop.enums.Estimators(value)[source]

Available loop estimators

basicAttributes = 'basicAttributes'

basic attributes (age, gender, ethnicity)

bodyAttributes = 'bodyAttributes'

body attributes

bodyDescriptor = 'bodyDescriptor'

body descriptors

bodyDetection = 'bodyDetection'

body detection

bodyWarp = 'bodyWarp'

body warp

dynamicRange = 'dynamicRange'

dynamic range

emotions = 'emotions'

emotions

eyebrowExpression = 'eyebrow_expression'

eyebrow expression

eyes = 'eyes'

eyes state, points

faceDescriptor = 'faceDescriptor'

face descriptor

faceDetection = 'faceDetection'

face detection

faceDetectionBackground = 'face_detection_background'

face detection background

faceLandmarks5 = 'face_landmarks5'

face landmarks5

faceLandmarks68 = 'face_landmarks68'

face landmarks68

faceNaturalLight = 'face_natural_light'

face natural light

faceWarp = 'faceWarp'

face warp

fisheye = 'fisheye'

fisheye

gazeDirection = 'gaze'

gaze direction

glasses = 'glasses'

glasses

headPose = 'headPose'

head pose

headwear = 'headwear'

headwear

humanDetection = 'humaDetection'

human detection (face and body)

imageColorType = 'image_color_type'

image color type

imageOrientation = 'imageOrientation'

image orientation

livenessV1 = 'livenessV1'

livenessv1

mask = 'mask'

mask

mouthState = 'mouthState'

mouth state

peopleCount = 'peopleCount'

people count

quality = 'warpQuality'

warp quality

redEyes = 'red_eyes'

red-eyes

class luna_handlers.sdk.sdk_loop.enums.LoopEstimations(value)[source]

SDK loop estimations enum. The name is the estimation target.

basicAttributes = 'basic_attributes'

basic attributes (age, gender, ethnicity)

bodyAttributes = 'body_attributes'

body attributes

bodyDescriptor = 'body_descriptor'

body descriptor

bodyDetection = 'body_detection'

body detection

bodyLandmarks17 = 'body_landmarks17'

body landmarks

bodyWarp = 'body_warp'

body warp

dynamicRange = 'dynamic_range'

dynamic range

emotions = 'emotions'

emotions

eyebrowExpression = 'eyebrow_expression'

eyebrow expression

eyes = 'eyes'

eyes state, points

faceDescriptor = 'face_descriptor'

face descriptor

faceDetection = 'face_detection'

face detection

faceDetectionBackground = 'face_detection_background'

face detection background

faceLandmarks5 = 'face_landmarks5'

face landmarks5

faceLandmarks68 = 'face_landmarks68'

face landmarks68

faceNaturalLight = 'face_natural_light'

face natural light

faceWarp = 'face_warp'

face warp

faceWarpQuality = 'face_warp_quality'

face warp quality

fisheye = 'fisheye'

fisheye

gaze = 'gaze'

gaze direction

glasses = 'glasses'

glasses

headPose = 'head_pose'

head pose

headwear = 'headwear'

headwear

imageColorType = 'image_color_type'

image color type

imageOrientation = 'image_orientation'

image orientation

livenessV1 = 'livenessV1'

livenessV1

mask = 'mask'

mask

mouthAttributes = 'mouth_state'

mouth state

peopleCount = 'people_count'

people count

redEyes = 'red_eyes'

red-eyes

class luna_handlers.sdk.sdk_loop.enums.MultifacePolicy(value)[source]

Multiple face detection policy enum.

allowed = 1

multiple face detection allowed

getBest = 2

get best detection from the image

notAllowed = 0

multiple face detection not allowed

Module contains the face detector estimator implementation. Landmarks68 estimation is very slow, therefore we logically divide detection into variants with and without landmarks68.

class luna_handlers.sdk.sdk_loop.estimators.face_estimators.face_detector.FaceDetMonitoringPoint(imageWidth, imageHeight, executionTime=0, batchSize=0)[source]

Face detector monitoring point. We take the first (random) image for monitoring the image size and a face detection size.

detectionCount

detection count on all images from batch

imageWidth

first image width from batch

imageHeight

first image height from batch

detectionWidth

first face detection width on first image from batch

detectionHeight

first face detection height on first image from batch

getFields()[source]

Get fields for monitoring

Return type:

dict

updateFromFirstImageResult(detectionResult)[source]

Update point from detection results.

Parameters:

detectionResult – detection result

class luna_handlers.sdk.sdk_loop.estimators.face_estimators.face_detector.FaceDetV3Settings(redetectFaceTargetSize=None, minFaceSize=None, redetectTensorSize=None, redetectScoreThreshold=None, scoreThreshold=None)[source]

Face detector version 3 settings

updateSDKFaceDetV3Settings(faceEngineConf)[source]

Update the FaceDetV3 section of the sdk config.

Parameters:

faceEngineConf – sdk config

Returns:

updated config

Return type:

FaceEngineSettingsProvider

class luna_handlers.sdk.sdk_loop.estimators.face_estimators.face_detector.FaceDetectionEstimationBroker(optimalBatchSize, estimatorSettings, workerClass, workerCount=1, queueName='', **kwargs)[source]

Face detection without landmarks68 broker

class luna_handlers.sdk.sdk_loop.estimators.face_estimators.face_detector.FaceDetectionEstimationBrokerMixin[source]

Face detector creation mixin

createEstimator(settings)[source]

Create face detector

Return type:

tuple[FaceDetector, VLFaceEngine]

class luna_handlers.sdk.sdk_loop.estimators.face_estimators.face_detector.FaceDetectionQueueEstimator(taskQueue, estimator, maxBatchSize, settings, estimationName='')[source]

Face detection estimator without landmarks68 estimation

class luna_handlers.sdk.sdk_loop.estimators.face_estimators.face_detector.FaceDetectorSettings(launchOptions=<factory>, detectorSettings=<factory>, maxFacesCount=128)[source]

Face detector settings

createFEConf()[source]

Create a faceEngine conf for detector creation.

Returns:

face engine config + runtime config

Return type:

tuple[FaceEngineSettingsProvider, bool]

class luna_handlers.sdk.sdk_loop.estimators.face_estimators.face_detector.FaceReDetMonitoringPointBase(bboxWidth, bboxHeight, imageWidth, imageHeight, executionTime=0, batchSize=0)[source]

Face redetection monitoring point. We take the first (random) image for monitoring the image size, the bbox, and a face detection size.

bboxWidth

first bbox width on first image from batch

bboxHeight

first bbox height on first image from batch

getFields()[source]

Get fields for monitoring

Return type:

dict

class luna_handlers.sdk.sdk_loop.estimators.face_estimators.face_detector.FaceReDetectionEstimationBroker(optimalBatchSize, estimatorSettings, workerClass, workerCount=1, queueName='', **kwargs)[source]

Face ReDetection with landmarks68 broker

class luna_handlers.sdk.sdk_loop.estimators.face_estimators.face_detector.FaceReDetectionQueueEstimator(taskQueue, estimator, maxBatchSize, settings, estimationName='')[source]

Face ReDetection estimator with landmarks68 estimation

assertData(image)[source]

Assert image for redetection

Module contains the face warp estimator implementation

class luna_handlers.sdk.sdk_loop.estimators.face_estimators.face_warp_estimator.FaceWarpEstimationBroker(optimalBatchSize, estimatorSettings, workerClass, workerCount=1, queueName='', **kwargs)[source]

Face warp broker

createEstimator(settings)[source]

Create face warper

Return type:

tuple[FaceWarper, VLFaceEngine]

class luna_handlers.sdk.sdk_loop.estimators.face_estimators.face_warp_estimator.FaceWarpFaceWarpQueueEstimator(taskQueue, estimator, maxBatchSize, settings, estimationName='')[source]

Face warp estimator

class luna_handlers.sdk.sdk_loop.estimators.face_estimators.face_warp_estimator.FaceWarpMonitoringPoint(eventTime=None, executionTime=0, batchSize=0)[source]

Face warp estimation monitoring

class luna_handlers.sdk.sdk_loop.estimators.face_estimators.face_warp_estimator.FaceWarperSettings(launchOptions=<factory>)[source]

Face warp estimator settings

Module contains the body detector estimator implementation. Landmarks17 estimation is very slow, therefore we logically divide detection into variants with and without landmarks17.

class luna_handlers.sdk.sdk_loop.estimators.body_estimators.body_detector.BodyBrokerDetectorMixin[source]

Body detection base broker

createEstimator(settings)[source]

Create body detector

Return type:

tuple[BodyDetector, VLFaceEngine]

class luna_handlers.sdk.sdk_loop.estimators.body_estimators.body_detector.BodyDetMonitoringPoint(imageWidth, imageHeight, estimate17Landmarks, executionTime=0, batchSize=0)[source]

Body detector monitoring point. We take the first (random) image for monitoring the image size and a body detection size.

detectionCount

detection count on all images from batch

imageWidth

first image width from batch

imageHeight

first image height from batch

detectionWidth

first body detection width on first image from batch

detectionHeight

first body detection height on first image from batch

estimate17Landmarks

whether to estimate landmarks17

getFields()[source]

Get fields for monitoring

Return type:

dict

getTags()[source]

Get tags for monitoring

Return type:

dict

updateFromFirstImageResult(detectionResult)[source]

Update point from detection results.

Parameters:

detectionResult – detection result

class luna_handlers.sdk.sdk_loop.estimators.body_estimators.body_detector.BodyDetection17EstimationBroker(optimalBatchSize, estimatorSettings, workerClass, workerCount=1, queueName='', **kwargs)[source]

Body detection with landmarks17 broker

class luna_handlers.sdk.sdk_loop.estimators.body_estimators.body_detector.BodyDetection17QueueEstimator(taskQueue, estimator, maxBatchSize, settings, estimationName='')[source]

Body detection estimator with landmarks17 estimation

class luna_handlers.sdk.sdk_loop.estimators.body_estimators.body_detector.BodyDetectionEstimationBroker(optimalBatchSize, estimatorSettings, workerClass, workerCount=1, queueName='', **kwargs)[source]

Body detection without landmarks17 broker

class luna_handlers.sdk.sdk_loop.estimators.body_estimators.body_detector.BodyDetectionQueueEstimator(taskQueue, estimator, maxBatchSize, settings, estimationName='')[source]

Body detection estimator without landmarks17 estimation

class luna_handlers.sdk.sdk_loop.estimators.body_estimators.body_detector.BodyDetectorSettings(launchOptions=<factory>, maxBodiesCount=128)[source]

Body detector settings

class luna_handlers.sdk.sdk_loop.estimators.body_estimators.body_detector.BodyReDetMonitoringPointBase(bboxWidth, bboxHeight, imageWidth, imageHeight, estimate17Landmarks, executionTime=0, batchSize=0)[source]

Body redetection monitoring point. We take the first (random) image for monitoring the image size, the bbox, and a body detection size.

bboxWidth

first bbox width on first image from batch

bboxHeight

first bbox height on first image from batch

getFields()[source]

Get fields for monitoring

Return type:

dict

class luna_handlers.sdk.sdk_loop.estimators.body_estimators.body_detector.BodyReDetection17EstimationBroker(optimalBatchSize, estimatorSettings, workerClass, workerCount=1, queueName='', **kwargs)[source]

Body ReDetection with landmarks17 broker

class luna_handlers.sdk.sdk_loop.estimators.body_estimators.body_detector.BodyReDetection17QueueEstimator(taskQueue, estimator, maxBatchSize, settings, estimationName='')[source]

Body ReDetection estimator with landmarks17 estimation

assertData(image)[source]

Assert image for redetection

class luna_handlers.sdk.sdk_loop.estimators.body_estimators.body_detector.BodyReDetectionEstimationBroker(optimalBatchSize, estimatorSettings, workerClass, workerCount=1, queueName='', **kwargs)[source]

Body ReDetection without landmarks17 broker

class luna_handlers.sdk.sdk_loop.estimators.body_estimators.body_detector.BodyReDetectionQueueEstimator(taskQueue, estimator, maxBatchSize, settings, estimationName='')[source]

Body ReDetection estimator without landmarks17 estimation

assertData(image)[source]

Assert image for redetection

Module contains the human detector estimator implementation.

class luna_handlers.sdk.sdk_loop.estimators.others_estimators.human_detector.HumanDetMonitoringPoint(imageWidth, imageHeight, executionTime=0, batchSize=0)[source]

Human detector monitoring point. We take the first (random) image and detection for monitoring the image size and detection sizes.

detectionCount

human detection count on all images from batch

faceDetectionCount

face detection count on all images from batch

bodyDetectionCount

body detection count on all images from batch

imageWidth

first image width from batch

imageHeight

first image height from batch

bodyDetectionWidth

first body detection width on first image from batch

bodyDetectionHeight

first body detection height on first image from batch

faceDetectionWidth

first face detection width on first image from batch

faceDetectionHeight

first face detection height on first image from batch

getFields()[source]

Get fields for monitoring

Return type:

dict

updateFromFirstImageResult(detectionResult)[source]

Update point from detection results.

Parameters:

detectionResult – detection result

class luna_handlers.sdk.sdk_loop.estimators.others_estimators.human_detector.HumanDetectionEstimationBroker(optimalBatchSize, estimatorSettings, workerClass, workerCount=1, queueName='', **kwargs)[source]

Human detection base broker

createEstimator(settings)[source]

Create human detector

Return type:

tuple[HumanDetector, VLFaceEngine]

class luna_handlers.sdk.sdk_loop.estimators.others_estimators.human_detector.HumanDetectionQueueEstimator(taskQueue, estimator, maxBatchSize, settings, estimationName='')[source]

Human detection estimator

class luna_handlers.sdk.sdk_loop.estimators.others_estimators.human_detector.HumanDetectorSettings(launchOptions=<factory>, minFaceSize=None, faceThreshold=None, bodyThreshold=None, associationThreshold=None)[source]

Human detector settings

createFEConf()[source]

Create a faceEngine conf for detector creation.

Returns:

face engine config

Return type:

tuple[FaceEngineSettingsProvider, bool]

updateSettings(faceEngineConf)[source]

Update the HumanDetector section of the sdk config.

Parameters:

faceEngineConf – sdk config

Returns:

updated config

Return type:

FaceEngineSettingsProvider