
Qt example#

What it does#

This example demonstrates how to use the Qt library to load and process images of different formats, run the FaceEngine face detector with landmarks5 and landmarks68, and estimate smile, emotions, face attributes, quality, eye state, head pose and gaze on an image.
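
The example's own source code is not reproduced here, but the sketch below illustrates the Qt side of the pipeline under some assumptions: QImage loads an image in any format Qt supports and converts it to a packed 24-bit RGB buffer, which is the kind of plain pixel data a face-detection SDK can wrap into its own image type. The FaceEngine calls themselves are deliberately omitted rather than guessed at; only standard Qt (QtGui) API is used.

```cpp
// Minimal sketch (not the example's actual source): load an image with Qt
// and expose a plain RGB pixel buffer. Build and link against Qt's Gui module.
#include <QImage>
#include <QString>
#include <cstdio>

int main(int argc, char** argv) {
    if (argc < 2) {
        std::printf("Usage: %s <some_image>\n", argv[0]);
        return 1;
    }

    // QImage detects the file format (PNG, JPEG, BMP, ...) from the file itself.
    QImage image(QString::fromLocal8Bit(argv[1]));
    if (image.isNull()) {
        std::printf("Failed to load image: %s\n", argv[1]);
        return 1;
    }

    // Convert to a 24-bit RGB layout, a common input format for detectors.
    const QImage rgb = image.convertToFormat(QImage::Format_RGB888);

    // rgb.constBits() now points to the raw pixel data. QImage pads each row
    // to a 4-byte boundary, so bytesPerLine() gives the real stride; this is
    // what would be handed to the SDK's image type before detection and the
    // landmark, attribute, quality, eye, head pose and gaze estimations.
    std::printf("Loaded %dx%d image, row stride %d bytes\n",
                rgb.width(), rgb.height(),
                static_cast<int>(rgb.bytesPerLine()));
    return 0;
}
```

Converting to Format_RGB888 up front keeps the rest of the pipeline independent of the on-disk file format, which is what lets a single code path handle images of different formats.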

Prerequisites#

This example assumes that you have already read the FaceEngine Handbook (or at least have it somewhere nearby for reference) and know some core concepts, like memory management, object ownership, and life-time control. This sample will not explain these aspects in detail.

More detailed information about the Qt library is available at https://www.qt.io/.

Example walkthrough#

To get familiar with FSDK usage and common practices, please go through example_extraction first.

How to run#

Use the following command to run the example.

./example_qt <some_image>

Example output#

The example produces warped face images, the source image with face detections and landmark points marked on it, and the following console output:

Detection 1
Rect: x=277 y=426 w=73 h=94
Attribute estimate:
gender: 0.999705 (1 - man, 0 - woman)
wearGlasses: 0.000118364 (1 - person wears glasses, 0 - person doesn't wear glasses)
age: 17.7197 (in years)
Quality estimate:
light: 0.962603
dark: 0.974558
gray: 0.980648
blur: 0.955808
quality: 0.955808
Eye estimate:
left eye state: 2 (0 - close, 1 - open, 2 - noteye)
right eye state: 2 (0 - close, 1 - open, 2 - noteye)

Detection 2
Rect: x=203 y=159 w=63 h=89
Attribute estimate:
gender: 0.0053403 (1 - man, 0 - woman)
wearGlasses: 0.000911222 (1 - person wears glasses, 0 - person doesn't wear glasses)
age: 16.1504 (in years)
Quality estimate:
light: 0.964406
dark: 0.971644
gray: 0.981737
blur: 0.955808
quality: 0.955808
Eye estimate:
left eye state: 0 (0 - close, 1 - open, 2 - noteye)
right eye state: 2 (0 - close, 1 - open, 2 - noteye)