Getting the best shot#
With LUNA ID, you can capture a video stream and get the best shot, that is, the frame in which the face is captured at the optimal angle for further processing.
Tip: In LUNA ID for Android, you can specify a face recognition area for best shot selection.
In LUNA ID for Android#
1. Initialize the camera.
Call the LunaID.showCamera() method to start the camera session. This method initiates face detection and analysis within the video stream.
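A minimal sketch of this call is shown below. Only LunaID.showCamera() itself comes from this guide; the Activity and LunaConfig arguments are assumptions and may differ between SDK versions.
import android.app.Activity

fun startFaceCapture(activity: Activity) {
    // Start the camera session; this begins face detection in the video stream.
    // The showCamera() arguments are assumptions; check the SDK reference for
    // the exact signature in your LUNA ID version.
    LunaID.showCamera(activity, LunaConfig.create())
}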
2. Get the list of best shots.
This step is optional. Implement it if you want to get multiple best shots during a session. You can then send the list of acquired best shots to the backend for estimation aggregation. For details, see Sending multiple frames for estimation aggregation to the backend.
2.1. Set the LunaConfig.multipartBestShotsEnabled parameter to true to get multiple frames.
2.2. Specify the number of best shots to be returned by setting the LunaConfig.bestShotsCount parameter. The valid range of values for bestShotsCount is from 1 to 10.
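For example, a configuration sketch could look as follows. The LunaConfig.create() factory and its named arguments are assumptions; only the multipartBestShotsEnabled and bestShotsCount parameters come from this section.
// Sketch: enable multiple best shots before starting the camera session.
// The create() factory and argument names are assumptions; check the SDK reference.
val config = LunaConfig.create(
    multipartBestShotsEnabled = true,  // return a list of best shots
    bestShotsCount = 5                 // any value from 1 to 10
)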
When multipartBestShotsEnabled is active, the list of best shots will be returned in the BestShotsFound event. Use the bestShots Flow to collect this list.
Structure of BestShotsFound:
data class BestShotsFound(
    val bestShots: List<BestShot>?  // Best shots collected during the session
) : Event()
Usage example:
LunaID.bestShots.filterNotNull().onEach { bestShotsList ->
    Log.e(TAG, "bestShots: ${bestShotsList.bestShots}")
}.launchIn(viewModelScope)
This Flow continuously emits lists of best shots as they are detected during the session.
3. Subscribe to the final best shot result.
To retrieve the final best shot result (including metadata such as videoPath and interactionFrames), subscribe to the LunaID.bestShot Flow.
Structure of BestShotFound:
data class BestShotFound(
    val bestShot: BestShot,                         // The selected best shot
    val videoPath: String?,                         // Path to the recorded video (if enabled)
    val interactionFrames: List<InteractionFrame>?  // Frames with Dynamic Liveness interactions (optional)
) : Event()
Usage example:
// Keep the latest result in a StateFlow so that UI code can observe it.
val bestShotFlow = MutableStateFlow<Event.BestShotFound?>(null)

LunaID.bestShot.filterNotNull().onEach { bestShotFound ->
    Log.e("BestShotFound", bestShotFound.toString())
    bestShotFlow.value = bestShotFound
    // Process the best shot or its associated metadata here
}.launchIn(viewModelScope)
4. Handle best shot events.
The system sends events for both individual best shots (BestShotFound) and lists of best shots (BestShotsFound). Depending on your use case, handle these events accordingly:
- BestShotFound: Contains the final best shot and optional metadata. Use this for single-best-shot scenarios.
- BestShotsFound: Contains a list of all best shots detected during the session. Use this for multi-best-shot scenarios.
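A sketch of how both flows might be collected in one place is shown below. It uses only the LunaID.bestShot and LunaID.bestShots flows and the event fields described above; the class and handler names are illustrative, and SDK imports are omitted.
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.flow.filterNotNull
import kotlinx.coroutines.flow.launchIn
import kotlinx.coroutines.flow.onEach

// Illustrative ViewModel that reacts to both event types.
class BestShotViewModel : ViewModel() {

    init {
        // Single-best-shot scenario: the final result with optional metadata.
        LunaID.bestShot.filterNotNull().onEach { event ->
            handleFinalBestShot(event.bestShot, event.videoPath)
        }.launchIn(viewModelScope)

        // Multi-best-shot scenario: all best shots found during the session.
        LunaID.bestShots.filterNotNull().onEach { event ->
            handleBestShotList(event.bestShots.orEmpty())
        }.launchIn(viewModelScope)
    }

    private fun handleFinalBestShot(bestShot: BestShot, videoPath: String?) {
        // For example, show the shot in the UI or upload it to the backend.
    }

    private fun handleBestShotList(bestShots: List<BestShot>) {
        // For example, send the list for estimation aggregation.
    }
}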
Face recognition area#
In some cases, you may need the best shot search to start only after a user places their face in a certain area of the screen. You can specify the face recognition area borders by implementing one of the following strategies:
- Border distances are not initialized
- Border distances are initialized with an Android custom view
- Border distances are initialized in dp
- Border distances are initialized automatically
Add a delay before starting face recognition#
You can optionally set up a fixed delay or a specific moment in time to define when face recognition will start after the camera is displayed on the screen. To do this, use the StartBestShotSearchCommand command.
Add a delay before getting the best shot#
You can optionally set up a delay, in milliseconds, to define how long a user's face should stay in the face detection bounding box before the best shot is taken. To do this, use the LunaID.foundFaceDelayMs parameter. The default value is 0.
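A one-line sketch is shown below; whether foundFaceDelayMs is set as a plain property assignment or through a configuration object is an assumption.
// Sketch (assumption): require the face to stay in the bounding box
// for 3 seconds before the best shot is taken (default is 0).
LunaID.foundFaceDelayMs = 3000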
In LUNA ID for iOS#
To get the best shots, pass a value that conforms to the LMCameraDelegate protocol to the delegate parameter of the LMCameraBuilder.viewController camera controller instance creation function.
let controller = LMCameraBuilder.viewController(delegate: LMCameraDelegate,
                                                configuration: LCLunaConfiguration,
                                                livenessAPI: livenessAPI)
With the implementation of the LMCameraDelegate protocol, the camera controller will interact with the user application. In the implemented methods, you will receive the best shot or the corresponding error.
public protocol LMCameraDelegate: AnyObject {
    func bestShot(_ bestShot: LunaCore.LCBestShot, _ videoFile: String?)
    func error(_ error: LMCameraError, _ videoFile: String?)
}
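A sketch of a conforming delegate is shown below. The method signatures follow the protocol above; the module imports and the class name are assumptions.
// Module names are assumptions; adjust to your project setup.
import LunaCore
import LunaCamera

// Illustrative delegate: receives the best shot or an error from the camera controller.
final class BestShotReceiver: LMCameraDelegate {

    func bestShot(_ bestShot: LunaCore.LCBestShot, _ videoFile: String?) {
        // The best shot and, if video recording is enabled, the path to the video file.
        print("Best shot received, video: \(videoFile ?? "none")")
    }

    func error(_ error: LMCameraError, _ videoFile: String?) {
        // Handle capture errors here.
        print("Camera error: \(error)")
    }
}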
Face recognition area#
The minDetSize parameter specifies the minimum size of a face (in pixels) that LUNA ID can detect within a frame. For example, if a face fits into a square with a side length of 50 pixels and minDetSize is set to 60, such a face will not be detected.
You can define minDetSize in either of the following ways:
- Locate the LCLunaConfiguration class in the best shot configuration section and define the minDetSize property with the required value.
- Configure minDetSize via the LCLunaConfiguration.plist file.
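For example, a sketch of the first option could look as follows; the exact location of minDetSize inside the best shot configuration of LCLunaConfiguration is an assumption.
let configuration = LCLunaConfiguration()
// Faces smaller than 60 px per side will not be detected.
// The property path below is an assumption; see the best shot configuration
// section of LCLunaConfiguration in your SDK version.
configuration.bestShotConfiguration.minDetSize = 60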
Difference between minDetSize and minFaceSize:
- minDetSize determines the smallest detectable face size in the frame.
- minFaceSize defines how close or far the face should be from the camera for optimal processing. This parameter does not affect face detection but rather ensures the quality of the detected face.
Add a delay before starting face recognition#
You can optionally set up a delay, in seconds, to define when face recognition will start after the camera is displayed on the screen. To do this, use LCLunaConfiguration.startDelay.
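A minimal sketch; the numeric value is illustrative.
let configuration = LCLunaConfiguration()
// Start face recognition 2 seconds after the camera appears on screen.
configuration.startDelay = 2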
Add a delay before getting the best shot#
You can optionally set up a delay, in seconds, to define how long a user's face should stay in the face detection bounding box before the best shot is taken. To do this, define the LCLunaConfiguration::faceTime property. The default value is 5. If the face disappears from the bounding box within the specified period, the BestShotError.FACE_LOST error will be caught in the LCBestShotDelegate::bestShotError delegate.
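A minimal sketch, assuming faceTime is set directly on the configuration object:
let configuration = LCLunaConfiguration()
// Require the face to stay in the bounding box for 3 seconds (default is 5).
// If it leaves earlier, BestShotError.FACE_LOST is reported via
// LCBestShotDelegate::bestShotError.
configuration.faceTime = 3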