Getting the best shot#
With LUNA ID, you can capture a video stream and get the best shot, that is, a frame in which the face is captured at an optimal angle for further processing.
Tip: In LUNA ID for Android, you can specify a face recognition area for best shot selection.
In LUNA ID for Android#
To get the best shot, call the LunaID.showCamera() method.
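A minimal call might look like the sketch below. It assumes that the remaining ShowCameraParams fields keep their default values; the border distance strategies and delays they control are described later in this section.
// A minimal sketch: open the camera from the current Activity.
// Assumes the other ShowCameraParams fields keep their defaults.
LunaID.showCamera(
    activity,
    LunaID.ShowCameraParams(
        disableErrors = true
    )
)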
To receive a result, subscribe to LunaID.finishStates() for the StateFinished(val result: FinishResult) events. The value of the result field depends on the outcome of the best shot search. Possible values are:
// The best shot was found
class ResultSuccess(val data: FinishSuccessData) : FinishResult()
// The search for the best shot failed
class ResultFailed(val data: FinishFailedData) : FinishResult()
// The camera was closed before the best shot was found
class ResultCancelled(val data: FinishCancelledData) : FinishResult()
ResultSuccess
When the best shot is found, data: FinishSuccessData contains the found best shot and an optional path to the recorded video.
class FinishSuccessData(
    val bestShot: BestShot,
    val videoPath: String?,
)
ResultFailed
The search for the best shot can fail for various reasons. If the search fails, the concrete subtype of data: FinishFailedData indicates the reason.
sealed class FinishFailedData {
    class InteractionFailed() : FinishFailedData()
    class LivenessCheckFailed() : FinishFailedData()
    class LivenessCheckError(val cause: Throwable?) : FinishFailedData()
    class UnknownError(val cause: Throwable?) : FinishFailedData()
}
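For illustration, each failure reason can be mapped to application-level handling with a when expression over the sealed subtypes. This is a hedged sketch: showMessage() is a hypothetical helper, and the messages are placeholders for your own UI logic.
// A sketch of mapping each FinishFailedData subtype to user-facing handling.
// showMessage() is a hypothetical helper; replace it with your own UI logic.
fun handleFailure(reason: FinishFailedData) {
    when (reason) {
        is FinishFailedData.InteractionFailed ->
            showMessage("The required interaction was not completed.")
        is FinishFailedData.LivenessCheckFailed ->
            showMessage("The liveness check was not passed.")
        is FinishFailedData.LivenessCheckError ->
            showMessage("Liveness check error: ${reason.cause?.message}")
        is FinishFailedData.UnknownError ->
            showMessage("Unexpected error: ${reason.cause?.message}")
    }
}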
ResultCancelled
If a user closes the camera screen before the best shot is found, data: FinishCancelledData contains an optional path to the recorded video.
Since getting the best shot opens the camera in a new Activity, pay special attention to the lifecycle of your code components. For example, the calling Activity may be terminated, or a presenter or view model may be recreated, while the search for the best shot is in progress. In these cases, subscribe to any of the flows exposed via the LunaID class (.allEvents(), interactions(), and so on) with respect to a component's lifecycle. To do this, consider using the flowWithLifecycle() and launchIn() extension functions available for the Flow class in Kotlin.
Example#
The example below shows how to subscribe to the StateFinished events with respect to components' lifecycles:
LunaID.finishStates()
    .flowOn(Dispatchers.IO)
    .flowWithLifecycle(lifecycleOwner.lifecycle, Lifecycle.State.STARTED)
    .onEach {
        when (it.result) {
            is LunaID.FinishResult.ResultSuccess -> {
                // The found best shot
                val image = (it.result as LunaID.FinishResult.ResultSuccess).data.bestShot
            }
            is LunaID.FinishResult.ResultCancelled -> {
                // The camera was closed before the best shot was found
            }
            is LunaID.FinishResult.ResultFailed -> {
                // The reason why the search failed
                val failReason = (it.result as LunaID.FinishResult.ResultFailed).data
            }
        }
    }
    .launchIn(viewModelScope)
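If you subscribe from an Activity or Fragment instead of a view model, a similar hedged sketch can rely on the component's lifecycleScope from AndroidX; the result handling stays the same as above:
// A sketch of the same subscription scoped to an Activity or Fragment.
// Collection stops when the component leaves the STARTED state.
LunaID.finishStates()
    .flowWithLifecycle(lifecycle, Lifecycle.State.STARTED)
    .onEach { state ->
        // Handle state.result as shown in the example above
    }
    .launchIn(lifecycleScope)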
Face recognition area#
In some cases, you may need the best shot search to start only after a user places their face in a certain area on the screen. You can specify the face recognition area borders by implementing one of the following strategies:
- Border distances are not initialized
- Border distances are initialized with an Android custom view
- Border distances are initialized in dp
- Border distances are initialized automatically
Border distances are not initialized#
This strategy is useful if the border distances should be 0 pixels. This is the default strategy.
To implement the strategy, use the Default object of the InitBorderDistancesStrategy class.
Consider the code below for the strategy implementation:
LunaID.showCamera(
    activity,
    LunaID.ShowCameraParams(
        disableErrors = true,
        borderDistanceStrategy = InitBorderDistancesStrategy.Default
    )
)
Border distances are initialized with an Android custom view#
This strategy allows you to define how to calculate the distances to the face recognition area inside an Android custom view. The custom view can stretch to fill the entire screen and contain different elements, one of which is a circle that corresponds to the face recognition area. The custom view must implement the MeasureBorderDistances interface, which returns an object with the custom view's border distances. Implementing this interface is required because the distances cannot be obtained from outside the custom view, and it also allows you to comply with the encapsulation principle.
Consider the example code below for the MeasureBorderDistances interface implementation. It also shows how to implement business logic according to which the chin and forehead must be inside the face recognition area.
override fun measureBorderDistances(): BorderDistancesInPx {
    val radius = minOf(right - left, bottom - top) / 2f
    val diameter = radius * 2
    val distanceFromLeftToCircle = (width - diameter) / 2f
    val distanceFromTopToCircle = (height - diameter) / 2f
    // business logic
    val foreheadZone = 64
    val chinZone = 36
    val horizontalMargin = 16
    val distanceFromTopWithForehead = distanceFromTopToCircle.toInt() + foreheadZone
    val distanceFromBottomWithChin = distanceFromTopToCircle.toInt() + chinZone
    val distanceHorizontalToCircle = distanceFromLeftToCircle.toInt() + horizontalMargin
    // business logic ends
    return BorderDistancesInPx(
        fromLeft = distanceHorizontalToCircle,
        fromTop = distanceFromTopWithForehead,
        fromRight = distanceHorizontalToCircle,
        fromBottom = distanceFromBottomWithChin,
    )
}
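For context, the override above lives inside the custom view class that implements MeasureBorderDistances. The sketch below is illustrative only: the class name FaceZoneOverlayView is hypothetical, the geometry is simplified, and the drawing of the overlay itself is omitted.
import android.content.Context
import android.util.AttributeSet
import android.view.View

// A hedged sketch of a custom overlay view hosting measureBorderDistances().
// FaceZoneOverlayView is a hypothetical name; only the MeasureBorderDistances
// implementation is required by the SDK.
class FaceZoneOverlayView @JvmOverloads constructor(
    context: Context,
    attrs: AttributeSet? = null,
) : View(context, attrs), MeasureBorderDistances {

    override fun measureBorderDistances(): BorderDistancesInPx {
        // Simplified geometry: the face recognition area is the largest
        // circle centered in this view.
        val diameter = minOf(width, height).toFloat()
        val horizontal = ((width - diameter) / 2f).toInt()
        val vertical = ((height - diameter) / 2f).toInt()
        return BorderDistancesInPx(
            fromLeft = horizontal,
            fromTop = vertical,
            fromRight = horizontal,
            fromBottom = vertical,
        )
    }
}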
To implement the strategy, use the InitBorderDistancesStrategy.WithCustomView class. Pass the ID of the custom view declared in the XML layout as an argument to the WithCustomView object.
Consider the example code below for the strategy implementation:
LunaID.showCamera(
    context,
    LunaID.ShowCameraParams(
        disableErrors = true,
        borderDistanceStrategy = InitBorderDistancesStrategy.WithCustomView(
            R.id.overlay_viewport
        )
    )
)
Border distances are initialized in dp#
This strategy allows you to specify distances to the face recognition area in density-independent pixels.
To implement the strategy, use the InitBorderDistancesStrategy.WithDp class.
Consider the example code below for the strategy implementation:
LunaID.showCamera(
    context,
    LunaID.ShowCameraParams(
        disableErrors = false,
        borderDistanceStrategy = InitBorderDistancesStrategy.WithDp(
            topPaddingInDp = 150,
            bottomPaddingInDp = 250,
            leftPaddingInDp = 8,
            rightPaddingInDp = 8
        )
    )
)
Border distances are initialized automatically#
This strategy allows you to automatically calculate the distances to the face recognition area from a view declared in the XML layout, referenced by its ID:
<View
    android:id="@+id/faceZone"
    android:layout_width="200dp"
    android:layout_height="300dp"
    android:background="#1D000000"
    android:layout_gravity="top|center"
    android:layout_marginTop="150dp"/>
To implement the strategy, use the InitBorderDistancesStrategy.WithViewId class.
Consider the example code below for the strategy implementation:
LunaID.showCamera(
    context,
    LunaID.ShowCameraParams(
        disableErrors = false,
        borderDistanceStrategy = InitBorderDistancesStrategy.WithViewId(R.id.faceZone)
    )
)
Add a delay before starting face recognition#
You can optionally set up a fixed delay or a specific moment in time to define when face recognition starts after the camera is displayed on the screen. To do this, use the StartBestShotSearchCommand command.
Add a delay before getting the best shot#
You can optionally set up a delay, in milliseconds, to define for how long a user's face should stay in the face detection bounding box before the best shot is taken. To do this, use the LunaID.foundFaceDelayMs parameter. The default value is 0.
In LUNA ID for iOS#
To get the best shot, pass an object that conforms to the LMCameraDelegate protocol to the delegate parameter of the LMCameraBuilder.viewController function, which creates a camera controller instance.
let controller = LMCameraBuilder.viewController(delegate: LMCameraDelegate,
                                                configuration: LCLunaConfiguration,
                                                livenessAPI: livenessAPI)
Through the implementation of the LMCameraDelegate protocol, the camera controller interacts with your application. In the implemented methods, you receive either the best shot or the corresponding error.
public protocol LMCameraDelegate: AnyObject {
    func bestShot(_ bestShot: LunaCore.LCBestShot, _ videoFile: String?)
    func error(_ error: LMCameraError, _ videoFile: String?)
}
Add a delay before starting face recognition#
You can optionally set up a delay, in seconds, to define when face recognition starts after the camera is displayed on the screen. To do this, use LCLunaConfiguration.startDelay.
Add a delay before getting the best shot#
You can optionally set up a delay, in seconds, to define for how long a user's face should stay in the face detection bounding box before the best shot is taken. To do this, define the LCLunaConfiguration::faceTime property. The default value is 5. If the face disappears from the bounding box within the specified period, the BestShotError.FACE_LOST error is caught in the LCBestShotDelegate::bestShotError delegate.