Camera Capture Activity
Shows the camera preview on screen while simultaneously recording it to a .mp4 file.
Every time we receive a frame from the camera, we need to:
- Render the frame to the SurfaceView, on GLSurfaceView's renderer thread.
- Render the frame to the MediaCodec's input surface, on the encoder thread, if recording is enabled.
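As a sketch of that per-frame flow, in a Grafika-style renderer the work lands in onDrawFrame(). The field names here (mSurfaceTexture, mVideoEncoder, mFullScreen, mTextureId, mTmpMatrix, mRecordingEnabled) are assumptions standing in for whatever the renderer actually holds:

```java
@Override
public void onDrawFrame(GL10 unused) {
    // Latch the latest camera frame into the external texture. Only the
    // thread that created the SurfaceTexture may call this.
    mSurfaceTexture.updateTexImage();

    if (mRecordingEnabled) {
        // Tell the encoder thread which texture to read and that a new
        // frame is ready; it renders into MediaCodec's input surface
        // using its own (shared) EGLContext.
        mVideoEncoder.setTextureId(mTextureId);
        mVideoEncoder.frameAvailable(mSurfaceTexture);
    }

    // Draw the same texture to the GLSurfaceView's surface on this thread.
    mSurfaceTexture.getTransformMatrix(mTmpMatrix);
    mFullScreen.drawFrame(mTextureId, mTmpMatrix);
}
```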
At any given time there are four things in motion:
- The UI thread, embodied by this Activity. We must respect -- or work around -- the app lifecycle changes. In particular, we need to release and reacquire the Camera so that, if the user switches away from us, we're not preventing another app from using the camera.
- The Camera, which will busily generate preview frames once we hand it a SurfaceTexture. We'll get notifications on the main UI thread unless we define a Looper on the thread where the SurfaceTexture is created (the GLSurfaceView renderer thread).
- The video encoder thread, embodied by TextureMovieEncoder. This needs to share the Camera preview's external texture with the GLSurfaceView renderer, which means the EGLContext in this thread must be created with a reference to the renderer thread's context in hand (see the sketch after this list).
- The GLSurfaceView renderer thread, embodied by CameraSurfaceRenderer. The thread is created for us by GLSurfaceView. We don't get callbacks for pause/resume or thread startup/shutdown, though we could generate messages from the Activity for most of these things. The EGLContext created on this thread must be shared with the video encoder, and must be used to create a SurfaceTexture that is used by the Camera. As the creator of the SurfaceTexture, the renderer thread must also be the one to call updateTexImage(). The renderer thread is thus at the center of a multi-thread nexus, which is a bit awkward since it's the thread we have the least control over.
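A minimal sketch of that context sharing, using EGL14 directly; eglDisplay, eglConfig, and rendererContext are assumed to already be in scope on the encoder thread:

```java
// Create the encoder thread's EGLContext with the renderer thread's context
// as the share_context argument, so the camera's external texture is visible
// to both contexts. Error checking omitted for brevity.
int[] attribList = {
        EGL14.EGL_CONTEXT_CLIENT_VERSION, 2,
        EGL14.EGL_NONE
};
EGLContext encoderContext = EGL14.eglCreateContext(
        eglDisplay, eglConfig,
        rendererContext,          // share with the GLSurfaceView renderer
        attribList, 0);
```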
GLSurfaceView is fairly painful here. Ideally we'd create the video encoder, create an EGLContext for it, and pass that into GLSurfaceView to share. The API doesn't allow this, so we have to do it the other way around. When GLSurfaceView gets torn down (say, because we rotated the device), the EGLContext gets tossed, which means that when it comes back we have to re-create the EGLContext used by the video encoder. (And, no, the "preserve EGLContext on pause" feature doesn't help.)
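One way to handle that re-creation is to hand the renderer's new context to the encoder from onSurfaceCreated(); a sketch, with updateSharedContext() as the assumed hook on the encoder:

```java
@Override
public void onSurfaceCreated(GL10 unused, EGLConfig config) {
    if (mVideoEncoder.isRecording()) {
        // GLSurfaceView just built a fresh EGLContext (e.g. after rotation);
        // the encoder must rebuild its own context shared against it.
        mVideoEncoder.updateSharedContext(EGL14.eglGetCurrentContext());
    }
    // ... re-create the texture, SurfaceTexture, and full-screen quad ...
}
```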
We could simplify this quite a bit by using TextureView instead of GLSurfaceView, but that comes with a performance hit. We could also have the renderer thread drive the video encoder directly, allowing them to work from a single EGLContext, but it's useful to decouple the operations, and it's generally unwise to perform disk I/O on the thread that renders your UI.
We want to access Camera from the UI thread (setup, teardown) and the renderer thread (configure SurfaceTexture, start preview), but the API says you can only access the object from a single thread. So we need to pick one thread to own it, and the other thread has to access it remotely. Some things are simpler if we let the renderer thread manage it, but we'd really like to be sure that Camera is released before we leave onPause(), which means we need to make a synchronous call from the UI thread into the renderer thread, which we don't really have full control over. It's less scary to have the UI thread own Camera and have the renderer call back into the UI thread through the standard Handler mechanism.
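That callback path can be a small Handler owned by the UI thread; a minimal sketch, where MSG_SET_SURFACE_TEXTURE and handleSetSurfaceTexture() are illustrative names:

```java
import android.graphics.SurfaceTexture;
import android.os.Handler;
import android.os.Message;
import java.lang.ref.WeakReference;

// Runs on the UI thread; the renderer thread sends it the SurfaceTexture it
// created, and the Activity (which owns the Camera) starts the preview.
static class CameraHandler extends Handler {
    public static final int MSG_SET_SURFACE_TEXTURE = 0;

    // Weak reference so a queued message can't leak the Activity.
    private final WeakReference<CameraCaptureActivity> mActivity;

    public CameraHandler(CameraCaptureActivity activity) {
        mActivity = new WeakReference<>(activity);
    }

    @Override
    public void handleMessage(Message msg) {
        CameraCaptureActivity activity = mActivity.get();
        if (activity == null) {
            return;  // Activity is gone; drop the message
        }
        if (msg.what == MSG_SET_SURFACE_TEXTURE) {
            activity.handleSetSurfaceTexture((SurfaceTexture) msg.obj);
        }
    }
}
```

The renderer thread then posts across with obtainMessage()/sendMessage() and never touches the Camera itself.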
(The [camera docs](http://developer.android.com/training/camera/cameradirect.html#TaskOpenCamera) recommend accessing the camera from a non-UI thread to avoid bogging the UI thread down. Since the GLSurfaceView-managed renderer thread isn't a great choice, we might want to create a dedicated camera thread. Not doing that here.)
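If we did go that route, a HandlerThread would be the natural vehicle; a sketch only, not what this Activity does:

```java
// A dedicated camera thread: Camera callbacks are delivered on the Looper
// of the thread that opened the Camera, so opening it here keeps the
// per-frame notifications off the UI thread.
HandlerThread cameraThread = new HandlerThread("CameraThread");
cameraThread.start();
Handler cameraHandler = new Handler(cameraThread.getLooper());
cameraHandler.post(new Runnable() {
    @Override
    public void run() {
        Camera camera = Camera.open();   // opened off the UI thread
        // ... configure and start the preview here ...
    }
});
```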
With three threads working simultaneously (plus Camera causing periodic events as frames arrive) we have to be very careful when communicating state changes. In general we want to send a message to the thread, rather than directly accessing state in the object.
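For example, when the user toggles recording on the UI thread, we queue an event for the renderer rather than poking its fields directly; changeRecordingState() is an illustrative name:

```java
// UI thread: hand the new state to the renderer thread as a message.
// GLSurfaceView.queueEvent() runs the Runnable on the renderer thread.
mGLView.queueEvent(new Runnable() {
    @Override
    public void run() {
        mRenderer.changeRecordingState(mRecordingEnabled);
    }
});
```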
To exercise the API a bit, the video encoder is required to survive Activity restarts. In the current implementation it stops recording but doesn't stop time from advancing, so you'll see a pause in the video. (We could adjust the timer to make it seamless, or output a "paused" message and hold on that in the recording, or leave the Camera running so it continues to generate preview frames while the Activity is paused.) The video encoder object is managed as a static property of the Activity.
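A minimal sketch of that arrangement, assuming TextureMovieEncoder exposes an isRecording() query:

```java
// Static, so the encoder (and any in-progress recording) survives the
// Activity being destroyed and re-created, e.g. on rotation.
private static TextureMovieEncoder sVideoEncoder = new TextureMovieEncoder();

@Override
protected void onResume() {
    super.onResume();
    // Re-sync the UI with whatever the long-lived encoder is doing.
    mRecordingEnabled = sVideoEncoder.isRecording();
}
```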
Functions
- Called when an error occurs during best shot detector processing.
- Called when an interaction ends.
- Called when an interaction starts.
- Called when the offline liveness check estimates 'not live'.
- Called when the online liveness check estimates 'not live'.
- Called when the online liveness request does not succeed at all.