Build a Camera Android App in Jetpack Compose Using CameraX
Jetpack Compose + CameraX

Thinking about creating a camera app, or do you need to record video in your app? The `CameraX` library is a great way to do it. Today, I am going to explain how to create a camera app using the `CameraX` library, as it is the approach recommended by Google.
According to the official CameraX documentation: “CameraX is a Jetpack library, built to help make camera app development easier.”
There are a couple of use cases that `CameraX` supports:
- Image Capture — Save images
- Video Capture — Save videos and audio
- Preview — View the image on the display
- Image Analysis — Access a buffer seamlessly for use in your algorithms
In this article, we are going to go through Video Capture, as it is a topic that is not covered very often.
Video Capture
First, let’s add some dependencies:
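The dependency block from the original isn't embedded here, so here is a typical CameraX setup as a sketch; the version numbers are assumptions, so check for the latest stable releases:

```groovy
// CameraX (version is an assumption; use the latest stable release)
def cameraxVersion = "1.2.0"
implementation "androidx.camera:camera-core:$cameraxVersion"
implementation "androidx.camera:camera-camera2:$cameraxVersion"
implementation "androidx.camera:camera-lifecycle:$cameraxVersion"
implementation "androidx.camera:camera-video:$cameraxVersion"
implementation "androidx.camera:camera-view:$cameraxVersion"

// Accompanist permissions, used for the runtime permission flow below
implementation "com.google.accompanist:accompanist-permissions:0.25.1"
```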
Now, our main screen will record the video, but first we need to ask for camera and audio permissions. I won't go into that in detail, as I already explained it in one of my previous articles; take a look if you need more explanation.
Now we are going to create a couple of objects that we will need to record the video.
A `Recording` is an object that allows us to control the currently active recording. It lets us stop, pause, and resume the recording; we create it when we start recording. A `PreviewView` is a custom view that displays the camera feed. We will bind it to the lifecycle, add it to an `AndroidView`, and it will show us what we are currently recording. `VideoCapture` is a generic class that provides a camera stream suitable for video applications. Here we pass the `Recorder` class, an implementation of the `VideoOutput` interface, which allows us to start recording.

`recordingStarted` and `audioEnabled` are helper variables that we will use on this screen; I think they are pretty much self-explanatory. A `CameraSelector` is a set of requirements and priorities used to select a camera or return a filtered set of cameras. Here we will just use the default front and back cameras.
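Here is a minimal sketch of how that state could be declared inside the screen composable; the composable name and exact declarations are assumptions based on the description above:

```kotlin
import androidx.camera.core.CameraSelector
import androidx.camera.video.Recorder
import androidx.camera.video.Recording
import androidx.camera.video.VideoCapture
import androidx.camera.view.PreviewView
import androidx.compose.runtime.*
import androidx.compose.ui.platform.LocalContext
import androidx.navigation.NavController

@Composable
fun VideoCaptureScreen(navController: NavController) {
    val context = LocalContext.current

    // Handle to the active recording; null while nothing is being recorded
    var recording: Recording? by remember { mutableStateOf<Recording?>(null) }
    // The view that renders the camera feed; hosted in an AndroidView later
    val previewView = remember { PreviewView(context) }
    // The video capture use case, created asynchronously in a LaunchedEffect
    val videoCapture = remember { mutableStateOf<VideoCapture<Recorder>?>(null) }
    var recordingStarted by remember { mutableStateOf(false) }
    var audioEnabled by remember { mutableStateOf(false) }
    var cameraSelector by remember { mutableStateOf(CameraSelector.DEFAULT_BACK_CAMERA) }
    // ...
}
```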
In a `LaunchedEffect`, we call a function that creates the video capture use case for us. The function looks like this:
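The original embeds this as a gist; here is a sketch consistent with the description that follows (writing it as a suspending `Context` extension is an assumption):

```kotlin
import android.content.Context
import androidx.camera.core.CameraSelector
import androidx.camera.core.Preview
import androidx.camera.video.FallbackStrategy
import androidx.camera.video.Quality
import androidx.camera.video.QualitySelector
import androidx.camera.video.Recorder
import androidx.camera.video.VideoCapture
import androidx.camera.view.PreviewView
import androidx.core.content.ContextCompat
import androidx.lifecycle.LifecycleOwner

suspend fun Context.createVideoCaptureUseCase(
    lifecycleOwner: LifecycleOwner,
    cameraSelector: CameraSelector,
    previewView: PreviewView
): VideoCapture<Recorder> {
    // Plain preview use case, rendered into the PreviewView
    val preview = Preview.Builder()
        .build()
        .apply { setSurfaceProvider(previewView.surfaceProvider) }

    // Ask for Full HD, but fall back to the closest supported quality
    val qualitySelector = QualitySelector.from(
        Quality.FHD,
        FallbackStrategy.lowerQualityOrHigherThan(Quality.FHD)
    )
    val recorder = Recorder.Builder()
        .setExecutor(ContextCompat.getMainExecutor(this))
        .setQualitySelector(qualitySelector)
        .build()
    val videoCapture = VideoCapture.withOutput(recorder)

    // getCameraProvider() is the suspending helper shown later in the article
    val cameraProvider = getCameraProvider()
    cameraProvider.unbindAll()
    cameraProvider.bindToLifecycle(
        lifecycleOwner,
        cameraSelector,
        preview,
        videoCapture
    )

    return videoCapture
}
```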
First, we create a `Preview`, which is a use case that provides a camera preview stream for displaying on-screen. We can set multiple things here, like the aspect ratio, capture processor, image info processor, and so on. We won't need them, so we create a plain `Preview` object.
Next, we choose the quality of our video. For that, we use a `QualitySelector`, which defines the desired quality setting. We want Full HD quality, so we pass `Quality.FHD`. Some phones may not support the desired quality, so you should always have a backup plan, as we do here by passing a `FallbackStrategy`. There are a couple of strategies:
- `higherQualityOrLowerThan`: choose the quality that is closest to and higher than the input quality. If that cannot result in a supported quality, choose the quality that is closest to and lower than the input quality.
- `higherQualityThan`: choose the quality that is closest to and higher than the input quality.
- `lowerQualityOrHigherThan`: choose the quality that is closest to and lower than the input quality. If that cannot result in a supported quality, choose the quality that is closest to and higher than the input quality.
- `lowerQualityThan`: choose the quality that is closest to and lower than the input quality.
One more way to do it is to just pass `Quality.LOWEST` or `Quality.HIGHEST`, which is probably the simpler way, but I also wanted to show this one.
Now we create a `Recorder` and use it to get the `VideoCapture` object by calling `VideoCapture.withOutput(recorder)`.
A camera provider is an instance of the `ProcessCameraProvider` singleton, which allows us to bind the lifecycle of cameras to any `LifecycleOwner` within an application's process. The function that we use to get a camera provider is:
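The gist isn't embedded here either; the usual pattern is a suspending extension that bridges the returned future into a coroutine (a sketch):

```kotlin
import android.content.Context
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.core.content.ContextCompat
import kotlin.coroutines.resume
import kotlin.coroutines.suspendCoroutine

// Suspend until the ProcessCameraProvider future completes, then return the instance
suspend fun Context.getCameraProvider(): ProcessCameraProvider =
    suspendCoroutine { continuation ->
        ProcessCameraProvider.getInstance(this).also { future ->
            future.addListener(
                { continuation.resume(future.get()) },
                ContextCompat.getMainExecutor(this)
            )
        }
    }
```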
`ProcessCameraProvider.getInstance(this)` returns a future that we need to wait on before we can get the instance.
Next, we need to bind everything to the lifecycle, passing the `lifecycleOwner`, `cameraSelector`, `preview`, and `videoCapture`.
Now it is time to finish the rest of the Compose code. I hope you are still with me!
Inside the `PermissionsRequired` content block, we add an `AndroidView` and a button for recording, like this:
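A sketch of that content block, assuming the state and helpers from the snippets above; the directory name, button styling, and labels are assumptions:

```kotlin
Box(modifier = Modifier.fillMaxSize()) {
    // Show the camera feed by hosting the PreviewView inside Compose
    AndroidView(
        factory = { previewView },
        modifier = Modifier.fillMaxSize()
    )
    Button(
        onClick = {
            if (!recordingStarted) {
                videoCapture.value?.let { capture ->
                    recordingStarted = true
                    // Get (or create) the media directory for our videos
                    val mediaDir = context.externalCacheDirs.firstOrNull()?.let {
                        File(it, "CameraXVideos").apply { mkdirs() }
                    }
                    recording = startRecordingVideo(
                        context = context,
                        filenameFormat = "yyyy-MM-dd-HH-mm-ss-SSS",
                        videoCapture = capture,
                        outputDirectory = if (mediaDir != null && mediaDir.exists()) mediaDir else context.filesDir,
                        executor = ContextCompat.getMainExecutor(context),
                        audioEnabled = audioEnabled
                    ) { event ->
                        // Consumer callback; we handle Finalize events here later
                    }
                }
            } else {
                recordingStarted = false
                recording?.stop()
            }
        },
        modifier = Modifier
            .align(Alignment.BottomCenter)
            .padding(bottom = 32.dp)
    ) {
        Text(text = if (recordingStarted) "Stop" else "Record")
    }
}
```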
The `AndroidView` will display our preview.
As for the button, we use it to start and stop recording. When we want to start recording, we first get the media directory where we will put the video; if the directory doesn't exist, we just create it. Next, we call the `startRecordingVideo` function, which looks like this:
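The original gist isn't reproduced here; this sketch follows the description below, using the standard CameraX `Recorder` APIs:

```kotlin
import android.annotation.SuppressLint
import android.content.Context
import androidx.camera.video.FileOutputOptions
import androidx.camera.video.Recorder
import androidx.camera.video.Recording
import androidx.camera.video.VideoCapture
import androidx.camera.video.VideoRecordEvent
import androidx.core.util.Consumer
import java.io.File
import java.text.SimpleDateFormat
import java.util.Locale
import java.util.concurrent.Executor

@SuppressLint("MissingPermission")
fun startRecordingVideo(
    context: Context,
    filenameFormat: String,
    videoCapture: VideoCapture<Recorder>,
    outputDirectory: File,
    executor: Executor,
    audioEnabled: Boolean,
    consumer: Consumer<VideoRecordEvent>
): Recording {
    // Create a timestamped file for the new video
    val videoFile = File(
        outputDirectory,
        SimpleDateFormat(filenameFormat, Locale.US).format(System.currentTimeMillis()) + ".mp4"
    )
    val outputOptions = FileOutputOptions.Builder(videoFile).build()

    // Prepare the recording, optionally enable audio, and start it
    return videoCapture.output
        .prepareRecording(context, outputOptions)
        .apply { if (audioEnabled) withAudioEnabled() }
        .start(executor, consumer)
}
```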
This is a simple function that creates a file, prepares a recording, and starts it. If audio is enabled, we start the recording with audio as well. The object this function returns is what we will later use to stop the recording. The `consumer` parameter is a callback that is invoked on each recording event; you can use it to get the URI of the file after the video recording is finished.
Let’s just add the logic for the audio and camera selector.
These are two buttons that enable or disable audio and switch between the front and back camera. When we switch between cameras, we need to create a new `videoCapture` object to change what our preview is displaying.
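A sketch of those two controls, assuming a `rememberCoroutineScope()` named `scope`, a `lifecycleOwner` from `LocalLifecycleOwner.current`, and the `createVideoCaptureUseCase` helper from earlier; the layout and labels are assumptions:

```kotlin
Row(
    modifier = Modifier
        .align(Alignment.BottomStart)
        .padding(16.dp)
) {
    // Toggle audio; only effective before a recording is started
    Button(
        onClick = { audioEnabled = !audioEnabled },
        enabled = !recordingStarted
    ) {
        Text(text = if (audioEnabled) "Audio on" else "Audio off")
    }
    Spacer(modifier = Modifier.width(8.dp))
    // Flip the lens and recreate the video capture use case for it
    Button(
        onClick = {
            cameraSelector =
                if (cameraSelector == CameraSelector.DEFAULT_BACK_CAMERA)
                    CameraSelector.DEFAULT_FRONT_CAMERA
                else CameraSelector.DEFAULT_BACK_CAMERA
            scope.launch {
                videoCapture.value = context.createVideoCaptureUseCase(
                    lifecycleOwner = lifecycleOwner,
                    cameraSelector = cameraSelector,
                    previewView = previewView
                )
            }
        },
        enabled = !recordingStarted
    ) {
        Text(text = "Switch camera")
    }
}
```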
That is it for this screen, but now it would be nice to see what we have recorded, right? Of course, for that we are going to create another screen and use `ExoPlayer` to display the video.
Let's first add the logic to our consumer callback:
```kotlin
if (event is VideoRecordEvent.Finalize) {
    val uri = event.outputResults.outputUri
    if (uri != Uri.EMPTY) {
        val uriEncoded = URLEncoder.encode(
            uri.toString(),
            StandardCharsets.UTF_8.toString()
        )
        navController.navigate("${Route.VIDEO_PREVIEW}/$uriEncoded")
    }
}
```
If the event is a `VideoRecordEvent.Finalize`, that means the recording has finished and we can get the URI of the video. There are a couple of video record events; you can use any of them, but here we just need `Finalize`:
- Start
- Finalize
- Status
- Pause
- Resume
The URI can be empty if the video is too short, for example under half a second, and that is why we need the if statement.
The URI should be encoded so that it can be passed as a navigation argument.
Our final code for this screen looks like this:
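The full gist lives in the repository; assembling the snippets above gives roughly this sketch, where the permission composables and the overall layout are assumptions:

```kotlin
import android.Manifest
import androidx.camera.core.CameraSelector
import androidx.camera.video.Recorder
import androidx.camera.video.Recording
import androidx.camera.video.VideoCapture
import androidx.camera.view.PreviewView
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.runtime.*
import androidx.compose.ui.Modifier
import androidx.compose.ui.platform.LocalContext
import androidx.compose.ui.platform.LocalLifecycleOwner
import androidx.compose.ui.viewinterop.AndroidView
import androidx.navigation.NavController
import com.google.accompanist.permissions.*

@OptIn(ExperimentalPermissionsApi::class)
@Composable
fun VideoCaptureScreen(navController: NavController) {
    val context = LocalContext.current
    val lifecycleOwner = LocalLifecycleOwner.current
    val scope = rememberCoroutineScope()

    val permissionState = rememberMultiplePermissionsState(
        permissions = listOf(
            Manifest.permission.CAMERA,
            Manifest.permission.RECORD_AUDIO
        )
    )

    var recording: Recording? by remember { mutableStateOf<Recording?>(null) }
    val previewView = remember { PreviewView(context) }
    val videoCapture = remember { mutableStateOf<VideoCapture<Recorder>?>(null) }
    var recordingStarted by remember { mutableStateOf(false) }
    var audioEnabled by remember { mutableStateOf(false) }
    var cameraSelector by remember { mutableStateOf(CameraSelector.DEFAULT_BACK_CAMERA) }

    LaunchedEffect(Unit) {
        permissionState.launchMultiplePermissionRequest()
        videoCapture.value = context.createVideoCaptureUseCase(
            lifecycleOwner = lifecycleOwner,
            cameraSelector = cameraSelector,
            previewView = previewView
        )
    }

    PermissionsRequired(
        multiplePermissionsState = permissionState,
        permissionsNotGrantedContent = { /* explain why the permissions are needed */ },
        permissionsNotAvailableContent = { /* point the user to the app settings */ }
    ) {
        Box(modifier = Modifier.fillMaxSize()) {
            AndroidView(factory = { previewView }, modifier = Modifier.fillMaxSize())
            // Record/stop button plus the audio and camera-switch controls from
            // the snippets above; the consumer callback navigates to the preview
            // screen on VideoRecordEvent.Finalize
        }
    }
}
```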
ExoPlayer
`ExoPlayer` is an alternative to Android's MediaPlayer API for playing audio and video, both locally and over the Internet. It is easier to use and provides more features. It is also easy to customize and extend.
Now that we know what `ExoPlayer` is, let's create our next screen. Add the dependency:
```groovy
// ExoPlayer library
exoPlayerVersion = '2.18.1'
implementation "com.google.android.exoplayer:exoplayer:$exoPlayerVersion"
```
Our screen should look like this:
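The gist isn't embedded here; a sketch of the preview screen, assuming the recorded video's URI string is passed in as a navigation argument:

```kotlin
import android.net.Uri
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.runtime.Composable
import androidx.compose.runtime.DisposableEffect
import androidx.compose.runtime.remember
import androidx.compose.ui.Modifier
import androidx.compose.ui.platform.LocalContext
import androidx.compose.ui.viewinterop.AndroidView
import com.google.android.exoplayer2.ExoPlayer
import com.google.android.exoplayer2.MediaItem
import com.google.android.exoplayer2.ui.StyledPlayerView

@Composable
fun VideoPreviewScreen(uri: String) {
    val context = LocalContext.current

    // Build the player once, load the recorded video, and prepare playback
    val exoPlayer = remember {
        ExoPlayer.Builder(context).build().apply {
            setMediaItem(MediaItem.fromUri(Uri.parse(uri)))
            prepare()
        }
    }

    // Release the player when this composable leaves the composition
    DisposableEffect(Unit) {
        onDispose { exoPlayer.release() }
    }

    // Host the StyledPlayerView inside Compose and attach the player to it
    AndroidView(
        factory = { ctx ->
            StyledPlayerView(ctx).apply { player = exoPlayer }
        },
        modifier = Modifier.fillMaxSize()
    )
}
```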
We use a builder to create the `ExoPlayer`, set the URI of the video to be loaded, and then prepare the player.
We use an `AndroidView` to show our video, and we attach a `StyledPlayerView` to it.
`StyledPlayerView` is a high-level view for `Player` media playbacks. It displays video, subtitles, and album art during playback, and it displays playback controls using a `StyledPlayerControlView`.
The `StyledPlayerView` can be customized by setting attributes (or calling the corresponding methods) or by overriding drawables.
That's it for our video recorder. I hope you learned something new in this article and that you liked it.
You can find all of the source code in my GitHub repo.
Want to connect? Find me on GitHub or check out my portfolio website.
If you want to learn more about Jetpack Compose, take a look at these articles:
- Implement Bottom Sheet in Jetpack Compose
- Implement Horizontal and Vertical ViewPager in Jetpack Compose
- 2 Ways to Request Permissions in Jetpack Compose
Also, you can learn how to use interceptors to include access tokens in your requests by reading this article: