# AVCamBuildingACameraApp

Apple sample code.

## Install / Use

`/learn @AppTyrant/AVCamBuildingACameraAppREADME`
# AVCam: Building a Camera App
Capture photos with depth data and record video using the front and rear iPhone and iPad cameras.
## Overview
The iOS Camera app allows you to capture photos and movies from both the front and rear cameras. Depending on your device, the Camera app also supports the still capture of depth data, portrait effects matte, and Live Photos.
This sample code project, AVCam, shows you how to implement these capture features in your own camera app. It leverages basic functionality of the built-in front and rear iPhone and iPad cameras.
- Note: To use AVCam, you need an iOS device running iOS 13 or later. Because Xcode doesn’t have access to the device camera, this sample won't work in Simulator. AVCam hides buttons for modes that the current device doesn’t support, such as portrait effects matte delivery on an iPhone 7 Plus.
## Configure a Capture Session
[AVCaptureSession][1] accepts input data from capture devices like the camera and microphone. After receiving the input, AVCaptureSession marshals that data to appropriate outputs for processing, eventually resulting in a movie file or still photo. After configuring the capture session's inputs and outputs, you tell it to start—and later stop—capture.
```swift
private let session = AVCaptureSession()
```
AVCam selects the rear camera by default and configures a camera capture session to stream content to a video preview view. PreviewView is a custom [UIView][2] subclass backed by an [AVCaptureVideoPreviewLayer][3]. AVFoundation doesn't have a PreviewView class, but the sample code creates one to facilitate session management.
The following diagram shows how the session manages input devices and capture output:
![A diagram of the AVCam app's architecture, including input devices and capture output in relation to the main capture session.][image-1]
Delegate any interaction with the AVCaptureSession—including its inputs and outputs—to a dedicated serial dispatch queue (sessionQueue), so that the interaction doesn't block the main queue. Perform any configuration involving changes to a session's topology or disruptions to its running video stream on a separate dispatch queue, since session configuration always blocks execution of other tasks until the queue processes the change. Similarly, the sample code dispatches other tasks—such as resuming an interrupted session, toggling capture modes, switching cameras, and writing media to a file—to the session queue, so that their processing doesn’t block or delay user interaction with the app.
In contrast, the code dispatches tasks that affect the UI (such as updating the preview view) to the main queue, because AVCaptureVideoPreviewLayer, a subclass of [CALayer][4], is the backing layer for the sample’s preview view. You must manipulate UIView subclasses on the main thread for them to show up in a timely, interactive fashion.
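The threading pattern described above can be sketched as follows. This is a minimal illustration, not the sample's exact code: it assumes a view controller that owns the `session` property shown earlier, and the configuration body is a placeholder.

```swift
// A dedicated serial queue keeps all session work off the main thread.
private let sessionQueue = DispatchQueue(label: "session queue")

func configureAndStartSession() {
    sessionQueue.async {
        // Batch topology changes between begin/commitConfiguration so the
        // session applies them atomically.
        self.session.beginConfiguration()
        // ... add inputs and outputs here ...
        self.session.commitConfiguration()

        // startRunning() blocks until capture starts, so call it on the
        // session queue, never on the main queue.
        self.session.startRunning()

        DispatchQueue.main.async {
            // UI updates (for example, enabling controls) belong on the main queue.
        }
    }
}
```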
In viewDidLoad, AVCam creates a session and assigns it to the preview view:
```swift
previewView.session = session
```
For more information about configuring image capture sessions, see [Setting Up a Capture Session][5].
## Request Authorization for Access to Input Devices
Once you configure the session, it is ready to accept input. Each [AVCaptureDevice][6]—whether a camera or a mic—requires the user to authorize access. AVFoundation enumerates the authorization state using [AVAuthorizationStatus][7], which informs the app whether the user has restricted or denied access to a capture device.
For more information about preparing your app's Info.plist for custom authorization requests, see [Requesting Authorization for Media Capture][8].
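The check itself can be sketched like this. The AVFoundation calls (`authorizationStatus(for:)` and `requestAccess(for:completionHandler:)`) are real API; the `setupResult` property and the suspend/resume of `sessionQueue` are assumptions modeled on the sample's approach of deferring session setup until the user responds.

```swift
switch AVCaptureDevice.authorizationStatus(for: .video) {
case .authorized:
    // The user has previously granted access to the camera.
    break

case .notDetermined:
    // Suspend the session queue so setup waits for the user's answer.
    sessionQueue.suspend()
    AVCaptureDevice.requestAccess(for: .video) { granted in
        if !granted {
            self.setupResult = .notAuthorized
        }
        self.sessionQueue.resume()
    }

default:
    // The user previously denied access, or access is restricted.
    setupResult = .notAuthorized
}
```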
## Switch Between the Rear- and Front-Facing Cameras
The changeCamera method handles switching between cameras when the user taps a button in the UI. It uses a discovery session, which lists available device types in order of preference, and accepts the first device in its devices array. For example, the videoDeviceDiscoverySession in AVCam queries the device on which the app is running for available input devices. Furthermore, if a user's device has a broken camera, it won't be available in the devices array.
```swift
switch currentPosition {
case .unspecified, .front:
    preferredPosition = .back
    preferredDeviceType = .builtInDualCamera

case .back:
    preferredPosition = .front
    preferredDeviceType = .builtInTrueDepthCamera

@unknown default:
    print("Unknown capture position. Defaulting to back, dual-camera.")
    preferredPosition = .back
    preferredDeviceType = .builtInDualCamera
}
```
If the discovery session finds a camera in the proper position, it removes the previous input from the capture session and adds the new camera as an input.
```swift
// Remove the existing device input first, because AVCaptureSession doesn't support
// simultaneous use of the rear and front cameras.
self.session.removeInput(self.videoDeviceInput)

if self.session.canAddInput(videoDeviceInput) {
    NotificationCenter.default.removeObserver(self, name: .AVCaptureDeviceSubjectAreaDidChange, object: currentVideoDevice)
    NotificationCenter.default.addObserver(self, selector: #selector(self.subjectAreaDidChange), name: .AVCaptureDeviceSubjectAreaDidChange, object: videoDeviceInput.device)

    self.session.addInput(videoDeviceInput)
    self.videoDeviceInput = videoDeviceInput
} else {
    self.session.addInput(self.videoDeviceInput)
}
```
[View in Source][9]
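The discovery session feeding this logic can be constructed as below. `AVCaptureDevice.DiscoverySession` is real API; the particular device-type list is illustrative, since the preferred types vary by device and iOS version, and `preferredPosition`/`preferredDeviceType` come from the switch shown earlier.

```swift
// Lists matching devices in order of preference; unavailable (for example,
// broken) cameras simply don't appear in the `devices` array.
private let videoDeviceDiscoverySession = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInDualCamera, .builtInTrueDepthCamera, .builtInWideAngleCamera],
    mediaType: .video,
    position: .unspecified)

// Prefer a device matching both position and type; otherwise fall back to
// any device at the preferred position.
let devices = videoDeviceDiscoverySession.devices
let newVideoDevice = devices.first { $0.position == preferredPosition && $0.deviceType == preferredDeviceType }
    ?? devices.first { $0.position == preferredPosition }
```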
## Handle Interruptions and Errors
Interruptions such as phone calls, notifications from other apps, and music playback may occur during a capture session. Handle these interruptions by adding observers to listen for [AVCaptureSessionWasInterruptedNotification][10]:
```swift
NotificationCenter.default.addObserver(self,
                                       selector: #selector(sessionWasInterrupted),
                                       name: .AVCaptureSessionWasInterrupted,
                                       object: session)
NotificationCenter.default.addObserver(self,
                                       selector: #selector(sessionInterruptionEnded),
                                       name: .AVCaptureSessionInterruptionEnded,
                                       object: session)
```
[View in Source][11]
When AVCam receives an interruption notification, it can pause or suspend the session with an option to resume activity when the interruption ends. AVCam registers sessionWasInterrupted as a handler for receiving notifications, to inform the user when there's an interruption to the capture session:
```swift
if reason == .audioDeviceInUseByAnotherClient || reason == .videoDeviceInUseByAnotherClient {
    showResumeButton = true
} else if reason == .videoDeviceNotAvailableWithMultipleForegroundApps {
    // Fade-in a label to inform the user that the camera is unavailable.
    cameraUnavailableLabel.alpha = 0
    cameraUnavailableLabel.isHidden = false
    UIView.animate(withDuration: 0.25) {
        self.cameraUnavailableLabel.alpha = 1
    }
} else if reason == .videoDeviceNotAvailableDueToSystemPressure {
    print("Session stopped running due to shutdown system pressure level.")
}
```
[View in Source][12]
The camera view controller observes [AVCaptureSessionRuntimeError][13] to receive a notification when an error occurs:
```swift
NotificationCenter.default.addObserver(self,
                                       selector: #selector(sessionRuntimeError),
                                       name: .AVCaptureSessionRuntimeError,
                                       object: session)
```
When a runtime error occurs, restart the capture session:
```swift
// If media services were reset, and the last start succeeded, restart the session.
if error.code == .mediaServicesWereReset {
    sessionQueue.async {
        if self.isSessionRunning {
            self.session.startRunning()
            self.isSessionRunning = self.session.isRunning
        } else {
            DispatchQueue.main.async {
                self.resumeButton.isHidden = false
            }
        }
    }
} else {
    resumeButton.isHidden = false
}
```
[View in Source][14]
The capture session may also stop if the device sustains system pressure, such as overheating. The camera won’t degrade capture quality or drop frames on its own; if it reaches a critical point, the camera stops working, or the device shuts off. To avoid surprising your users, you may want your app to manually lower the frame rate, turn off depth, or modulate performance based on feedback from [AVCaptureSystemPressureState][15]:
```swift
let pressureLevel = systemPressureState.level
if pressureLevel == .serious || pressureLevel == .critical {
    if self.movieFileOutput == nil || self.movieFileOutput?.isRecording == false {
        do {
            try self.videoDeviceInput.device.lockForConfiguration()
            print("WARNING: Reached elevated system pressure level: \(pressureLevel). Throttling frame rate.")
            self.videoDeviceInput.device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 20)
            self.videoDeviceInput.device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 15)
            self.videoDeviceInput.device.unlockForConfiguration()
        } catch {
            print("Could not lock device for configuration: \(error)")
        }
    }
} else if pressureLevel == .shutdown {
    print("Session stopped running due to shutdown system pressure level.")
}
```
[View in Source][16]
## Capture a Photo
Taking a photo happens on the session queue. The process begins by updating the [AVCapturePhotoOutput][17] connection to match the video orientation of the video preview layer. This enables the camera to accurately capture what the user sees onscreen:
```swift
if let photoOutputConnection = self.photoOutput.connection(with: .video) {
    photoOutputConnection.videoOrientation = videoPreviewLayerOrientation!
}
```