Three Ways to Record Video on iOS, in Detail

References

http://www.jianshu.com/p/16cb14f53933

https://developer.apple.com/library/content/samplecode/AVSimpleEditoriOS/Introduction/Intro.html

https://github.com/objcio/VideoCaptureDemo

https://github.com/gsixxxx/DTSmallVideo

https://github.com/AndyFightting/VideoRecord

A bit of up-front grumbling

This was my first time building a custom video-recording interface, with all of its parameter settings. I have to say that recording video comes in many flavors, with many approaches, and is quite complicated; the material online is a mess, and really figuring it out takes a lot of effort. I went through a large amount of material and reorganized it along my own train of thought. Let me say up front: this covers only simple recording, compression, cropping, exporting, and so on. Filters, background music, subtitles and the like are not covered. The important thing here is the process: get the main flow down first, everything else is icing on the cake.

The demo address is attached at the end of this article.

The post is easier to read on my blog, which has a table of contents down the left side:
Click here for the blog version of this article

Mind map

To get a general sense of the recording approaches, take a look at this picture.

(Mind map: the basic classes and properties used by the three recording approaches)

The first approach uses the system recorder and is relatively simple, so only the two approaches after it are introduced in detail.

Screenshots

(Screenshots 1-6: the demo's recording UI)

The demo implements the three approaches separately, to make them easier to study. It supports flash, switching cameras, recording video at different sizes, and so on.

1.UIImagePickerController

This approach only lets you set a few simple parameters, so the degree of customization is low: essentially just the controls on the UI, the video quality, and so on.

#import <MobileCoreServices/MobileCoreServices.h> // for kUTTypeMovie

- (void)viewDidLoad {
    [super viewDidLoad];
    if (![self isVideoRecordingAvailable]) {
        return;
    }
    self.sourceType = UIImagePickerControllerSourceTypeCamera;
    self.mediaTypes = @[(NSString *)kUTTypeMovie];
    self.delegate = self;
    // Hide the system's built-in controls
    self.showsCameraControls = NO;
    // [self switchCameraIsFront:NO];
    // Set the video quality
    self.videoQuality = UIImagePickerControllerQualityTypeMedium;
    // Set the flash mode
    self.cameraFlashMode = UIImagePickerControllerCameraFlashModeAuto;
    // Set the maximum recording duration
    self.videoMaximumDuration = 20;
}

- (BOOL)isVideoRecordingAvailable {
    if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
        NSArray *availableMediaTypes = [UIImagePickerController availableMediaTypesForSourceType:UIImagePickerControllerSourceTypeCamera];
        if ([availableMediaTypes containsObject:(NSString *)kUTTypeMovie]) {
            return YES;
        }
    }
    return NO;
}

- (void)switchCameraIsFront:(BOOL)front {
    if (front) {
        if ([UIImagePickerController isCameraDeviceAvailable:UIImagePickerControllerCameraDeviceFront]) {
            [self setCameraDevice:UIImagePickerControllerCameraDeviceFront];
        }
    } else {
        if ([UIImagePickerController isCameraDeviceAvailable:UIImagePickerControllerCameraDeviceRear]) {
            [self setCameraDevice:UIImagePickerControllerCameraDeviceRear];
        }
    }
}
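
Once configured like this, recording is driven programmatically. A minimal sketch, assuming the class above is a UIImagePickerController subclass that also adopts UIImagePickerControllerDelegate and UINavigationControllerDelegate (the method names startRecord/stopRecord are my own, not from the original):

// Start and stop recording programmatically; startVideoCapture and
// stopVideoCapture are the real UIImagePickerController APIs.
- (void)startRecord {
    [self startVideoCapture];   // begins writing to a temporary file
}

- (void)stopRecord {
    [self stopVideoCapture];    // the delegate below is called with the file URL
}

- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info {
    // UIImagePickerControllerMediaURL points at the recorded movie file
    NSURL *videoURL = info[UIImagePickerControllerMediaURL];
    NSLog(@"recorded video at: %@", videoURL);
}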

2.AVCaptureSession+AVCaptureMovieFileOutput

Workflow:

1. Create the capture session.
2. Set up the video input.
3. Set up the audio input.
4. Set up the output source. AVCaptureMovieFileOutput merges the video and audio data into a single file (to get the video or audio data separately, you need the delegate-based outputs of method 3). You assign a file path to AVCaptureMovieFileOutput, and once recording starts it writes data to that path.
5. Add the video preview layer.
6. Start capturing. At this point no data is written yet; writing only begins after the user taps record.

0 create capture session

self.session = [[AVCaptureSession alloc] init];
if ([_session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
    // Set the resolution
    _session.sessionPreset = AVCaptureSessionPreset640x480;
}

1 video input

- (void)setUpVideo {
    // 1.1 Get the video input device (the rear camera)
    AVCaptureDevice *videoCaptureDevice = [self getCameraDeviceWithPosition:AVCaptureDevicePositionBack];
    // Video HDR (high dynamic range)
    // videoCaptureDevice.videoHDREnabled = YES;
    // Set the minimum frame duration (i.e. cap the frame rate)
    // videoCaptureDevice.activeVideoMinFrameDuration = CMTimeMake(1, 60);
    // 1.2 Create the video input source
    NSError *error = nil;
    self.videoInput = [[AVCaptureDeviceInput alloc] initWithDevice:videoCaptureDevice error:&error];
    // 1.3 Add the video input source to the session
    if ([self.session canAddInput:self.videoInput]) {
        [self.session addInput:self.videoInput];
    }
}
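
The getCameraDeviceWithPosition: helper is referenced but never shown in these snippets. A minimal sketch, using the same pre-iOS 10 devicesWithMediaType: API as the rest of the code:

// Walk the available video devices and return the one at the requested
// position (front or back); nil if no such camera exists.
- (AVCaptureDevice *)getCameraDeviceWithPosition:(AVCaptureDevicePosition)position {
    NSArray *cameras = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *camera in cameras) {
        if (camera.position == position) {
            return camera;
        }
    }
    return nil;
}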

2 audio input

// 2.1 Get the audio input device
AVCaptureDevice *audioCaptureDevice = [[AVCaptureDevice devicesWithMediaType:AVMediaTypeAudio] firstObject];
NSError *error = nil;
// 2.2 Create the audio input source
self.audioInput = [[AVCaptureDeviceInput alloc] initWithDevice:audioCaptureDevice error:&error];
// 2.3 Add the audio input source to the session
if ([self.session canAddInput:self.audioInput]) {
    [self.session addInput:self.audioInput];
}

3 output source settings

- (void)setUpFileOut {
    // 3.1 Initialize the movie file output, which produces the recorded file
    self.FileOutput = [[AVCaptureMovieFileOutput alloc] init];
    // 3.2 Set some properties on the output's video connection
    AVCaptureConnection *captureConnection = [self.FileOutput connectionWithMediaType:AVMediaTypeVideo];
    // Video stabilization was introduced in iOS 6 with the iPhone 4S. The iPhone 6 added a more
    // robust and smooth mode, so-called cinematic video stabilization, and the API changed
    // accordingly (at the time not yet reflected in the docs, but visible in the headers).
    // Stabilization is configured on the AVCaptureConnection, not on the capture device, and
    // not every device format supports every stabilization mode, so check for support first:
    if ([captureConnection isVideoStabilizationSupported]) {
        captureConnection.preferredVideoStabilizationMode = AVCaptureVideoStabilizationModeAuto;
    }
    // Keep the video orientation consistent with the preview layer
    captureConnection.videoOrientation = [self.previewlayer connection].videoOrientation;
    // 3.3 Add the output to the session
    if ([_session canAddOutput:_FileOutput]) {
        [_session addOutput:_FileOutput];
    }
}

4 video preview layer

As soon as you enter the recording interface, the session is already capturing data and showing it on the preview layer; only when the user chooses to record is the captured data also written to a file.

- (void)setUpPreviewLayerWithType:(FMVideoViewType)type {
    CGRect rect = CGRectZero;
    switch (type) {
        case Type1X1:
            rect = CGRectMake(0, 0, kScreenWidth, kScreenWidth);
            break;
        case Type4X3:
            rect = CGRectMake(0, 0, kScreenWidth, kScreenWidth * 4 / 3);
            break;
        case TypeFullScreen:
            rect = [UIScreen mainScreen].bounds;
            break;
        default:
            rect = [UIScreen mainScreen].bounds;
            break;
    }
    self.previewlayer.frame = rect;
    [_superView.layer insertSublayer:self.previewlayer atIndex:0];
}
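
These snippets assume self.previewlayer already exists. A lazy getter along these lines is my assumption of how it gets created, not code from the original:

// Lazily create the preview layer from the capture session and let it
// fill its bounds, cropping if the aspect ratios differ.
- (AVCaptureVideoPreviewLayer *)previewlayer {
    if (!_previewlayer) {
        _previewlayer = [AVCaptureVideoPreviewLayer layerWithSession:self.session];
        _previewlayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    }
    return _previewlayer;
}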

5 Start capturing

[self.session startRunning];

6 start recording

- (void)writeDataTofile {
    NSString *videoPath = [self createVideoFilePath];
    self.videoUrl = [NSURL fileURLWithPath:videoPath];
    [self.FileOutput startRecordingToOutputFileURL:self.videoUrl recordingDelegate:self];
}
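
Recording stops with stopRecording, and the result comes back through AVCaptureFileOutputRecordingDelegate. The delegate method below is the real API; the handling inside it is just a sketch:

// Stop writing; the delegate fires once the file has been finalized.
- (void)stopRecord {
    [self.FileOutput stopRecording];
}

#pragma mark - AVCaptureFileOutputRecordingDelegate
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error {
    if (error) {
        NSLog(@"recording failed: %@", error);
        return;
    }
    NSLog(@"finished recording to: %@", outputFileURL);
}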

3.AVCaptureSession+AVAssetWriter

Workflow:

1. Create the capture session.
2. Set up the video input and output.
3. Set up the audio input and output.
4. Add the video preview layer.
5. Start capturing data. No data is written yet; writing only begins after the user taps record.
6. Initialize AVAssetWriter. We take the captured video and audio data streams and write them to the file through AVAssetWriter ourselves; this step we have to implement on our own.

1 create capture session

Everything needs to run on the same queue, and ideally the queue is created only once.

self.session = [[AVCaptureSession alloc] init];
if ([_session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
    // Set the resolution
    _session.sessionPreset = AVCaptureSessionPreset640x480;
}
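
A sketch of the kind of serial queue this implies. The videoQueue name matches the delegate code below; the queue label and the lazy setup are my assumptions:

// Create one serial queue up front and reuse it for all capture callbacks
// and writer work, so samples arrive and are written in order.
- (dispatch_queue_t)videoQueue {
    if (!_videoQueue) {
        _videoQueue = dispatch_queue_create("com.example.videoQueue", DISPATCH_QUEUE_SERIAL);
    }
    return _videoQueue;
}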

2 set video input and output

- (void)setUpVideo {
    // 2.1 Get the video input device (the rear camera)
    AVCaptureDevice *videoCaptureDevice = [self getCameraDeviceWithPosition:AVCaptureDevicePositionBack];
    // 2.2 Create the video input source
    NSError *error = nil;
    self.videoInput = [[AVCaptureDeviceInput alloc] initWithDevice:videoCaptureDevice error:&error];
    // 2.3 Add the video input source to the session
    if ([self.session canAddInput:self.videoInput]) {
        [self.session addInput:self.videoInput];
    }
    self.videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    // Discard late frames immediately to save memory (defaults to YES)
    self.videoOutput.alwaysDiscardsLateVideoFrames = YES;
    [self.videoOutput setSampleBufferDelegate:self queue:self.videoQueue];
    if ([self.session canAddOutput:self.videoOutput]) {
        [self.session addOutput:self.videoOutput];
    }
}

3 set audio input and output

- (void)setUpAudio {
    // 3.1 Get the audio input device
    AVCaptureDevice *audioCaptureDevice = [[AVCaptureDevice devicesWithMediaType:AVMediaTypeAudio] firstObject];
    NSError *error = nil;
    // 3.2 Create the audio input source
    self.audioInput = [[AVCaptureDeviceInput alloc] initWithDevice:audioCaptureDevice error:&error];
    // 3.3 Add the audio input source to the session
    if ([self.session canAddInput:self.audioInput]) {
        [self.session addInput:self.audioInput];
    }
    self.audioOutput = [[AVCaptureAudioDataOutput alloc] init];
    [self.audioOutput setSampleBufferDelegate:self queue:self.videoQueue];
    if ([self.session canAddOutput:self.audioOutput]) {
        [self.session addOutput:self.audioOutput];
    }
}

4 add video preview layer

- (void)setUpPreviewLayerWithType:(FMVideoViewType)type {
    CGRect rect = CGRectZero;
    switch (type) {
        case Type1X1:
            rect = CGRectMake(0, 0, kScreenWidth, kScreenWidth);
            break;
        case Type4X3:
            rect = CGRectMake(0, 0, kScreenWidth, kScreenWidth * 4 / 3);
            break;
        case TypeFullScreen:
            rect = [UIScreen mainScreen].bounds;
            break;
        default:
            rect = [UIScreen mainScreen].bounds;
            break;
    }
    self.previewlayer.frame = rect;
    [_superView.layer insertSublayer:self.previewlayer atIndex:0];
}

5 Start capturing

[self.session startRunning];

6 initialize AVAssetWriter

Writing data with AVAssetWriter needs to happen on a background thread, and every write must happen on that same thread.

- (void)setUpWriter {
    self.videoUrl = [[NSURL alloc] initFileURLWithPath:[self createVideoFilePath]];
    self.writeManager = [[AVAssetWriteManager alloc] initWithURL:self.videoUrl viewType:_viewType];
    self.writeManager.delegate = self;
}
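
The createVideoFilePath helper (also used in method 2) is not shown in the original snippets. A minimal sketch, with the file name and directory layout as my assumptions:

// Build a timestamped .mp4 path in the Caches directory for the recording.
- (NSString *)createVideoFilePath {
    NSString *cachesDir = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES).firstObject;
    NSString *fileName = [NSString stringWithFormat:@"%.0f.mp4", [[NSDate date] timeIntervalSince1970]];
    return [cachesDir stringByAppendingPathComponent:fileName];
}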

7 Process the captured data streams

Video and audio data need to be processed separately

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    @autoreleasepool {
        // Video
        if (connection == [self.videoOutput connectionWithMediaType:AVMediaTypeVideo]) {
            if (!self.writeManager.outputVideoFormatDescription) {
                @synchronized(self) {
                    CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
                    self.writeManager.outputVideoFormatDescription = formatDescription;
                }
            } else {
                @synchronized(self) {
                    if (self.writeManager.writeState == FMRecordStateRecording) {
                        [self.writeManager appendSampleBuffer:sampleBuffer ofMediaType:AVMediaTypeVideo];
                    }
                }
            }
        }
        // Audio
        if (connection == [self.audioOutput connectionWithMediaType:AVMediaTypeAudio]) {
            if (!self.writeManager.outputAudioFormatDescription) {
                @synchronized(self) {
                    CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
                    self.writeManager.outputAudioFormatDescription = formatDescription;
                }
            } else {
                @synchronized(self) {
                    if (self.writeManager.writeState == FMRecordStateRecording) {
                        [self.writeManager appendSampleBuffer:sampleBuffer ofMediaType:AVMediaTypeAudio];
                    }
                }
            }
        }
    }
}

At this point we have the raw data, so we can configure the writer's parameters.

- (void)setUpWriter {
    self.assetWriter = [AVAssetWriter assetWriterWithURL:self.videoUrl fileType:AVFileTypeMPEG4 error:nil];
    // Pixel count of the output video
    NSInteger numPixels = self.outputSize.width * self.outputSize.height;
    // Bits per pixel
    CGFloat bitsPerPixel = 6;
    NSInteger bitsPerSecond = numPixels * bitsPerPixel;
    // Bit rate and frame rate settings
    NSDictionary *compressionProperties = @{AVVideoAverageBitRateKey : @(bitsPerSecond),
                                            AVVideoExpectedSourceFrameRateKey : @(30),
                                            AVVideoMaxKeyFrameIntervalKey : @(30),
                                            AVVideoProfileLevelKey : AVVideoProfileLevelH264BaselineAutoLevel};
    // Video settings
    self.videoCompressionSettings = @{AVVideoCodecKey : AVVideoCodecH264,
                                      AVVideoScalingModeKey : AVVideoScalingModeResizeAspectFill,
                                      AVVideoWidthKey : @(self.outputSize.height),
                                      AVVideoHeightKey : @(self.outputSize.width),
                                      AVVideoCompressionPropertiesKey : compressionProperties};
    _assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:self.videoCompressionSettings];
    // expectsMediaDataInRealTime must be YES: the data comes from the capture session in real time
    _assetWriterVideoInput.expectsMediaDataInRealTime = YES;
    _assetWriterVideoInput.transform = CGAffineTransformMakeRotation(M_PI / 2);
    // Audio settings
    self.audioCompressionSettings = @{AVEncoderBitRatePerChannelKey : @(28000),
                                      AVFormatIDKey : @(kAudioFormatMPEG4AAC),
                                      AVNumberOfChannelsKey : @(1),
                                      AVSampleRateKey : @(22050)};
    _assetWriterAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:self.audioCompressionSettings];
    _assetWriterAudioInput.expectsMediaDataInRealTime = YES;
    if ([_assetWriter canAddInput:_assetWriterVideoInput]) {
        [_assetWriter addInput:_assetWriterVideoInput];
    } else {
        NSLog(@"AssetWriter videoInput append Failed");
    }
    if ([_assetWriter canAddInput:_assetWriterAudioInput]) {
        [_assetWriter addInput:_assetWriterAudioInput];
    } else {
        NSLog(@"AssetWriter audioInput append Failed");
    }
    self.writeState = FMRecordStateRecording;
}

Once the parameters are set, we can write data to the file. The AVAssetWriter writing flow is a bit involved, so in the demo I split it out into a new class, AVAssetWriteManager, which handles the data on its own; that keeps the logic much clearer.
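
For reference, a sketch of what the manager's append path might look like. AVAssetWriteManager is the demo's own class, so the real implementation may differ; startWriting, startSessionAtSourceTime: and appendSampleBuffer: are the actual AVAssetWriter APIs:

// Start the writer session on the first buffer, then feed each buffer
// to the matching writer input on the serial queue.
- (void)appendSampleBuffer:(CMSampleBufferRef)sampleBuffer ofMediaType:(NSString *)mediaType {
    CMTime presentationTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    if (self.assetWriter.status == AVAssetWriterStatusUnknown) {
        [self.assetWriter startWriting];
        [self.assetWriter startSessionAtSourceTime:presentationTime]; // anchor the timeline
    }
    AVAssetWriterInput *input = [mediaType isEqualToString:AVMediaTypeVideo]
        ? _assetWriterVideoInput : _assetWriterAudioInput;
    // Only append when the input can accept more data; log the writer error otherwise
    if (input.isReadyForMoreMediaData) {
        if (![input appendSampleBuffer:sampleBuffer]) {
            NSLog(@"append %@ buffer failed: %@", mediaType, self.assetWriter.error);
        }
    }
}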

Similarities and differences between FileOutput and Writer

From the two workflows above we can see:

What is the same: data capture is done by AVCaptureSession in both cases, the video and audio inputs are the same, and the preview layer is the same.

What is different: the output. AVCaptureMovieFileOutput needs only a single output; you give it a file path and the video and audio are written to that path, with no other complicated work. AVAssetWriter requires two separate outputs, AVCaptureVideoDataOutput and AVCaptureAudioDataOutput; you receive each output's data yourself and process it accordingly.

The configurable parameters also differ: AVAssetWriter lets you configure many more of them.

Video cropping also differs. With AVCaptureMovieFileOutput the system has already written the data to a file, so to crop the video we first have to get the complete video back out of the original file and then process it. With AVAssetWriter we hold the data stream ourselves, before any video has been composed, and we process the stream directly. So the two are cropped in different ways.
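
For the file-based case, here is a sketch of trimming an already-recorded file with AVAssetExportSession, a standard AVFoundation approach; the URLs and the time range are placeholders:

// Trim a finished recording to a time range and re-export it. This works on
// the completed file, which is why AVCaptureMovieFileOutput recordings have
// to be processed after the fact.
AVAsset *asset = [AVAsset assetWithURL:recordedFileURL]; // URL of the finished recording
AVAssetExportSession *export =
    [[AVAssetExportSession alloc] initWithAsset:asset
                                     presetName:AVAssetExportPresetMediumQuality];
export.outputURL = trimmedFileURL;                  // destination URL, placeholder
export.outputFileType = AVFileTypeMPEG4;
export.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeMake(10, 1)); // keep the first 10 seconds
[export exportAsynchronouslyWithCompletionHandler:^{
    if (export.status == AVAssetExportSessionStatusCompleted) {
        NSLog(@"trimmed video exported to: %@", export.outputURL);
    }
}];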

Other things, such as adding background music or watermarks, also differ, but they are not covered here. That is about it; the article is already a bit long. All of this is my own collation of the material I collected, and some mistakes cannot be ruled out; I hope it serves as a useful reference and that you get something out of it. If it is convenient, please give the repo a star; that would mean a lot of support to me.

Demo address