This post builds on the detailed GPUImage analyses in parts 1–4.
It introduces video recording: capturing video, applying filters, and saving the result to the phone.
GPUImageVideoCamera captures both audio and video. Audio samples are sent directly to GPUImageMovieWriter; video frames enter the response chain with the camera as the source, are rendered through the filters, and are then written to GPUImageMovieWriter while simultaneously being displayed on screen through a GPUImageView.
1, AVFoundation classes
AVCaptureSession *_captureSession; // coordinates the data flow from the input devices to the outputs
AVCaptureDevice *_inputCamera; // camera device
AVCaptureDevice *_microphone; // microphone device
AVCaptureDeviceInput *videoInput; // camera input
AVCaptureVideoDataOutput *videoOutput; // camera output
AVCaptureDeviceInput *audioInput; // microphone input
AVCaptureAudioDataOutput *audioOutput; // microphone output
AVAssetWriter *assetWriter; // writes the multimedia data into a file
AVAssetWriterInput *assetWriterAudioInput; // audio input of the writer
AVAssetWriterInput *assetWriterVideoInput; // video input of the writer
AVAssetWriterInputPixelBufferAdaptor *assetWriterPixelBufferInput; // pixel-buffer adaptor for the video input
2, Flow chart
3, Process analysis
1, find the physical devices _inputCamera (camera) and _microphone (microphone), and create the inputs videoInput and audioInput from them;
2, add videoInput and audioInput as inputs of _captureSession, add videoOutput and audioOutput as outputs of _captureSession, and set the sample-buffer delegates of videoOutput and audioOutput;
3, _captureSession calls startRunning to begin capturing;
4, when audio data arrives, it is forwarded to the previously configured audioEncodingTarget, which writes it by calling appendSampleBuffer: on assetWriterAudioInput;
5, when video data arrives, it is sent into the response chain; after processing, it is written by calling appendPixelBuffer:withPresentationTime: on assetWriterPixelBufferInput;
6, when the user chooses to save, the file is written into the phone's photo library through ALAssetsLibrary.
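The session wiring in steps 1–3 can be sketched as follows. This is a minimal sketch of what GPUImageVideoCamera does internally, not its actual implementation; error handling is omitted, and the dispatch-queue names are illustrative.

```objectivec
// Step 1: find the physical devices and create capture inputs from them.
_inputCamera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
_microphone  = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
videoInput = [AVCaptureDeviceInput deviceInputWithDevice:_inputCamera error:nil];
audioInput = [AVCaptureDeviceInput deviceInputWithDevice:_microphone error:nil];

// Step 2: attach inputs and outputs to the session and set the delegates
// that will receive the sample buffers.
_captureSession = [[AVCaptureSession alloc] init];
[_captureSession beginConfiguration];
if ([_captureSession canAddInput:videoInput]) [_captureSession addInput:videoInput];
if ([_captureSession canAddInput:audioInput]) [_captureSession addInput:audioInput];
videoOutput = [[AVCaptureVideoDataOutput alloc] init];
audioOutput = [[AVCaptureAudioDataOutput alloc] init];
[videoOutput setSampleBufferDelegate:self queue:videoProcessingQueue]; // illustrative queue names
[audioOutput setSampleBufferDelegate:self queue:audioProcessingQueue];
if ([_captureSession canAddOutput:videoOutput]) [_captureSession addOutput:videoOutput];
if ([_captureSession canAddOutput:audioOutput]) [_captureSession addOutput:audioOutput];
[_captureSession commitConfiguration];

// Step 3: start capturing.
[_captureSession startRunning];
```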
- Initialize the camera, filter, and preview view:

_videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack]; // initialize the camera
_videoCamera.outputImageOrientation = [UIApplication sharedApplication].statusBarOrientation;
_filter = [[GPUImageSepiaFilter alloc] init]; // the filter
_filterView = [[GPUImageView alloc] initWithFrame:self.view.frame];
self.view = _filterView;
[_videoCamera addTarget:_filter]; // build the response chain
[_filter addTarget:_filterView];
[_videoCamera startCameraCapture];
- Start recording:

unlink([pathToMovie UTF8String]); // if the file already exists, AVAssetWriter throws an exception; delete the old file
_movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(480, 640)];
_movieWriter.encodingLiveVideo = YES;
[_filter addTarget:_movieWriter];
_videoCamera.audioEncodingTarget = _movieWriter;
[_movieWriter startRecording];
- End recording:

[_filter removeTarget:_movieWriter];
_videoCamera.audioEncodingTarget = nil;
[_movieWriter finishRecording];
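Step 6 (saving to the photo library) is not shown in the snippets above. A minimal sketch using ALAssetsLibrary might look like the following; note that ALAssetsLibrary has been deprecated since iOS 9 in favor of the Photos framework, and `movieURL` is assumed to be the file URL the movie writer recorded to.

```objectivec
// Save the recorded file into the photo library once writing has finished.
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:movieURL])
{
    [library writeVideoAtPathToSavedPhotosAlbum:movieURL
                                completionBlock:^(NSURL *assetURL, NSError *error) {
        if (error) {
            NSLog(@"Failed to save video: %@", error);
        } else {
            NSLog(@"Video saved to photo library: %@", assetURL);
        }
    }];
}
```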
The recorded video is as follows:
The video in the system's photo library:
1, At which step of the response chain is the filter applied to the video?
2, Why does writing video require one more class than writing audio?
The core class is GPUImageMovieWriter, which coordinates audio and video information.
- Answer to question 1: GPUImageVideoCamera collects YUV frames from the camera, converts them to RGB, and serves as the starting point of the response chain; GPUImageMovieWriter implements the GPUImageInput protocol, serves as the end point of the chain, and accepts the filter-processed video. The filter is therefore applied in the chain between these two.
Answer to question 2: the extra class, AVAssetWriterInputPixelBufferAdaptor, provides a CVPixelBufferPool so pixel buffers can be allocated from a reusable pool, which is faster than creating a new buffer for every frame.
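To illustrate answer 2, here is a hedged sketch of drawing a reusable buffer from the adaptor's pool when writing a frame. This mirrors the general pattern GPUImageMovieWriter follows rather than its exact code; `frameTime` and the rendering step are placeholders.

```objectivec
// Allocate a pixel buffer from the adaptor's pool instead of creating
// a fresh CVPixelBuffer for every frame.
CVPixelBufferRef pixelBuffer = NULL;
CVReturn status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                        [assetWriterPixelBufferInput pixelBufferPool],
                        &pixelBuffer);
if (status == kCVReturnSuccess)
{
    // ... render the processed frame into pixelBuffer ...
    [assetWriterPixelBufferInput appendPixelBuffer:pixelBuffer
                              withPresentationTime:frameTime];
    CVPixelBufferRelease(pixelBuffer); // return the buffer to the pool
}
```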