How to quickly develop a complete iOS live streaming app (capture)

Preface

Before reading this, if you are not yet familiar with how live streaming works, please read the companion article: How to quickly develop a complete iOS live streaming app (principles).

To develop a live streaming app, you first need to capture the anchor's video and audio, and then push it to a streaming media server. This article focuses on how to capture the anchor's video and audio. The current code can switch between the front and rear cameras and show a focus cursor; beauty filters are not implemented yet. Other live streaming features will be released in follow-up articles.

If you like my articles, you can follow me on Weibo: Yuan Zheng Seemygo, and also check out the Xiaomage iOS training course. More updates will follow; if you have any questions, feel free to leave me a message on Jianshu (Yuan Zheng Seemygo).

Effect

To show the capture result, I sacrificed myself for the screenshot; please ignore the person and focus on the technology.

[Screenshot: capture result (ignore me.png)]

Basic knowledge introduction

  • AVFoundation: the framework used to capture audio and video data.
  • AVCaptureDevice: a hardware device such as the microphone or a camera. Through this object you can configure the device's physical properties (camera focus, white balance, and so on).
  • AVCaptureDeviceInput: a hardware input object. You create an AVCaptureDeviceInput from the corresponding AVCaptureDevice; it manages the data coming from that device.
  • AVCaptureOutput: a hardware output object that receives the captured data. You usually use its subclasses AVCaptureAudioDataOutput (audio data output) and AVCaptureVideoDataOutput (video data output).
  • AVCaptureConnection: when an input and an output are added to an AVCaptureSession, the session establishes a connection between them; the connection object can be obtained from the AVCaptureOutput.
  • AVCaptureVideoPreviewLayer: the camera preview layer, used to see what is being photographed or recorded in real time. It must be created with the corresponding AVCaptureSession, because the session carries the video input data that the layer displays.
  • AVCaptureSession: coordinates the data transfer between inputs and outputs and drives the hardware devices. Working principle: the capture session sits between the app and the system, acting as the bridge between the app and the hardware. You only add the hardware input and output objects to the session; the session automatically connects the inputs and outputs it holds, and the devices can then transmit audio and video data. Real-life analogy: the tenant (input), the agent (session), and the landlord (output). Both the tenant and the landlord register with the agent, the agent connects them, and afterwards tenant and landlord can contact each other directly. A minimal code sketch of this wiring follows below.
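To make the wiring concrete, here is a minimal sketch (my own illustration, assuming it lives in a view controller; names are illustrative and error handling is omitted) showing how a session connects one camera input, one video output, and a preview layer. The full capture code used in this article follows in the next section.

#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>

// Minimal sketch of the session / input / output wiring described above.
// In real code, keep a strong reference to the session (see the full listing below).
- (void)wireUpMinimalSession
{
    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // Input: wrap the default camera device in an AVCaptureDeviceInput.
    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *cameraInput = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
    if ([session canAddInput:cameraInput]) {
        [session addInput:cameraInput];
    }

    // Output: once added, the session connects it to the input automatically.
    AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    if ([session canAddOutput:videoOutput]) {
        [session addOutput:videoOutput];
    }

    // Preview layer: displays the session's video data on screen.
    AVCaptureVideoPreviewLayer *preview = [AVCaptureVideoPreviewLayer layerWithSession:session];
    preview.frame = self.view.bounds;
    [self.view.layer addSublayer:preview];

    // Nothing flows until the session is running.
    [session startRunning];
}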

Steps to capture audio and video (see also the official documentation):

  • 1. Create an AVCaptureSession object.
  • 2. Get the AVCaptureDevice video device (camera) and audio device (microphone). Note that the device itself does not supply input data; it is only used to configure the hardware.
  • 3. Create the audio/video hardware input objects (AVCaptureDeviceInput) from the audio/video hardware (AVCaptureDevice) to manage data input.
  • 4. Create a video data output object (AVCaptureVideoDataOutput) and set its sample buffer delegate (setSampleBufferDelegate:queue:), through which the captured video data is delivered.
  • 5. Create an audio data output object (AVCaptureAudioDataOutput) and set its sample buffer delegate (setSampleBufferDelegate:queue:), through which the captured audio data is delivered.
  • 6. Add the data input objects (AVCaptureDeviceInput) and the data output objects (AVCaptureOutput) to the media session (AVCaptureSession); the session automatically connects the audio/video inputs to their outputs.
  • 7. Create a video preview layer (AVCaptureVideoPreviewLayer), associate it with the session, and add the layer to the container view's layer.
  • 8. Start the AVCaptureSession; only after it is running does data flow from the inputs to the outputs.
// Capture audio and video
- (void)setupCaputureVideo
{
    // 1. Create the capture session. Keep a strong reference, otherwise it will be released.
    AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
    _captureSession = captureSession;

    // 2. Get the camera device (here the front camera).
    AVCaptureDevice *videoDevice = [self getVideoDevice:AVCaptureDevicePositionFront];

    // 3. Get the audio device.
    AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];

    // 4. Create the corresponding video device input object.
    AVCaptureDeviceInput *videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:nil];
    _currentVideoDeviceInput = videoDeviceInput;

    // 5. Create the corresponding audio device input object.
    AVCaptureDeviceInput *audioDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:nil];

    // 6. Add the inputs to the session.
    // Note: always check whether the input can be added; the session cannot add a nil input.
    // 6.1 Add the video input.
    if ([captureSession canAddInput:videoDeviceInput]) {
        [captureSession addInput:videoDeviceInput];
    }
    // 6.2 Add the audio input.
    if ([captureSession canAddInput:audioDeviceInput]) {
        [captureSession addInput:audioDeviceInput];
    }

    // 7. Create the video data output object.
    AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    // 7.1 Set the delegate that receives the video sample buffers.
    // Note: the queue must be a serial queue in order to receive data, and it cannot be nil.
    dispatch_queue_t videoQueue = dispatch_queue_create("Video Capture Queue", DISPATCH_QUEUE_SERIAL);
    [videoOutput setSampleBufferDelegate:self queue:videoQueue];
    if ([captureSession canAddOutput:videoOutput]) {
        [captureSession addOutput:videoOutput];
    }

    // 8. Create the audio data output object.
    AVCaptureAudioDataOutput *audioOutput = [[AVCaptureAudioDataOutput alloc] init];
    // 8.1 Set the delegate that receives the audio sample buffers.
    // Note: the queue must be a serial queue in order to receive data, and it cannot be nil.
    dispatch_queue_t audioQueue = dispatch_queue_create("Audio Capture Queue", DISPATCH_QUEUE_SERIAL);
    [audioOutput setSampleBufferDelegate:self queue:audioQueue];
    if ([captureSession canAddOutput:audioOutput]) {
        [captureSession addOutput:audioOutput];
    }

    // 9. Get the video connection, used later to tell video data from audio data.
    _videoConnection = [videoOutput connectionWithMediaType:AVMediaTypeVideo];

    // 10. Add the video preview layer.
    AVCaptureVideoPreviewLayer *previedLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
    previedLayer.frame = [UIScreen mainScreen].bounds;
    [self.view.layer insertSublayer:previedLayer atIndex:0];
    _previedLayer = previedLayer;

    // 11. Start the session.
    [captureSession startRunning];
}

// Get the camera device for the given position.
- (AVCaptureDevice *)getVideoDevice:(AVCaptureDevicePosition)position
{
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices) {
        if (device.position == position) {
            return device;
        }
    }
    return nil;
}

#pragma mark - AVCaptureVideoDataOutputSampleBufferDelegate
// Receives the captured data; it may be audio or video.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    if (_videoConnection == connection) {
        NSLog(@"captured video data");
    } else {
        NSLog(@"captured audio data");
    }
}
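One thing the listing above does not show is asking the user for camera and microphone permission. Below is a minimal sketch of my own (the helper name requestCaptureAuthorizationWithCompletion: is made up for illustration); note that on iOS 10 and later you also need the NSCameraUsageDescription and NSMicrophoneUsageDescription keys in Info.plist.

// Hypothetical helper (not part of the original demo): request camera and
// microphone access, then report the combined result on the main queue.
- (void)requestCaptureAuthorizationWithCompletion:(void (^)(BOOL granted))completion
{
    [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL videoGranted) {
        [AVCaptureDevice requestAccessForMediaType:AVMediaTypeAudio completionHandler:^(BOOL audioGranted) {
            dispatch_async(dispatch_get_main_queue(), ^{
                completion(videoGranted && audioGranted);
            });
        }];
    }];
}

Possible usage: [self requestCaptureAuthorizationWithCompletion:^(BOOL granted) { if (granted) { [self setupCaputureVideo]; } }];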

Extra video capture feature 1: switching the camera

  • 1. Get the current video input device object.
  • 2. Determine whether the current video input device is the front or the rear camera.
  • 3. Determine the camera direction to switch to.
  • 4. Get the camera device for that direction.
  • 5. Create the corresponding video input object.
  • 6. Remove the previous video input object from the session.
  • 7. Add the new video input object to the session.
// Switch the camera
- (IBAction)toggleCapture:(id)sender
{
    // Get the position of the current device.
    AVCaptureDevicePosition curPosition = _currentVideoDeviceInput.device.position;
    // Get the position to switch to.
    AVCaptureDevicePosition togglePosition = curPosition == AVCaptureDevicePositionFront ? AVCaptureDevicePositionBack : AVCaptureDevicePositionFront;
    // Get the camera device for the new position.
    AVCaptureDevice *toggleDevice = [self getVideoDevice:togglePosition];
    // Create the input object for the new camera.
    AVCaptureDeviceInput *toggleDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:toggleDevice error:nil];
    // Remove the previous camera input.
    [_captureSession removeInput:_currentVideoDeviceInput];
    // Add the new camera input.
    [_captureSession addInput:toggleDeviceInput];
    // Record the current camera input device.
    _currentVideoDeviceInput = toggleDeviceInput;
}
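The demo swaps the input directly on the running session. AVCaptureSession also offers beginConfiguration and commitConfiguration, which let you batch the removal and the addition so they take effect together. A sketch of that variant (my own rewrite, not the original demo code; the helper name switchToDeviceInput: is made up):

// Variant of the camera switch that batches the changes on the running session.
- (void)switchToDeviceInput:(AVCaptureDeviceInput *)newInput
{
    [_captureSession beginConfiguration];
    [_captureSession removeInput:_currentVideoDeviceInput];
    if ([_captureSession canAddInput:newInput]) {
        [_captureSession addInput:newInput];
        _currentVideoDeviceInput = newInput;
    } else {
        // Fall back to the previous input if the new one cannot be added.
        [_captureSession addInput:_currentVideoDeviceInput];
    }
    [_captureSession commitConfiguration];
}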

Extra video capture feature 2: the focus cursor

  • 1. Listen for taps on the screen and get the tap position.
  • 2. Convert the tap position to a point on the camera, using the video preview layer (AVCaptureVideoPreviewLayer).
  • 3. Set the position of the focus cursor image and animate it.
  • 4. Set the camera device's focus mode and exposure mode. (Note: this must be done between lockForConfiguration and unlockForConfiguration, otherwise an error is raised.)
"Click on the screen, focused view - (void) touchesBegan: (NSSet< UITouch *> touches withEvent: (* *) UIEvent event) {/ / *touch = [touches for click position UITouch anyObject]; CGPoint point = [touch locationInView:self.view]; / / when the front position is converted to camera point on the position of CGPoint cameraPoint = [_previedLayer captureDevicePointOfInterestForPoint:point]; / / set focus cursor position [self setFocusCursorWithPoint:point] [self focusWithMode: AVCaptureFocusModeAutoFocus; / / set focused exposureMode:AVCaptureExposureModeAutoExpose atPoint:cameraPoint];} / * * * * * @param focus set the cursor position cursor position point * / - (void) setFocusCursorWithPoint: (CGPoint) point{self.focusCursorImage View.center=point; self.focusCursorImageView.transform=CGAffineTransformMakeScale (1.5, 1.5); self.focusCursorImageView.alpha=1.0 [UIView; animateWithDuration:1.0 animations:^{self.focusCursorImageView.transform=CGAffineTransformIdentity;} completion:^ (BOOL finished) {self.focusCursorImageView.alpha=0};}]; / * * * set * / - focus (void) (AVCaptureFocusMode) focusWithMode: focusMode exposureMode: (AVCaptureExposureMode) exposureMode atPoint: (CGPoint) point{AVCaptureDevice *captureDevice = _currentVideoDeviceInput.device; / / lockForConfiguration:nil] / / [captureDevice lock configuration; set the focus on if ([captureDevice isFocusModeSupported:AVCaptureFocusModeAutoFocus]) {[capture Device setFocusMode:AVCaptureFocusModeAutoFocus];} if ([captureDevice isFocusPointOfInterestSupported]) {[captureDevice setFocusPointOfInterest:point];} / / set ([captureDevice isExposureModeSupported:AVCaptureExposureModeAutoExpose]) exposure if {[captureDevice setExposureMode:AVCaptureExposureModeAutoExpose];} if ([captureDevice isExposurePointOfInterestSupported]) {[captureDevice setExposurePointOfInterest:point];} / / unlock the configuration of [captureDevice unlockForConfiguration];}

Concluding remarks

More live streaming content will follow in later updates. The hope is to teach every interested reader how to build a live streaming app from scratch, and the Demo will be improved step by step.
Demo: click to download

  • Because the FFmpeg library is relatively large (about 100 MB), I originally wanted to upload all the code myself, but after an hour the upload still had not succeeded, so I gave up.
  • As an alternative, you can import the IJKPlayer library yourself; the specific steps are:
  • Download the Demo and open YZLiveApp.xcworkspace; you will hit the problem shown below.
[Screenshot: error when opening YZLiveApp.xcworkspace]
  • Running pod install will fix it.
[Screenshot: Snip20160830_12.png]
  • Download the ijkplayer library: click to download.
  • Put ijkplayer directly into the same directory as Classes, then run the project and it will work.
[Screenshot: drag ijkplayer into the same directory as Classes]
  • Note: you do not need to open the project and drag ijkplayer into it; simply placing the ijkplayer library in the same directory as Classes is enough.
  • Wrong approach: do not do it like this.
[Screenshot: Snip20160830_14.png]