Face recognition on iOS

A recent project required two features: face recognition and automatic photo capture. This post records the relevant code.
The third-party face recognition service is Face++, but the client does not call it directly. The client is only responsible for taking pictures; the photo data is sent to the backend for analysis. In terms of implementation, these features are therefore quite similar to QR code scanning.

First, get the camera device, create the input and output streams (including the still image output), set up the preview layer for the captured image, and initialize the AVCaptureSession object.

// Get the camera device
device = [self cameraWithPosition:AVCaptureDevicePositionFront];
// Create the input stream
input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
if (!input) return;
// Create the metadata output stream
output = [[AVCaptureMetadataOutput alloc] init];
// Set the delegate; deliver callbacks on the main thread
[output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
// Initialize the session object
_session = [[AVCaptureSession alloc] init];
// High-quality capture
[_session setSessionPreset:AVCaptureSessionPresetHigh];
[_session addInput:input];
[_session addOutput:output];
[_session addOutput:self.stillImageOutput];
if ([_session canAddOutput:self.videoDataOutput]) {
    [_session addOutput:self.videoDataOutput];
}
// Set the pixel format
[_videoDataOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
// Set the metadata type to face detection
output.metadataObjectTypes = @[AVMetadataObjectTypeFace];
// Preview layer
layer = [AVCaptureVideoPreviewLayer layerWithSession:_session];
layer.videoGravity = AVLayerVideoGravityResizeAspectFill;
layer.frame = self.view.layer.bounds;
[self.view.layer insertSublayer:layer atIndex:0];
// Start capturing
[_session startRunning];
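The cameraWithPosition: helper called above is not shown in the original snippet. A minimal sketch, assuming the same pre-iOS 10 AVCaptureDevice API used in the rest of the code:

- (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position {
    // Walk the available video devices and return the one facing the requested direction
    for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (device.position == position) {
            return device;
        }
    }
    return nil;
}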

Adopt the AVCaptureMetadataOutputObjectsDelegate and AVCaptureVideoDataOutputSampleBufferDelegate protocols and implement the following methods.

// AVCaptureVideoDataOutput delivers real-time frames; this delegate method fires
// almost as fast as the phone's screen refresh rate
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    [connection setVideoOrientation:AVCaptureVideoOrientationPortrait];
    constantImage = [self imageFromSampleBuffer:sampleBuffer];
    [self addFaceFrameWithImage:constantImage];
}

// Convert a CMSampleBufferRef to a UIImage
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    // Get the Core Video image buffer for the media data in the sample buffer
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the number of bytes per row of the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the width and height of the pixel buffer
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Create a bitmap graphics context from the sampled buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    // Release the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    // Create a UIImage from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage scale:1 orientation:UIImageOrientationUp];
    // Release the Quartz image
    CGImageRelease(quartzImage);
    return image;
}

// Locate faces and add highlight frames
- (void)addFaceFrameWithImage:(UIImage *)images {
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *image = [CIImage imageWithCGImage:images.CGImage];
    NSDictionary *param = [NSDictionary dictionaryWithObject:CIDetectorAccuracyLow forKey:CIDetectorAccuracy];
    CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:context options:param];
    NSArray *detectResult = [faceDetector featuresInImage:image];
    // Hide all existing highlight views before repositioning them
    for (int j = 0; m_highlitView[j] != nil; j++) {
        m_highlitView[j].hidden = YES;
    }
    int i = 0;
    for (CIFaceFeature *faceObject in detectResult) {
        // Flip the y-axis: Core Image coordinates start at the bottom-left
        CGRect modifiedFaceBounds = faceObject.bounds;
        modifiedFaceBounds.origin.y = images.size.height - faceObject.bounds.size.height - faceObject.bounds.origin.y;
        [self addSubViewWithFrame:modifiedFaceBounds index:i];
        i++;
    }
}

// Add a red border view around the detected face
- (void)addSubViewWithFrame:(CGRect)frame index:(int)_index {
    if (m_highlitView[_index] == nil) {
        m_highlitView[_index] = [[UIView alloc] initWithFrame:frame];
        m_highlitView[_index].layer.borderWidth = 2;
        m_highlitView[_index].layer.borderColor = [[UIColor redColor] CGColor];
        [self.view addSubview:m_highlitView[_index]];
        m_transform[_index] = m_highlitView[_index].transform;
    }
    frame.origin.x = frame.origin.x / 2.5;
    frame.origin.y = frame.origin.y / 2.5;
    frame.size.width = frame.size.width / 1.8;
    frame.size.height = frame.size.height / 1.8;
    m_highlitView[_index].frame = frame;
    // Scale the highlight view according to the picture size
    float scale = frame.size.width / 220;
    CGAffineTransform transform = CGAffineTransformScale(m_transform[_index], scale, scale);
    m_highlitView[_index].transform = transform;
    m_highlitView[_index].hidden = NO;
}
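The m_highlitView and m_transform arrays used above are instance variables the original post never declares. A plausible declaration, assuming a fixed upper bound on the number of tracked faces (the bound of 10 is an assumption):

// Hypothetical instance variables; 10 is an assumed maximum face count
#define MAX_FACE_COUNT 10
UIView *m_highlitView[MAX_FACE_COUNT];
CGAffineTransform m_transform[MAX_FACE_COUNT];

Note that the nil-terminated loop in addFaceFrameWithImage: relies on at least one trailing slot remaining nil.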

Face recognition events are delivered through the following delegate method:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection
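The post only gives the signature. A minimal sketch of an implementation, assuming we simply want to trigger a still capture once any face is reported (takePhoto is a hypothetical helper, sketched further below):

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection {
    for (AVMetadataObject *metadata in metadataObjects) {
        if ([metadata.type isEqualToString:AVMetadataObjectTypeFace]) {
            // A face entered the frame; take the photo that will be uploaded
            [self takePhoto]; // hypothetical helper
            break;
        }
    }
}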

The still picture is obtained through the following method, then uploaded to the server for comparison:

- (void)captureStillImageAsynchronouslyFromConnection:(AVCaptureConnection *)connection completionHandler:(void (^)(CMSampleBufferRef imageDataSampleBuffer, NSError *error))handler
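captureStillImageAsynchronouslyFromConnection:completionHandler: belongs to AVCaptureStillImageOutput. A minimal sketch of calling it, assuming self.stillImageOutput was added to the session as above; takePhoto and uploadImageData: are hypothetical helper names:

- (void)takePhoto {
    AVCaptureConnection *connection = [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
    if (!connection) return;
    [self.stillImageOutput captureStillImageAsynchronouslyFromConnection:connection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        if (error || !imageDataSampleBuffer) return;
        // Convert the sample buffer to JPEG data and hand it to the uploader
        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        [self uploadImageData:imageData]; // hypothetical upload helper
    }];
}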

One problem remains unresolved: the red box that highlights a recognized face flickers rapidly. Everything else works normally.
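One possible mitigation, not verified against this project: since captureOutput:didOutputSampleBuffer:fromConnection: fires at roughly the screen refresh rate, throttling how often addFaceFrameWithImage: runs may reduce the flicker. A sketch, assuming a hypothetical NSTimeInterval instance variable _lastDetectionTime:

// At the top of captureOutput:didOutputSampleBuffer:fromConnection:
NSTimeInterval now = [NSDate date].timeIntervalSinceReferenceDate;
if (now - _lastDetectionTime < 0.25) {
    return; // skip this frame; run face detection at most ~4 times per second
}
_lastDetectionTime = now;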