OpenCV Learning Development Notes (iOS9)

The development environment used in this article:
1) OpenCV for iOS 3.2
2) Xcode 8.2

Preface

Recently, the project at my company has entered a fairly stable maintenance cycle. Considering the kind of work that is likely to come next, I took the opportunity to learn OpenCV, and I'm sharing my notes here.

OpenCV introduction

OpenCV is an open-source, cross-platform computer vision and machine learning library. Put simply, it gives the computer a pair of eyes: a pair of glasses that can extract information from pictures, enabling vision-related tasks such as face recognition, motion tracking, and so on. For more details, see OpenCV's official website.

Modules

The following are the most important modules listed in the official documentation.

core: a compact core module that defines the basic data structures, including the dense multidimensional array Mat, and the basic functions required by the other modules.
imgproc: the image processing module, including linear and non-linear image filtering, geometric image transformations (resizing, affine and perspective transformations, generic table-based remapping), color space conversion, histograms, and so on.
video: the video analysis module, including motion estimation, background subtraction, and object tracking algorithms.
calib3d: basic multi-view geometry algorithms, single and stereo camera calibration, object pose estimation, binocular stereo matching, and elements of 3D reconstruction.
features2d: salient feature detection, descriptors, and descriptor matching.
objdetect: detection of objects and instances of predefined classes (such as faces, eyes, cups, people, cars, and so on).
ml: a variety of machine learning algorithms, such as K-means, support vector machines, and neural networks.
highgui: an easy-to-use interface for video capture, image and video encoding, and a simple UI (only a subset of it is available on iOS).
gpu: GPU-accelerated algorithms from the various OpenCV modules (not available on iOS).
ocl: general algorithms implemented with OpenCL (not available on iOS).
Some other helper modules, such as Python bindings and user-contributed algorithms.

Base classes and operations

OpenCV contains hundreds of classes. For the sake of simplicity we will only look at a few basic classes and operations; going over these core classes should be enough to gain some understanding of the mechanics of the library.

cv::Mat

cv::Mat is the core data structure of OpenCV, used to represent any N-dimensional matrix. Because an image is just a special case of a 2-dimensional matrix, it is also represented by cv::Mat. In other words, cv::Mat will be the most used class in OpenCV.

A cv::Mat instance acts like a header over the image data: it contains the information describing the image format, while the image data itself is only referenced and can be shared by multiple cv::Mat instances. OpenCV uses a reference counting mechanism similar to ARC to ensure that the image data is released when the last cv::Mat referencing it goes away. The image data itself is an array of consecutive image rows (for an N-dimensional matrix, the data is an array of consecutive (N-1)-dimensional blocks). Using the values in the step[] array, the address of any pixel of the image can be obtained with the following pointer arithmetic:

uchar *pixelPtr = cvMat.data + rowIndex * cvMat.step[0] + colIndex * cvMat.step[1];

The data format of each pixel can be obtained with the type() method. Besides 8-bit-unsigned-integer-per-channel grayscale (1 channel, CV_8UC1) and color (3 channels, CV_8UC3) images, OpenCV supports many other common formats, such as CV_16SC3 (3 channels per pixel, each channel a 16-bit signed integer) or CV_64FC4 (4 channels per pixel, each channel a 64-bit float).
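To make this concrete, here is a small sketch (the matrix sizes and fill values are arbitrary) showing how these formats are created and how the step[] arithmetic above is used:

cv::Mat gray(480, 640, CV_8UC1);                          // 8-bit unsigned, 1 channel
cv::Mat color(480, 640, CV_8UC3, cv::Scalar(255, 0, 0)); // 8-bit unsigned, 3 channels
cv::Mat hdr(480, 640, CV_64FC4);                          // 64-bit float, 4 channels

// type() returns the format constant; channels() the channel count
CV_Assert(color.type() == CV_8UC3 && color.channels() == 3);

// The pointer arithmetic from above, applied to the 3-channel image
int rowIndex = 10, colIndex = 20;
uchar *pixelPtr = color.data + rowIndex * color.step[0] + colIndex * color.step[1];
uchar firstChannel = pixelPtr[0]; // 255, the value we filled in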

cv::Algorithm

Algorithm is an abstract base class for many of the algorithms implemented in OpenCV, including the FaceRecognizer that will be used in our demo project. Its API has some similarities with CIFilter in Apple's Core Image framework: an Algorithm is created by calling Algorithm::create() with the algorithm's name, and its parameters can be read and written via the get() and set() methods, much like key-value coding. In addition, Algorithm supports loading and saving its parameters from and to XML or YAML files out of the box.
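As a small, hedged example, here is what that looks like with the FaceRecognizer used later. This sketch assumes the OpenCV 2.4-era contrib module, where createLBPHFaceRecognizer() returns a recognizer that inherits from cv::Algorithm:

#include <opencv2/contrib/contrib.hpp> // OpenCV 2.4-era contrib module

cv::Ptr<cv::FaceRecognizer> model = cv::createLBPHFaceRecognizer();

// Key-value-coding-like parameter access inherited from cv::Algorithm
model->set("threshold", 100.0);
double threshold = model->getDouble("threshold");

// Algorithm state can be saved to and loaded from XML/YAML
model->save("lbph-faces.yml");
model->load("lbph-faces.yml");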

Practical part

Importing into a project

First, download the iOS support library from the OpenCV official website, then create a new project.

(Figure: the build environment)

Importing OpenCV into an Xcode project is fairly simple: download the framework from the official website and drag it into the Xcode project. Since Xcode 7 it is added to the Build Phases automatically; just check that it is there.


Then, wherever you want to use OpenCV, include the OpenCV header:

#import <opencv2/opencv.hpp>

Or add it directly to the PCH file:

#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif

Change the suffix of the files that call into the OpenCV C++ API to .mm, and you can start using OpenCV's methods.

Objective-C++

As mentioned earlier, OpenCV exposes a C++ API, so it cannot be used directly from Swift or Objective-C code, but it can be used from Objective-C++ files.

Objective-C++ is a mixture of Objective-C and C++, allowing you to use C++ objects inside Objective-C classes. The clang compiler treats every file with the .mm suffix as Objective-C++. In general it behaves as you would expect, but there are a few caveats to using Objective-C++. Memory management is the most important point to watch, because ARC only manages Objective-C objects. When you use a C++ object as a class property, the only valid ownership attribute is assign; your dealloc method must therefore make sure the C++ object is properly released.
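A hypothetical sketch of this pattern (the FaceDetectorWrapper class name is made up): a heap-allocated C++ object held by an Objective-C++ class and released manually in dealloc, because ARC will not do it:

@interface FaceDetectorWrapper : NSObject
// ARC does not manage C++ objects, so the property must be assign
@property (nonatomic, assign) cv::CascadeClassifier *detector;
@end

@implementation FaceDetectorWrapper

- (instancetype)init {
    if (self = [super init]) {
        _detector = new cv::CascadeClassifier();
    }
    return self;
}

- (void)dealloc {
    delete _detector; // ARC will not release the C++ object for us
    _detector = NULL;
}

@end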

The second important point: if you include a C++ header in an Objective-C++ header file, you leak the C++ dependency into every file that uses it. Any Objective-C class that imports your Objective-C++ header also pulls in the C++ classes, so those Objective-C files must themselves be declared as Objective-C++ files; it spreads through the project like wildfire. Therefore, you should wrap C++ includes with #ifdef __cplusplus and, as far as possible, include C++ headers only inside .mm implementation files.
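A hypothetical sketch of that hygiene (FaceDetector is a made-up wrapper class): the public header stays pure Objective-C, and only the .mm implementation file touches the C++ headers:

// FaceDetector.h — pure Objective-C, safe to import from any .m file
#import <UIKit/UIKit.h>

@interface FaceDetector : NSObject
- (NSArray *)detectFacesInImage:(UIImage *)image;
@end

// FaceDetector.mm — the only file that sees the C++ headers
#import <opencv2/opencv.hpp>
#import "FaceDetector.h"
// ... implementation using cv::CascadeClassifier goes here ...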

It sounds simple, but in practice there are still plenty of pitfalls.


Small warm-up

Because OpenCV is written in C++, running its code in the project requires changing the file suffix from .m to .mm, as shown in the following figure.

(Figure: renaming the file suffix to .mm)

To stress it once more: any file that uses OpenCV classes must have its suffix changed to .mm!

Enough said, let's test it!

#import <opencv2/opencv.hpp>
#import <opencv2/imgproc/types_c.h>
#import <opencv2/imgcodecs/ios.h>
#import "ViewController.h"

@interface ViewController () {
    cv::Mat cvImage;
}
@property (weak, nonatomic) IBOutlet UIImageView *imgView;
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    UIImage *image = [UIImage imageNamed:@"learn.jpg"];
    UIImageToMat(image, cvImage);
    if (!cvImage.empty()) {
        cv::Mat gray;
        // Convert the image to grayscale
        cv::cvtColor(cvImage, gray, CV_RGB2GRAY);
        // Apply a Gaussian filter to remove small edges
        cv::GaussianBlur(gray, gray, cv::Size(5, 5), 1.2, 1.2);
        // Compute the edges with Canny
        cv::Mat edges;
        cv::Canny(gray, edges, 0, 50);
        // Fill the image with white
        cvImage.setTo(cv::Scalar::all(255));
        // Color the edge pixels
        cvImage.setTo(cv::Scalar(0, 128, 255, 255), edges);
        // Convert cv::Mat back to UIImage and display it
        self.imgView.image = MatToUIImage(cvImage);
    }
}

@end

(Figure: the original image)

(Figure: the effect when running on the simulator)

Demo: face detection and recognition

Now that we have some understanding of OpenCV and of how to integrate it into our application, let's build a small demo app: it acquires the video stream from the iPhone camera, continuously detects faces in it, and draws them on screen. When the user taps a face, the app tries to identify the person. If the result is right, the user must tap "Correct"; if the person was misidentified, the user must select the correct name to fix the error. Our face recognizer learns from these mistakes and gets better and better.

Video capture

The highgui module of OpenCV provides the class CvVideoCamera, which abstracts the iPhone camera and lets our app receive the video stream through the delegate method - (void)processImage:(cv::Mat&)image. A CvVideoCamera instance can be set up as follows:

CvVideoCamera *videoCamera = [[CvVideoCamera alloc] initWithParentView:view];
videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionFront;
videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPreset640x480;
videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
videoCamera.defaultFPS = 30;
videoCamera.grayscaleMode = NO;
videoCamera.delegate = self;

With the camera frame rate set to 30 frames per second, the processImage method we implement will be called up to 30 times per second. Since our app detects faces continuously, we should perform the detection inside this method. Note that if detecting faces in a frame takes longer than 1/30 of a second, frames will be dropped.
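A minimal sketch of that delegate method, assuming faceDetector is a cv::CascadeClassifier that has already been loaded as shown in the next section (the drawing is simplified to a rectangle overlay):

- (void)processImage:(cv::Mat &)image {
    // The camera delivers a BGRA frame; detection works on grayscale
    cv::Mat gray;
    cv::cvtColor(image, gray, CV_BGRA2GRAY);

    std::vector<cv::Rect> faces;
    faceDetector.detectMultiScale(gray, faces, 1.1, 2, 0, cv::Size(30, 30));

    // All of this must finish within 1/30 s, or frames will be dropped
    for (size_t i = 0; i < faces.size(); i++) {
        cv::rectangle(image, faces[i], cv::Scalar(0, 255, 0, 255), 2);
    }
}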

Face detection

In fact, you don't need OpenCV for face detection, because Core Image already provides the CIDetector class. It does a pretty good job of face detection, it is optimized, and it is easy to use:

CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:context
                                              options:@{CIDetectorAccuracy: CIDetectorAccuracyHigh}];
NSArray *faces = [faceDetector featuresInImage:image];

Each face detected in the image is stored as a CIFaceFeature instance in the faces array. It holds the position and width of the face and, optionally, the positions of the eyes and mouth.
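A minimal sketch of reading those results:

for (CIFaceFeature *face in faces) {
    CGRect faceRect = face.bounds; // position and size of the face
    if (face.hasLeftEyePosition) {
        CGPoint leftEye = face.leftEyePosition; // optional eye position
    }
    if (face.hasMouthPosition) {
        CGPoint mouth = face.mouthPosition; // optional mouth position
    }
}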

On the other hand, OpenCV provides a generic object detection framework that, after training, can detect any object you need. The library ships with a number of parameter files that can be used directly, for faces, eyes, mouths, bodies, upper bodies, lower bodies, and smiles. The detection engine is made of a cascade of very simple detectors, known as Haar feature detectors, each with a different scale and weight. In the training phase, the decision tree is optimized with known correct and incorrect pictures; details of the training and detection processes can be found in the original paper. Once the correct feature cascade with its scales and weights has been established by training, the parameters can be loaded and initialized:

// Load the trained parameters of the face detector
NSString *faceCascadePath = [[NSBundle mainBundle] pathForResource:@"haarcascade_frontalface_alt2"
                                                            ofType:@"xml"];
const CFIndex CASCADE_NAME_LEN = 2048;
char *CASCADE_NAME = (char *)malloc(CASCADE_NAME_LEN);
CFStringGetFileSystemRepresentation((CFStringRef)faceCascadePath, CASCADE_NAME, CASCADE_NAME_LEN);
cv::CascadeClassifier faceDetector;
faceDetector.load(CASCADE_NAME);

These parameter files can be found in the data/haarcascades folder of the OpenCV distribution package.

After the face detector has been initialized with the parameters it needs, it can be used for face detection:

cv::Mat img;
std::vector<cv::Rect> faceRects;
double scalingFactor = 1.1;
int minNeighbors = 2;
int flags = 0;
cv::Size minimumSize(30, 30);
faceDetector.detectMultiScale(img, faceRects,
                              scalingFactor, minNeighbors, flags,
                              minimumSize);

During detection, the trained classifier slides over the input image at different scales in order to detect faces of different sizes. The scalingFactor parameter determines how much the classifier's scale grows between passes. The minNeighbors parameter specifies how many qualifying neighboring regions a candidate face region must have to be considered a likely face: if a candidate region stops triggering the classifier after moving by only one pixel, it is very probably not the result we want; a candidate with fewer than minNeighbors qualifying neighbors is rejected, and with minNeighbors set to 0, all candidate face regions are returned. The flags parameter is a legacy of the OpenCV 1.x API and should always be set to 0. Finally, minimumSize specifies the smallest face region we are looking for. The faceRects vector will contain all the face regions detected in img. A detected face image can be extracted with cv::Mat's operator(), which is very simple to call: cv::Mat faceImg = img(aFaceRect);
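For example, a short sketch that walks the results and extracts each face (note that img(aFaceRect) creates a view into the original data, not a copy):

for (size_t i = 0; i < faceRects.size(); i++) {
    const cv::Rect &aFaceRect = faceRects[i];
    cv::Mat faceImg = img(aFaceRect);   // a header over the same pixels
    cv::Mat faceCopy = faceImg.clone(); // deep copy, if it must outlive img
}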

Whether we use CIDetector or OpenCV's CascadeClassifier, as soon as we have at least one face region, we can try to identify the person in the image.
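A minimal sketch of the recognition step, again assuming the OpenCV 2.4-era contrib FaceRecognizer created earlier (model), with faceImg a grayscale face region extracted above and correctLabel a hypothetical label chosen by the user:

int predictedLabel = -1;
double confidence = 0.0;
model->predict(faceImg, predictedLabel, confidence);

// When the user corrects a wrong guess, the LBPH recognizer
// can learn from the mistake incrementally:
std::vector<cv::Mat> newImages(1, faceImg);
std::vector<int> newLabels(1, correctLabel);
model->update(newImages, newLabels);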

Problems encountered

This was the first time I imported a C++ library into a project, so it took some fumbling around.

  1. The first compile problem was a compiler warning coming from a guard in the OpenCV headers:

#if defined(NO)
#warning Detected Apple 'NO' macro definition, it can cause build conflicts. Please, include this header before any Apple headers.
#endif

As the message says, the OpenCV header should be imported before all Apple headers, otherwise build conflicts may occur; move the import to the front.

  2. Why do we need to wrap the OpenCV header in #ifdef __cplusplus in the PCH file? Because the PCH is a header imported into every file, while we want #import <opencv2/opencv.hpp> to be compiled only into C++ files. #ifdef __cplusplus marks a section that is compiled only when the file is C++; besides it there is also #ifdef __OBJC__, which marks sections compiled under Objective-C rules. This is why such macros show up in headers that are referenced from several kinds of files (see the sketch after this list).
  3. Another thing to note: if a header file is a C++ header, you must make sure that every file that directly or indirectly includes it is a .mm or .cpp file; otherwise Xcode will not compile it as a C++ header, and even a basic #include <iostream> will fail with a "file not found" error. I hit this while compiling a C++ header A.h that was included by B.h; B.h was also referenced from C.m, and even though B's implementation file was B.mm, the error persisted. Thanks to Stack Overflow I found the cause of the problem: for a C++ header, every file that ends up referencing it must have the .mm or .cpp suffix. This also tells us that Xcode decides how to compile a header based on the files that use it.
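The sketch referred to in point 2: a minimal PCH that combines both guards (and, per point 1, keeps the OpenCV import before the Apple headers):

// Prefix.pch — a minimal sketch
#ifdef __cplusplus
#import <opencv2/opencv.hpp>      // compiled only into (Objective-)C++ files
#endif

#ifdef __OBJC__
#import <UIKit/UIKit.h>           // compiled only into Objective-C(++) files
#import <Foundation/Foundation.h>
#endif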

Converting between UIImage and cv::Mat

OpenCV commonly represents pictures with cv::Mat, while iOS represents them with UIImage; the official OpenCV tutorial provides conversion methods.

UIImage to cv::Mat

- (cv::Mat)cvMatFromUIImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels (color channels + alpha)

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,    // Pointer to data
                                                    cols,          // Width of bitmap
                                                    rows,          // Height of bitmap
                                                    8,             // Bits per component
                                                    cvMat.step[0], // Bytes per row
                                                    colorSpace,    // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}

cv::Mat to UIImage

- (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;

    // Decide on the color space based on the element size
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(cvMat.cols,           // width
                                        cvMat.rows,           // height
                                        8,                    // bits per component
                                        8 * cvMat.elemSize(), // bits per pixel
                                        cvMat.step[0],        // bytesPerRow
                                        colorSpace,           // colorspace
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault, // bitmap info
                                        provider,             // CGDataProviderRef
                                        NULL,                 // decode
                                        false,                // should interpolate
                                        kCGRenderingIntentDefault // intent
                                        );

    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return finalImage;
}
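For completeness, a minimal round trip through cv::Mat using these converters. Note that the opencv2/imgcodecs/ios.h header imported in the warm-up already ships ready-made UIImageToMat and MatToUIImage functions, so the hand-written converters are mostly useful when you need to customize the conversion:

UIImage *input = [UIImage imageNamed:@"learn.jpg"];
cv::Mat mat = [self cvMatFromUIImage:input];
cv::GaussianBlur(mat, mat, cv::Size(5, 5), 1.2, 1.2); // any processing
self.imgView.image = [self UIImageFromCVMat:mat];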