My First Core ML App: Building a Simple Image Content Recognition Application


Reference: Introduction to Core ML: Building a Simple Image Recognition App

Xcode 9 beta & iOS 11 beta & Swift 4

Core ML is a framework for integrating machine learning models into your app.

One, create the initial project

  1. Create a new project named CoreMLDemo using the Single View Application template.
  2. Set up the UI with an image view, a label for the classification result, and buttons for taking a photo and opening the photo library:

Two, implement the camera and photo library features

  1. Make ViewController adopt the two picker-related protocols:

    class ViewController: UIViewController, UINavigationControllerDelegate, UIImagePickerControllerDelegate {
  2. Add two outlets and connect them in Interface Builder:

    @IBOutlet var imageView: UIImageView!
    @IBOutlet var classifier: UILabel!
  3. Implement the two actions:

    @IBAction func camera(_ sender: Any) {
        // Bail out if the device has no camera (e.g. the simulator).
        if !UIImagePickerController.isSourceTypeAvailable(.camera) {
            return
        }
        let cameraPicker = UIImagePickerController()
        cameraPicker.delegate = self
        cameraPicker.sourceType = .camera
        cameraPicker.allowsEditing = false
        present(cameraPicker, animated: true, completion: nil)
    }

    @IBAction func openLibrary(_ sender: Any) {
        let picker = UIImagePickerController()
        picker.allowsEditing = false
        picker.delegate = self
        picker.sourceType = .photoLibrary
        present(picker, animated: true)
    }
  4. Implement the cancel method of the UIImagePickerControllerDelegate protocol:

    func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
        dismiss(animated: true, completion: nil)
    }
  5. In Info.plist, add the privacy keys Privacy – Camera Usage Description (NSCameraUsageDescription) and Privacy – Photo Library Usage Description (NSPhotoLibraryUsageDescription), each with a short message explaining why the app needs access.
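
Optionally, and beyond what the original tutorial does, you can also check the authorization status in code before presenting the camera. A minimal sketch using AVFoundation (the helper name withCameraPermission is my own):

    import AVFoundation

    // Hypothetical helper (not from the tutorial): runs `body` with a flag
    // indicating whether camera access is available. The system prompt text
    // comes from the Privacy – Camera Usage Description key above.
    func withCameraPermission(_ body: @escaping (Bool) -> Void) {
        switch AVCaptureDevice.authorizationStatus(for: .video) {
        case .authorized:
            body(true)
        case .notDetermined:
            AVCaptureDevice.requestAccess(for: .video, completionHandler: body)
        default:
            body(false)
        }
    }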

Three, integrate the Core ML data model

  1. Download a Core ML model from Apple's official Machine Learning page. There are currently six models available; this demo uses Inception v3 for image recognition. The download is a file ending in .mlmodel. Drag it directly into the project, and Xcode automatically generates a Swift class named after the model that you can use directly in code, as sketched below.
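Once the model file is in the project, the generated class can be used like any other Swift type. A minimal hedged sketch (the classify helper is my own; prediction(image:) and classLabel come from the generated Inceptionv3 class):

    import CoreML

    // Hypothetical helper: classify wraps the class Xcode generates from
    // Inceptionv3.mlmodel. The input must be a 299x299 CVPixelBuffer,
    // which the steps below construct.
    func classify(_ pixelBuffer: CVPixelBuffer) -> String? {
        let model = Inceptionv3()
        guard let output = try? model.prediction(image: pixelBuffer) else {
            return nil
        }
        // classLabel is the most likely category; classLabelProbs maps
        // every category to its probability.
        return output.classLabel
    }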
  2. Import Core ML: add import CoreML at the top of ViewController.swift.
  3. Declare and initialize the Inceptionv3 model:

    var model: Inceptionv3!

    override func viewWillAppear(_ animated: Bool) {
        model = Inceptionv3()
    }
  4. Implement the UIImagePickerControllerDelegate method imagePickerController(_:didFinishPickingMediaWithInfo:): dismiss the picker, resize the chosen image to 299×299 (the input size Inception v3 expects), and render it into a CVPixelBuffer:

    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
        picker.dismiss(animated: true)
        classifier.text = "Analyzing Image..."

        guard let image = info["UIImagePickerControllerOriginalImage"] as? UIImage else {
            return
        }

        // Resize the picked image to the 299x299 input Inception v3 expects.
        UIGraphicsBeginImageContextWithOptions(CGSize(width: 299, height: 299), true, 2.0)
        image.draw(in: CGRect(x: 0, y: 0, width: 299, height: 299))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()

        // Create a CVPixelBuffer to hold the model input.
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                     kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(newImage.size.width), Int(newImage.size.height), kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)
        guard status == kCVReturnSuccess else {
            return
        }

        // Render the resized image into the pixel buffer.
        CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer!)

        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: pixelData, width: Int(newImage.size.width), height: Int(newImage.size.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)

        // Flip the coordinate system so the image is not drawn upside down.
        context?.translateBy(x: 0, y: newImage.size.height)
        context?.scaleBy(x: 1.0, y: -1.0)

        UIGraphicsPushContext(context!)
        newImage.draw(in: CGRect(x: 0, y: 0, width: newImage.size.width, height: newImage.size.height))
        UIGraphicsPopContext()
        CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))

        imageView.image = newImage
    }
  5. Use Core ML: at the end of imagePickerController(_:didFinishPickingMediaWithInfo:), add:

    guard let prediction = try? model.prediction(image: pixelBuffer!) else {
        return
    }
    classifier.text = "I think this is a \(prediction.classLabel)."
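
As a small extension beyond the tutorial, the generated Inceptionv3Output also exposes classLabelProbs, a [String: Double] of per-category probabilities, so the label could show the top few guesses instead of just one. A hedged sketch (the topLabels helper is my own):

    import Foundation

    // Hypothetical helper: formats the `count` most probable categories
    // from a prediction, one per line.
    func topLabels(from prediction: Inceptionv3Output, count: Int = 3) -> String {
        return prediction.classLabelProbs
            .sorted { $0.value > $1.value }
            .prefix(count)
            .map { "\($0.key): \(String(format: "%.2f", $0.value))" }
            .joined(separator: "\n")
    }

For example, classifier.text = topLabels(from: prediction) would show the three best guesses with their probabilities.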

Now the app can be run on real pictures. I did a few simple experiments, and the results are decent:

It can identify dog breeds; this one is recognized as a pug (Pug):


This one should be relatively easy:


A friend's cat, whose breed I don't know, was identified as a Persian cat:


I have no idea how it managed to recognize this as espresso:


Of course, some misidentifications are strange: a Mi Band was recognized as a stethoscope:


And the Kindle was not recognized:


Four, Core ML learning resources

Official documentation: Core ML Documentation

WWDC 2017:

  • Introducing Core ML
  • Core ML in Depth

Code: CoreMLDemo (because Inceptionv3.mlmodel is relatively large, I did not upload it to GitHub; download it from the Machine Learning page and drag it into the project).