[Extended – OCR] iOS Optical Character Recognition

Original blog post: Tesseract OCR Tutorial

I have long wondered how scanning apps like CamScanner implement this feature. I recently dug up some material on it, and I'd like to share it with you.

1. What is OCR?

OCR (Optical Character Recognition) is the process of electronically extracting text from scanned images. It has many uses, such as document editing, full-text search, and compression.

OCR is already widely used: you may have scanned documents or handwritten notes on a tablet, and according to news reports (2015/01/14), Google will add OCR to its own applications. Today, let's learn how to use it on our own iPhones!

The open-source OCR library used here is Tesseract.

2. Introducing Tesseract

Tesseract OCR is quite powerful, but it has some limitations:

1. Unlike some OCR engines (like those used by the U.S. Postal Service to sort mail), Tesseract is unable to recognize handwriting, and it is limited to about 64 fonts in total.
2. Tesseract needs some image preprocessing to improve OCR results: images should be scaled appropriately, have as much contrast as possible, and the text should be horizontally aligned.
3. Finally, Tesseract OCR only works on Linux, Windows, and Mac OS X.

So, how do you use it on iOS?

Fortunately, there is an Objective-C wrapper around Tesseract OCR that can be used from Swift and on iOS devices. Don't worry: the starter project includes a Swift-compatible version of this wrapper.

3. Love In A Snap

Would Ray Wenderlich really skip updating this article on the eve of Valentine's Day? Certainly not! Let's find a way to express those inner feelings of yours, and build a gift of love together -_-!

In this tutorial, you will learn how to use Tesseract, the open-source OCR framework, by building an app called Love In A Snap. It lets you take a picture of a love poem and "make it your own" by replacing the name of the poet's muse with the name of your own beloved. Be prepared to impress!

3.1 Getting started

You can download the starter project from GitHub.

This archive contains several files:

1. LoveInASnap: the Xcode project;
2. Tesseract Resources: the Tesseract framework and language packs;
3. Image Resources: a few sample pictures containing text, to be used later.

Open LoveInASnap.xcodeproj; in ViewController.swift you will find some @IBOutlets and empty @IBAction methods already wired to the Storyboard.

First, there are two methods that show and hide the view's activity indicator:

```swift
func addActivityIndicator() {
  activityIndicator = UIActivityIndicatorView(frame: view.bounds)
  activityIndicator.activityIndicatorViewStyle = .WhiteLarge
  activityIndicator.backgroundColor = UIColor(white: 0, alpha: 0.25)
  activityIndicator.startAnimating()
  view.addSubview(activityIndicator)
}

func removeActivityIndicator() {
  activityIndicator.removeFromSuperview()
  activityIndicator = nil
}
```

Then there are methods that move the view up and down so the keyboard does not cover the input fields:

```swift
func moveViewUp() {
  if topMarginConstraint.constant != originalTopMargin {
    return
  }
  topMarginConstraint.constant -= 135
  UIView.animateWithDuration(0.3, animations: { () -> Void in
    self.view.layoutIfNeeded()
  })
}

func moveViewDown() {
  if topMarginConstraint.constant == originalTopMargin {
    return
  }
  topMarginConstraint.constant = originalTopMargin
  UIView.animateWithDuration(0.3, animations: { () -> Void in
    self.view.layoutIfNeeded()
  })
}
```

Finally, the remaining methods call moveViewUp() and moveViewDown() to manage the keyboard and the view's position based on the user's interaction:

```swift
@IBAction func backgroundTapped(sender: AnyObject) {
  view.endEditing(true)
  moveViewDown()
}

func textFieldDidBeginEditing(textField: UITextField) {
  moveViewUp()
}

@IBAction func textFieldEndEditing(sender: AnyObject) {
  view.endEditing(true)
  moveViewDown()
}

func textViewDidBeginEditing(textView: UITextView) {
  moveViewDown()
}
```

Although the user experience (UX) is important, these methods are not the focus of this tutorial, so the interface has been prepared in advance, letting you jump straight into the interesting coding.

Before writing any code, compile and run the app and click around to get a feel for the UI. The TextView can't be edited yet, but tapping the input fields raises and dismisses the keyboard. Now you're ready to bring this app to life. Yes, You Can ^_^!

3.2 Adding the Tesseract framework

3.2.1. First step: add TesseractOCR.framework

Find the Tesseract folder among the extracted files and drag TesseractOCR.framework into Xcode. Make sure Copy items if needed is checked, then click Finish.

3.2.2. Second step: add tessdata

Add the tessdata folder in the same way.

3.2.3. Add the system dependency libraries, including:

1. libstdc++.6.0.9.dylib (or libstdc++.6.0.9.tbd);
2. CoreImage.framework;
3. TesseractOCR.framework.

3.2.4. Configuration

1. In Build Settings, use the search box to find Other Linker Flags and append -lstdc++ to the existing value (I tested it myself; leaving it unchanged also caused no problems). Then use the same method to find C++ Standard Library in Build Settings and set it to Compiler Default. Finally, find Enable Bitcode and set it to NO.

2. Because Tesseract is an Objective-C framework, you need to create a bridging header.

(Mixing Objective-C and Swift on iOS is not covered here.)

Add the following to LoveInASnap-Bridging-Header.h:

```swift
#import <TesseractOCR/TesseractOCR.h>
```

You can now use the framework anywhere in the project.

3.3 Loading pictures

The first step in an OCR app is loading the image to scan, and the easiest way is to use UIImagePickerController to get one from the photo album or the camera.

Open ViewController.swift and add the following code to takePhoto():

```swift
@IBAction func takePhoto(sender: AnyObject) {
  // 1. Dismiss the keyboard and move the view back to its initial position
  view.endEditing(true)
  moveViewDown()
  // 2. Create an action-sheet-style UIAlertController
  let imagePickerActionSheet = UIAlertController(title: "Snap/Upload Photo",
    message: nil, preferredStyle: .ActionSheet)
  // 3. Offer the camera option, if available (not on the simulator)
  if UIImagePickerController.isSourceTypeAvailable(.Camera) {
    let cameraButton = UIAlertAction(title: "Take Photo",
      style: .Default) { (alert) -> Void in
        let imagePicker = UIImagePickerController()
        imagePicker.delegate = self
        imagePicker.sourceType = .Camera
        self.presentViewController(imagePicker, animated: true, completion: nil)
    }
    imagePickerActionSheet.addAction(cameraButton)
  }
  // 4. Offer a choice from existing photos
  let libraryButton = UIAlertAction(title: "Choose Existing",
    style: .Default) { (alert) -> Void in
      let imagePicker = UIImagePickerController()
      imagePicker.delegate = self
      imagePicker.sourceType = .PhotoLibrary
      self.presentViewController(imagePicker, animated: true, completion: nil)
  }
  imagePickerActionSheet.addAction(libraryButton)
  // 5. Offer a cancel option
  let cancelButton = UIAlertAction(title: "Cancel", style: .Cancel) { (alert) -> Void in
  }
  imagePickerActionSheet.addAction(cancelButton)
  // 6. Present the action sheet
  presentViewController(imagePickerActionSheet, animated: true, completion: nil)
}
```

As mentioned in the list of limitations, images must fall within certain size constraints for best OCR results. If the image is too big or too small, Tesseract may return an unsatisfactory result, or, strangely, even crash with an EXC_BAD_ACCESS error.

So you need to write a method that resizes the image without distorting it.

3.4 Scaling the image while keeping its aspect ratio

An image's aspect ratio is the ratio of its width to its height. Mathematically, shrinking the image need not distort it, but you must keep the aspect ratio unchanged while resizing.


When you know the width and height of a picture, the final width (or height) you want can be calculated by rearranging the aspect-ratio equation.


The resulting two formulas:

1. Height1 / Width1 × Width2 = Height2;
2. Width1 / Height1 × Height2 = Width2.

You will use these two formulas to keep the aspect ratio constant.
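As a quick sanity check of the formulas, with made-up numbers: scaling a hypothetical 1000×500 image so its larger side becomes 640 should give 320 for the other side, preserving the 2:1 ratio:

```swift
// Worked example of the aspect-ratio formulas (the numbers are hypothetical).
let width1: Double = 1000, height1: Double = 500   // original image size
let width2: Double = 640                           // target size of the larger side
// Height1 / Width1 * Width2 = Height2
let height2 = height1 / width1 * width2
print(height2)                                     // 320.0
// The aspect ratio is unchanged: 1000/500 == 640/320
print(width2 / height2 == width1 / height1)        // true
```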

In ViewController.swift, add the following method:

```swift
func scaleImage(image: UIImage, maxDimension: CGFloat) -> UIImage {
  var scaledSize = CGSize(width: maxDimension, height: maxDimension)
  var scaleFactor: CGFloat
  if image.size.width > image.size.height {
    scaleFactor = image.size.height / image.size.width
    scaledSize.width = maxDimension
    scaledSize.height = scaledSize.width * scaleFactor
  } else {
    scaleFactor = image.size.width / image.size.height
    scaledSize.height = maxDimension
    scaledSize.width = scaledSize.height * scaleFactor
  }
  UIGraphicsBeginImageContext(scaledSize)
  image.drawInRect(CGRectMake(0, 0, scaledSize.width, scaledSize.height))
  let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
  UIGraphicsEndImageContext()
  return scaledImage
}
```

The method takes an image and a maxDimension. Whichever of the image's width and height is greater is set equal to maxDimension; the other side is then computed from the image's aspect ratio, the original image is redrawn into the new frame, and the scaled image is returned to the caller.

3.5 Implementing Tesseract OCR

Near the bottom of ViewController.swift, find the UIImagePickerControllerDelegate class extension and add the following code:

```swift
func imagePickerController(picker: UIImagePickerController,
  didFinishPickingMediaWithInfo info: [String: AnyObject]) {
  let selectedPhoto = info[UIImagePickerControllerOriginalImage] as! UIImage
  let scaledImage = scaleImage(selectedPhoto, maxDimension: 640)
  addActivityIndicator()
  dismissViewControllerAnimated(true, completion: {
    self.performImageRecognition(scaledImage)
  })
}
```

imagePickerController(_:didFinishPickingMediaWithInfo:) is a UIImagePickerControllerDelegate method; its info parameter is a dictionary describing the selected image. You get the original image from info with the key UIImagePickerControllerOriginalImage, then scale it with the scaleImage(_:maxDimension:) method you just wrote.

While Tesseract works, you call addActivityIndicator() to block user interaction and show the busy state; then you dismiss the UIImagePicker and hand the image to performImageRecognition() for processing.

Add the following method to show the magic of OCR:

```swift
func performImageRecognition(image: UIImage) {
  // 1. Initialize a G8Tesseract object
  let tesseract = G8Tesseract()
  // 2. Tesseract looks up the specified languages in the .traineddata files:
  //    eng.traineddata (English) and fra.traineddata (French)
  tesseract.language = "eng+fra"
  // 3. Three engine modes are available:
  //    .TesseractOnly — fastest, but least accurate
  //    .CubeOnly — slower but more accurate; uses an AI model
  //    .TesseractCubeCombined — runs both; slowest but most accurate
  tesseract.engineMode = .TesseractCubeCombined
  // 4. The text layout is unknown in advance, so let Tesseract
  //    segment the page automatically
  tesseract.pageSegmentationMode = .Auto
  // 5. Limit the maximum time the engine may spend scanning
  tesseract.maximumRecognitionTime = 60.0
  // 6. Run the image through Tesseract's g8_blackAndWhite filter to
  //    boost contrast for a better result, then start recognition
  tesseract.image = image.g8_blackAndWhite()
  tesseract.recognize()
  // 7. Put the recognized text into the text view and make it editable
  textView.text = tesseract.recognizedText
  textView.editable = true
  // 8. Finally, remove the indicator to signal that the scan is complete
  removeActivityIndicator()
}
```

The languages to scan for are set in step 2: tesseract.language = "eng+fra".
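As a sketch of how that setting would change: to scan English only, or to add a third language, you would adjust the string (assuming the matching .traineddata file exists in the tessdata folder — spa.traineddata is a hypothetical addition, not included in this project's resources):

```swift
// English only (eng.traineddata is already in tessdata):
tesseract.language = "eng"
// English + French + Spanish (assumes you also add spa.traineddata
// to the tessdata folder — a hypothetical extra, not shipped here):
tesseract.language = "eng+fra+spa"
```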

3.6 Time to witness the miracle

First find the prepared sample pictures (containing English and French). Open the simulator and drag the two test images onto it; they will be added to the photo album automatically for later use.

Run the app in the simulator, tap Choose Existing, and select a test picture from the album. The recognized text appears in the TextView — Tesseract did a great job!

3.7 Replacing text

Now that you have the text, what happens next is entirely up to you ^_-. The original is a love poem; with Valentine's Day approaching, a few edits will make it your own. Let's get to work!

Open ViewController.swift, find the existing swapText(), and modify the code as follows:

```swift
@IBAction func swapText(sender: AnyObject) {
  // 1. Make sure the text view and both input fields are non-empty
  if let text = textView.text,
    findText = findTextField.text,
    replaceText = replaceTextField.text {
      // 2. Use String's replacement method on the text view's content
      textView.text = text.stringByReplacingOccurrencesOfString(findText,
        withString: replaceText, options: [], range: nil)
      // 3. Clear the input fields
      findTextField.text = nil
      replaceTextField.text = nil
      // 4. Finish up: dismiss the keyboard and reset the view
      view.endEditing(true)
      moveViewDown()
  }
}
```

Give it a try: swap in the name of the one you love ^_^.
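The replacement in step 2 is plain Foundation string work; here is a minimal UIKit-free sketch of the same idea, written in current Swift syntax (`replacingOccurrences(of:with:)`, the successor to `stringByReplacingOccurrencesOfString`), with made-up example strings:

```swift
import Foundation

// Core of swapText(): replace every occurrence of one name with another.
// The poem line and names here are example data, not from the project.
let poemLine = "For the rare and radiant maiden whom the angels name Lenore"
let swapped = poemLine.replacingOccurrences(of: "Lenore", with: "Juliet")
print(swapped)
// For the rare and radiant maiden whom the angels name Juliet
```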

3.8 Sharing

You can scan it, and you can edit it — now, how do you send it to the one you like?

The last step is sharing!

Create a UIActivityViewController to handle sharing; replace the current sharePoem() method:

```swift
@IBAction func sharePoem(sender: AnyObject) {
  // 1. Bail out if there is nothing to share
  if textView.text.isEmpty {
    return
  }
  // 2. Initialize the controller with the text to share
  let activityViewController = UIActivityViewController(
    activityItems: [textView.text], applicationActivities: nil)
  // 3. Exclude activity types that don't make sense here
  let excludeActivities = [
    UIActivityTypeAssignToContact,
    UIActivityTypeSaveToCameraRoll,
    UIActivityTypeAddToReadingList,
    UIActivityTypePostToFlickr,
    UIActivityTypePostToVimeo]
  activityViewController.excludedActivityTypes = excludeActivities
  // 4. Present the share sheet
  presentViewController(activityViewController, animated: true, completion: nil)
}
```

Note step 3: UIActivityViewController supports many activity types; see Apple's official documentation for the full list.

Final project address

Done! Go win over the one you adore.

You could replace Lenore with your own name and email the poem to yourself. Then, on Valentine's night, bleary-eyed with a glass of wine, gaze at the moon and pretend the mail came from an elegant, mature queen — romantic, warm, and cozy. Suddenly the phone rings…
…and a programmer wakes from the dream.
… Guess what?



Tesseract is very powerful, and OCR's potential is unlimited. Remember, whenever you use or enhance OCR, to think about perception: if you can decipher characters with your eyes, ears, or fingertips, then you are fully qualified as a character-recognition expert, with the ability to teach your computer to do even more.

Thanks to everyone who read this far!

Original blog post: Tesseract OCR Tutorial