iOS: recording and playing back simultaneously with AUGraph (and transcoding to MP3)

If you only need a simple recording feature, Apple provides a higher-level, convenient interface: AVAudioRecorder. It is object-oriented and hides the details. But if you want real-time access to the audio data so you can process it yourself, you have to use Audio Unit Services and Audio Processing Graph Services. Below I will show how to use them to build a simple recording demo.

AudioSession

First we need to understand the AVAudioSession class. Start with Apple's own introduction:

iOS handles audio behavior at the app, inter-app, and device levels through audio sessions.

In other words, iOS uses the audio session to manage the app's audio behavior with respect to other apps and the hardware. As I understand it, it holds the most basic audio configuration, for example:
1. When the headphones are unplugged, should audio playback stop?
2. Should the app's audio mix with other apps' audio, or pause it?
3. Is the app allowed to access the microphone?

In this article we need to record from the microphone and play back, so call the code below; the app will then show a dialog asking the user for permission to access the microphone:

AVAudioSession *audioSession = [AVAudioSession sharedInstance];
NSError *error = nil;
[audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];

Audio Processing Graph

First, a table (from Apple's documentation) lists the types of audio units:

Purpose            Audio units
Effect             iPod Equalizer
Mixing             3D Mixer
                   Multichannel Mixer
I/O                Remote I/O
                   Voice-Processing I/O
                   Generic Output
Format conversion  Format Converter

Seven units, covering four kinds of functionality: equalization, mixing, input/output, and format conversion.
This demo only uses Remote I/O to implement simple recording and playback.
An audio unit cannot work by itself; it has to be used together with an AUGraph. The AUGraph is a manager: units are added to it as nodes and do their work there. In the figure below, for example, an AUGraph manages a Mixer unit and a Remote I/O unit.

(Figure: AudioProcessingGraphBeforeEQ_2x.png)

Declare a Remote I/O node and add it to the AUGraph:

AUNode remoteIONode;
AudioComponentDescription componentDesc;                    // description of the node
componentDesc.componentType = kAudioUnitType_Output;
componentDesc.componentSubType = kAudioUnitSubType_RemoteIO;
componentDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
componentDesc.componentFlags = 0;
componentDesc.componentFlagsMask = 0;
CheckError(NewAUGraph(&auGraph), "couldn't NewAUGraph");    // create the AUGraph
CheckError(AUGraphOpen(auGraph), "couldn't AUGraphOpen");   // open the AUGraph
CheckError(AUGraphAddNode(auGraph, &componentDesc, &remoteIONode), "couldn't add remote IO node");
CheckError(AUGraphNodeInfo(auGraph, remoteIONode, NULL, &remoteIOUnit), "couldn't get remote IO unit from node");
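The CheckError helper used throughout is not defined in this article; it is the usual helper from Core Audio sample code that aborts with a readable message when an OSStatus is not noErr. A minimal sketch in plain C (assuming, as the iOS system headers do, that OSStatus is a signed 32-bit integer and noErr is 0; many Core Audio errors are four-character codes, so we print them that way when possible):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>

typedef int32_t OSStatus;   // matches the system definition on iOS

// Render a status as a four-character code like 'fmt?' when all four
// bytes are printable, otherwise as a plain decimal number.
static void format_status(OSStatus error, char out[16]) {
    unsigned char c[4] = {
        (unsigned char)(error >> 24), (unsigned char)(error >> 16),
        (unsigned char)(error >> 8),  (unsigned char)(error)
    };
    if (isprint(c[0]) && isprint(c[1]) && isprint(c[2]) && isprint(c[3]))
        snprintf(out, 16, "'%c%c%c%c'", c[0], c[1], c[2], c[3]);
    else
        snprintf(out, 16, "%d", (int)error);
}

static void CheckError(OSStatus error, const char *operation) {
    if (error == 0) return;                 // noErr: nothing to do
    char code[16];
    format_status(error, code);
    fprintf(stderr, "Error: %s (%s)\n", operation, code);
    exit(1);
}
```

For example, kAudioUnitErr_FormatNotSupported would print as the four-character code 'fmt?'.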

Remote I/O Unit

The Remote I/O unit is the audio unit tied to the hardware devices, i.e. input and output such as the speaker, microphone, and headphones. Here we want to play back while recording, so we have to connect the unit's input to its output, as shown below:

(Figure: IO_unit_2x.png)

Element 0 represents output and Element 1 represents input, and each element is divided into an input scope and an output scope. What we have to do is connect Element 0's output scope to the speaker, and Element 1's input scope to the microphone. The code is as follows:

UInt32 oneFlag = 1;
CheckError(AudioUnitSetProperty(remoteIOUnit,
                                kAudioOutputUnitProperty_EnableIO,
                                kAudioUnitScope_Output,
                                kOutputBus,               // element 0: output
                                &oneFlag,
                                sizeof(oneFlag)),
           "couldn't kAudioOutputUnitProperty_EnableIO with kAudioUnitScope_Output");
CheckError(AudioUnitSetProperty(remoteIOUnit,
                                kAudioOutputUnitProperty_EnableIO,
                                kAudioUnitScope_Input,
                                kInputBus,                // element 1: input
                                &oneFlag,
                                sizeof(oneFlag)),
           "couldn't kAudioOutputUnitProperty_EnableIO with kAudioUnitScope_Input");

Then set the audio format for input and output:

AudioStreamBasicDescription mAudioFormat;
mAudioFormat.mSampleRate       = 44100;                    // sample rate
mAudioFormat.mFormatID         = kAudioFormatLinearPCM;    // PCM sampling
mAudioFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
mAudioFormat.mFramesPerPacket  = 1;                        // frames per packet
mAudioFormat.mChannelsPerFrame = 1;                        // 1 = mono, 2 = stereo
mAudioFormat.mBitsPerChannel   = 16;                       // bits per sample
mAudioFormat.mBytesPerFrame    = mAudioFormat.mBitsPerChannel * mAudioFormat.mChannelsPerFrame / 8;  // bytes per frame
mAudioFormat.mBytesPerPacket   = mAudioFormat.mBytesPerFrame * mAudioFormat.mFramesPerPacket;        // bytes per packet = bytes per frame * frames per packet
mAudioFormat.mReserved         = 0;
UInt32 size = sizeof(mAudioFormat);
CheckError(AudioUnitSetProperty(remoteIOUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &mAudioFormat, size),
           "couldn't set kAudioUnitProperty_StreamFormat with kAudioUnitScope_Output");
CheckError(AudioUnitSetProperty(remoteIOUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &mAudioFormat, size),
           "couldn't set kAudioUnitProperty_StreamFormat with kAudioUnitScope_Input");
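The two byte-size fields follow directly from the bit depth, channel count, and frames per packet. A small standalone check of that arithmetic (plain C, independent of Core Audio; the function names are mine, not API):

```c
#include <stdint.h>

// For packed linear PCM, a frame holds one sample per channel,
// so mBytesPerFrame = mBitsPerChannel * mChannelsPerFrame / 8.
static uint32_t bytes_per_frame(uint32_t bitsPerChannel, uint32_t channelsPerFrame) {
    return bitsPerChannel * channelsPerFrame / 8;
}

// mBytesPerPacket = mBytesPerFrame * mFramesPerPacket (1 for uncompressed PCM).
static uint32_t bytes_per_packet(uint32_t bytesPerFrame, uint32_t framesPerPacket) {
    return bytesPerFrame * framesPerPacket;
}
```

With the format above (16-bit mono), each frame is 2 bytes, so a render callback handed 1024 frames sees a 2048-byte buffer.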

We are almost done; the last step is to set up the callback. Every time audio arrives from the microphone as digital samples, the callback function is invoked; there you can process the data however you like before it is sent on to the output for playback. The callback is a static C function, as follows:

static OSStatus CallBack(void *inRefCon,
                         AudioUnitRenderActionFlags *ioActionFlags,
                         const AudioTimeStamp *inTimeStamp,
                         UInt32 inBusNumber,
                         UInt32 inNumberFrames,
                         AudioBufferList *ioData)
{
    RecordTool *THIS = (__bridge RecordTool *)inRefCon;
    OSStatus renderErr = AudioUnitRender(THIS->remoteIOUnit,
                                         ioActionFlags,
                                         inTimeStamp,
                                         1,               // bus 1: the input element
                                         inNumberFrames,
                                         ioData);
    //--------------------------------------------
    // process the audio data here
    //--------------------------------------------
    // MP3 transcoding goes here; covered next time:
    // [THIS convertPcmToMp3:ioData->mBuffers[0] toPath:THIS->outPath];
    return renderErr;
}

ioData->mBuffers[n]                // n = 0 for mono; n = 0..1 for two channels
ioData->mBuffers[0].mData          // the PCM data
ioData->mBuffers[0].mDataByteSize  // length of the PCM data in bytes
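As an illustration of "processing the digital signal" at this point, here is a small hypothetical helper (plain C; the name is mine, not API) of the kind a level meter would call from the callback. In the real callback, samples would be (int16_t *)ioData->mBuffers[0].mData and byteSize would be ioData->mBuffers[0].mDataByteSize:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdlib.h>

// Peak absolute amplitude of a block of signed 16-bit PCM samples.
static int peak_amplitude(const int16_t *samples, size_t byteSize) {
    size_t count = byteSize / sizeof(int16_t);  // bytes -> sample count
    int peak = 0;
    for (size_t i = 0; i < count; i++) {
        int v = abs((int)samples[i]);
        if (v > peak) peak = v;
    }
    return peak;
}
```

Keep whatever you do here cheap: the callback runs on a real-time audio thread, so heavy work (like MP3 encoding) is usually handed off to another queue.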

Define the CallBack function and associate it with the AUGraph:

AURenderCallbackStruct inputProc;
inputProc.inputProc = CallBack;
inputProc.inputProcRefCon = (__bridge void *)(self);
CheckError(AUGraphSetNodeInputCallback(auGraph, remoteIONode, 0, &inputProc), "Error setting IO output callback");
CheckError(AUGraphInitialize(auGraph), "couldn't AUGraphInitialize");
CheckError(AUGraphUpdate(auGraph, NULL), "couldn't AUGraphUpdate");
// then call the following to start recording
CheckError(AUGraphStart(auGraph), "couldn't AUGraphStart");
CAShow(auGraph);

Finally

For lack of time, the PCM-to-MP3 (LAME) part is not covered here. I have uploaded the demo to GitHub, so anyone who needs it can download it and take a look; I will complete the MP3 transcoding content in a later post.
If you find the demo helpful, please give it a Star. Thank you very much!