Using RemoteIO audio unit

This article presents a method for doing audio input and output on the iPhone using the RemoteIO audio unit, covering initialisation, configuration, callback setup, and the stop/teardown flow, with accompanying code samples.


Reposted from: http://atastypixel.com/blog/using-remoteio-audio-unit/


I’ve had a nasty old time trying to get some audio stuff going on the iPhone, no thanks to Apple’s lack of documentation. If you’re an iPhone developer interested in getting RemoteIO/IO Remote/whatever it’s called working on the iPhone… Do I have good news for you. Read on.

Wanna skip the Core Audio learning curve and start writing code straight away? Check out my new project:

The Amazing Audio Engine: Core Audio, Cordially

Update: Thanks to Joel Reymont, we now have an explanation for the “CrashIfClientProvidedBogusAudioBufferList” iPhone simulator bug: The simulator doesn’t like mono audio. Thanks, Joel!

Update: Happily, Apple have now created some excellent documentation on Remote IO, with some good sample projects. I recommend using that as a resource, now that it’s there, as that will continue to be updated.

Update: Tom Zicarelli has created a very extensive sample app that demonstrates the use of AUGraph, with all sorts of goodies.

So, we need to obtain an instance of the RemoteIO audio unit, configure it, and hook it up to a recording callback, which is used to notify you that there is data ready to be grabbed, and where you pull the data from the audio unit.


Overview

  1. Identify the audio component (kAudioUnitType_Output / kAudioUnitSubType_RemoteIO / kAudioUnitManufacturer_Apple)
  2. Use AudioComponentFindNext(NULL, &descriptionOfAudioComponent) to obtain the AudioComponent, which is like the factory with which you obtain the audio unit
  3. Use AudioComponentInstanceNew(ourComponent, &audioUnit) to make an instance of the audio unit
  4. Enable IO for recording and possibly playback with AudioUnitSetProperty
  5. Describe the audio format in an AudioStreamBasicDescription structure, and apply the format using AudioUnitSetProperty
  6. Provide a callback for recording, and possibly playback, again using AudioUnitSetProperty
  7. Allocate some buffers
  8. Initialise the audio unit
  9. Start the audio unit
  10. Rejoice

Here’s my code: I’m using both recording and playback. Use what applies to you!

Initialisation

Initialisation looks like this. We have a member variable of type AudioComponentInstance, which will contain our audio unit.

The audio format described below uses SInt16 for samples (i.e. signed integers, 16 bits per sample), with a single channel.
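
The listings below call a checkStatus() helper which the article never defines. A minimal sketch (an assumption on my part, not part of the original code) might look like:

// Hypothetical helper: log a non-success result code.
// Adapt the error handling (assert, NSLog, etc.) to taste.
static void checkStatus(OSStatus status) {
    if (status != noErr) {
        printf("Status not noErr: %d\n", (int)status);
    }
}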

#define kOutputBus 0
#define kInputBus 1
 
// ...
 
 
OSStatus status;
AudioComponentInstance audioUnit;
 
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
 
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
 
// Get audio unit instance
status = AudioComponentInstanceNew(inputComponent, &audioUnit);
checkStatus(status);
 
// Enable IO for recording
UInt32 flag = 1;
status = AudioUnitSetProperty(audioUnit, 
                              kAudioOutputUnitProperty_EnableIO, 
                              kAudioUnitScope_Input, 
                              kInputBus,
                              &flag, 
                              sizeof(flag));
checkStatus(status);
 
// Enable IO for playback
status = AudioUnitSetProperty(audioUnit, 
                              kAudioOutputUnitProperty_EnableIO, 
                              kAudioUnitScope_Output, 
                              kOutputBus,
                              &flag, 
                              sizeof(flag));
checkStatus(status);
 
// Describe format
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate			= 44100.00;
audioFormat.mFormatID			= kAudioFormatLinearPCM;
audioFormat.mFormatFlags		= kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket	= 1;
audioFormat.mChannelsPerFrame	= 1;
audioFormat.mBitsPerChannel		= 16;
audioFormat.mBytesPerPacket		= 2;
audioFormat.mBytesPerFrame		= 2;
 
// Apply format
status = AudioUnitSetProperty(audioUnit, 
                              kAudioUnitProperty_StreamFormat, 
                              kAudioUnitScope_Output, 
                              kInputBus, 
                              &audioFormat, 
                              sizeof(audioFormat));
checkStatus(status);
status = AudioUnitSetProperty(audioUnit, 
                              kAudioUnitProperty_StreamFormat, 
                              kAudioUnitScope_Input, 
                              kOutputBus, 
                              &audioFormat, 
                              sizeof(audioFormat));
checkStatus(status);
 
 
// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit, 
                              kAudioOutputUnitProperty_SetInputCallback, 
                              kAudioUnitScope_Global, 
                              kInputBus, 
                              &callbackStruct, 
                              sizeof(callbackStruct));
checkStatus(status);
 
// Set output callback
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit, 
                              kAudioUnitProperty_SetRenderCallback, 
                              kAudioUnitScope_Global, 
                              kOutputBus,
                              &callbackStruct, 
                              sizeof(callbackStruct));
checkStatus(status);
 
// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
flag = 0;
status = AudioUnitSetProperty(audioUnit, 
                              kAudioUnitProperty_ShouldAllocateBuffer,
                              kAudioUnitScope_Output, 
                              kInputBus,
                              &flag, 
                              sizeof(flag));
checkStatus(status);
 
// TODO: Allocate our own buffers if we want
 
// Initialise
status = AudioUnitInitialize(audioUnit);
checkStatus(status);
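
One thing this listing doesn’t show: on the device, input generally won’t work unless the app’s audio session is active with a category that permits recording. A sketch using the AudioSession C API of the same era (my addition, error handling omitted):

// Sketch: activate an audio session that allows simultaneous input and output
AudioSessionInitialize(NULL, NULL, NULL, NULL);
UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                        sizeof(sessionCategory), &sessionCategory);
AudioSessionSetActive(true);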

Then, when you’re ready to start:

OSStatus status = AudioOutputUnitStart(audioUnit);
checkStatus(status);

And to stop:

OSStatus status = AudioOutputUnitStop(audioUnit);
checkStatus(status);

Then, when we’re finished:

AudioUnitUninitialize(audioUnit);
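
And if you’re done with the unit for good, as far as I’m aware you should also dispose of the instance itself:

AudioComponentInstanceDispose(audioUnit);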

And now for our callbacks.

Recording
static OSStatus recordingCallback(void *inRefCon, 
                                  AudioUnitRenderActionFlags *ioActionFlags, 
                                  const AudioTimeStamp *inTimeStamp, 
                                  UInt32 inBusNumber, 
                                  UInt32 inNumberFrames, 
                                  AudioBufferList *ioData) {
 
    // TODO: Use inRefCon to access our interface object to do stuff
    // Then, use inNumberFrames to figure out how much data is available, and make
    // that much space available in buffers in an AudioBufferList.
 
    AudioBufferList *bufferList; // <- Fill this up with buffers (you will want to malloc it, as it's a dynamic-length list)
 
    // Then:
    // Obtain recorded samples
 
    OSStatus status;
 
    // 'audioInterface' stands in for the object we registered as
    // inputProcRefCon; recover it from inRefCon. The class name below is a
    // placeholder for whatever 'self' was during initialisation.
    MyAudioController *audioInterface = (MyAudioController *)inRefCon;
 
    status = AudioUnitRender([audioInterface audioUnit], 
                             ioActionFlags, 
                             inTimeStamp, 
                             inBusNumber, 
                             inNumberFrames, 
                             bufferList);
    checkStatus(status);
 
    // Now, we have the samples we just read sitting in buffers in bufferList
    DoStuffWithTheRecordedAudio(bufferList);
    return noErr;
}
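
A quick sketch of allocating that bufferList for the mono 16-bit format above; this is one reasonable layout of my own devising, done once at setup time, where maxFrames is a hypothetical cap you choose:

// Sketch: a one-buffer AudioBufferList for interleaved mono SInt16 samples.
// 'maxFrames' is an assumed upper bound on inNumberFrames, chosen up front.
UInt32 maxFrames = 1024;
AudioBufferList *bufferList = malloc(sizeof(AudioBufferList));
bufferList->mNumberBuffers = 1;
bufferList->mBuffers[0].mNumberChannels = 1;
bufferList->mBuffers[0].mDataByteSize = maxFrames * sizeof(SInt16);
bufferList->mBuffers[0].mData = malloc(maxFrames * sizeof(SInt16));

Before each AudioUnitRender call, set mDataByteSize to inNumberFrames * sizeof(SInt16) so the unit knows how much room it has to work with.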
Playback
static OSStatus playbackCallback(void *inRefCon, 
                                  AudioUnitRenderActionFlags *ioActionFlags, 
                                  const AudioTimeStamp *inTimeStamp, 
                                  UInt32 inBusNumber, 
                                  UInt32 inNumberFrames, 
                                  AudioBufferList *ioData) {    
    // Notes: ioData contains buffers (may be more than one!)
    // Fill them up as much as you can. Remember to set the size value in each buffer to match how
    // much data is in the buffer.
    return noErr;
}
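
For instance, a do-nothing body that just outputs silence (a sketch of mine, not from the original article) could be:

// Sketch: zero out every output buffer, i.e. play silence
for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
    memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
}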

Finally, rejoice with me in this discovery ;)

Resources that helped

No thanks at all to Apple for their lack of accessible documentation on this topic – they really have a long way to go here! Also boo to them for their lack of a search engine, and their refusal to open up their docs to Google. It’s a jungle out there!

Update: You can adjust the latency of RemoteIO (and, in fact, any other audio framework) by setting the kAudioSessionProperty_PreferredHardwareIOBufferDuration property:

float aBufferLength = 0.005; // In seconds
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, 
                        sizeof(aBufferLength), &aBufferLength);

This adjusts the length of the buffers that are passed to you: if a buffer originally held, say, 1024 samples, then halving the buffer duration halves the number of samples you’re handed at a time, and hence the latency before you see them.
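
Note that this property is only a preference; the hardware may round it to something it supports. You can read back the value you actually got (a sketch, using the same AudioSession API):

Float32 actualDuration;
UInt32 size = sizeof(actualDuration);
AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareIOBufferDuration,
                        &size, &actualDuration);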
