Android Audio Code Analysis 18 - The setSampleRate Function

This post walks through the internal implementation of AudioTrack.setPlaybackRate() in Android, covering the test setup, the call flow, and the key pieces of code along the way. The focus is on how this method adjusts the playback sample rate and what effect that has on actual playback; tracing AudioTrack's internals also gives a better feel for how the Android audio system works as a whole.
Today we look at the playback-rate related interfaces, covering both the set and the get paths.
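
Before diving into the sources, here is a minimal usage sketch of the set/get pair as seen from application code; the rates, buffer handling and class name below are my own illustrative assumptions, not values taken from the test case that follows.

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class PlaybackRateDemo {
    // Demonstrates only the playback-rate calls; no audio data is written here.
    static void demo() {
        final int contentRate = 44100;                             // assumed content sample rate
        final int channelConfig = AudioFormat.CHANNEL_OUT_STEREO;
        final int audioFormat = AudioFormat.ENCODING_PCM_16BIT;

        int minBufSize = AudioTrack.getMinBufferSize(contentRate, channelConfig, audioFormat);
        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, contentRate,
                channelConfig, audioFormat, minBufSize, AudioTrack.MODE_STREAM);

        // The "get" side: hardware output rate and the track's current playback rate.
        int nativeRate = AudioTrack.getNativeOutputSampleRate(AudioManager.STREAM_MUSIC);
        int playbackRate = track.getPlaybackRate();

        // The "set" side: halve the rate -> playback lasts twice as long and the pitch drops.
        int result = track.setPlaybackRate(playbackRate / 2);     // expect AudioTrack.SUCCESS

        track.release();
    }
}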


*****************************************Source code*************************************************
//Test case 6: setPlaybackRate() accepts values twice the output sample rate
@LargeTest
public void testSetPlaybackRateTwiceOutputSR() throws Exception {
// constants for test
final String TEST_NAME = "testSetPlaybackRateTwiceOutputSR";
final int TEST_SR = 22050;
final int TEST_CONF = AudioFormat.CHANNEL_OUT_STEREO;
final int TEST_FORMAT = AudioFormat.ENCODING_PCM_16BIT;
final int TEST_MODE = AudioTrack.MODE_STREAM;
final int TEST_STREAM_TYPE = AudioManager.STREAM_MUSIC;

//-------- initialization --------------
int minBuffSize = AudioTrack.getMinBufferSize(TEST_SR, TEST_CONF, TEST_FORMAT);
AudioTrack track = new AudioTrack(TEST_STREAM_TYPE, TEST_SR, TEST_CONF, TEST_FORMAT,
minBuffSize, TEST_MODE);
byte data[] = new byte[minBuffSize/2];
int outputSR = AudioTrack.getNativeOutputSampleRate(TEST_STREAM_TYPE);
//-------- test --------------
track.write(data, 0, data.length);
track.write(data, 0, data.length);
assumeTrue(TEST_NAME, track.getState() == AudioTrack.STATE_INITIALIZED);
track.play();
assertTrue(TEST_NAME, track.setPlaybackRate(2*outputSR) == AudioTrack.SUCCESS);
//-------- tear down --------------
track.release();
}
**********************************************************************************************
Source path:
frameworks\base\media\tests\mediaframeworktest\src\com\android\mediaframeworktest\functional\MediaAudioTrackTest.java


#######################Explanation################################
//Test case 6: setPlaybackRate() accepts values twice the output sample rate
@LargeTest
public void testSetPlaybackRateTwiceOutputSR() throws Exception {
// constants for test
final String TEST_NAME = "testSetPlaybackRateTwiceOutputSR";
final int TEST_SR = 22050;
final int TEST_CONF = AudioFormat.CHANNEL_OUT_STEREO;
final int TEST_FORMAT = AudioFormat.ENCODING_PCM_16BIT;
final int TEST_MODE = AudioTrack.MODE_STREAM;
final int TEST_STREAM_TYPE = AudioManager.STREAM_MUSIC;

//-------- initialization --------------
int minBuffSize = AudioTrack.getMinBufferSize(TEST_SR, TEST_CONF, TEST_FORMAT);
AudioTrack track = new AudioTrack(TEST_STREAM_TYPE, TEST_SR, TEST_CONF, TEST_FORMAT,
minBuffSize, TEST_MODE);
byte data[] = new byte[minBuffSize/2];
int outputSR = AudioTrack.getNativeOutputSampleRate(TEST_STREAM_TYPE);
+++++++++++++++++++++++++++++getNativeOutputSampleRate+++++++++++++++++++++++++++++++++++
/**
* Returns the hardware output sample rate
*/
static public int getNativeOutputSampleRate(int streamType) {
return native_get_output_sample_rate(streamType);
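// Note: in android_media_AudioTrack.cpp, native_get_output_sample_rate() is registered to
// android_media_AudioTrack_get_output_sample_rate(), which asks AudioSystem::getOutputSamplingRate()
// for the hardware output rate. The block dissected below, android_media_AudioTrack_get_playback_rate()
// -> AudioTrack::getSampleRate(), is actually the native side of the Java getPlaybackRate() method;
// it is included here so that the "get" half of the playback-rate API is covered as well.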
+++++++++++++++++++++++++++++android_media_AudioTrack_get_playback_rate+++++++++++++++++++++++++++++++++++
static jint android_media_AudioTrack_get_playback_rate(JNIEnv *env, jobject thiz) {
AudioTrack *lpTrack = (AudioTrack *)env->GetIntField(
thiz, javaAudioTrackFields.nativeTrackInJavaObj);


if (lpTrack) {
return (jint) lpTrack->getSampleRate();
++++++++++++++++++++++++++++AudioTrack::getSampleRate++++++++++++++++++++++++++++++++++++
uint32_t AudioTrack::getSampleRate()
{
// Simply returns the sample rate stored in the audio_track_cblk_t.
// The audio_track_cblk_t object is created in the constructor AudioFlinger::ThreadBase::TrackBase::TrackBase: new(mCblk) audio_track_cblk_t();
// Right after the audio_track_cblk_t is created, its sampleRate member is assigned: mCblk->sampleRate = sampleRate;
// That sampleRate is in fact the sample rate passed in when the AudioTrack object was created.
return mCblk->sampleRate;
}
----------------------------AudioTrack::getSampleRate------------------------------------
} else {
jniThrowException(env, "java/lang/IllegalStateException",
"Unable to retrieve AudioTrack pointer for getSampleRate()");
return AUDIOTRACK_ERROR;
}
}
-----------------------------android_media_AudioTrack_get_playback_rate-----------------------------------
}
-----------------------------getNativeOutputSampleRate-----------------------------------
//-------- test --------------
track.write(data, 0, data.length);
track.write(data, 0, data.length);
assumeTrue(TEST_NAME, track.getState() == AudioTrack.STATE_INITIALIZED);
track.play();
assertTrue(TEST_NAME, track.setPlaybackRate(2*outputSR) == AudioTrack.SUCCESS);
++++++++++++++++++++++++++++setPlaybackRate+++++++++++++++++++++++++++++++++++
/**
* Sets the playback sample rate for this track. This sets the sampling rate at which
* the audio data will be consumed and played back, not the original sampling rate of the
* content. Setting it to half the sample rate of the content will cause the playback to
* last twice as long, but will also result in a negative pitch shift.
* The valid sample rate range if from 1Hz to twice the value returned by
* {@link #getNativeOutputSampleRate(int)}.
* @param sampleRateInHz the sample rate expressed in Hz
* @return error code or success, see {@link #SUCCESS}, {@link #ERROR_BAD_VALUE},
* {@link #ERROR_INVALID_OPERATION}
*/
// Note the Javadoc above:
// The rate changed here is only the rate used at playback time, not the rate of the data itself.
// For example, setting the rate to half the content's sample rate makes playback last twice as long.
// The valid range is from 1 Hz to twice the value returned by getNativeOutputSampleRate(), as demonstrated in the sketch after this walkthrough.
public int setPlaybackRate(int sampleRateInHz) {
if (mState != STATE_INITIALIZED) {
return ERROR_INVALID_OPERATION;
}
if (sampleRateInHz <= 0) {
return ERROR_BAD_VALUE;
}
return native_set_playback_rate(sampleRateInHz);
++++++++++++++++++++++++++++android_media_AudioTrack_set_playback_rate++++++++++++++++++++++++++++++++++++
static jint android_media_AudioTrack_set_playback_rate(JNIEnv *env, jobject thiz,
jint sampleRateInHz) {
AudioTrack *lpTrack = (AudioTrack *)env->GetIntField(
thiz, javaAudioTrackFields.nativeTrackInJavaObj);


if (lpTrack) {
return android_media_translateErrorCode(lpTrack->setSampleRate(sampleRateInHz));
+++++++++++++++++++++++++++AudioTrack::setSampleRate+++++++++++++++++++++++++++++++++++++
status_t AudioTrack::setSampleRate(int rate)
{
int afSamplingRate;


if (AudioSystem::getOutputSamplingRate(&afSamplingRate, mStreamType) != NO_ERROR) {
return NO_INIT;
+++++++++++++++++++++++++++++AudioSystem::getOutputSamplingRate+++++++++++++++++++++++++++++++++++
// This looks familiar!
// We have in fact seen this function several times in earlier articles.
status_t AudioSystem::getOutputSamplingRate(int* samplingRate, int streamType)
{
OutputDescriptor *outputDesc;
audio_io_handle_t output;


if (streamType == DEFAULT) {
streamType = MUSIC;
}


output = getOutput((stream_type)streamType);
if (output == 0) {
return PERMISSION_DENIED;
}


gLock.lock();
// Entries are added to gOutputs in AudioSystem::AudioFlingerClient::ioConfigChanged().
outputDesc = AudioSystem::gOutputs.valueFor(output);
if (outputDesc == 0) {
LOGV("getOutputSamplingRate() no output descriptor for output %d in gOutputs", output);
gLock.unlock();
const sp<IAudioFlinger>& af = AudioSystem::get_audio_flinger();
if (af == 0) return PERMISSION_DENIED;
*samplingRate = af->sampleRate(output);
+++++++++++++++++++++++++AudioFlinger::sampleRate+++++++++++++++++++++++++++++++++++++++
uint32_t AudioFlinger::sampleRate(int output) const
{
Mutex::Autolock _l(mLock);
PlaybackThread *thread = checkPlaybackThread_l(output);
if (thread == NULL) {
LOGW("sampleRate() unknown thread %d", output);
return 0;
}
return thread->sampleRate();
+++++++++++++++++++++++++++AudioFlinger::ThreadBase::sampleRate+++++++++++++++++++++++++++++++++++++
uint32_t AudioFlinger::ThreadBase::sampleRate() const
{
// mSampleRate is assigned in AudioFlinger::PlaybackThread::readOutputParameters(): mSampleRate = mOutput->sampleRate();
return mSampleRate;
}
---------------------------AudioFlinger::ThreadBase::sampleRate-------------------------------------
}
-------------------------AudioFlinger::sampleRate---------------------------------------
} else {
LOGV("getOutputSamplingRate() reading from output desc");
*samplingRate = outputDesc->samplingRate;
gLock.unlock();
}


LOGV("getOutputSamplingRate() streamType %d, output %d, sampling rate %d", streamType, output, *samplingRate);


return NO_ERROR;
}
-----------------------------AudioSystem::getOutputSamplingRate-----------------------------------
}
// Resampler implementation limits input sampling rate to 2 x output sampling rate.
if (rate <= 0 || rate > afSamplingRate*2 ) return BAD_VALUE;
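// A BAD_VALUE returned here is mapped by android_media_translateErrorCode() in the JNI layer
// (see the caller above) to AudioTrack.ERROR_BAD_VALUE on the Java side, so asking for more than
// twice the output sampling rate fails the Java setPlaybackRate() call as well.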


// Store the new rate into the audio_track_cblk_t object.
mCblk->sampleRate = rate;
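// From here the new value takes effect on the mixer side: as far as I can tell from this code base,
// AudioFlinger::MixerThread::prepareTracks_l() reads cblk->sampleRate and passes it to AudioMixer as
// the RESAMPLE / SAMPLE_RATE parameter, which is why the limit above is tied to the resampler
// (at most 2 x the output sampling rate).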
return NO_ERROR;
}
---------------------------AudioTrack::setSampleRate-------------------------------------
} else {
jniThrowException(env, "java/lang/IllegalStateException",
"Unable to retrieve AudioTrack pointer for setSampleRate()");
return AUDIOTRACK_ERROR;
}
}
----------------------------android_media_AudioTrack_set_playback_rate------------------------------------
}
----------------------------setPlaybackRate------------------------------------
//-------- tear down --------------
track.release();
}
###########################################################
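
To make the boundary behavior concrete, here is a small sketch of my own (not part of the test suite) showing which return code each request should map to according to the code paths traced above; the actual native output rate is device dependent.

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class PlaybackRateBoundsDemo {
    static void demo() {
        final int sr = 22050;
        int minBufSize = AudioTrack.getMinBufferSize(sr,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sr,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
                minBufSize, AudioTrack.MODE_STREAM);

        int outputSR = AudioTrack.getNativeOutputSampleRate(AudioManager.STREAM_MUSIC);

        // Rejected in the Java layer before reaching native code (sampleRateInHz <= 0).
        int r1 = track.setPlaybackRate(0);                // ERROR_BAD_VALUE

        // Accepted: exactly the resampler limit of 2 x the output rate (the case the test exercises).
        int r2 = track.setPlaybackRate(2 * outputSR);     // SUCCESS

        // Rejected in AudioTrack::setSampleRate(): rate > afSamplingRate * 2.
        int r3 = track.setPlaybackRate(2 * outputSR + 1); // ERROR_BAD_VALUE

        track.release();
    }
}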


&&&&&&&&&&&&&&&&&&&&&&&Summary&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
Setting the rate changes only the rate used at playback time, not the rate of the data itself.
In other words, if the rate you set differs from the original one, the playback duration changes (see the quick check after this summary).
AudioFlinger::MixerThread::threadLoop also computes a max period based on the sample rate.
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
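
To put a number on the first two points, here is a tiny self-contained check (illustrative values of my own, not taken from the article): playback duration = number of frames / playback rate, so halving the rate doubles the duration.

public class DurationCheck {
    public static void main(String[] args) {
        // Illustrative numbers: 5 seconds of content recorded at 44.1 kHz.
        final int contentRate = 44100;               // Hz
        final long totalFrames = 5L * contentRate;   // 220500 frames

        // Playback duration = frames / playback rate.
        double atContentRate = (double) totalFrames / contentRate;        // 5.0 s
        double atHalfRate    = (double) totalFrames / (contentRate / 2);  // 10.0 s

        System.out.printf("at %d Hz: %.1f s, at %d Hz: %.1f s%n",
                contentRate, atContentRate, contentRate / 2, atHalfRate);
    }
}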