A Complete Guide to Audio Processing in React Native: From Recording to Advanced Effects
Introduction: Pain Points and Solutions in Mobile Audio Processing
Have you ever struggled with audio in a React Native app? From basic recording to complex effects, audio has long been one of the harder parts of mobile development. This article walks through the complete React Native audio pipeline in three core areas: recording, playback control, and effects processing, with 15+ practical code examples and 5 end-to-end scenario implementations to help you master the key techniques of mobile audio development.
After reading this article, you will be able to:
- Implement low-latency recording with native modules
- Build an audio player that supports background playback
- Apply professional effects such as reverb and equalization
- Handle cross-platform compatibility issues
- Optimize audio performance and battery consumption
1. React Native Audio Architecture and Core Modules
1.1 Comparing Audio Tech Stacks
| Approach | Native support | Cross-platform | Feature richness | Performance | Best for |
|---|---|---|---|---|---|
| Pure JavaScript | ❌ | ✅ | ⭐⭐ | ⭐ | Simple demos |
| Native module bridging | ✅ | ❌ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | Professional apps |
| Third-party library (expo-av) | ✅ | ✅ | ⭐⭐⭐ | ⭐⭐⭐ | Rapid development |
| Custom native modules | ✅ | ❌ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | Performance-critical scenarios |
1.2 The React Native Audio Processing Architecture
React Native audio works by bridging JavaScript to native code. The core layers are:
- JavaScript interface layer: exposes a developer-friendly API
- Native module layer: wraps platform-specific audio functionality
- Audio engine layer: processes audio data streams and effects
- Hardware abstraction layer: talks to the device's audio hardware
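To make the layering concrete, here is a minimal, hypothetical sketch of the JavaScript interface layer acting as a facade over a native module. All names are illustrative; in a real app the native side would come from NativeModules or TurboModules, so it is stubbed out here.

```javascript
// Hypothetical sketch of the JavaScript interface layer. The facade validates
// and normalizes arguments before forwarding across the bridge.
function createAudioFacade(nativeModule) {
  return {
    async record(options = {}) {
      const sampleRate = options.sampleRate ?? 44100;
      if (sampleRate <= 0) throw new Error('sampleRate must be positive');
      return nativeModule.startRecording({ sampleRate });
    },
    async stop() {
      return nativeModule.stopRecording();
    },
  };
}

// Stub standing in for the native module layer
const stubNative = {
  calls: [],
  async startRecording(opts) { this.calls.push(['startRecording', opts]); return 'rec-1'; },
  async stopRecording() { this.calls.push(['stopRecording']); return { uri: 'file://demo' }; },
};

const audio = createAudioFacade(stubNative);
```

The point of the facade is that the JS layer owns argument defaults and validation, while the native layer only ever sees fully-normalized calls.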
2. Recording: From Basics to Advanced
2.1 Environment Setup and Permissions
Android configuration (AndroidManifest.xml):
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
(On Android 10+ with scoped storage, the external-storage permissions are generally unnecessary when recordings are written to app-private directories.)
iOS configuration (Info.plist):
<key>NSMicrophoneUsageDescription</key>
<string>This app needs microphone access to record audio</string>
<!-- Note: NSAudioUsageDescription is not a valid Info.plist key. For background
     playback (covered in section 3.2) declare the audio background mode instead: -->
<key>UIBackgroundModes</key>
<array>
  <string>audio</string>
</array>
2.2 Implementing Basic Recording
import React, { useState, useEffect } from 'react';
import { View, Button, Text } from 'react-native';
import { Audio } from 'expo-av';
const AudioRecorder = () => {
const [recording, setRecording] = useState(null);
const [isRecording, setIsRecording] = useState(false);
const [recordings, setRecordings] = useState([]);
useEffect(() => {
  // Request microphone permission
  (async () => {
    const { status } = await Audio.requestPermissionsAsync();
    if (status !== 'granted') {
      alert('Microphone permission is required to record audio');
    }
  })();
}, []);
const startRecording = async () => {
try {
await Audio.setAudioModeAsync({
allowsRecordingIOS: true,
playsInSilentModeIOS: true,
});
const { recording } = await Audio.Recording.createAsync(
Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY
);
setRecording(recording);
setIsRecording(true);
} catch (err) {
console.error('Failed to start recording:', err);
}
};
const stopRecording = async () => {
  if (!recording) return;
  setIsRecording(false);
  // stopAndUnloadAsync resolves to the final recording status; expo-av's
  // Recording has no getDurationMillis method
  const status = await recording.stopAndUnloadAsync();
  const uri = recording.getURI();
  setRecording(null);
  setRecordings([...recordings, {
    uri,
    duration: status.durationMillis,
    createdAt: new Date().toISOString()
  }]);
};
return (
<View style={{ padding: 20 }}>
<Button
  title={isRecording ? "Stop Recording" : "Start Recording"}
  onPress={isRecording ? stopRecording : startRecording}
/>
<Text style={{ marginTop: 20 }}>
  {isRecording ? 'Recording…' : 'Recording stopped'}
</Text>
{recordings.map((rec, index) => (
  <View key={index} style={{ marginTop: 10 }}>
    <Text>Recording {index + 1}: {rec.duration}ms</Text>
<AudioPlayer uri={rec.uri} />
</View>
))}
</View>
);
};
2.3 Advanced Recording Configuration and Optimization
// Custom recording options
const CUSTOM_RECORDING_OPTIONS = {
ios: {
extension: '.m4a',
outputFormat: Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_MPEG4AAC,
audioQuality: Audio.RECORDING_OPTION_IOS_AUDIO_QUALITY_MAX,
sampleRate: 44100,
numberOfChannels: 2,
bitRate: 128000,
},
android: {
extension: '.mp4',
outputFormat: Audio.RECORDING_OPTION_ANDROID_OUTPUT_FORMAT_MPEG_4,
audioEncoder: Audio.RECORDING_OPTION_ANDROID_AUDIO_ENCODER_AAC,
sampleRate: 44100,
numberOfChannels: 2,
bitRate: 128000,
},
};
// Recording quality presets for comparison
const RECORDING_QUALITY_PRESETS = {
low: { sampleRate: 22050, bitRate: 64000, channels: 1 },
medium: { sampleRate: 44100, bitRate: 128000, channels: 1 },
high: { sampleRate: 44100, bitRate: 256000, channels: 2 },
studio: { sampleRate: 48000, bitRate: 320000, channels: 2 },
};
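These presets trade quality against storage. As a rough sanity check, the encoded size of a recording is approximately bitRate / 8 bytes per second (container overhead ignored); a small helper makes that explicit:

```javascript
// Estimate the encoded size of a recording from its bit rate.
// bitRate is in bits per second, so bytes ~= (bitRate / 8) * seconds.
function estimateSizeBytes(bitRate, durationSeconds) {
  return Math.round((bitRate / 8) * durationSeconds);
}

// One minute at the "medium" preset (128 kbps):
const mediumMinute = estimateSizeBytes(128000, 60); // 960000 bytes, ~0.92 MiB
```

This is handy for warning users before long recordings at the "studio" preset, which is 2.5x larger per minute than "medium".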
2.4 Recording in Practice: A Voice Memo Component
// Recording component with a live waveform display
// (assumes useState, useEffect, and useRef are imported from 'react')
const WaveformRecorder = () => {
  const [recording, setRecording] = useState(null);
  const [isRecording, setIsRecording] = useState(false);
  const [waveformData, setWaveformData] = useState([]);
  const waveformInterval = useRef(null);
  useEffect(() => {
    if (isRecording) {
      // Sample the input level every 100 ms. Metering requires
      // isMeteringEnabled: true in the recording options; status.metering
      // is reported in dBFS (negative, 0 = full scale)
      waveformInterval.current = setInterval(async () => {
        if (recording) {
          const status = await recording.getStatusAsync();
          const amplitude = status.metering != null
            ? Math.min(1, Math.max(0, 1 + status.metering / 60))
            : 0;
          setWaveformData(prev => [...prev, { amplitude }].slice(-100)); // keep the last 100 points
        }
      }, 100);
    } else if (waveformInterval.current) {
      clearInterval(waveformInterval.current);
    }
    return () => {
      if (waveformInterval.current) {
        clearInterval(waveformInterval.current);
      }
    };
  }, [isRecording, recording]);
// Render the waveform
const renderWaveform = () => {
return (
<View style={{ height: 100, flexDirection: 'row', alignItems: 'center' }}>
{waveformData.map((point, index) => (
<View
key={index}
style={{
width: 2,
height: Math.max(5, point.amplitude * 100),
backgroundColor: '#4CAF50',
marginHorizontal: 1
}}
/>
))}
</View>
);
};
return (
<View>
{renderWaveform()}
{/* Recording control buttons */}
</View>
);
};
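A note on the amplitude values: where platform metering is available it is usually reported in dBFS (a negative number, with 0 meaning full scale) rather than a linear 0..1 amplitude, so bar heights need a conversion. A minimal sketch, assuming a -60 dB noise floor:

```javascript
// Map a metering value in dBFS (0 = full scale, more negative = quieter)
// to a 0..1 amplitude, clamping everything below the noise floor to 0.
function dbfsToAmplitude(dbfs, floorDb = -60) {
  if (dbfs <= floorDb) return 0;
  if (dbfs >= 0) return 1;
  return (dbfs - floorDb) / -floorDb; // linear interpolation over the range
}
```

The floor is a tuning knob: a higher floor (e.g. -40 dB) makes quiet rooms render as silence, while a lower floor shows more background noise in the waveform.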
3. Audio Playback and Control
3.1 A Basic Audio Player
// Note: Slider comes from the @react-native-community/slider package
import Slider from '@react-native-community/slider';
const AudioPlayer = ({ uri }) => {
const [sound, setSound] = useState(null);
const [isPlaying, setIsPlaying] = useState(false);
const [position, setPosition] = useState(0);
const [duration, setDuration] = useState(0);
const positionInterval = useRef(null);
useEffect(() => {
  let loadedSound = null;
  // Load the audio file
  const loadAudio = async () => {
    const { sound: s } = await Audio.Sound.createAsync(
      { uri },
      { shouldPlay: false },
      (status) => {
        if (status.isLoaded) {
          setDuration(status.durationMillis ?? 0);
          setPosition(status.positionMillis ?? 0);
        }
      }
    );
    loadedSound = s;
    setSound(s);
  };
  loadAudio();
  return () => {
    // Unload via the local variable: the `sound` state captured by this
    // cleanup closure would still be the stale value from this render
    if (loadedSound) {
      loadedSound.unloadAsync();
    }
    if (positionInterval.current) {
      clearInterval(positionInterval.current);
    }
  };
}, [uri]);
const togglePlayback = async () => {
  if (!sound) return;
  if (isPlaying) {
    await sound.pauseAsync();
    clearInterval(positionInterval.current);
  } else {
    await sound.playAsync();
    // Poll the playback position once per second
    positionInterval.current = setInterval(async () => {
      const status = await sound.getStatusAsync();
      setPosition(status.positionMillis);
    }, 1000);
  }
  setIsPlaying(!isPlaying);
};
const seek = async (value) => {
  if (!sound || !duration) return;
  const newPosition = value * duration;
  await sound.setPositionAsync(newPosition);
  setPosition(newPosition);
};
return (
<View style={{ marginTop: 10 }}>
<View style={{ flexDirection: 'row', alignItems: 'center' }}>
<Button title={isPlaying ? "Pause" : "Play"} onPress={togglePlayback} />
<Slider
style={{ flex: 1, marginHorizontal: 10 }}
value={duration ? position / duration : 0}
onValueChange={seek}
minimumValue={0}
maximumValue={1}
step={0.01}
/>
</View>
<View style={{ flexDirection: 'row', justifyContent: 'space-between' }}>
<Text>{formatTime(position)}</Text>
<Text>{formatTime(duration)}</Text>
</View>
</View>
);
};
// Helper: format milliseconds as m:ss
const formatTime = (millis) => {
const totalSeconds = Math.floor(millis / 1000);
const minutes = Math.floor(totalSeconds / 60);
const seconds = totalSeconds % 60;
return `${minutes}:${seconds < 10 ? '0' : ''}${seconds}`;
};
3.2 Advanced Playback: Background Audio and Audio Focus
// Configure background audio playback
const setupBackgroundAudio = async () => {
  try {
    await Audio.setAudioModeAsync({
      allowsRecordingIOS: false,
      interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX,
      playsInSilentModeIOS: true,
      interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DUCK_OTHERS,
      shouldDuckAndroid: true,
      staysActiveInBackground: true, // keep playing in the background
      playThroughEarpieceAndroid: false,
    });
  } catch (err) {
    console.error('Failed to configure audio mode:', err);
  }
};
// Audio focus / interruption handling (sketch). Note: expo-av does not expose
// a public JS interruption listener; Audio.addAudioInterruptionListener below
// is a hypothetical API, and interruption behavior is normally configured
// through setAudioModeAsync instead.
const useAudioFocus = (sound, setIsPlaying) => {
  useEffect(() => {
    if (!sound) return;
    const subscription = Audio.addAudioInterruptionListener(({ type, shouldPlay }) => {
      if (type === Audio.INTERRUPTION_BEGIN) {
        // Playback was interrupted (e.g. an incoming call)
        setIsPlaying(false);
      } else if (type === Audio.INTERRUPTION_END && shouldPlay) {
        // The interruption ended; resume playback
        setIsPlaying(true);
        sound.playAsync();
      }
    });
    return () => {
      subscription.remove();
    };
  }, [sound]);
};
3.3 Implementing an Audio Playlist
const AudioPlayerWithPlaylist = () => {
  const [currentTrackIndex, setCurrentTrackIndex] = useState(0);
  const [playlist, setPlaylist] = useState([
    { uri: 'track1.mp3', title: 'Track 1', artist: 'Artist A' },
    { uri: 'track2.mp3', title: 'Track 2', artist: 'Artist B' },
    { uri: 'track3.mp3', title: 'Track 3', artist: 'Artist C' },
  ]);
const [sound, setSound] = useState(null);
const [isPlaying, setIsPlaying] = useState(false);
// Load the current track
useEffect(() => {
  let activeSound = null;
  const loadCurrentTrack = async () => {
    if (sound) {
      await sound.unloadAsync();
    }
    const { sound: newSound } = await Audio.Sound.createAsync(
      { uri: playlist[currentTrackIndex].uri },
      { shouldPlay: isPlaying }
    );
    activeSound = newSound;
    setSound(newSound);
    // Advance to the next track when playback finishes
    newSound.setOnPlaybackStatusUpdate(status => {
      if (status.didJustFinish) {
        playNext();
      }
    });
  };
  loadCurrentTrack();
  return () => {
    // Unload via the local variable to avoid the stale `sound` closure
    if (activeSound) {
      activeSound.unloadAsync();
    }
  };
}, [currentTrackIndex, playlist]);
const playNext = () => {
setCurrentTrackIndex(prev =>
prev === playlist.length - 1 ? 0 : prev + 1
);
};
const playPrevious = () => {
setCurrentTrackIndex(prev =>
prev === 0 ? playlist.length - 1 : prev - 1
);
};
return (
<View>
<Text style={{ fontSize: 18, fontWeight: 'bold' }}>
{playlist[currentTrackIndex].title}
</Text>
<Text style={{ color: '#666' }}>
{playlist[currentTrackIndex].artist}
</Text>
<View style={{ flexDirection: 'row', justifyContent: 'center', marginTop: 20 }}>
  <Button title="Previous" onPress={playPrevious} />
  <Button
    title={isPlaying ? "Pause" : "Play"}
    onPress={() => {
      if (isPlaying) {
        sound.pauseAsync();
      } else {
        sound.playAsync();
      }
      setIsPlaying(!isPlaying);
    }}
  />
  <Button title="Next" onPress={playNext} />
</View>
</View>
);
};
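The wrap-around navigation in playNext and playPrevious is just modular arithmetic; pulling it out into pure functions makes the behavior easy to unit-test in isolation:

```javascript
// Circular playlist navigation, equivalent to the playNext/playPrevious
// logic above but expressed as pure functions of (index, length).
const nextIndex = (i, length) => (i + 1) % length;
const prevIndex = (i, length) => (i - 1 + length) % length;
```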
4. Audio Effects and Enhancement
4.1 Effect Basics: Volume, Pitch, and Speed
const AudioEffectsController = ({ sound }) => {
  const [volume, setVolume] = useState(1.0);
  const [pitch, setPitch] = useState(1.0); // kept for the UI; see note below
  const [rate, setRate] = useState(1.0);
  const applyEffects = async () => {
    if (!sound) return;
    await sound.setVolumeAsync(volume);
    // Note: expo-av has no setPitchAsync; independent pitch shifting requires
    // a native module. setRateAsync(rate, shouldCorrectPitch) changes playback
    // speed and can preserve the original pitch while doing so.
    await sound.setRateAsync(rate, true);
  };
return (
<View style={{ padding: 20 }}>
<View style={{ marginBottom: 20 }}>
  <Text>Volume: {volume.toFixed(1)}</Text>
  {/* expo-av clamps volume to the 0-1 range */}
  <Slider
    value={volume}
    onValueChange={setVolume}
    minimumValue={0}
    maximumValue={1.0}
    step={0.1}
  />
</View>
<View style={{ marginBottom: 20 }}>
  <Text>Pitch: {pitch.toFixed(1)}</Text>
  <Slider
    value={pitch}
    onValueChange={setPitch}
    minimumValue={0.5}
    maximumValue={2.0}
    step={0.1}
  />
</View>
<View style={{ marginBottom: 20 }}>
  <Text>Speed: {rate.toFixed(1)}</Text>
  <Slider
    value={rate}
    onValueChange={setRate}
    minimumValue={0.5}
    maximumValue={2.0}
    step={0.1}
  />
</View>
<Button title="Apply Effects" onPress={applyEffects} />
</View>
);
};
4.2 Advanced Effects: Reverb, Equalizer, and Filters
// Sketch using a hypothetical react-native-audio-effects library; the API
// below is illustrative, not a published package's actual interface
import AudioEffects from 'react-native-audio-effects';
const AdvancedAudioEffects = ({ soundUri }) => {
const [reverbType, setReverbType] = useState('smallRoom');
const [equalizer, setEqualizer] = useState({
bass: 0,
mid: 0,
treble: 0,
});
useEffect(() => {
// Initialize the effects engine
AudioEffects.init(soundUri);
return () => {
AudioEffects.release();
};
}, [soundUri]);
const applyReverb = async (type) => {
  setReverbType(type);
  switch (type) {
    case 'smallRoom':
      await AudioEffects.setReverb(0.2, 0.3, 0.5); // room size, damping, wet/dry mix
      break;
    case 'largeHall':
      await AudioEffects.setReverb(0.8, 0.2, 0.7);
      break;
    case 'echo':
      await AudioEffects.setEcho(0.6, 0.8, 0.5); // delay, feedback, wet/dry mix
      break;
    case 'none':
      await AudioEffects.disableReverb();
      break;
  }
};
const adjustEqualizer = async (band, value) => {
  const newEQ = { ...equalizer, [band]: value };
  setEqualizer(newEQ);
  // Apply the equalizer settings (sliders map -1..1 to -12 dB..+12 dB)
  await AudioEffects.setEqualizer(31, newEQ.bass * 12); // 31 Hz band
  await AudioEffects.setEqualizer(1000, newEQ.mid * 12); // 1 kHz band
  await AudioEffects.setEqualizer(16000, newEQ.treble * 12); // 16 kHz band
};
return (
<View>
<Text style={{ fontSize: 16, fontWeight: 'bold', marginVertical: 10 }}>
Reverb Effects
</Text>
<View style={{ flexDirection: 'row', flexWrap: 'wrap' }}>
{['none', 'smallRoom', 'largeHall', 'echo'].map(type => (
<Button
key={type}
title={type}
onPress={() => applyReverb(type)}
style={{
margin: 5,
backgroundColor: reverbType === type ? '#4CAF50' : '#ccc'
}}
/>
))}
</View>
<Text style={{ fontSize: 16, fontWeight: 'bold', marginVertical: 10, marginTop: 20 }}>
Equalizer
</Text>
{Object.entries(equalizer).map(([band, value]) => (
<View key={band} style={{ marginBottom: 10 }}>
<Text>{band === 'bass' ? 'Bass' : band === 'mid' ? 'Mid' : 'Treble'}: {value.toFixed(1)}</Text>
<Slider
value={value}
onValueChange={(v) => adjustEqualizer(band, v)}
minimumValue={-1}
maximumValue={1}
step={0.1}
/>
</View>
))}
</View>
);
};
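The component above maps a -1..1 slider position to a band gain of -12 to +12 dB. If a lower-level engine expects a linear gain factor rather than decibels, the standard conversion is gain = 10^(dB/20). Both mappings as pure helpers (the 12 dB range mirrors the example above):

```javascript
// Convert a -1..1 slider position into a band gain in dB (clamped).
function sliderToDb(slider, rangeDb = 12) {
  return Math.max(-1, Math.min(1, slider)) * rangeDb;
}

// Convert a dB gain into the linear factor an engine multiplies samples by.
function dbToLinearGain(db) {
  return Math.pow(10, db / 20);
}
```

For example, a slider at +0.5 gives +6 dB, which is roughly a 2x linear gain; 0 dB is exactly 1.0 (no change).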
4.3 Audio Visualization
import React, { useRef, useEffect, useState } from 'react';
import { View, StyleSheet } from 'react-native';
import { Audio } from 'expo-av';
import Svg, { Rect } from 'react-native-svg';
const AudioVisualizer = ({ sound }) => {
const svgRef = useRef(null);
const animationFrame = useRef(null);
const [audioData, setAudioData] = useState(new Uint8Array(128).fill(0));
useEffect(() => {
if (!sound) return;
const setupAnalyzer = async () => {
  const status = await sound.getStatusAsync();
  if (status.isLoaded) {
    // Create an audio analyser. Note: Audio.createAnalyserAsync is a
    // hypothetical API; expo-av has no built-in analyser, so a real app
    // would use a native module (or the Web Audio API on web)
    const analyzer = await Audio.createAnalyserAsync(sound);
    analyzer.fftSize = 256; // FFT size determines the number of frequency bins
const updateVisualization = () => {
const bufferLength = analyzer.frequencyBinCount;
const dataArray = new Uint8Array(bufferLength);
analyzer.getByteFrequencyData(dataArray);
setAudioData(dataArray);
animationFrame.current = requestAnimationFrame(updateVisualization);
};
updateVisualization();
return () => {
cancelAnimationFrame(animationFrame.current);
analyzer.dispose();
};
}
};
// setupAnalyzer is async, so resolve the promise before invoking its cleanup
const cleanupPromise = setupAnalyzer();
return () => {
  cleanupPromise.then(cleanup => cleanup && cleanup());
};
}, [sound]);
return (
<Svg
ref={svgRef}
width="100%"
height="100"
style={styles.visualizer}
>
{audioData.map((value, index) => (
<Rect
key={index}
x={index * (300 / audioData.length)}
y={100 - value / 2}
width={300 / audioData.length - 1}
height={value / 2}
fill="#4CAF50"
/>
))}
</Svg>
);
};
const styles = StyleSheet.create({
visualizer: {
backgroundColor: '#f5f5f5',
borderRadius: 5,
marginTop: 10,
},
});
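The <Rect> geometry can be factored into a pure function so it is testable without rendering. Here the byte values (0..255) are normalized so that full scale exactly fills the view height, which also fixes the original value / 2 expression overflowing a 100-px-high view for values above 200:

```javascript
// Compute bar rectangles for one frame of frequency data (0..255 per bin).
// Bars are laid out left to right and grow upward from the baseline.
function computeBars(data, width = 300, height = 100) {
  const barWidth = width / data.length;
  return Array.from(data, (value, index) => {
    const barHeight = (value / 255) * height; // normalize to the view height
    return {
      x: index * barWidth,
      y: height - barHeight,
      width: Math.max(0, barWidth - 1),
      height: barHeight,
    };
  });
}
```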
4.4 Audio Mixing and Multi-Track Recording
// Multi-track mixer sketch. Audio.Mixer and Audio.createAudioTrackAsync below
// are hypothetical APIs: expo-av ships no mixer, so a real implementation
// would wrap a native module (e.g. AVAudioEngine on iOS, AudioTrack on Android).
const AudioMixer = () => {
const [tracks, setTracks] = useState([]);
const [masterVolume, setMasterVolume] = useState(1.0);
const mixer = useRef(null);
useEffect(() => {
// Initialize the audio mixer
mixer.current = new Audio.Mixer();
return () => {
mixer.current.release();
};
}, []);
const addTrack = async (uri, name) => {
const track = await Audio.createAudioTrackAsync({ uri });
mixer.current.addTrack(track);
setTracks([...tracks, {
id: Date.now(),
name,
track,
volume: 1.0,
muted: false
}]);
};
const removeTrack = (id) => {
const trackToRemove = tracks.find(t => t.id === id);
if (trackToRemove) {
mixer.current.removeTrack(trackToRemove.track);
trackToRemove.track.release();
setTracks(tracks.filter(t => t.id !== id));
}
};
const adjustTrackVolume = async (id, volume) => {
const track = tracks.find(t => t.id === id);
if (track) {
await track.track.setVolumeAsync(volume);
setTracks(tracks.map(t =>
t.id === id ? { ...t, volume } : t
));
}
};
const toggleMuteTrack = async (id) => {
const track = tracks.find(t => t.id === id);
if (track) {
const newMuted = !track.muted;
await track.track.setVolumeAsync(newMuted ? 0 : track.volume);
setTracks(tracks.map(t =>
t.id === id ? { ...t, muted: newMuted } : t
));
}
};
const startMixing = async () => {
await mixer.current.setVolumeAsync(masterVolume);
await mixer.current.playAsync();
};
const stopMixing = async () => {
await mixer.current.pauseAsync();
};
return (
<View>
<View style={{ marginBottom: 20 }}>
<Button title="Add Backing Track" onPress={() =>
  addTrack('background.mp3', 'Backing track')}
/>
<Button title="Add Vocals" onPress={() =>
  addTrack('vocals.mp3', 'Vocals')}
/>
<Button title="Add Guitar" onPress={() =>
  addTrack('guitar.mp3', 'Guitar')}
/>
</View>
<View style={{ marginBottom: 20 }}>
<Text>Master volume: {masterVolume.toFixed(1)}</Text>
<Slider
value={masterVolume}
onValueChange={setMasterVolume}
minimumValue={0}
maximumValue={1}
step={0.1}
/>
</View>
{tracks.map(track => (
<View key={track.id} style={{ marginBottom: 15, padding: 10, backgroundColor: '#f5f5f5' }}>
<View style={{ flexDirection: 'row', justifyContent: 'space-between', alignItems: 'center' }}>
<Text>{track.name}</Text>
<Button
title={track.muted ? "Muted" : "Mute"}
onPress={() => toggleMuteTrack(track.id)}
color={track.muted ? 'red' : 'gray'}
/>
<Button
title="Remove"
onPress={() => removeTrack(track.id)}
color="red"
/>
</View>
<View style={{ marginTop: 10 }}>
<Text>Volume: {track.volume.toFixed(1)}</Text>
<Slider
value={track.volume}
onValueChange={(v) => adjustTrackVolume(track.id, v)}
minimumValue={0}
maximumValue={2}
step={0.1}
/>
</View>
</View>
))}
<View style={{ flexDirection: 'row', justifyContent: 'center' }}>
<Button title="Start Mix" onPress={startMixing} />
<Button title="Stop Mix" onPress={stopMixing} />
</View>
</View>
);
};
5. Cross-Platform Compatibility and Performance Optimization
5.1 Handling Platform-Specific Code
// Use the Platform API to branch on platform differences
import { Platform } from 'react-native';
const AudioRecorder = () => {
const startRecording = async () => {
try {
if (Platform.OS === 'ios') {
await Audio.setAudioModeAsync({
allowsRecordingIOS: true,
playsInSilentModeIOS: true,
interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX,
});
} else {
  await Audio.setAudioModeAsync({
    // There is no allowsRecordingAndroid option; on Android, recording is
    // gated by the RECORD_AUDIO permission rather than the audio mode
    interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
    playThroughEarpieceAndroid: false,
  });
}
// Pick platform-specific recording options
const recordingOptions = Platform.select({
ios: {
extension: '.m4a',
outputFormat: Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_MPEG4AAC,
audioQuality: Audio.RECORDING_OPTION_IOS_AUDIO_QUALITY_HIGH,
sampleRate: 44100,
numberOfChannels: 2,
bitRate: 128000,
},
android: {
  extension: '.m4a', // match the AAC/MPEG-4 encoder; '.mp3' would mislabel the container
  outputFormat: Audio.RECORDING_OPTION_ANDROID_OUTPUT_FORMAT_MPEG_4,
  audioEncoder: Audio.RECORDING_OPTION_ANDROID_AUDIO_ENCODER_AAC,
  sampleRate: 44100,
  numberOfChannels: 2,
  bitRate: 128000,
},
});
const { recording } = await Audio.Recording.createAsync(recordingOptions);
setRecording(recording);
} catch (err) {
console.error('Failed to initialize recording:', err);
}
};
// Rest of the component...
};
5.2 Performance Optimization Strategies
// Audio performance hooks
const useAudioPerformance = () => {
  // 1. Audio cache management
const audioCache = useRef(new Map());
const getCachedAudio = async (uri) => {
  if (audioCache.current.has(uri)) {
    // Refresh recency on a hit so the eviction below is true LRU
    const cached = audioCache.current.get(uri);
    audioCache.current.delete(uri);
    audioCache.current.set(uri, cached);
    return cached;
  }
  const { sound } = await Audio.Sound.createAsync({ uri });
  audioCache.current.set(uri, sound);
  // Cap the cache size; a Map iterates in insertion order, so the first
  // key is the least recently used entry
  if (audioCache.current.size > 10) {
    const oldestKey = audioCache.current.keys().next().value;
    const oldestSound = audioCache.current.get(oldestKey);
    await oldestSound.unloadAsync();
    audioCache.current.delete(oldestKey);
  }
  return sound;
};
// 2. Audio session management
const activateAudioSession = useCallback(async () => {
await Audio.setAudioModeAsync({
allowsRecordingIOS: false,
playsInSilentModeIOS: true,
interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX,
interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
shouldDuckAndroid: false,
staysActiveInBackground: true,
playThroughEarpieceAndroid: false,
});
}, []);
// 3. Battery optimization: pause audio when the app goes inactive
// (assumes currentSound / setIsPlaying come from surrounding state and
// AppState is imported from 'react-native')
useEffect(() => {
  const subscription = AppState.addEventListener('change', nextAppState => {
    if (nextAppState.match(/inactive|background/) && currentSound) {
      currentSound.pauseAsync();
      setIsPlaying(false);
    }
  });
  return () => {
    subscription.remove();
  };
}, [currentSound]);
return { getCachedAudio, activateAudioSession };
};
5.3 Solutions to Common Problems
| Problem | iOS solution | Android solution |
|---|---|---|
| Audio interruptions | Use the INTERRUPTION_MODE_IOS_DO_NOT_MIX mode | Set interruptionModeAndroid to DO_NOT_MIX |
| Background playback | Set staysActiveInBackground: true and use the playback audio session category | Add the FOREGROUND_SERVICE permission and run a foreground service |
| Low output volume | Check the playThroughEarpieceAndroid setting | Make sure output is not routed through the earpiece |
| Audio latency | Tune the progressUpdateIntervalMillis option of Audio.Sound.createAsync | Enable hardware-accelerated audio |
| Memory leaks | Call unloadAsync and remove listeners on teardown | Release resources when components unmount |
6. Case Study: A Voice Memo App
6.1 Application Architecture
6.2 Full Implementation
// App.js - main application component
import React, { useState, useEffect, useRef } from 'react';
import { View, Text, Button, FlatList, StyleSheet, AppState } from 'react-native';
import AudioService from './services/AudioService';
import AudioRepository from './services/AudioRepository';
import RecordingList from './components/RecordingList';
import RecordingControls from './components/RecordingControls';
import PlayerControls from './components/PlayerControls';
import AudioVisualizer from './components/AudioVisualizer';
import PermissionsManager from './utils/PermissionsManager';
const App = () => {
const [isRecording, setIsRecording] = useState(false);
const [isPlaying, setIsPlaying] = useState(false);
const [currentRecording, setCurrentRecording] = useState(null);
const [recordings, setRecordings] = useState([]);
const [appState, setAppState] = useState(AppState.currentState);
const audioService = useRef(new AudioService());
const audioRepo = useRef(new AudioRepository());
useEffect(() => {
  // Check permissions
  const checkPermissions = async () => {
    const hasPermissions = await PermissionsManager.requestAudioPermissions();
    if (!hasPermissions) {
      alert('Microphone and storage permissions are required to use this app');
    }
  };
  // Load the saved recordings
  const loadRecordings = async () => {
const records = await audioRepo.current.getAllRecordings();
setRecordings(records);
};
checkPermissions();
loadRecordings();
// Listen for app state changes
const subscription = AppState.addEventListener('change', nextAppState => {
  setAppState(nextAppState);
  // Stop recording when the app goes to the background
  // (caution: isRecording is captured from the first render in this
  // one-time effect; a real app should read it via a ref)
  if (nextAppState.match(/inactive|background/) && isRecording) {
    stopRecording();
  }
});
return () => {
subscription.remove();
};
}, []);
const startRecording = async () => {
try {
await audioService.current.startRecording();
setIsRecording(true);
} catch (err) {
  console.error('Failed to start recording:', err);
  alert('Could not start recording: ' + err.message);
}
};
const stopRecording = async () => {
  try {
    setIsRecording(false);
    const recording = await audioService.current.stopRecording();
    // Persist the recording metadata
    const savedRecording = await audioRepo.current.saveRecording({
      uri: recording.uri,
      duration: recording.duration,
      createdAt: new Date().toISOString(),
      title: `Recording ${new Date().toLocaleString()}`
    });
    setRecordings([savedRecording, ...recordings]);
  } catch (err) {
    console.error('Failed to stop recording:', err);
  }
};
const playRecording = async (recording) => {
try {
setCurrentRecording(recording);
await audioService.current.playRecording(recording.uri);
setIsPlaying(true);
} catch (err) {
  console.error('Failed to play recording:', err);
}
};
const pausePlayback = async () => {
await audioService.current.pausePlayback();
setIsPlaying(false);
};
const deleteRecording = async (id) => {
  try {
    await audioRepo.current.deleteRecording(id);
    setRecordings(recordings.filter(r => r.id !== id));
    // If the deleted recording is the one currently playing, stop playback
    if (currentRecording && currentRecording.id === id) {
      await audioService.current.stopPlayback();
      setIsPlaying(false);
      setCurrentRecording(null);
    }
  } catch (err) {
    console.error('Failed to delete recording:', err);
  }
};
return (
<View style={styles.container}>
<Text style={styles.title}>Voice Memos</Text>
<RecordingControls
isRecording={isRecording}
onStartRecording={startRecording}
onStopRecording={stopRecording}
/>
{isRecording && <AudioVisualizer sound={audioService.current.getRecorder()} />}
{currentRecording && (
<View style={styles.nowPlaying}>
<Text style={styles.nowPlayingText}>
  Now playing: {currentRecording.title}
</Text>
<Button
title={isPlaying ? "Pause" : "Resume"}
onPress={isPlaying ? pausePlayback : () => playRecording(currentRecording)}
/>
</View>
)}
<View style={styles.listHeader}>
<Text style={styles.sectionTitle}>My Recordings</Text>
</View>
<RecordingList
recordings={recordings}
onPlayRecording={playRecording}
onDeleteRecording={deleteRecording}
/>
</View>
);
};
const styles = StyleSheet.create({
container: {
flex: 1,
padding: 20,
backgroundColor: '#fff',
},
title: {
fontSize: 24,
fontWeight: 'bold',
marginBottom: 30,
textAlign: 'center',
},
listHeader: {
marginTop: 30,
marginBottom: 10,
},
sectionTitle: {
fontSize: 18,
fontWeight: 'bold',
color: '#333',
},
nowPlaying: {
marginTop: 20,
padding: 15,
backgroundColor: '#f0f0f0',
borderRadius: 8,
alignItems: 'center',
},
nowPlayingText: {
marginBottom: 10,
fontSize: 16,
},
});
7. Summary and Further Learning
7.1 Key Takeaways
This article covered the core techniques of React Native audio processing:
- Recording: from basic capture to advanced configuration, supporting multiple quality levels and formats
- Playback: a complete player with seek control, playlists, and background playback
- Effects: volume, pitch/rate, reverb, and equalizer processing
- Visualization: waveform and spectrum displays driven by frequency analysis
- Performance: caching strategies, battery optimization, and resource management
7.2 A Path for Further Learning
7.3 Recommended Resources
- Official documentation:
  - React Native's audio-related API docs
  - The complete Expo AV module documentation
- Third-party libraries:
  - react-native-track-player: a full-featured audio player
  - react-native-audio-recorder-player: a dedicated recording library
  - react-native-sound: a lightweight playback library
- Development tools:
  - Audacity: audio editing and analysis
  - Adobe Audition: a professional audio workstation
  - FFmpeg: audio format conversion and processing
7.4 Ideas for Extending the Project
- Speech-to-text: integrate a speech-recognition API to transcribe recordings
- Cloud sync: store recordings in the cloud and sync them across devices
- Audio editing: add trimming, merging, and fade in/out
- Social sharing: share recordings to social platforms or via messaging
- Voice-memo upgrades: tags, categories, and search
With the techniques covered here, you are equipped to build production-grade React Native audio apps, from simple recorders to full music players.
Good luck on your audio-development journey! If you have questions or suggestions, leave a comment. Consider bookmarking this article for later reference, and follow us for more React Native tips and best practices.
Disclosure: parts of this article were generated with AI assistance (AIGC) and are provided for reference only.



