How to Convert Audio from M4A to PCM

This article shows how to use the AVFoundation APIs added in iOS 4.1 to read audio from the iPod library and convert it to PCM. The AVAssetReader and AVAssetWriter classes greatly simplify reading and converting the audio file.


Reposted from: http://www.subfurther.com/blog/2010/12/13/from-ipod-library-to-pcm-samples-in-far-fewer-steps-than-were-previously-necessary/

In a July blog entry, I showed a gruesome technique for getting raw PCM samples of audio from your iPod library, by means of an easily-overlooked metadata attribute in the Media Library framework, along with the export functionality of AV Foundation. The AV Foundation stuff was the gruesome part: with no direct means for sample-level access to the song “asset”, it required an intermediate export to .m4a, which was a lossy re-encode if the source was of a different format (like MP3), and then a subsequent conversion to PCM with Core Audio.

Please feel free to forget all about that approach… except for the Core Media timescale stuff, which you’ll surely see again before too long.

iOS 4.1 added a number of new classes to AV Foundation (indeed, these were among the most significant 4.1 API diffs) to provide an API for sample-level access to media. The essential classes are AVAssetReader and AVAssetWriter. Using these, we can dramatically simplify and improve the iPod converter.

I have an example project, VTM_AViPodReader.zip (70 KB) that was originally meant to be part of my session at the Voices That Matter iPhone conference in Philadelphia, but didn’t come together in time. I’m going to skip the UI stuff in this blog, and leave you with a screenshot and a simple description: tap “choose song”, pick something from your iPod library, tap “done”, and tap “Convert”.

Screenshot of VTM_AViPodReader

To do the conversion, we’ll use an AVAssetReader to read from the original song file, and an AVAssetWriter to perform the conversion and write to a new file in our application’s Documents directory.

Start, as in the previous example, by using the valueForProperty:MPMediaItemPropertyAssetURL attribute to get an NSURL representing the song in a format compatible with AV Foundation.



-(IBAction) convertTapped: (id) sender {
	// set up an AVAssetReader to read from the iPod Library
	NSURL *assetURL = [song valueForProperty:MPMediaItemPropertyAssetURL];
	AVURLAsset *songAsset =
		[AVURLAsset URLAssetWithURL:assetURL options:nil];

	NSError *assetError = nil;
	AVAssetReader *assetReader =
		[[AVAssetReader assetReaderWithAsset:songAsset
			   error:&assetError]
		  retain];
	if (assetError) {
		NSLog (@"error: %@", assetError);
		return;
	}

Sorry about the dangling retains. I’ll explain those in a little bit (and yes, you could use the alloc/init equivalents… I’m making a point here…). Anyways, it’s simple enough to take an AVAsset and make an AVAssetReader from it.

But what do you do with that? Contrary to what you might think, you don’t just read from it directly. Instead, you create another object, an AVAssetReaderOutput, which is able to produce samples from an AVAssetReader.


AVAssetReaderOutput *assetReaderOutput =
	[[AVAssetReaderAudioMixOutput
	  assetReaderAudioMixOutputWithAudioTracks:songAsset.tracks
				audioSettings: nil]
	retain];
if (! [assetReader canAddOutput: assetReaderOutput]) {
	NSLog (@"can't add reader output... die!");
	return;
}
[assetReader addOutput: assetReaderOutput];

AVAssetReaderOutput is abstract. Since we’re only interested in the audio from this asset, an AVAssetReaderAudioMixOutput will suit us fine. For reading samples from an audio/video file, like a QuickTime movie, we’d want AVAssetReaderVideoCompositionOutput instead. An important point here is that we set audioSettings to nil to get a generic PCM output. The alternative is to provide an NSDictionary specifying the format you want to receive; I ended up doing that later in the output step, so the default PCM here will be fine.
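
For comparison, here’s roughly what a non-nil audioSettings could look like, built from the same AVAudioSettings.h keys we’ll use for the writer below. This is only a sketch of the alternative path (the example itself sticks with nil), and reader settings must describe linear PCM:

NSDictionary *readerSettings =
	[NSDictionary dictionaryWithObjectsAndKeys:
		[NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
		[NSNumber numberWithFloat:44100.0], AVSampleRateKey,
		[NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
		[NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
		[NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
		[NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
		nil];
// pass readerSettings instead of nil to
// assetReaderAudioMixOutputWithAudioTracks:audioSettings: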

That’s all we need to worry about for now for reading from the song file. Now let’s start dealing with writing the converted file. We start by setting up an output file… the only important thing to know here is that AV Foundation won’t overwrite a file for you, so you should delete the exported.caf file if it already exists.


NSArray *dirs = NSSearchPathForDirectoriesInDomains
				(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectoryPath = [dirs objectAtIndex:0];
NSString *exportPath = [[documentsDirectoryPath
				 stringByAppendingPathComponent:EXPORT_NAME]
				retain];
if ([[NSFileManager defaultManager] fileExistsAtPath:exportPath]) {
	[[NSFileManager defaultManager] removeItemAtPath:exportPath
		error:nil];
}
NSURL *exportURL = [NSURL fileURLWithPath:exportPath];

Yeah, there’s another spurious retain here. I’ll explain later. For now, let’s take exportURL and create the AVAssetWriter:


AVAssetWriter *assetWriter =
	[[AVAssetWriter assetWriterWithURL:exportURL
		  fileType:AVFileTypeCoreAudioFormat
			 error:&assetError]
	  retain];
if (assetError) {
	NSLog (@"error: %@", assetError);
	return;
}

OK, no sweat there, but the AVAssetWriter isn’t really the important part. Just as the reader is paired with “reader output” objects, so too is the writer connected to “writer input” objects, which is what we’ll be providing samples to, in order to write them to the filesystem.

To create the AVAssetWriterInput, we provide an NSDictionary describing the format and contents we want to create… this is analogous to the step we skipped earlier to specify the format we receive from the AVAssetReaderOutput. The dictionary keys are defined in AVAudioSettings.h and AVVideoSettings.h. You may find you need to look in these header files to find the value types to provide for these keys, and in some cases, they’ll point you to the Core Audio header files. Trial and error led me to ultimately specify all of the fields that would be encountered in an AudioStreamBasicDescription, along with an AudioChannelLayout structure, which needs to be wrapped in an NSData in order to be added to an NSDictionary.



AudioChannelLayout channelLayout;
memset(&channelLayout, 0, sizeof(AudioChannelLayout));
channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
NSDictionary *outputSettings =
[NSDictionary dictionaryWithObjectsAndKeys:
	[NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
	[NSNumber numberWithFloat:44100.0], AVSampleRateKey,
	[NSNumber numberWithInt:2], AVNumberOfChannelsKey,
	[NSData dataWithBytes:&channelLayout length:sizeof(AudioChannelLayout)],
		AVChannelLayoutKey,
	[NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
	[NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
	[NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
	[NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
	nil];

With this dictionary describing 44.1 KHz, stereo, 16-bit, non-interleaved, little-endian integer PCM, we can create an AVAssetWriterInput to encode and write samples in this format.


AVAssetWriterInput *assetWriterInput =
	[[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
				outputSettings:outputSettings]
	retain];
if ([assetWriter canAddInput:assetWriterInput]) {
	[assetWriter addInput:assetWriterInput];
} else {
	NSLog (@"can't add asset writer input... die!");
	return;
}
assetWriterInput.expectsMediaDataInRealTime = NO;

Notice that we’ve set the property assetWriterInput.expectsMediaDataInRealTime to NO. This will allow our transcode to run as fast as possible; of course, you’d set this to YES if you were capturing or generating samples in real-time.

Now that our reader and writer are ready, we signal that we’re ready to start moving samples around:


[assetWriter startWriting];
[assetReader startReading];
AVAssetTrack *soundTrack = [songAsset.tracks objectAtIndex:0];
CMTime startTime = CMTimeMake (0, soundTrack.naturalTimeScale);
[assetWriter startSessionAtSourceTime: startTime];

These calls will allow us to start reading from the reader and writing to the writer… but just how do we do that? The key is the AVAssetReaderOutput method copyNextSampleBuffer. This call produces a Core Media CMSampleBufferRef, which is what we need to provide to the AVAssetWriterInput‘s appendSampleBuffer: method.

But this is where it starts getting tricky. We can’t just drop into a while loop and start copying buffers over. We have to be explicitly signaled that the writer is able to accept input. We do this by providing a block to the asset writer input’s requestMediaDataWhenReadyOnQueue:usingBlock: method. Once we do this, our code will continue on, while the block will be called asynchronously by Grand Central Dispatch periodically. This explains the earlier retains: autoreleased variables created here in convertTapped: will soon be released, while we need them to still be around when the block is executed. So we need to take care that stuff we need is available inside the block: objects need to not be released, and local primitives need the __block modifier to get into the block.


__block UInt64 convertedByteCount = 0;
dispatch_queue_t mediaInputQueue =
	dispatch_queue_create("mediaInputQueue", NULL);
[assetWriterInput requestMediaDataWhenReadyOnQueue:mediaInputQueue
										usingBlock: ^
 {

The block will be called repeatedly by GCD, but we still need to make sure that the writer input is able to accept new samples.


while (assetWriterInput.readyForMoreMediaData) {
	CMSampleBufferRef nextBuffer =
		[assetReaderOutput copyNextSampleBuffer];
	if (nextBuffer) {
		// append buffer
		[assetWriterInput appendSampleBuffer: nextBuffer];
		// update ui
		convertedByteCount +=
			CMSampleBufferGetTotalSampleSize (nextBuffer);
		// copyNextSampleBuffer follows the Core Foundation
		// "Create" rule, so release the buffer or the loop leaks
		CFRelease (nextBuffer);
		NSNumber *convertedByteCountNumber =
			[NSNumber numberWithUnsignedLongLong:convertedByteCount];
		[self performSelectorOnMainThread:@selector(updateSizeLabel:)
			withObject:convertedByteCountNumber
			waitUntilDone:NO];

What’s happening here is that while the writer input can accept more samples, we try to get a sample from the reader output. If we get one, appending it to the writer input is a one-line call. Updating the UI is another matter: since GCD has us running on an arbitrary thread, we have to use performSelectorOnMainThread for any updates to the UI, such as updating a label with the current total byte-count. We would also have to call out to the main thread to update the progress bar, currently unimplemented because I don’t have a good way to do it yet.
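
If you wanted to take a crack at that progress bar, one plausible approach (hypothetical, not in the sample project) is to compare each buffer’s presentation timestamp against the asset’s duration inside the copy loop, before releasing the buffer. The updateProgress: selector here is an assumed method you’d add to set a UIProgressView:

CMTime presTime =
	CMSampleBufferGetPresentationTimeStamp (nextBuffer);
float progress = CMTimeGetSeconds (presTime) /
	CMTimeGetSeconds (songAsset.duration);
[self performSelectorOnMainThread:@selector(updateProgress:)
	withObject:[NSNumber numberWithFloat:progress]
	waitUntilDone:NO];

Note that songAsset, like the other locals used in the block, would need to be retained so it survives until the block runs.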

If the writer is ever unable to accept new samples, we fall out of the while and the block, though GCD will continue to re-run the block until we explicitly stop the writer.

How do we know when to do that? When we don’t get a sample from copyNextSampleBuffer, which means we’ve read all the data from the reader.


} else {
	// done!
	[assetWriterInput markAsFinished];
	[assetWriter finishWriting];
	[assetReader cancelReading];
	NSDictionary *outputFileAttributes =
		[[NSFileManager defaultManager]
			  attributesOfItemAtPath:exportPath
			  error:nil];
	NSLog (@"done. file size is %llu",
		    [outputFileAttributes fileSize]);
	NSNumber *doneFileSize = [NSNumber numberWithUnsignedLongLong:
			[outputFileAttributes fileSize]];
	[self performSelectorOnMainThread:@selector(updateCompletedSizeLabel:)
			withObject:doneFileSize
			waitUntilDone:NO];
	// release a lot of stuff
	[assetReader release];
	[assetReaderOutput release];
	[assetWriter release];
	[assetWriterInput release];
	[exportPath release];
	break;
}

Reaching the finish state requires us to tell the writer to finish up the file by sending finish messages to both the writer input and the writer itself. After we update the UI (again, with the song-and-dance required to do so on the main thread), we release all the objects we had to retain in order that they would be available to the block.
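
A forward-looking note for anyone adapting this code: on iOS 6 and later, finishWriting is deprecated in favor of an asynchronous variant. A minimal sketch of the replacement, assuming the same surrounding logic:

[assetWriterInput markAsFinished];
[assetWriter finishWritingWithCompletionHandler: ^{
	NSLog (@"finished writing");
}];

On iOS 4.1, the synchronous call shown above is the only option.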

Finally, for those of you copy-and-pasting at home, I think I owe you some close braces:


		}
	 }];
	NSLog (@"bottom of convertTapped:");
}

Once you’ve run this code on the device (it won’t work in the Simulator, which doesn’t have an iPod Library) and performed a conversion, you’ll have converted PCM in an exported.caf file in your app’s Documents directory. In theory, your app could do something interesting with this file, like representing it as a waveform, or running it through a Core Audio AUGraph to apply some interesting effects. Just to prove that we actually have performed the desired conversion, use the Xcode Organizer to open up the “iPod Reader” application and drag its “Application Data” to your Mac:

Accessing app's documents with Xcode Organizer

The exported folder will have a Documents folder, in which you should find exported.caf. Drag it over to QuickTime Player or any other application that can show you the format of the file you’ve produced:

QuickTime Player inspector showing PCM format of exported.caf file
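
If you’d rather verify the format in code than in an inspector, a quick check with Audio File Services (a sketch, not part of the sample project; assumes the AudioToolbox framework is linked) could open the file and log its AudioStreamBasicDescription:

AudioFileID audioFile;
OSStatus err = AudioFileOpenURL ((CFURLRef) exportURL,
			kAudioFileReadPermission, 0, &audioFile);
if (err == noErr) {
	AudioStreamBasicDescription asbd;
	UInt32 size = sizeof (asbd);
	AudioFileGetProperty (audioFile,
		kAudioFilePropertyDataFormat, &size, &asbd);
	NSLog (@"%.0f Hz, %u channels, %u bits per channel",
		asbd.mSampleRate,
		(unsigned int) asbd.mChannelsPerFrame,
		(unsigned int) asbd.mBitsPerChannel);
	AudioFileClose (audioFile);
}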

Hopefully this is going to work for you. It worked for most Amazon and iTunes albums I threw at it, but I found I had an iTunes Plus album, Ashtray Rock by the Joel Plaskett Emergency, whose songs throw an inexplicable error when opened, so I can’t presume to fully understand this API just yet:


2010-12-12 15:28:18.939 VTM_AViPodReader[7666:307] *** Terminating app
 due to uncaught exception 'NSInvalidArgumentException', reason:
 '*** -[AVAssetReader initWithAsset:error:] invalid parameter not
 satisfying: asset != ((void *)0)'

Still, the arrival of AVAssetReader and AVAssetWriter opens up a lot of new possibilities for audio and video apps on iOS. With the reader, you can inspect media samples, either in their original format or with a conversion to a form that suits your code. With the writer, you can supply samples that you receive by transcoding (as I’ve done here), by capture, or even samples you generate programmatically (such as a screen recorder class that just grabs the screen as often as possible and writes it to a movie file).
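
To make that last idea concrete, here’s a rough sketch (hypothetical, not part of this project) of the writer-side plumbing a screen recorder would need: a video AVAssetWriterInput fed through a pixel buffer adaptor.

NSDictionary *videoSettings =
	[NSDictionary dictionaryWithObjectsAndKeys:
		AVVideoCodecH264, AVVideoCodecKey,
		[NSNumber numberWithInt:320], AVVideoWidthKey,
		[NSNumber numberWithInt:480], AVVideoHeightKey,
		nil];
AVAssetWriterInput *videoWriterInput =
	[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
		outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor =
	[AVAssetWriterInputPixelBufferAdaptor
		assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput
		sourcePixelBufferAttributes:nil];
// then, for each grabbed frame (a CVPixelBufferRef) at time frameTime:
// [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:frameTime];

Everything else (canAddInput:, startWriting, the requestMediaDataWhenReadyOnQueue:usingBlock: dance) works just as it did for audio.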
