A Strange latch free – session allocation Event

This article records the diagnosis and resolution of a severe latch free wait problem encountered in an Oracle database. The problem surfaced as a sudden drop in throughput on a business system, and analysis traced it to highly concurrent queries driven through a particular index. It was ultimately resolved by adjusting that index's degree (parallelism) setting.


On October 8, 2007, in Oracle Administration, by 三少 (David.Guo)


I had been meaning to write this up for a while, but I haven't been feeling well lately, so I'm only getting to it today.

On the evening of October 5, I was on site at a customer supporting a business run. Once the run started, we noticed that of two systems doing the same work, one was badly off: it was running at less than a quarter of the speed of the other. The customer couldn't tolerate it, and frankly neither could I; at that rate I estimated we would have to work 24 hours straight to finish.

With no other option, I started checking the system. The misbehaving instance was suffering severe latch free waits. My first reaction on seeing this was that the SQL was bad or that we had a hot block, because all the application did was query one table through an index, with dozens of sessions doing the same thing concurrently. I examined the SQL carefully; the statement was very simple.

```sql
select user_id,
       nvl(substr(os_status, :"SYS_B_0", :"SYS_B_1"), :"SYS_B_2"),
       nvl(substr(os_status, :"SYS_B_3", :"SYS_B_4"), :"SYS_B_5")
  from table_name
 where acc_id = :f1
```

And the execution plan was using the index on acc_id, as expected.

Checking the state of the table and the index showed the table essentially free of fragmentation, and the index had recently been rebuilt (only later did I discover that the rebuild was exactly where the problem came from, but more on that below).

Breaking the latch free waits down by latch, the top-ranked latches in the system were:

NAME                 GETS         MISSES       SLEEPS
row cache objects    7309905930   1076355567   24799351
shared pool          1.2642E+10   168368806    28928269
process allocation   102171945    9233300      29316107
library cache        2.9539E+10   1234502998   87366744
session allocation   3091150764   514908537    244535123

From this, the latch with by far the most sleeps was session allocation, and its miss ratio (misses/gets) exceeded 10%.
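A ranking like the one above can be pulled from v$latch with a query along these lines (a sketch; column names follow the Oracle data dictionary):

```sql
-- Rank latches by sleeps; a misses/gets ratio above a few
-- percent points at serious contention on that latch.
select name, gets, misses, sleeps,
       round(100 * misses / nullif(gets, 0), 2) as miss_pct
  from v$latch
 order by sleeps desc;
```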

When I tried to trace the sessions generating the latch free waits, I could not obtain a trace file at all: the sessions would disconnect immediately.

Analysis of listener.log showed nothing wrong with the network connections. Based on further analysis, I tentatively concluded that the problem was caused by sessions being continuously created and destroyed, and sure enough, a query against v$px_session showed sessions constantly being created and exiting.
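The churn is visible directly in that view; each row in v$px_session is a parallel-execution slave (or its query coordinator), so rows appearing and vanishing between samples means slaves are being spawned and torn down:

```sql
-- Run this a few times in a row: short-lived rows are
-- parallel-execution slaves being created and destroyed.
select qcsid, sid, serial#, degree, req_degree
  from v$px_session;
```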

After this analysis we concluded that the problem was probably caused by parallel query. A careful check of the table showed its degree was 1, no problem there; but the index being used had a degree of 15. That was the culprit, and it also explained why the sessions could not be traced: the parallel-execution slave sessions disconnect as soon as they finish. At this point the whole picture was clear.
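The parallel settings that were compared can be read from the dictionary (TABLE_NAME stands in for the real, anonymized table):

```sql
-- A DEGREE other than 1 here means the optimizer may run
-- queries touching this object in parallel.
select table_name, degree
  from dba_tables
 where table_name = 'TABLE_NAME';

select index_name, degree
  from dba_indexes
 where table_name = 'TABLE_NAME';
```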

Since the system could not possibly be taken down to adjust the database's parallel query parameters, the only option was to change the index's degree. As soon as the index's degree was set back to 1, the business returned to normal immediately.
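The fix itself is a single online DDL statement (index name hypothetical, matching the anonymized table above):

```sql
-- Reset the index's degree so index range scans on acc_id
-- run serially again instead of spawning PX slaves.
alter index idx_acc_id noparallel;
-- equivalent form: alter index idx_acc_id parallel 1;
```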

We often hear people say that setting parallelism too high is bad and causes problems, but probably few have actually run into the problems firsthand. Also, when creating indexes we all like to build them in parallel, and then we like to forget to switch parallelism off after the build completes. The result is trouble left behind for whoever comes next, isn't it?

The key to being a DBA is to be careful, careful, and careful again. This very problem arose because, during a major cutover, a very high degree of parallelism was used to speed up an index rebuild and nobody remembered to switch parallelism off afterwards; the problem then surfaced right in the middle of peak business.
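The lesson condenses to a simple pattern: if you rebuild in parallel to save time, reset the degree in the same breath (index name hypothetical):

```sql
-- Fast rebuild during the cutover window...
alter index idx_acc_id rebuild parallel 16;
-- ...and immediately turn parallelism back off, because
-- REBUILD PARALLEL leaves the degree set on the index.
alter index idx_acc_id noparallel;
```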

DBAs: be careful with every operation, and fully assess the hidden risks your actions may leave behind for the system.

 

Finally, many thanks to BITI_RAINY for the help while working through this problem. :)

 

 

From: http://www.oracledba.com.cn/blog/?p=327

Audio Stream.h class AudioDevice; typedef unsigned int app_type_t; class StreamPrimary { public: StreamPrimary(audio_io_handle_t handle, const std::set<audio_devices_t> &devices, struct audio_config *config); virtual ~StreamPrimary(); uint32_t GetSampleRate(); uint32_t GetBufferSize(); audio_format_t GetFormat(); audio_channel_mask_t GetChannelMask(); int getPalDeviceIds(const std::set<audio_devices_t> &halDeviceIds, pal_device_id_t* palOutDeviceIds); audio_io_handle_t GetHandle(); int GetUseCase(); std::mutex write_wait_mutex_; std::condition_variable write_condition_; std::mutex stream_mutex_; bool write_ready_; std::mutex drain_wait_mutex_; std::condition_variable drain_condition_; bool drain_ready_; stream_callback_t client_callback; void *client_cookie; static int GetDeviceAddress(struct str_parms *parms, int *card_id, int *device_num); int GetLookupTableIndex(const struct string_to_enum *table, const int table_size, int value); bool GetSupportedConfig(bool isOutStream, struct str_parms *query, struct str_parms *reply); virtual int RouteStream(const std::set<audio_devices_t>&, bool force_device_switch = false) = 0; bool isStarted() { return stream_started_; }; protected: struct pal_stream_attributes streamAttributes_; pal_stream_handle_t* pal_stream_handle_; audio_io_handle_t handle_; pal_device_id_t pal_device_id_; struct audio_config config_; char address_[AUDIO_DEVICE_MAX_ADDRESS_LEN]; bool stream_started_ = false; bool stream_paused_ = false; bool stream_flushed_ = false; int usecase_; struct pal_volume_data *volume_; /* used to cache volume */ std::map <audio_devices_t, pal_device_id_t> mAndroidDeviceMap; int mmap_shared_memory_fd; app_type_t app_types_; pal_param_device_capability_t *device_cap_query_; app_type_t audio_power_app_types_;/* Audio PowerSave */ }; class StreamOutPrimary : public StreamPrimary { private: // Helper function for write to open pal stream & configure. 
ssize_t configurePalOutputStream(); //Helper method to standby streams upon write failures and sleep for buffer duration. ssize_t onWriteError(size_t bytes, ssize_t ret); protected: struct pal_device* mPalOutDevice; private: pal_device_id_t* mPalOutDeviceIds; std::set<audio_devices_t> mAndroidOutDevices; bool mInitialized; /* fixed ear_out aux_out stereo start */ bool mIsKaraokeMuteOnCombo; /* fixed ear_out aux_out stereo end */ // [offload playspeed bool isOffloadUsecase() { return GetUseCase() == USECASE_AUDIO_PLAYBACK_OFFLOAD; } bool isOffloadSpeedSupported(); bool isValidPlaybackRate(const audio_playback_rate_t *playbackRate); bool isValidStretchMode(audio_timestretch_stretch_mode_t stretchMode); bool isValidFallbackMode(audio_timestretch_fallback_mode_t fallbackMode); int setPlaybackRateToPal(const audio_playback_rate_t *playbackRate); audio_playback_rate_t mPlaybackRate = AUDIO_PLAYBACK_RATE_INITIALIZER; // offload Playspeed] public: StreamOutPrimary(audio_io_handle_t handle, const std::set<audio_devices_t>& devices, audio_output_flags_t flags, struct audio_config *config, const char *address, offload_effects_start_output fnp_start_offload_effect, offload_effects_stop_output fnp_stop_offload_effect, visualizer_hal_start_output fnp_visualizer_start_output_, visualizer_hal_stop_output fnp_visualizer_stop_output_); ~StreamOutPrimary(); bool sendGaplessMetadata = true; bool isCompressMetadataAvail = false; void UpdatemCachedPosition(uint64_t val); virtual int Standby(); int SetVolume(float left, float right); int refactorVolumeData(float left, float right); uint64_t GetFramesWritten(struct timespec *timestamp); virtual int SetParameters(struct str_parms *parms); int Pause(); int Resume(); int Drain(audio_drain_type_t type); int Flush(); virtual int Start(); int Stop(); virtual ssize_t write(const void *buffer, size_t bytes); virtual int Open(); void GetStreamHandle(audio_stream_out** stream); uint32_t GetBufferSize(); uint32_t GetBufferSizeForLowLatency(); int 
GetFrames(uint64_t *frames); static pal_stream_type_t GetPalStreamType(audio_output_flags_t halStreamFlags, uint32_t sample_rate, bool isDeviceAvail); static int64_t GetRenderLatency(audio_output_flags_t halStreamFlags); int GetOutputUseCase(audio_output_flags_t halStreamFlags); int StartOffloadEffects(audio_io_handle_t, pal_stream_handle_t*); int StopOffloadEffects(audio_io_handle_t, pal_stream_handle_t*); bool CheckOffloadEffectsType(pal_stream_type_t pal_stream_type); int StartOffloadVisualizer(audio_io_handle_t, pal_stream_handle_t*); int StopOffloadVisualizer(audio_io_handle_t, pal_stream_handle_t*); audio_output_flags_t flags_; int CreateMmapBuffer(int32_t min_size_frames, struct audio_mmap_buffer_info *info); int GetMmapPosition(struct audio_mmap_position *position); bool isDeviceAvailable(pal_device_id_t deviceId); int RouteStream(const std::set<audio_devices_t>&, bool force_device_switch = false); virtual void SetMode(audio_mode_t mode) = 0; ssize_t splitAndWriteAudioHapticsStream(const void *buffer, size_t bytes); bool period_size_is_plausible_for_low_latency(int period_size); source_metadata_t btSourceMetadata; std::vector<playback_track_metadata_t> tracks; int SetAggregateSourceMetadata(bool voice_active); static std::mutex sourceMetadata_mutex_; // [offload playback speed int getPlaybackRateParameters(audio_playback_rate_t *playbackRate); int setPlaybackRateParameters(const audio_playback_rate_t *playbackRate); // offload playback speed] protected: struct timespec writeAt; int get_compressed_buffer_size(); int get_pcm_buffer_size(); int is_direct(); audio_format_t halInputFormat = AUDIO_FORMAT_DEFAULT; audio_format_t halOutputFormat = AUDIO_FORMAT_DEFAULT; uint32_t convertBufSize; uint32_t fragments_ = 0; uint32_t fragment_size_ = 0; pal_snd_dec_t palSndDec; struct pal_compr_gapless_mdata gaplessMeta = {0, 0}; uint32_t msample_rate; uint16_t mchannels; std::shared_ptr<audio_stream_out> stream_; uint64_t mBytesWritten; /* total bytes written, not 
cleared when entering standby */ uint64_t mCachedPosition = 0; /* cache pcm offload position when entering standby */ offload_effects_start_output fnp_offload_effect_start_output_ = nullptr; offload_effects_stop_output fnp_offload_effect_stop_output_ = nullptr; visualizer_hal_start_output fnp_visualizer_start_output_ = nullptr; visualizer_hal_stop_output fnp_visualizer_stop_output_ = nullptr; void *convertBuffer; //Haptics Usecase struct pal_stream_attributes hapticsStreamAttributes; pal_stream_handle_t* pal_haptics_stream_handle; AudioExtn AudExtn; struct pal_device* hapticsDevice; uint8_t* hapticBuffer; size_t hapticsBufSize; audio_mode_t _mode; int FillHalFnPtrs(); friend class AudioDevice; struct timespec ts_first_write = {0, 0}; }; class StreamInPrimary : public StreamPrimary{ protected: struct pal_device* mPalInDevice; private: pal_device_id_t* mPalInDeviceIds; std::set<audio_devices_t> mAndroidInDevices; bool mInitialized; //Helper method to standby streams upon read failures and sleep for buffer duration. 
ssize_t onReadError(size_t bytes, size_t ret); public: StreamInPrimary(audio_io_handle_t handle, const std::set<audio_devices_t> &devices, audio_input_flags_t flags, struct audio_config *config, const char *address, audio_source_t source); ~StreamInPrimary(); int Standby(); int SetGain(float gain); void GetStreamHandle(audio_stream_in** stream); virtual int Open(); int Start(); int Stop(); int SetMicMute(bool mute); ssize_t read(const void *buffer, size_t bytes); uint32_t GetBufferSize(); uint32_t GetBufferSizeForLowLatencyRecord(); pal_stream_type_t GetPalStreamType(audio_input_flags_t halStreamFlags, uint32_t sample_rate); int GetInputUseCase(audio_input_flags_t halStreamFlags, audio_source_t source); int addRemoveAudioEffect(const struct audio_stream *stream, effect_handle_t effect,bool enable); virtual int SetParameters(const char *kvpairs); bool getParameters(struct str_parms *query, struct str_parms *reply); bool is_st_session; audio_input_flags_t flags_; int CreateMmapBuffer(int32_t min_size_frames, struct audio_mmap_buffer_info *info); int GetMmapPosition(struct audio_mmap_position *position); bool isDeviceAvailable(pal_device_id_t deviceId); int RouteStream(const std::set<audio_devices_t>& new_devices, bool force_device_switch = false); int64_t GetSourceLatency(audio_input_flags_t halStreamFlags); uint64_t GetFramesRead(int64_t *time); int GetPalDeviceIds(pal_device_id_t *palDevIds, int *numPalDevs); sink_metadata_t btSinkMetadata; std::vector<record_track_metadata_t> tracks; int SetAggregateSinkMetadata(bool voice_active); static std::mutex sinkMetadata_mutex_; pal_stream_handle_t *pal_vui_handle_; protected: struct timespec readAt; uint32_t fragments_ = 0; uint32_t fragment_size_ = 0; int FillHalFnPtrs(); std::shared_ptr<audio_stream_in> stream_; audio_source_t source_; friend class AudioDevice; uint64_t mBytesRead = 0; /* total bytes read, not cleared when entering standby */ // for compress capture usecase std::unique_ptr<CompressCapture::CompressAAC> 
mCompressEncoder; bool isECEnabled = false; bool isNSEnabled = false; bool effects_applied_ = true; //ADD: KARAOKE bool is_karaoke_on = false; int is_karaoke_status = 0; bool is_cts_stream = false; std::mutex activeStreamMutex; //END KARAOKE // MIUI ADD: Audio_XiaoAi bool is_map_switch = false; // END Audio_XiaoAi }; AudioStream.cpp int StreamOutPrimary::Standby() { int ret = 0; /* fixed ear_out aux_out stereo start */ std::shared_ptr<AudioDevice> adevice = AudioDevice::GetInstance(); std::set<audio_devices_t> new_devices; /* fixed ear_out aux_out stereo end */ AHAL_DBG("Enter"); if (adevice->is_earout_hphl_conflict && mIsKaraokeMuteOnCombo) { AHAL_DBG("routestream from combo whs to whs before standby"); mAndroidOutDevices.erase(AUDIO_DEVICE_OUT_SPEAKER); new_devices = mAndroidOutDevices; StreamOutPrimary::RouteStream(new_devices, true); } stream_mutex_.lock(); if (pal_stream_handle_) { if (streamAttributes_.type == PAL_STREAM_PCM_OFFLOAD) { /* * when ssr happens, dsp position for pcm offload could be 0, * so get written frames. Else, get frames. 
*/ if (PAL_CARD_STATUS_DOWN(AudioDevice::sndCardState)) { struct timespec ts; // release stream lock as GetFramesWritten will lock/unlock stream mutex stream_mutex_.unlock(); mCachedPosition = GetFramesWritten(&ts); stream_mutex_.lock(); AHAL_DBG("card is offline, return written frames %lld", (long long)mCachedPosition); } else { GetFrames(&mCachedPosition); } } ret = pal_stream_stop(pal_stream_handle_); if (ret) { AHAL_ERR("failed to stop stream."); ret = -EINVAL; } if (usecase_ == USECASE_AUDIO_PLAYBACK_WITH_HAPTICS && pal_haptics_stream_handle) { ret = pal_stream_stop(pal_haptics_stream_handle); if (ret) { AHAL_ERR("failed to stop haptics stream."); } } } stream_started_ = false; stream_paused_ = false; sendGaplessMetadata = true; if (CheckOffloadEffectsType(streamAttributes_.type)) { ret = StopOffloadEffects(handle_, pal_stream_handle_); ret = StopOffloadVisualizer(handle_, pal_stream_handle_); } if (pal_stream_handle_) { ret = pal_stream_close(pal_stream_handle_); pal_stream_handle_ = NULL; if (usecase_ == USECASE_AUDIO_PLAYBACK_WITH_HAPTICS && pal_haptics_stream_handle) { ret = pal_stream_close(pal_haptics_stream_handle); pal_haptics_stream_handle = NULL; if (hapticBuffer) { free (hapticBuffer); hapticBuffer = NULL; } hapticsBufSize = 0; if (hapticsDevice) { free(hapticsDevice); hapticsDevice = NULL; } } } if (karaoke) { ret = AudExtn.karaoke_stop(); if (ret) { AHAL_ERR("failed to stop karaoke path."); ret = 0; } else { ret = AudExtn.karaoke_close(); if (ret) { AHAL_ERR("failed to close karaoke path."); ret = 0; } } } if (mmap_shared_memory_fd >= 0) { close(mmap_shared_memory_fd); mmap_shared_memory_fd = -1; } /* fixed ear_out aux_out stereo start */ if (adevice->is_earout_hphl_conflict && mIsKaraokeMuteOnCombo) { const char kvp[] = "audio_karaoke_mute=0"; struct str_parms *parms = str_parms_create_str(kvp); if (parms) { AudioExtn::audio_extn_set_parameters(adevice, parms); mIsKaraokeMuteOnCombo = false; str_parms_destroy(parms); } else { AHAL_ERR("Error in 
str_parms_create_str"); } } /* fixed ear_out aux_out stereo end */ if (ret) ret = -EINVAL; exit: stream_mutex_.unlock(); AHAL_DBG("Exit ret: %d", ret); return ret; } StreamOutPrimary::StreamOutPrimary( audio_io_handle_t handle, const std::set<audio_devices_t> &devices, audio_output_flags_t flags, struct audio_config *config, const char *address __unused, offload_effects_start_output start_offload_effect, offload_effects_stop_output stop_offload_effect, visualizer_hal_start_output visualizer_start_output, visualizer_hal_stop_output visualizer_stop_output): StreamPrimary(handle, devices, config), mAndroidOutDevices(devices), flags_(flags), btSourceMetadata{0, nullptr} { stream_ = std::shared_ptr<audio_stream_out> (new audio_stream_out()); std::shared_ptr<AudioDevice> adevice = AudioDevice::GetInstance(); mInitialized = false; /* fixed ear_out aux_out stereo start */ mIsKaraokeMuteOnCombo = false; bool isCombo = false; audio_devices_t OutDevices = AudioExtn::get_device_types(mAndroidOutDevices); /* fixed ear_out aux_out stereo end */ pal_stream_handle_ = nullptr; pal_haptics_stream_handle = nullptr; mPalOutDeviceIds = nullptr; mPalOutDevice = nullptr; convertBuffer = NULL; hapticsDevice = NULL; hapticBuffer = NULL; hapticsBufSize = 0; writeAt.tv_sec = 0; writeAt.tv_nsec = 0; mBytesWritten = 0; int noPalDevices = 0; int ret = 0; /*Initialize the gaplessMeta value with 0*/ memset(&gaplessMeta,0,sizeof(struct pal_compr_gapless_mdata)); if (!stream_) { AHAL_ERR("No memory allocated for stream_"); throw std::runtime_error("No memory allocated for stream_"); } AHAL_DBG("enter: handle (%x) format(%#x) sample_rate(%d) channel_mask(%#x) devices(%zu) flags(%#x)\ address(%s)", handle, config->format, config->sample_rate, config->channel_mask, mAndroidOutDevices.size(), flags, address); //TODO: check if USB device is connected or not if (AudioExtn::audio_devices_cmp(mAndroidOutDevices, audio_is_usb_out_device)){ // get capability from device of USB device_cap_query_ = 
(pal_param_device_capability_t *) calloc(1, sizeof(pal_param_device_capability_t)); if (!device_cap_query_) { AHAL_ERR("Failed to allocate mem for device_cap_query_"); goto error; } dynamic_media_config_t *dynamic_media_config = (dynamic_media_config_t *) calloc(1, sizeof(dynamic_media_config_t)); if (!dynamic_media_config) { free(device_cap_query_); AHAL_ERR("Failed to allocate mem for dynamic_media_config"); goto error; } size_t payload_size = 0; device_cap_query_->id = PAL_DEVICE_OUT_USB_DEVICE; device_cap_query_->addr.card_id = adevice->usb_card_id_; device_cap_query_->addr.device_num = adevice->usb_dev_num_; device_cap_query_->config = dynamic_media_config; device_cap_query_->is_playback = true; ret = pal_get_param(PAL_PARAM_ID_DEVICE_CAPABILITY, (void **)&device_cap_query_, &payload_size, nullptr); if (ret < 0) { AHAL_ERR("Error usb device is not connected"); free(dynamic_media_config); free(device_cap_query_); dynamic_media_config = NULL; device_cap_query_ = NULL; } else if (audio_is_linear_pcm(config->format) && AUDIO_OUTPUT_FLAG_NONE == flags) { // HIFI output port AHAL_DBG("use deep buffer for HIFI output on USBC hs"); flags_ = AUDIO_OUTPUT_FLAG_DEEP_BUFFER; } if (!config->sample_rate || !config->format || !config->channel_mask) { if (dynamic_media_config) { config->sample_rate = dynamic_media_config->sample_rate[0]; config->channel_mask = (audio_channel_mask_t) dynamic_media_config->mask[0]; config->format = (audio_format_t)dynamic_media_config->format[0]; } if (config->sample_rate == 0) config->sample_rate = DEFAULT_OUTPUT_SAMPLING_RATE; if (config->channel_mask == AUDIO_CHANNEL_NONE) config->channel_mask = AUDIO_CHANNEL_OUT_STEREO; if (config->format == AUDIO_FORMAT_DEFAULT) config->format = AUDIO_FORMAT_PCM_16_BIT; memcpy(&config_, config, sizeof(struct audio_config)); AHAL_INFO("sample rate = %d channel_mask = %#x fmt = %#x", config->sample_rate, config->channel_mask, config->format); } } if (AudioExtn::audio_devices_cmp(mAndroidOutDevices, 
AUDIO_DEVICE_OUT_AUX_DIGITAL)){ AHAL_DBG("AUDIO_DEVICE_OUT_AUX_DIGITAL and DIRECT | OFFLOAD, check hdmi caps"); if (config->sample_rate == 0) { config->sample_rate = DEFAULT_OUTPUT_SAMPLING_RATE; config_.sample_rate = DEFAULT_OUTPUT_SAMPLING_RATE; } if (config->channel_mask == AUDIO_CHANNEL_NONE) { config->channel_mask = AUDIO_CHANNEL_OUT_5POINT1; config_.channel_mask = AUDIO_CHANNEL_OUT_5POINT1; } if (config->format == AUDIO_FORMAT_DEFAULT) { config->format = AUDIO_FORMAT_PCM_16_BIT; config_.format = AUDIO_FORMAT_PCM_16_BIT; } } usecase_ = GetOutputUseCase(flags_); if (address) { strlcpy((char *)&address_, address, AUDIO_DEVICE_MAX_ADDRESS_LEN); } else { AHAL_DBG("invalid address"); } fnp_offload_effect_start_output_ = start_offload_effect; fnp_offload_effect_stop_output_ = stop_offload_effect; fnp_visualizer_start_output_ = visualizer_start_output; fnp_visualizer_stop_output_ = visualizer_stop_output; if (mAndroidOutDevices.empty()) mAndroidOutDevices.insert(AUDIO_DEVICE_OUT_DEFAULT); AHAL_DBG("No of Android devices %zu", mAndroidOutDevices.size()); mPalOutDeviceIds = (pal_device_id_t*) calloc(mAndroidOutDevices.size(), sizeof(pal_device_id_t)); if (!mPalOutDeviceIds) { goto error; } noPalDevices = getPalDeviceIds(mAndroidOutDevices, mPalOutDeviceIds); if (noPalDevices != mAndroidOutDevices.size()) { AHAL_ERR("mismatched pal no of devices %d and hal devices %zu", noPalDevices, mAndroidOutDevices.size()); goto error; } mPalOutDevice = (struct pal_device*) calloc(mAndroidOutDevices.size(), sizeof(struct pal_device)); if (!mPalOutDevice) { goto error; } /* fixed ear_out aux_out stereo start */ if (adevice->is_earout_hphl_conflict) { if ((OutDevices == (AUDIO_DEVICE_OUT_SPEAKER | AUDIO_DEVICE_OUT_WIRED_HEADSET)) || (OutDevices == (AUDIO_DEVICE_OUT_SPEAKER | AUDIO_DEVICE_OUT_WIRED_HEADPHONE))) { isCombo = true; } } /* fixed ear_out aux_out stereo end */ /* TODO: how to update based on stream parameters and see if device is supported */ for (int i = 0; i < 
mAndroidOutDevices.size(); i++) { memset(mPalOutDevice[i].custom_config.custom_key, 0, sizeof(mPalOutDevice[i].custom_config.custom_key)); mPalOutDevice[i].id = mPalOutDeviceIds[i]; if (AudioExtn::audio_devices_cmp(mAndroidOutDevices, audio_is_usb_out_device)) mPalOutDevice[i].config.sample_rate = config_.sample_rate; else mPalOutDevice[i].config.sample_rate = DEFAULT_OUTPUT_SAMPLING_RATE; mPalOutDevice[i].config.bit_width = CODEC_BACKEND_DEFAULT_BIT_WIDTH; mPalOutDevice[i].config.aud_fmt_id = PAL_AUDIO_FMT_PCM_S16_LE; // TODO: need to convert this from output format AHAL_INFO("device rate = %d width = %#x fmt = %#x", mPalOutDevice[i].config.sample_rate, mPalOutDevice[i].config.bit_width, mPalOutDevice[i].config.aud_fmt_id); mPalOutDevice[i].config.ch_info = {0, {0}}; if ((mPalOutDeviceIds[i] == PAL_DEVICE_OUT_USB_DEVICE) || (mPalOutDeviceIds[i] == PAL_DEVICE_OUT_USB_HEADSET)) { mPalOutDevice[i].address.card_id = adevice->usb_card_id_; mPalOutDevice[i].address.device_num = adevice->usb_dev_num_; } strlcpy(mPalOutDevice[i].custom_config.custom_key, "", sizeof(mPalOutDevice[i].custom_config.custom_key)); if ((AudioExtn::audio_devices_cmp(mAndroidOutDevices, AUDIO_DEVICE_OUT_SPEAKER_SAFE)) && (mPalOutDeviceIds[i] == PAL_DEVICE_OUT_SPEAKER)) { strlcpy(mPalOutDevice[i].custom_config.custom_key, "speaker-safe", sizeof(mPalOutDevice[i].custom_config.custom_key)); AHAL_INFO("Setting custom key as %s", mPalOutDevice[i].custom_config.custom_key); } /* fixed ear_out aux_out stereo start */ if (adevice->is_earout_hphl_conflict) { if (isCombo && (mPalOutDevice[i].id == PAL_DEVICE_OUT_WIRED_HEADSET || mPalOutDevice[i].id == PAL_DEVICE_OUT_WIRED_HEADPHONE)) { AHAL_DBG("copy combo custom key"); strlcpy(mPalOutDevice[i].custom_config.custom_key, "headphones-combo_custom_key", sizeof(mPalOutDevice[i].custom_config.custom_key)); } } /* fixed ear_out aux_out stereo end */ if (((AudioExtn::audio_devices_cmp(mAndroidOutDevices, AUDIO_DEVICE_OUT_SPEAKER)) && (mPalOutDeviceIds[i] == 
PAL_DEVICE_OUT_SPEAKER)) && property_get_bool("vendor.audio.mspp.enable", false)) { strlcpy(mPalOutDevice[i].custom_config.custom_key, "mspp", sizeof(mPalOutDevice[i].custom_config.custom_key)); AHAL_INFO("Setting custom key as %s", mPalOutDevice[i].custom_config.custom_key); } } /* fixed ear_out aux_out stereo start */ if (adevice->is_earout_hphl_conflict) { AHAL_DBG("sjn: copied above?"); } /* fixed ear_out aux_out stereo end */ if (flags & AUDIO_OUTPUT_FLAG_MMAP_NOIRQ) { stream_.get()->start = astream_out_mmap_noirq_start; stream_.get()->stop = astream_out_mmap_noirq_stop; stream_.get()->create_mmap_buffer = astream_out_create_mmap_buffer; stream_.get()->get_mmap_position = astream_out_get_mmap_position; } if (isOffloadSpeedSupported() && isOffloadUsecase()) { stream_.get()->set_playback_rate_parameters = out_set_playback_rate_parameters; stream_.get()->get_playback_rate_parameters = out_get_playback_rate_parameters; } if (usecase_ == USECASE_AUDIO_PLAYBACK_WITH_HAPTICS) { AHAL_INFO("Haptics Usecase"); /* Setting flag here as no flag is being set for haptics from AudioPolicyManager * so that audio stream runs as low latency stream. 
*/ flags_ = AUDIO_OUTPUT_FLAG_FAST; } mInitialized = true; for(auto dev : mAndroidOutDevices) audio_extn_gef_notify_device_config(dev, config_.channel_mask, config_.sample_rate, flags_, 0 /* MISOUND */); error: (void)FillHalFnPtrs(); AHAL_DBG("Exit"); return; } StreamOutPrimary::~StreamOutPrimary() { AHAL_DBG("close stream, handle(%x), pal_stream_handle (%p)", handle_, pal_stream_handle_); stream_mutex_.lock(); if (pal_stream_handle_) { if (CheckOffloadEffectsType(streamAttributes_.type)) { StopOffloadEffects(handle_, pal_stream_handle_); StopOffloadVisualizer(handle_, pal_stream_handle_); } pal_stream_close(pal_stream_handle_); pal_stream_handle_ = nullptr; } if (pal_haptics_stream_handle) { pal_stream_close(pal_haptics_stream_handle); pal_haptics_stream_handle = NULL; if (hapticBuffer) { free (hapticBuffer); hapticBuffer = NULL; } hapticsBufSize = 0; } if (convertBuffer) free(convertBuffer); if (mPalOutDeviceIds) { free(mPalOutDeviceIds); mPalOutDeviceIds = NULL; } if (mPalOutDevice) { free(mPalOutDevice); mPalOutDevice = NULL; } if (hapticsDevice) { free(hapticsDevice); hapticsDevice = NULL; } stream_mutex_.unlock(); } StreamInPrimary::StreamInPrimary(audio_io_handle_t handle, const std::set<audio_devices_t> &devices, audio_input_flags_t flags, struct audio_config *config, const char *address __unused, audio_source_t source) : StreamPrimary(handle, devices, config), mAndroidInDevices(devices), flags_(flags), btSinkMetadata{0, nullptr}, pal_vui_handle_(nullptr), mCompressEncoder(nullptr) { stream_ = std::shared_ptr<audio_stream_in> (new audio_stream_in()); std::shared_ptr<AudioDevice> adevice = AudioDevice::GetInstance(); pal_stream_handle_ = NULL; mInitialized = false; int noPalDevices = 0; int ret = 0; readAt.tv_sec = 0; readAt.tv_nsec = 0; void *st_handle = nullptr; pal_param_payload *payload = nullptr; AHAL_DBG("enter: handle (%x) format(%#x) sample_rate(%d) channel_mask(%#x) devices(%zu) flags(%#x)"\ , handle, config->format, config->sample_rate, 
config->channel_mask, mAndroidInDevices.size(), flags); if (!(stream_.get())) { AHAL_ERR("stream_ new allocation failed"); goto error; } if (AudioExtn::audio_devices_cmp(mAndroidInDevices, audio_is_usb_in_device)) { // get capability from device of USB device_cap_query_ = (pal_param_device_capability_t *) calloc(1, sizeof(pal_param_device_capability_t)); if (!device_cap_query_) { AHAL_ERR("Failed to allocate mem for device_cap_query_"); goto error; } dynamic_media_config_t *dynamic_media_config = (dynamic_media_config_t *) calloc(1, sizeof(dynamic_media_config_t)); if (!dynamic_media_config) { free(device_cap_query_); AHAL_ERR("Failed to allocate mem for dynamic_media_config"); goto error; } size_t payload_size = 0; device_cap_query_->id = PAL_DEVICE_IN_USB_HEADSET; device_cap_query_->addr.card_id = adevice->usb_card_id_; device_cap_query_->addr.device_num = adevice->usb_dev_num_; device_cap_query_->config = dynamic_media_config; device_cap_query_->is_playback = false; ret = pal_get_param(PAL_PARAM_ID_DEVICE_CAPABILITY, (void **)&device_cap_query_, &payload_size, nullptr); if (ret < 0) { AHAL_ERR("Error usb device is not connected"); free(dynamic_media_config); free(device_cap_query_); dynamic_media_config = NULL; device_cap_query_ = NULL; } if (dynamic_media_config) { AHAL_DBG("usb fs=%d format=%d mask=%x", dynamic_media_config->sample_rate[0], dynamic_media_config->format[0], dynamic_media_config->mask[0]); if (!config->sample_rate) { config->sample_rate = dynamic_media_config->sample_rate[0]; config->channel_mask = (audio_channel_mask_t) dynamic_media_config->mask[0]; if (flags == AUDIO_INPUT_FLAG_DIRECT) { config_.format = AUDIO_FORMAT_AAC_LC; } else { config->format = (audio_format_t)dynamic_media_config->format[0]; } memcpy(&config_, config, sizeof(struct audio_config)); } } } /* this is required for USB otherwise adev_open_input_stream is failed */ if (!config_.sample_rate) { config_.sample_rate = DEFAULT_OUTPUT_SAMPLING_RATE; } if (!config_.channel_mask) { 
config_.channel_mask = AUDIO_CHANNEL_IN_MONO; } if(!config_.format && flags == AUDIO_INPUT_FLAG_DIRECT) { // input direct flag is used for compress capture config_.format = AUDIO_FORMAT_AAC_LC; } else if (!config_.format) { config_.format = AUDIO_FORMAT_PCM_16_BIT; } /* * Audio config set from client may not be same as config used in pal, * update audio config here so that AudioFlinger can acquire correct * config used in pal/hal and configure record buffer converter properly. */ st_handle = audio_extn_sound_trigger_check_and_get_session(this); if (st_handle) { AHAL_VERBOSE("Found existing pal stream handle associated with capture handle"); pal_stream_handle_ = (pal_stream_handle_t *)st_handle; payload = (pal_param_payload *)calloc(1, sizeof(pal_param_payload) + sizeof(struct pal_stream_attributes)); if (!payload) { AHAL_ERR("Failed to allocate memory for stream attributes"); goto error; } payload->payload_size = sizeof(struct pal_stream_attributes); ret = pal_stream_get_param(pal_stream_handle_, PAL_PARAM_ID_STREAM_ATTRIBUTES, &payload); if (ret) { AHAL_ERR("Failed to get pal stream attributes, ret = %d", ret); if (payload) free(payload); goto error; } memcpy(&streamAttributes_, payload->payload, payload->payload_size); if (streamAttributes_.in_media_config.ch_info.channels == 1) config_.channel_mask = AUDIO_CHANNEL_IN_MONO; else if (streamAttributes_.in_media_config.ch_info.channels == 2) config_.channel_mask = AUDIO_CHANNEL_IN_STEREO; config_.format = AUDIO_FORMAT_PCM_16_BIT; config_.sample_rate = streamAttributes_.in_media_config.sample_rate; /* * reset pal_stream_handle in case standby come before * read as anyway it will be updated in StreamInPrimary::Open */ if (payload) free(payload); pal_stream_handle_ = nullptr; } AHAL_DBG("local : handle (%x) format(%#x) sample_rate(%d) channel_mask(%#x) devices(%#x) flags(%#x)"\ , handle, config_.format, config_.sample_rate, config_.channel_mask, AudioExtn::get_device_types(devices), flags); source_ = source; 
mAndroidInDevices = devices; if(mAndroidInDevices.empty()) mAndroidInDevices.insert(AUDIO_DEVICE_IN_DEFAULT); AHAL_DBG("No of devices %zu", mAndroidInDevices.size()); mPalInDeviceIds = (pal_device_id_t*) calloc(mAndroidInDevices.size(), sizeof(pal_device_id_t)); if (!mPalInDeviceIds) { goto error; } noPalDevices = getPalDeviceIds(devices, mPalInDeviceIds); if (noPalDevices != mAndroidInDevices.size()) { AHAL_ERR("mismatched pal %d and hal devices %zu", noPalDevices, mAndroidInDevices.size()); goto error; } mPalInDevice = (struct pal_device*) calloc(mAndroidInDevices.size(), sizeof(struct pal_device)); if (!mPalInDevice) { goto error; } for (int i = 0; i < mAndroidInDevices.size(); i++) { mPalInDevice[i].id = mPalInDeviceIds[i]; mPalInDevice[i].config.sample_rate = config->sample_rate; mPalInDevice[i].config.bit_width = CODEC_BACKEND_DEFAULT_BIT_WIDTH; // ch_info memory is allocated at resource manager:getdeviceconfig mPalInDevice[i].config.ch_info = {0, {0}}; mPalInDevice[i].config.aud_fmt_id = PAL_AUDIO_FMT_PCM_S16_LE; // TODO: need to convert this from output format if ((mPalInDeviceIds[i] == PAL_DEVICE_IN_USB_DEVICE) || (mPalInDeviceIds[i] == PAL_DEVICE_IN_USB_HEADSET)) { mPalInDevice[i].address.card_id = adevice->usb_card_id_; mPalInDevice[i].address.device_num = adevice->usb_dev_num_; } strlcpy(mPalInDevice[i].custom_config.custom_key, "", sizeof(mPalInDevice[i].custom_config.custom_key)); /* HDR use case check */ if ((source_ == AUDIO_SOURCE_UNPROCESSED) && (config_.sample_rate == 48000)) { uint8_t channels = audio_channel_count_from_in_mask(config_.channel_mask); if (channels == 4) { if (get_hdr_mode() == AUDIO_RECORD_ARM_HDR) { flags = flags_ = AUDIO_INPUT_FLAG_RAW; setup_hdr_usecase(&mPalInDevice[i]); } } } if (source_ == AUDIO_SOURCE_CAMCORDER && adevice->cameraOrientation == CAMERA_DEFAULT) { strlcpy(mPalInDevice[i].custom_config.custom_key, "camcorder_landscape", sizeof(mPalInDevice[i].custom_config.custom_key)); AHAL_INFO("Setting custom key as %s", 
mPalInDevice[i].custom_config.custom_key); } usecase_ = GetInputUseCase(flags, source); if (usecase_ == USECASE_AUDIO_RECORD_LOW_LATENCY || usecase_ == USECASE_AUDIO_RECORD_MMAP) { uint8_t channels = audio_channel_count_from_in_mask(config_.channel_mask); if (channels == 2) { strlcpy(mPalInDevice[i].custom_config.custom_key, "dual-mic", sizeof(mPalInDevice[i].custom_config.custom_key)); AHAL_INFO("Setting custom key as %s", mPalInDevice[i].custom_config.custom_key); } } if ((get_hdr_mode() == AUDIO_RECORD_SPF_HDR) && (source_ == AUDIO_SOURCE_CAMCORDER || source_ == AUDIO_SOURCE_MIC)) { setup_hdr_usecase(&mPalInDevice[i]); } } usecase_ = GetInputUseCase(flags, source); mInitialized = true; // compress capture using CompressAAC = CompressCapture::CompressAAC; if (usecase_ == USECASE_AUDIO_RECORD_COMPRESS) { if (config_.format == AUDIO_FORMAT_AAC_LC || config_.format == AUDIO_FORMAT_AAC_ADTS_HE_V1 || config_.format == AUDIO_FORMAT_AAC_ADTS_HE_V2) { mCompressEncoder = std::make_unique<CompressAAC>( config_.format, config_.sample_rate, audio_channel_count_from_in_mask(config_.channel_mask)); if (!mCompressEncoder) { usecase_ = USECASE_INVALID; AHAL_ERR("memory allocation failed"); mInitialized = false; } } else { usecase_ = USECASE_INVALID; AHAL_ERR("invalid usecase detected"); mInitialized = false; } } if (flags & AUDIO_INPUT_FLAG_MMAP_NOIRQ) { stream_.get()->start = astream_in_mmap_noirq_start; stream_.get()->stop = astream_in_mmap_noirq_stop; stream_.get()->create_mmap_buffer = astream_in_create_mmap_buffer; stream_.get()->get_mmap_position = astream_in_get_mmap_position; } //ADD: KARAOKE if (usecase_ == USECASE_AUDIO_RECORD_LOW_LATENCY) { adevice->is_karaoke_fastcapture = true; } //END KARAOKE error: (void)FillHalFnPtrs(); AHAL_DBG("Exit"); return; } 请修改上面的代码在StreamOutPrimary::Standby中需要获取is_karaoke_status的值。
**Analysis**

`is_karaoke_status` is a member of `StreamInPrimary`, not `StreamOutPrimary`, so `StreamOutPrimary::Standby()` cannot read it directly. The fragment of `Standby()` that handles karaoke teardown already calls `AudExtn.karaoke_stop()` and `AudExtn.karaoke_close()` behind a `karaoke` flag, but neither `karaoke` nor `AudExtn` is declared in the class fragment shown, so the provided code is incomplete.

An output stream and an input stream are independent objects. The only shared owner visible here is the `AudioDevice` singleton (`AudioDevice::GetInstance()`), which `Standby()` already uses; one option is to add a `GetKaraokeStatus()` accessor to `AudioDevice` that forwards to the active input stream, but that touches more classes than the question asks for, and it assumes there is exactly one karaoke input stream to forward to. The smaller change is to cache the status in `StreamOutPrimary` itself: add a `karaoke_status_` member, pass the value in through the constructor, and expose a getter on `StreamInPrimary` so the caller that creates the output stream can read it. That is the approach taken below.

To obtain the value of `is_karaoke_status` inside `StreamOutPrimary::Standby()`, change the `StreamOutPrimary` class definition and constructor as follows:

```cpp
// StreamOutPrimary class definition
class StreamOutPrimary : public StreamPrimary {
private:
    // new member caching the karaoke status
    int karaoke_status_ = 0;
    ...
public:
    ...
    // constructor now takes a karaoke_status argument
    StreamOutPrimary(audio_io_handle_t handle,
                     const std::set<audio_devices_t>& devices,
                     audio_output_flags_t flags,
                     struct audio_config *config,
                     const char *address,
                     offload_effects_start_output fnp_start_offload_effect,
                     offload_effects_stop_output fnp_stop_offload_effect,
                     visualizer_hal_start_output fnp_visualizer_start_output_,
                     visualizer_hal_stop_output fnp_visualizer_stop_output_,
                     int karaoke_status);   // added parameter
    ...
};
```

```cpp
// StreamOutPrimary constructor implementation
StreamOutPrimary::StreamOutPrimary(
    audio_io_handle_t handle,
    const std::set<audio_devices_t> &devices,
    audio_output_flags_t flags,
    struct audio_config *config,
    const char *address __unused,
    offload_effects_start_output start_offload_effect,
    offload_effects_stop_output stop_offload_effect,
    visualizer_hal_start_output visualizer_start_output,
    visualizer_hal_stop_output visualizer_stop_output,
    int karaoke_status) :                     // added parameter
    StreamPrimary(handle, devices, config),
    mAndroidOutDevices(devices),
    flags_(flags),
    btSourceMetadata{0, nullptr},
    karaoke_status_(karaoke_status)           // initialize the member
{
    ...
    // the cached status is now available in the constructor
    AHAL_DBG("Karaoke status: %d", karaoke_status_);
    ...
}
```

```cpp
// Standby now reads karaoke_status_
int StreamOutPrimary::Standby() {
    ...
    AHAL_DBG("Enter, karaoke_status: %d", karaoke_status_);
    if (karaoke_status_ == 1) {
        AHAL_DBG("Karaoke is active");
        // karaoke-specific teardown logic goes here
    }
    ...
}
```

```cpp
// StreamInPrimary exposes its status through a getter
class StreamInPrimary : public StreamPrimary {
    ...
public:
    ...
    int GetKaraokeStatus() const { return is_karaoke_status; }
    ...
};
```

```cpp
// pass karaoke_status when creating the StreamOutPrimary instance,
// assuming the creation site can reach the input stream
std::shared_ptr<StreamInPrimary> input_stream = ...; // obtain the input stream
int karaoke_status = input_stream->GetKaraokeStatus();

auto output_stream = std::make_shared<StreamOutPrimary>(
    handle, devices, flags, config, address,
    start_offload_effect, stop_offload_effect,
    visualizer_start_output, visualizer_stop_output,
    karaoke_status);  // forward the status
```

**Summary:**

1. `StreamOutPrimary` gains a `karaoke_status_` member that caches the karaoke state.
2. The constructor takes a `karaoke_status` argument and initializes the member.
3. `Standby()` reads `karaoke_status_` and can branch on it.
4. `StreamInPrimary` exposes its state through a `GetKaraokeStatus()` accessor.
5. The code that creates a `StreamOutPrimary` reads the status from the input stream and passes it in.

With these changes, `StreamOutPrimary::Standby` can see the value of `is_karaoke_status`. Note that the value is a snapshot taken when the output stream is constructed; if the status can change while the stream is alive, a live lookup (for example through the `AudioDevice` singleton) is needed instead.