(1.1) BARK


GitHub links:

BARK on GitHub: https://github.com/bark-simulator/bark/

BARK website: https://bark-simulator.github.io/

BARK installation guide: https://github.com/bark-simulator/bark/blob/master/docs/source/installation.md

YouTube videos: https://www.youtube.com/user/fortissTV/search?query=BARK

Related papers:

I. Project Download

For installing and configuring Anaconda, see the blog post Ubuntu Software Installation.

  • Create a conda environment
# Create a new conda environment
conda create -n bark python=3.7
# To delete an environment later, use:
# conda remove -n your_env_name --all
# Activate the environment
conda activate bark
pip install virtualenv==16.7.8
pip install bark-simulator
sudo apt-get install libsqlite3-dev sqlite3
git clone https://github.com/bark-simulator/bark.git
cd bark

If the clone hangs at "Cloning into 'bark'…", see this link:

Fix: git clone unresponsive or slow, stuck at "Cloning into 'xxx'…"

If that does not help, retry a few times. You can also open GitHub in a browser: if the page loads normally, the clone will usually succeed.

II. Environment Setup

1. Method 1


  • install.sh

First check your Python path:

which python
# /home/myx/anaconda3/envs/bark/bin/python  # my path

The bark directory contains an install.sh file; two places in it need to be changed.

sudo vim install.sh
# Change the first line to the location of your own python3.7:
# /home/myx/anaconda3/envs/bark/bin/python
# Change the last line to:
# sudo bash tools/installers/install_apt_packages.sh
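As a sketch, these two edits can also be applied with a small script; `patch_install_sh` is a hypothetical helper written for this post, and the conda path is just my own example path:

```python
from pathlib import Path


def patch_install_sh(script, python_path):
    """Apply the two install.sh edits described above."""
    lines = Path(script).read_text().splitlines()
    # First line: point at the conda env's python3.7
    lines[0] = python_path
    # Last line: call the apt-packages installer directly
    lines[-1] = "sudo bash tools/installers/install_apt_packages.sh"
    Path(script).write_text("\n".join(lines) + "\n")


# Usage (from the bark checkout):
# patch_install_sh("install.sh", "/home/myx/anaconda3/envs/bark/bin/python")
```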


After editing, run:

sudo bash install.sh

If this step fails with RuntimeError: failed to find interpreter for Builtin discover of python_spec='python3.7', consider Method 2.

If it runs normally, continue with the steps below.

  • Run dev_into.sh
source dev_into.sh

After running this you are inside the venv virtual environment.


2. Method 2

If Method 1 fails with RuntimeError: failed to find interpreter for Builtin discover of python_spec='python3.7', use this method instead.

The idea is simply not to create a Python virtual environment with virtualenv, and instead run:

# Replaces install.sh; dev_into.sh is skipped
pip install -r tools/installers/requirements.txt
sudo apt install g++ unzip zip
# sudo bash tools/installers/install_apt_packages.sh

III. Installing Bazel

If Bazel is not installed yet, install it now; otherwise skip this step.

1. Method 1: the author's script

Run the install_bazel.sh script in the tools/installers directory:

sudo bash tools/installers/install_bazel.sh

2. Method 2: alternative commands

In my case this script kept timing out, so the following commands can be used instead:

# Replaces install_bazel.sh
sudo apt install g++ unzip
echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -

The last command above may also time out. If so, open https://bazel.build/bazel-release.pub.gpg in a browser, save the key into the bark directory, and run:

sudo apt-key add bazel-release.pub.gpg
sudo apt-get update
sudo apt-get install -y bazel
# To uninstall:
# sudo apt-get remove bazel
sudo rm -rf /var/lib/apt/lists/*

3. Method 3: download from the release page

Releases: https://github.com/bazelbuild/bazel/releases

  • Download

Download bazel-5.2.0-installer-linux-x86_64.sh.


Open a terminal in the download directory and run:

# chmod +x bazel-<version>-installer-linux-x86_64.sh
# ./bazel-<version>-installer-linux-x86_64.sh --user
chmod +x bazel-5.2.0-installer-linux-x86_64.sh
./bazel-5.2.0-installer-linux-x86_64.sh --user
  • Environment variable
gedit ~/.bashrc

Add export PATH="$PATH:$HOME/bin" as the last line.

Reload the file:

source ~/.bashrc
  • Uninstall
# sudo rm -rf /usr/bin/bazel
# sudo rm -rf ~/bin

Also comment out or delete Bazel's PATH line in ~/.bashrc:

# sudo gedit ~/.bashrc

Reference:

Installing and uninstalling Bazel

IV. Running the Examples

1. Official examples

Run the examples from your checkout:

# Run this first to check that bark works, and to build behaviors and scenarios
bazel test //...
# sudo bazel test //... --python_path=/home/myx/Project/bark/bark/python_wrapper/venv/bin/python
# Scenario tests
bazel run //bark/examples:highway  # two-lane highway example
bazel run //bark/examples:merging
bazel run //bark/examples:intersection   # three-way intersection
# bazel run //bark/examples:interaction_dataset  # dataset replay
# bazel run //bark/examples:benchmark_database  # benchmark behaviors against a scenario database

bazel test //... should finish with a successful build.

If the run hangs with no progress, edit the hosts file:

sudo vim /etc/hosts
# Append at the end:
172.217.1.110 bazel

2. Python examples

Configuration options: https://github.com/bark-simulator/bark/tree/master/bark/runtime/scenario/scenario_generation/config_readers

  • Open the example list
# Install the jupyter kernel
pip install ipykernel
python -m ipykernel install --user --name <env-name> --display-name <display-name>
# Open the tutorial list
bazel run //docs/tutorials:run

Tutorial 2

This part covers the map functionality.

  • Choose a map

If you hit the error No such file or directory: '../../bark/runtime/tests/data/Crossing8Course.xodr',

you need to switch to a different map file, because Crossing8Course.xodr is not present.

OpenDRIVE website: https://www.asam.net/standards/detail/opendrive/

Reference: computing X/Y coordinates for GEOMETRY segments in OpenDRIVE maps

Go to /bark/bark/runtime/tests/data, pick any .xodr file there, and adjust the code accordingly (4way_intersection.xodr as an example).
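A small guard like the one below avoids the error entirely by falling back to a map that does exist; `pick_map` is a hypothetical helper for this post, not part of BARK:

```python
from pathlib import Path


def pick_map(data_dir, preferred="Crossing8Course.xodr",
             fallback="4way_intersection.xodr"):
    """Return the preferred .xodr if present, otherwise the fallback from the same dir."""
    p = Path(data_dir) / preferred
    return str(p if p.exists() else Path(data_dir) / fallback)


# Usage in the notebook:
# map_path = pick_map("../../bark/runtime/tests/data")
```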


Tutorial 3

  • matplotlib qt issue

If the %matplotlib qt statement fails with Failed to import any qt binding, see the blog post Fix: Failed to import any qt binding.

Tutorial 4

/home/myx/Project/bark/bark/runtime/scenario/scenario_generation

/home/myx/.cache/bazel/_bazel_myx/2b95cf4426d57e499a6045a0f87bd019/execroot/bark_project/bazel-out/k8-fastbuild/bin/docs/tutorials/run.runfiles/benchmark_database/data/tutorial_database/scenario_sets/highway_merging/scenario_set1.json

The code in Tutorial 4 needs a dataset, so first a note on how the dataset is used.

(1) Using the dataset

Dataset usage guide: https://github.com/interaction-dataset/interaction-dataset

To add your own map and dataset, arrange the files as described in the steps below.

Open a terminal in the directory and test-run the data:

conda activate bark
cd interaction-dataset-master
pip install -r requirements.txt
# Visualize the data
# ./main_visualize_data.py <scenario_name> <trackfile_number (default=0)>
./python/main_visualize_data.py .TestScenarioForScripts

Data visualization

# Load the data
# ./main_load_track_file.py <tracks_filename>
./python/main_load_track_file.py ./recorded_trackfiles/.TestScenarioForScripts/vehicle_tracks_000.csv
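If you want to inspect a track file yourself, the CSVs follow the interaction-dataset layout. Assuming columns named track_id, timestamp_ms, x, and y (check your file's header first), a minimal loader could be:

```python
import csv
from collections import defaultdict


def load_tracks(path):
    """Group (timestamp_ms, x, y) samples by track_id from a vehicle_tracks CSV."""
    tracks = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            tracks[row["track_id"]].append(
                (int(row["timestamp_ms"]), float(row["x"]), float(row["y"])))
    return dict(tracks)


# tracks = load_tracks("./recorded_trackfiles/.TestScenarioForScripts/vehicle_tracks_000.csv")
```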

To load your own data, follow these steps:

  1. Put your .Scenario_myx.osm file under interaction-dataset-master/maps

  2. Under interaction-dataset-master/recorded_trackfiles, create a .Scenario_myx folder (the name must match the osm file name) and put the vehicle_tracks_xxx.csv files in it (vehicle_tracks_001.csv as an example)

  3. Edit interaction-dataset-master/python/main_visualize_data.py so that track_file_number is passed as a command-line flag, i.e. change track_file_number to --track_file_number


  4. Run the following:

# Visualize the data
./python/main_visualize_data.py .Scenario_myx --track_file_number 001
# ./python/main_visualize_data.py .Scenario_myx --track_file_number 1
# Load the data
./python/main_load_track_file.py ./recorded_trackfiles/.Scenario_myx/vehicle_tracks_001.csv
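The edit in step 3 boils down to turning a positional argparse argument into an optional flag. A minimal sketch (argument names assumed from the post, not copied from the script):

```python
import argparse


def build_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument("scenario_name")
    # Formerly positional; now an optional flag with a default of 0:
    parser.add_argument("--track_file_number", type=int, default=0)
    return parser


args = build_parser().parse_args([".Scenario_myx", "--track_file_number", "001"])
print(args.scenario_name, args.track_file_number)  # .Scenario_myx 1
```

Note that type=int turns "001" into 1, which is why --track_file_number 001 and --track_file_number 1 behave the same.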

Tutorial 5

sudo apt install ffmpeg

Tutorial 6

  • Find a package's path
import bark
# print(module.__file__)   # print the path of an installed module
print(bark.__file__)
# print(func.__code__)  # inspect where a function is defined
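The same lookup works for any importable module; a self-contained check using a stdlib module:

```python
import inspect
import json

print(json.__file__)          # path of the module's source file
print(inspect.getfile(json))  # the same path via the inspect module
```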

V. Running bark-ml

pip install bark-ml
git clone https://github.com/bark-simulator/bark-ml
cd bark-ml
bash utils/install.sh
  • Alternative to bash utils/install.sh
pip install --no-cache-dir --upgrade --trusted-host pypi.org -r ./utils/docker/installers/requirements.txt
