Anaconda: librosa raises NoBackendError and "Couldn't find ffmpeg or avconv"

This post resolves the NoBackendError and the "Couldn't find ffmpeg or avconv" warning that appear when using librosa in an Anaconda environment: activate the environment and install ffmpeg, which clears both the runtime warning and the error.


Problem

I installed Anaconda on Windows, and importing and using librosa failed.
The first run produced this warning:

```
conda\conda\envs\deeplearning1\lib\site-packages\pydub\utils.py:165: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work
  warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning)
```

The second run raised:

```
NoBackendError
```

Both symptoms have the same root cause: for compressed formats, librosa decodes audio through the audioread backend, which relies on ffmpeg (or avconv), and pydub likewise shells out to ffmpeg. With neither installed, pydub only warns, while audioread raises NoBackendError.

Solution

Here are the steps:

  1. From the Start menu, find Anaconda3 (64-bit) and open Anaconda Prompt;
  2. Run `activate base` (replace `base` with the name of your environment) to switch into that environment;
  3. Run `conda install ffmpeg -c conda-forge` to install ffmpeg into the environment;
  4. When the install finishes, restart Spyder and verify the fix, e.g. with the short script below.
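
As a quick check, a minimal sketch like the following should now load audio without the warning or NoBackendError; `example.mp3` is a placeholder path, substitute any compressed audio file you have:

```python
import librosa

# Loading a compressed format (mp3/m4a/...) forces librosa to use a decoding
# backend, so this is exactly the call that fails when ffmpeg is missing.
y, sr = librosa.load("example.mp3", sr=None)  # sr=None keeps the native sample rate
print(f"Loaded {len(y)} samples at {sr} Hz")
```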
### FFmpeg configuration for Manim

For Manim to recognize and use FFmpeg, FFmpeg must be installed correctly and its path must be on the system's PATH environment variable.

#### 1. Install FFmpeg and set the environment variable

On Windows there are several ways to get FFmpeg. One recommended route is to install it through Anaconda[^3]: Anaconda not only simplifies Python environment management but also ships a large set of prebuilt scientific packages, FFmpeg among them. If you take this route, be sure to tick the option that adds Anaconda's bin folder to PATH during installation.

Alternatively, install manually: download an FFmpeg archive from the official site or another trusted source and unpack it to a fixed location such as `C:\Program Files\ffmpeg`. Then add the directory containing the FFmpeg executables, i.e. `C:\Program Files\ffmpeg\bin`, to the PATH environment variable[^1].

#### 2. Calling FFmpeg via a relative path

With either setup in place, typing `ffmpeg -version` in any command prompt should print the version information without needing a full path. If you instead want to call an FFmpeg binary that lives in a subfolder of a project directory (e.g. under the desktop), you can invoke it like this:

```bash
.\ffmpeg\ffmpeg-7.0.1-full_build\bin\ffmpeg.exe -version
```

This shows that FFmpeg can be used locally for video-processing tasks even without registering its path globally[^2].

#### 3. Pointing Manim at a local FFmpeg install

To locate the FFmpeg (or avconv) encoder, Manim checks a few common locations by default; if the binaries are not where it expects, it fails with an error saying the encoder cannot be found. In that case you can tell the application exactly where FFmpeg lives, either via the configuration file or via a parameter passed to the render call.

The details depend on the Manim version and personal preference. Broadly, there are two strategies:

- **Edit the configuration file**: open `.manim.cfg` (usually in your home directory) and set the `ffmpeg_executable` field in the `[video-export]` section to the actual location of the FFmpeg executable.
- **Override the default temporarily**: set the `FFMPEG_BINARY` environment variable before each render, or set `os.environ['FFMPEG_BINARY']` to the correct executable path inside the script, as sketched below.

Either measure should resolve Manim's failure to find the multimedia processor it needs.
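
A minimal sketch of the environment-variable approach. The install path is hypothetical, and which variable name your Manim version honors (and whether it is read at import time) varies, so treat this as illustrative rather than definitive:

```python
import os

# Hypothetical path: point this at your actual ffmpeg.exe.
# Set it before importing manim so the value is available when the
# library resolves its FFmpeg binary (behavior is version-dependent).
os.environ["FFMPEG_BINARY"] = r"C:\Program Files\ffmpeg\bin\ffmpeg.exe"

import manim  # noqa: E402  -- imported after the override on purpose
```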