ImportError: Using `bitsandbytes` 8-bit quantization requires Accelerate: `pip install accelerate`

On a Mac M1 with Python 3.10.14, the author successfully installed Accelerate (0.29.3) and bitsandbytes (0.42.0), yet still hits the ImportError about 8-bit quantization. The error message itself only suggests making sure bitsandbytes is installed and up to date.

ImportError: Using `bitsandbytes` 8-bit quantization requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes: `pip install -i https://pypi.org/simple/ bitsandbytes`

My environment:

Mac M1

Python 3.10.14

Successfully installed accelerate-0.29.3

bitsandbytes 0.42.0

But the error still occurs.
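A likely explanation (my assumption, not stated in the error message): in transformers releases of this era, `is_bitsandbytes_available()` returns False unless `torch.cuda.is_available()` is also True, and bitsandbytes 0.42.0 ships CUDA-only kernels. So on an M1 Mac the check fails even though both packages are installed. A minimal pure-Python mirror of that condition (a simplification, not the library's actual code):

```python
def passes_bnb_environment_check(accelerate_importable: bool,
                                 bnb_importable: bool,
                                 cuda_available: bool) -> bool:
    """Simplified mirror (an assumption, not transformers' exact code) of
    the validate_environment check for bitsandbytes quantization:
    is_bitsandbytes_available() effectively also requires a CUDA device,
    so the third flag matters even when the package itself is installed."""
    accelerate_ok = accelerate_importable
    bitsandbytes_ok = bnb_importable and cuda_available
    return accelerate_ok and bitsandbytes_ok

# The situation reported above: both packages installed, but no CUDA on M1.
print(passes_bnb_environment_check(True, True, False))  # False -> ImportError
```

This is why reinstalling or upgrading the two packages does not help here: the failing condition is the missing CUDA device, not the package versions.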


---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
Cell In[6], line 7
      1 bnb_config = BitsAndBytesConfig(
      2     load_in_4bit=True,
      3     bnb_4bit_quant_type="nf4",
      4     bnb_4bit_compute_type=TORCH_DTYPE
      5 )
----> 7 model = AutoModelForCausalLM.from_pretrained(
      8     MODEL_NAME,
      9     device_map="auto",                # automatically place on GPU/CPU
     10     quantization_config=bnb_config,   # quantization config
     11     torch_dtype=TORCH_DTYPE,          # specify data type
     12     trust_remote_code=False
     13 )
     15 # Enable gradient checkpointing (saves memory, but slower)
     16 model.gradient_checkpointing_enable()

File f:\Programmer\python\MyAI\.venv\Lib\site-packages\transformers\models\auto\auto_factory.py:563, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
    561 elif type(config) in cls._model_mapping.keys():
    562     model_class = _get_model_class(config, cls._model_mapping)
--> 563     return model_class.from_pretrained(
    564         pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
    565     )
    566 raise ValueError(
    567     f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
    568     f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}."
    569 )

File f:\Programmer\python\MyAI\.venv\Lib\site-packages\transformers\modeling_utils.py:3165, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, *model_args, **kwargs)
   3162     hf_quantizer = None
   3164 if hf_quantizer is not None:
-> 3165     hf_quantizer.validate_environment(
   3166         torch_dtype=torch_dtype, from_tf=from_tf, from_flax=from_flax, device_map=device_map
   3167     )
   3168     torch_dtype = hf_quantizer.update_torch_dtype(torch_dtype)
   3169     device_map = hf_quantizer.update_device_map(device_map)

File f:\Programmer\python\MyAI\.venv\Lib\site-packages\transformers\quantizers\quantizer_bnb_4bit.py:62, in Bnb4BitHfQuantizer.validate_environment(self, *args, **kwargs)
     60 def validate_environment(self, *args, **kwargs):
     61     if not (is_accelerate_available() and is_bitsandbytes_available()):
---> 62         raise ImportError(
     63             "Using `bitsandbytes` 8-bit quantization requires Accelerate: `pip install accelerate` "
     64             "and the latest version of bitsandbytes: `pip install -i https://pypi.org/simple/ bitsandbytes`"
     65         )
     67     if kwargs.get("from_tf", False) or kwargs.get("from_flax", False):
     68         raise ValueError(
     69             "Converting into 4-bit or 8-bit weights from tf/flax weights is currently not supported, please make"
     70             " sure the weights are in PyTorch format."
     71         )

ImportError: Using `bitsandbytes` 8-bit quantization requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes: `pip install -i https://pypi.org/simple/ bitsandbytes`
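If the missing CUDA device is indeed the cause, one workaround sketch (hypothetical; `bnb_config`, `MODEL_NAME`, and `TORCH_DTYPE` refer to the names from the cell above) is to attach the quantization config only when the environment can actually use it, and otherwise fall back to a plain unquantized load:

```python
import importlib.util

def quantization_environment_ready() -> bool:
    """Heuristic sketch (assumption: bitsandbytes 0.42.0 needs CUDA):
    require accelerate and bitsandbytes to be importable AND a CUDA GPU."""
    packages_ok = all(
        importlib.util.find_spec(name) is not None
        for name in ("accelerate", "bitsandbytes")
    )
    try:
        import torch
        cuda_ok = torch.cuda.is_available()
    except ImportError:
        cuda_ok = False
    return packages_ok and cuda_ok

def build_load_kwargs(bnb_config, ready: bool) -> dict:
    """Only pass quantization_config when the environment check passes;
    on an M1 Mac this falls back to an unquantized load instead of
    raising the ImportError shown above."""
    kwargs = {"device_map": "auto", "trust_remote_code": False}
    if ready:
        kwargs["quantization_config"] = bnb_config
    return kwargs

# Usage (sketch, inside the original notebook cell):
# model = AutoModelForCausalLM.from_pretrained(
#     MODEL_NAME, torch_dtype=TORCH_DTYPE,
#     **build_load_kwargs(bnb_config, quantization_environment_ready()))
```

The unquantized fallback uses more memory than the 4-bit path, so on a Mac it may only be viable for small models, but it avoids the hard failure in `validate_environment`.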
Published: 11-29