Introduction
Jamba Reasoning 3B is a compact, open-source reasoning model released by AI21 Labs, the first member of a new series in the Jamba model family. It redefines what on-device intelligent models can do.
Key Highlights and Innovations
1. A Novel Hybrid Architecture
Jamba Reasoning 3B is built on AI21 Labs' novel hybrid SSM-Transformer architecture.
- Efficient memory use: it relies on a KV cache that is 8× smaller than that of a vanilla Transformer architecture, keeping memory usage low even as the context grows (see the back-of-envelope sketch after this list).
- Performance gains: compared with competitors such as DeepSeek, Google, Llama, and Microsoft, Jamba Reasoning 3B delivers 2-5× efficiency gains while achieving leading scores on intelligence benchmarks.
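To build intuition for why the hybrid design shrinks the KV cache, here is a back-of-envelope sketch. It assumes bfloat16 (2 bytes/element) and takes the layer counts and head sizes from the model.config printed later in this post; the pure-Transformer baseline is simply a same-depth model that caches KV in every layer, so the exact ratio differs from AI21's quoted 8× depending on what baseline is compared against.

# Back-of-envelope KV-cache estimate. Assumptions: bfloat16 (2 bytes/element);
# layer counts and head sizes taken from model.config shown later in this post.
bytes_per_elem = 2            # bfloat16
kv_heads, head_dim = 1, 128   # num_key_value_heads=1; k_proj/v_proj output 128 dims
ctx = 32_768                  # a 32K-token context

per_layer = 2 * kv_heads * head_dim * bytes_per_elem * ctx  # K and V tensors
hybrid = 2 * per_layer   # Jamba Reasoning 3B has only 2 attention layers out of 28
pure = 28 * per_layer    # a same-depth pure Transformer caches KV in every layer
print(f"hybrid: {hybrid / 2**20:.0f} MiB, pure: {pure / 2**20:.0f} MiB, "
      f"ratio: {pure / hybrid:.0f}x")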
2. Strong On-Device Capabilities
The model is designed to run on personal devices:
- Lightweight: its small memory footprint lets developers easily download and run it on their own devices, including iPhone, Android, Mac, and PC.
- Fast inference: on an M3 MacBook Pro it generates text at 40 tokens/second, even at a 32K-token context length.
- Secure, offline use: because it can be downloaded and customized on-device, it can power fully private applications that keep running even without a network connection.
3. Long Context and Leading Intelligence
Jamba Reasoning 3B combines low latency, long context, and leading intelligence:
- Very long context: it has a 256K-token context window and can process contexts of up to 1M tokens.
- Stable performance: unlike most pure-Transformer models, whose performance degrades sharply beyond a 32K-token context, Jamba Reasoning 3B remains efficient at longer context lengths.
- Leading intelligence: its intelligence scores surpass other on-device models from DeepSeek, Google, Meta, and Microsoft, with especially strong results on instruction following (IFBench) and general knowledge (MMLU-Pro and Humanity's Last Exam).
Model Overview
| Feature | Details |
|---|---|
| License | Apache 2.0 (open source) |
| Parameters | 3B (compact) |
| Context window | 256K (handles up to 1M tokens) |
| Core architecture | Hybrid SSM-Transformer |
| Downloads | Hugging Face, Kaggle |
| Local inference | LM Studio, llama.cpp |
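Since the table lists Hugging Face as a download source, a minimal sketch for fetching the weights ahead of time uses the official huggingface_hub client; caching them locally lets later loads run fully offline.

from huggingface_hub import snapshot_download

# Download the full repository once; from_pretrained() can reuse the cached path.
local_dir = snapshot_download("ai21labs/AI21-Jamba-Reasoning-3B")
print(local_dir)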
Potential Applications
Jamba Reasoning 3B opens new possibilities for on-device experimentation and deployment, and is particularly well suited to:
- Enterprise applications: for example, fast, local processing and entity extraction for legal or medical documents, or giving a utility company's field technicians on-demand access to manuals from a PC.
- Personal applications: for example, productivity trackers or conversation and writing assistants tuned securely on your own document database.
- Advanced AI systems: it can serve as a lean component in advanced agentic applications, or in multimodal applications where long-context understanding is critical to output quality.
Loading the Model
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model in bfloat16 with FlashAttention-2, sharding across available devices.
model = AutoModelForCausalLM.from_pretrained(
    "ai21labs/AI21-Jamba-Reasoning-3B",
    dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("ai21labs/AI21-Jamba-Reasoning-3B")
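Note that flash_attention_2 requires the flash-attn package and a compatible NVIDIA GPU. If it is unavailable (for example on CPU or Apple Silicon), a reasonable fallback, sketched here rather than taken from the model card, is to omit the argument and let Transformers pick its default attention path.

# Fallback sketch when flash-attn is unavailable: drop attn_implementation and
# let Transformers select a default. (The Jamba config also enables fused Mamba
# kernels; without the mamba-ssm / causal-conv1d packages, Transformers falls
# back to a slower pure-PyTorch path.)
model = AutoModelForCausalLM.from_pretrained(
    "ai21labs/AI21-Jamba-Reasoning-3B",
    dtype=torch.bfloat16,
    device_map="auto",
)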
Model Architecture
model
JambaForCausalLM(
(model): JambaModel(
(embed_tokens): Embedding(65536, 2560, padding_idx=0)
(layers): ModuleList(
(0-6): 7 x JambaMambaDecoderLayer(
(mamba): JambaMambaMixer(
(conv1d): Conv1d(5120, 5120, kernel_size=(4,), stride=(1,), padding=(3,), groups=5120)
(act): SiLUActivation()
(in_proj): Linear(in_features=2560, out_features=10240, bias=False)
(x_proj): Linear(in_features=5120, out_features=192, bias=False)
(dt_proj): Linear(in_features=160, out_features=5120, bias=True)
(out_proj): Linear(in_features=5120, out_features=2560, bias=False)
(dt_layernorm): JambaRMSNorm((160,), eps=1e-06)
(b_layernorm): JambaRMSNorm((16,), eps=1e-06)
(c_layernorm): JambaRMSNorm((16,), eps=1e-06)
)
(feed_forward): JambaMLP(
(gate_proj): Linear(in_features=2560, out_features=8192, bias=False)
(up_proj): Linear(in_features=2560, out_features=8192, bias=False)
(down_proj): Linear(in_features=8192, out_features=2560, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): JambaRMSNorm((2560,), eps=1e-06)
(pre_ff_layernorm): JambaRMSNorm((2560,), eps=1e-06)
)
(7): JambaAttentionDecoderLayer(
(self_attn): JambaFlashAttention2(
(q_proj): Linear(in_features=2560, out_features=2560, bias=False)
(k_proj): Linear(in_features=2560, out_features=128, bias=False)
(v_proj): Linear(in_features=2560, out_features=128, bias=False)
(o_proj): Linear(in_features=2560, out_features=2560, bias=False)
)
(feed_forward): JambaMLP(
(gate_proj): Linear(in_features=2560, out_features=8192, bias=False)
(up_proj): Linear(in_features=2560, out_features=8192, bias=False)
(down_proj): Linear(in_features=8192, out_features=2560, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): JambaRMSNorm((2560,), eps=1e-06)
(pre_ff_layernorm): JambaRMSNorm((2560,), eps=1e-06)
)
(8-20): 13 x JambaMambaDecoderLayer(
(mamba): JambaMambaMixer(
(conv1d): Conv1d(5120, 5120, kernel_size=(4,), stride=(1,), padding=(3,), groups=5120)
(act): SiLUActivation()
(in_proj): Linear(in_features=2560, out_features=10240, bias=False)
(x_proj): Linear(in_features=5120, out_features=192, bias=False)
(dt_proj): Linear(in_features=160, out_features=5120, bias=True)
(out_proj): Linear(in_features=5120, out_features=2560, bias=False)
(dt_layernorm): JambaRMSNorm((160,), eps=1e-06)
(b_layernorm): JambaRMSNorm((16,), eps=1e-06)
(c_layernorm): JambaRMSNorm((16,), eps=1e-06)
)
(feed_forward): JambaMLP(
(gate_proj): Linear(in_features=2560, out_features=8192, bias=False)
(up_proj): Linear(in_features=2560, out_features=8192, bias=False)
(down_proj): Linear(in_features=8192, out_features=2560, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): JambaRMSNorm((2560,), eps=1e-06)
(pre_ff_layernorm): JambaRMSNorm((2560,), eps=1e-06)
)
(21): JambaAttentionDecoderLayer(
(self_attn): JambaFlashAttention2(
(q_proj): Linear(in_features=2560, out_features=2560, bias=False)
(k_proj): Linear(in_features=2560, out_features=128, bias=False)
(v_proj): Linear(in_features=2560, out_features=128, bias=False)
(o_proj): Linear(in_features=2560, out_features=2560, bias=False)
)
(feed_forward): JambaMLP(
(gate_proj): Linear(in_features=2560, out_features=8192, bias=False)
(up_proj): Linear(in_features=2560, out_features=8192, bias=False)
(down_proj): Linear(in_features=8192, out_features=2560, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): JambaRMSNorm((2560,), eps=1e-06)
(pre_ff_layernorm): JambaRMSNorm((2560,), eps=1e-06)
)
(22-27): 6 x JambaMambaDecoderLayer(
(mamba): JambaMambaMixer(
(conv1d): Conv1d(5120, 5120, kernel_size=(4,), stride=(1,), padding=(3,), groups=5120)
(act): SiLUActivation()
(in_proj): Linear(in_features=2560, out_features=10240, bias=False)
(x_proj): Linear(in_features=5120, out_features=192, bias=False)
(dt_proj): Linear(in_features=160, out_features=5120, bias=True)
(out_proj): Linear(in_features=5120, out_features=2560, bias=False)
(dt_layernorm): JambaRMSNorm((160,), eps=1e-06)
(b_layernorm): JambaRMSNorm((16,), eps=1e-06)
(c_layernorm): JambaRMSNorm((16,), eps=1e-06)
)
(feed_forward): JambaMLP(
(gate_proj): Linear(in_features=2560, out_features=8192, bias=False)
(up_proj): Linear(in_features=2560, out_features=8192, bias=False)
(down_proj): Linear(in_features=8192, out_features=2560, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): JambaRMSNorm((2560,), eps=1e-06)
(pre_ff_layernorm): JambaRMSNorm((2560,), eps=1e-06)
)
)
(final_layernorm): JambaRMSNorm((2560,), eps=1e-06)
)
(lm_head): Linear(in_features=2560, out_features=65536, bias=False)
)
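A quick sanity check on the printout above: counting the decoder-layer types confirms the hybrid ratio (26 Mamba layers to 2 attention layers), and summing parameters confirms the roughly 3B size.

from collections import Counter

# 26 JambaMambaDecoderLayer + 2 JambaAttentionDecoderLayer, per the printout.
print(Counter(type(layer).__name__ for layer in model.model.layers))

# Total parameter count, roughly 3B.
print(f"{sum(p.numel() for p in model.parameters()) / 1e9:.2f}B parameters")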
Model Configuration
model.config
JambaConfig {
"architectures": [
"JambaForCausalLM"
],
"attention_dropout": 0.0,
"attn_layer_offset": 7,
"attn_layer_period": 14,
"bos_token_id": 1,
"dtype": "bfloat16",
"eos_token_id": [
2,
519
],
"expert_layer_offset": 1,
"expert_layer_period": 2,
"hidden_act": "silu",
"hidden_size": 2560,
"initializer_range": 0.02,
"intermediate_size": 8192,
"mamba_conv_bias": true,
"mamba_d_conv": 4,
"mamba_d_state": 16,
"mamba_dt_rank": 160,
"mamba_expand": 2,
"mamba_proj_bias": false,
"max_position_embeddings": 262144,
"model_type": "jamba",
"num_attention_heads": 20,
"num_experts": 1,
"num_experts_per_tok": 1,
"num_hidden_layers": 28,
"num_key_value_heads": 1,
"num_logits_to_keep": 1,
"output_router_logits": false,
"pad_token_id": 0,
"rms_norm_eps": 1e-06,
"router_aux_loss_coef": 0.001,
"sliding_window": null,
"tie_word_embeddings": true,
"transformers_version": "4.57.0",
"use_cache": true,
"use_mamba_kernels": true,
"vocab_size": 65536
}
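Two fields tie this config back to the structure printout: attn_layer_offset=7 and attn_layer_period=14 place the attention layers at positions 7 and 21 of the 28-layer stack, with Mamba layers everywhere else. A small computation reproduces the placement:

# Reconstruct the attention-layer positions from the config fields.
offset = model.config.attn_layer_offset    # 7
period = model.config.attn_layer_period    # 14
n_layers = model.config.num_hidden_layers  # 28
print([i for i in range(n_layers) if i % period == offset])  # [7, 21]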
Using the Model
messages = [
    {"role": "user", "content": "You are analyzing customer support tickets to decide which need escalation.\nTicket 1: 'App crashes when uploading files >50MB.'\nTicket 2: 'Forgot password, can’t log in.'\nTicket 3: 'Billing page missing enterprise pricing.'\nClassify each ticket as Critical, Medium, or Low and explain your reasoning.\n"},
]

# Render the chat template, generate, and decode the full transcript.
prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = tokenizer(prompts, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, do_sample=True, temperature=0.6, max_new_tokens=4096)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
user
Begin by thinking about the reasoning process in the mind within <think> </think> tags and then proceed to give your response.
You are analyzing customer support tickets to decide which need escalation.
Ticket 1: 'App crashes when uploading files >50MB.'
Ticket 2: 'Forgot password, can’t log in.'
Ticket 3: 'Billing page missing enterprise pricing.'
Classify each ticket as Critical, Medium, or Low and explain your reasoning.
assistant
<think>
We need to classify each ticket as Critical, Medium, or Low. We need to explain reasoning.
Let's analyze each.
Ticket 1: "App crashes when uploading files >50MB."
This is a bug that causes the app to crash. That is a severe issue as it can break functionality for users. It likely affects many users. So this is Critical.
Ticket 2: "Forgot password, can’t log in."
This is a common issue. It's a user account issue. It's important but not critical? It's a medium severity because it's a common problem and can be resolved by resetting password. But it's not a system failure. So likely Medium.
Ticket 3: "Billing page missing enterprise pricing."
This is a UI/UX issue. The billing page is missing the enterprise pricing. That might be a missing feature but not a critical bug. It's more of a usability or information gap. So Low.
But we need to check: is missing enterprise pricing a critical issue? It might be that users can't see pricing, but maybe they can still use the basic plan. It's not a bug. So Low.
Thus:
Ticket 1: Critical
Ticket 2: Medium
Ticket 3: Low
We should provide reasoning.
Let's produce final answer.
</think>
**Ticket 1: “App crashes when uploading files >50MB.”**
This is a severe performance/stability issue that can cause the entire application to fail for all users. A crash during a critical operation (file upload) affects the core functionality and can lead to data loss or service outages.
**Ticket 2: “Forgot password, can’t log in.”**
While this is a common user experience problem, it is not a system‑breaking failure. Users can typically reset their passwords, so the impact is moderate.
**Ticket 3: “Billing page missing enterprise pricing.”**
The absence of the enterprise pricing tier is a usability or informational gap, but it does not prevent users from using the basic plan. It is a low‑priority issue that can be addressed with a UI update.
| Class | Reason |
|-------|--------|
| **Critical** | Ticket 1 – crash risk for all users. |
| **Medium** | Ticket 2 – requires user account recovery. |
| **Low** | Ticket 3 – missing pricing detail but no functional outage. |
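Because the transcript interleaves the reasoning trace with the final answer, downstream code usually wants only the text after the closing </think> tag. A minimal sketch (assuming the tag appears exactly once, as it does above):

# Strip the reasoning trace and keep only the final answer.
final_answer = generated_text.split("</think>")[-1].strip()
print(final_answer)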