Introduction to the Jamba Reasoning 3B Model

Introduction

Jamba Reasoning 3B is a compact, open-source reasoning model released by AI21 Labs, the first member of a new series in the Jamba model family. It redefines what is possible for on-device intelligent models.


Core Highlights and Innovations

1. A Novel Hybrid Architecture

Jamba Reasoning 3B is built on AI21 Labs' novel hybrid SSM-Transformer architecture.

  • Efficient memory use: It maintains a KV cache roughly 8× smaller than that of a "vanilla" Transformer architecture, keeping memory usage low even as the context grows (see the back-of-envelope sketch after this list).
  • Performance gains: Compared with competitors such as DeepSeek, Google, Llama, and Microsoft, Jamba Reasoning 3B delivers 2-5× efficiency improvements while achieving leading scores on intelligence benchmarks.
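
As a rough illustration of why the hybrid layout keeps the cache small, here is a back-of-envelope sketch of KV-cache size. It is only a sketch: the head dimension, KV-head count, and attention-layer count come from the configuration printed later in this article, and the "all-attention" baseline is a hypothetical 28-attention-layer model of the same width, not any specific competitor.

# Back-of-envelope KV-cache estimate (sketch; parameter values are taken from the
# model config shown later: head_dim = 2560 / 20 = 128, num_key_value_heads = 1)
bytes_per_elem = 2      # bfloat16
head_dim = 128
num_kv_heads = 1
ctx_tokens = 32_000

def kv_cache_mb(num_attention_layers):
    # K and V tensors are cached for every attention layer
    return 2 * num_attention_layers * num_kv_heads * head_dim * ctx_tokens * bytes_per_elem / 1e6

print(kv_cache_mb(2))    # Jamba Reasoning 3B: only 2 of its 28 layers are attention (~33 MB)
print(kv_cache_mb(28))   # hypothetical all-attention model of the same shape (~459 MB)
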
2. Strong On-Device Capabilities

This model is designed to run on personal devices:

  • Lightweight: Its small memory footprint lets developers easily download and run it on their own devices, including iPhone, Android, Mac, and PC.
  • Fast inference: On an M3 MacBook Pro it generates text at 40 tokens/second, even at a context length of 32K tokens.
  • Secure and offline use: Because it can be downloaded and customized on-device, it can power fully private applications that keep working even without a network connection.
3. Long Context and Leading Intelligence

Jamba Reasoning 3B combines low latency, long context, and leading intelligence:

  • Ultra-long context: It has a context window of 256K tokens and can handle contexts of up to 1M tokens.
  • Stable performance: Unlike most pure-Transformer models, whose performance degrades sharply once the context exceeds 32K tokens, Jamba Reasoning 3B remains efficient at longer context lengths.
  • Leading intelligence: Its intelligence scores surpass other on-device models from DeepSeek, Google, Meta, and Microsoft, with particularly strong results on instruction following (IFBench) and general knowledge (MMLU-Pro and Humanity's Last Exam).

Model Overview

| Feature | Details |
|---------|---------|
| License | Apache 2.0 (open source) |
| Parameters | 3B (compact) |
| Context window | 256K (handles up to 1M tokens) |
| Core architecture | Hybrid SSM-Transformer |
| Download | Hugging Face, Kaggle |
| Local inference | LM Studio, llama.cpp |
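
The table above lists LM Studio and llama.cpp as local-inference options. Below is a minimal llama-cpp-python sketch; it assumes a GGUF build of the model is already on disk, and the file path is only a placeholder.

# Sketch: local inference via llama-cpp-python (pip install llama-cpp-python).
# The GGUF path below is a placeholder, not an official file name.
from llama_cpp import Llama

llm = Llama(model_path="./jamba-reasoning-3b.gguf", n_ctx=32768)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the key ideas of state space models."}],
    max_tokens=512,
    temperature=0.6,
)
print(out["choices"][0]["message"]["content"])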

Application Potential

Jamba Reasoning 3B opens up new possibilities for on-device experimentation and deployment, and is especially well suited to:

  • Enterprise applications: for example, fast, local processing and entity extraction on legal or medical documents, or giving a utility company's field technicians on-demand access to manuals from a PC.
  • Personal applications: for example, productivity trackers, and conversational or writing assistants that adapt securely to your personal file database.
  • Advanced AI systems: it can serve as a lean component in advanced agentic applications, or in multimodal applications where long-context understanding is critical to output quality.

Model Loading

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load in bfloat16 with Flash Attention 2 (requires a CUDA GPU and the flash-attn package)
model = AutoModelForCausalLM.from_pretrained(
    "ai21labs/AI21-Jamba-Reasoning-3B",
    dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)

tokenizer = AutoTokenizer.from_pretrained("ai21labs/AI21-Jamba-Reasoning-3B")
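
Note that flash_attention_2 requires a CUDA GPU plus the flash-attn package, and the optimized Mamba kernels (mamba-ssm / causal-conv1d) are likewise CUDA-only. On a Mac or CPU-only machine, a loading variant along the lines of the following sketch may be used instead; sdpa is PyTorch's built-in scaled-dot-product attention backend.

# Sketch: loading without the CUDA-only kernels (e.g. Apple Silicon or CPU)
model = AutoModelForCausalLM.from_pretrained(
    "ai21labs/AI21-Jamba-Reasoning-3B",
    dtype=torch.bfloat16,
    attn_implementation="sdpa",   # PyTorch SDPA instead of flash-attn
    use_mamba_kernels=False,      # fall back to the pure-PyTorch Mamba path
    device_map="auto",
)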

Model Structure

model
JambaForCausalLM(
  (model): JambaModel(
    (embed_tokens): Embedding(65536, 2560, padding_idx=0)
    (layers): ModuleList(
      (0-6): 7 x JambaMambaDecoderLayer(
        (mamba): JambaMambaMixer(
          (conv1d): Conv1d(5120, 5120, kernel_size=(4,), stride=(1,), padding=(3,), groups=5120)
          (act): SiLUActivation()
          (in_proj): Linear(in_features=2560, out_features=10240, bias=False)
          (x_proj): Linear(in_features=5120, out_features=192, bias=False)
          (dt_proj): Linear(in_features=160, out_features=5120, bias=True)
          (out_proj): Linear(in_features=5120, out_features=2560, bias=False)
          (dt_layernorm): JambaRMSNorm((160,), eps=1e-06)
          (b_layernorm): JambaRMSNorm((16,), eps=1e-06)
          (c_layernorm): JambaRMSNorm((16,), eps=1e-06)
        )
        (feed_forward): JambaMLP(
          (gate_proj): Linear(in_features=2560, out_features=8192, bias=False)
          (up_proj): Linear(in_features=2560, out_features=8192, bias=False)
          (down_proj): Linear(in_features=8192, out_features=2560, bias=False)
          (act_fn): SiLUActivation()
        )
        (input_layernorm): JambaRMSNorm((2560,), eps=1e-06)
        (pre_ff_layernorm): JambaRMSNorm((2560,), eps=1e-06)
      )
      (7): JambaAttentionDecoderLayer(
        (self_attn): JambaFlashAttention2(
          (q_proj): Linear(in_features=2560, out_features=2560, bias=False)
          (k_proj): Linear(in_features=2560, out_features=128, bias=False)
          (v_proj): Linear(in_features=2560, out_features=128, bias=False)
          (o_proj): Linear(in_features=2560, out_features=2560, bias=False)
        )
        (feed_forward): JambaMLP(
          (gate_proj): Linear(in_features=2560, out_features=8192, bias=False)
          (up_proj): Linear(in_features=2560, out_features=8192, bias=False)
          (down_proj): Linear(in_features=8192, out_features=2560, bias=False)
          (act_fn): SiLUActivation()
        )
        (input_layernorm): JambaRMSNorm((2560,), eps=1e-06)
        (pre_ff_layernorm): JambaRMSNorm((2560,), eps=1e-06)
      )
      (8-20): 13 x JambaMambaDecoderLayer(
        (mamba): JambaMambaMixer(
          (conv1d): Conv1d(5120, 5120, kernel_size=(4,), stride=(1,), padding=(3,), groups=5120)
          (act): SiLUActivation()
          (in_proj): Linear(in_features=2560, out_features=10240, bias=False)
          (x_proj): Linear(in_features=5120, out_features=192, bias=False)
          (dt_proj): Linear(in_features=160, out_features=5120, bias=True)
          (out_proj): Linear(in_features=5120, out_features=2560, bias=False)
          (dt_layernorm): JambaRMSNorm((160,), eps=1e-06)
          (b_layernorm): JambaRMSNorm((16,), eps=1e-06)
          (c_layernorm): JambaRMSNorm((16,), eps=1e-06)
        )
        (feed_forward): JambaMLP(
          (gate_proj): Linear(in_features=2560, out_features=8192, bias=False)
          (up_proj): Linear(in_features=2560, out_features=8192, bias=False)
          (down_proj): Linear(in_features=8192, out_features=2560, bias=False)
          (act_fn): SiLUActivation()
        )
        (input_layernorm): JambaRMSNorm((2560,), eps=1e-06)
        (pre_ff_layernorm): JambaRMSNorm((2560,), eps=1e-06)
      )
      (21): JambaAttentionDecoderLayer(
        (self_attn): JambaFlashAttention2(
          (q_proj): Linear(in_features=2560, out_features=2560, bias=False)
          (k_proj): Linear(in_features=2560, out_features=128, bias=False)
          (v_proj): Linear(in_features=2560, out_features=128, bias=False)
          (o_proj): Linear(in_features=2560, out_features=2560, bias=False)
        )
        (feed_forward): JambaMLP(
          (gate_proj): Linear(in_features=2560, out_features=8192, bias=False)
          (up_proj): Linear(in_features=2560, out_features=8192, bias=False)
          (down_proj): Linear(in_features=8192, out_features=2560, bias=False)
          (act_fn): SiLUActivation()
        )
        (input_layernorm): JambaRMSNorm((2560,), eps=1e-06)
        (pre_ff_layernorm): JambaRMSNorm((2560,), eps=1e-06)
      )
      (22-27): 6 x JambaMambaDecoderLayer(
        (mamba): JambaMambaMixer(
          (conv1d): Conv1d(5120, 5120, kernel_size=(4,), stride=(1,), padding=(3,), groups=5120)
          (act): SiLUActivation()
          (in_proj): Linear(in_features=2560, out_features=10240, bias=False)
          (x_proj): Linear(in_features=5120, out_features=192, bias=False)
          (dt_proj): Linear(in_features=160, out_features=5120, bias=True)
          (out_proj): Linear(in_features=5120, out_features=2560, bias=False)
          (dt_layernorm): JambaRMSNorm((160,), eps=1e-06)
          (b_layernorm): JambaRMSNorm((16,), eps=1e-06)
          (c_layernorm): JambaRMSNorm((16,), eps=1e-06)
        )
        (feed_forward): JambaMLP(
          (gate_proj): Linear(in_features=2560, out_features=8192, bias=False)
          (up_proj): Linear(in_features=2560, out_features=8192, bias=False)
          (down_proj): Linear(in_features=8192, out_features=2560, bias=False)
          (act_fn): SiLUActivation()
        )
        (input_layernorm): JambaRMSNorm((2560,), eps=1e-06)
        (pre_ff_layernorm): JambaRMSNorm((2560,), eps=1e-06)
      )
    )
    (final_layernorm): JambaRMSNorm((2560,), eps=1e-06)
  )
  (lm_head): Linear(in_features=2560, out_features=65536, bias=False)
)
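
The printout shows the hybrid layout directly: 26 of the 28 decoder layers are Mamba (SSM) layers, with attention layers only at indices 7 and 21. A quick check on the loaded model (a small verification snippet, not part of the original output):

from collections import Counter

# Count decoder-layer types in the loaded model
print(Counter(type(layer).__name__ for layer in model.model.layers))
# Counter({'JambaMambaDecoderLayer': 26, 'JambaAttentionDecoderLayer': 2})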

Model Configuration

model.config
JambaConfig {
  "architectures": [
    "JambaForCausalLM"
  ],
  "attention_dropout": 0.0,
  "attn_layer_offset": 7,
  "attn_layer_period": 14,
  "bos_token_id": 1,
  "dtype": "bfloat16",
  "eos_token_id": [
    2,
    519
  ],
  "expert_layer_offset": 1,
  "expert_layer_period": 2,
  "hidden_act": "silu",
  "hidden_size": 2560,
  "initializer_range": 0.02,
  "intermediate_size": 8192,
  "mamba_conv_bias": true,
  "mamba_d_conv": 4,
  "mamba_d_state": 16,
  "mamba_dt_rank": 160,
  "mamba_expand": 2,
  "mamba_proj_bias": false,
  "max_position_embeddings": 262144,
  "model_type": "jamba",
  "num_attention_heads": 20,
  "num_experts": 1,
  "num_experts_per_tok": 1,
  "num_hidden_layers": 28,
  "num_key_value_heads": 1,
  "num_logits_to_keep": 1,
  "output_router_logits": false,
  "pad_token_id": 0,
  "rms_norm_eps": 1e-06,
  "router_aux_loss_coef": 0.001,
  "sliding_window": null,
  "tie_word_embeddings": true,
  "transformers_version": "4.57.0",
  "use_cache": true,
  "use_mamba_kernels": true,
  "vocab_size": 65536
}
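
A few of these fields explain the layout seen in the structure printout: attn_layer_period=14 together with attn_layer_offset=7 places an attention layer every 14 layers starting at index 7 (i.e. layers 7 and 21 out of 28), and num_experts=1 means the MoE routing machinery collapses to an ordinary dense MLP. A small sanity check on the loaded config:

cfg = model.config

# Attention layers follow the period/offset scheme; all other layers are Mamba
attn_layer_ids = [i for i in range(cfg.num_hidden_layers)
                  if i % cfg.attn_layer_period == cfg.attn_layer_offset]
print(attn_layer_ids)                                # [7, 21]
print(cfg.hidden_size // cfg.num_attention_heads)    # 128, the per-head dimension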

Model Usage

messages = [
    {"role": "user", "content": "You are analyzing customer support tickets to decide which need escalation.\nTicket 1: 'App crashes when uploading files >50MB.'\nTicket 2: 'Forgot password, can’t log in.'\nTicket 3: 'Billing page missing enterprise pricing.'\nClassify each ticket as Critical, Medium, or Low and explain your reasoning.\n"},
]

# Render the chat template, appending the assistant generation prompt
prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

# Sample with a moderate temperature; reasoning output (the <think> block plus the
# final answer) can be long, so allow a generous max_new_tokens
outputs = model.generate(
    **tokenizer(prompts, return_tensors="pt").to(model.device),
    do_sample=True,
    temperature=0.6,
    max_new_tokens=4096,
)

generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
user
Begin by thinking about the reasoning process in the mind within <think> </think> tags and then proceed to give your response.
You are analyzing customer support tickets to decide which need escalation.
Ticket 1: 'App crashes when uploading files >50MB.'
Ticket 2: 'Forgot password, can’t log in.'
Ticket 3: 'Billing page missing enterprise pricing.'
Classify each ticket as Critical, Medium, or Low and explain your reasoning.

assistant
<think>
We need to classify each ticket as Critical, Medium, or Low. We need to explain reasoning.

Let's analyze each.

Ticket 1: "App crashes when uploading files >50MB."

This is a bug that causes the app to crash. That is a severe issue as it can break functionality for users. It likely affects many users. So this is Critical.

Ticket 2: "Forgot password, can’t log in."

This is a common issue. It's a user account issue. It's important but not critical? It's a medium severity because it's a common problem and can be resolved by resetting password. But it's not a system failure. So likely Medium.

Ticket 3: "Billing page missing enterprise pricing."

This is a UI/UX issue. The billing page is missing the enterprise pricing. That might be a missing feature but not a critical bug. It's more of a usability or information gap. So Low.

But we need to check: is missing enterprise pricing a critical issue? It might be that users can't see pricing, but maybe they can still use the basic plan. It's not a bug. So Low.

Thus:

Ticket 1: Critical

Ticket 2: Medium

Ticket 3: Low

We should provide reasoning.

Let's produce final answer.
</think>

**Ticket 1: “App crashes when uploading files >50MB.”**  
This is a severe performance/stability issue that can cause the entire application to fail for all users. A crash during a critical operation (file upload) affects the core functionality and can lead to data loss or service outages.  

**Ticket 2: “Forgot password, can’t log in.”**  
While this is a common user experience problem, it is not a system‑breaking failure. Users can typically reset their passwords, so the impact is moderate.  

**Ticket 3: “Billing page missing enterprise pricing.”**  
The absence of the enterprise pricing tier is a usability or informational gap, but it does not prevent users from using the basic plan. It is a low‑priority issue that can be addressed with a UI update.  

| Class | Reason |
|-------|--------|
| **Critical** | Ticket 1 – crash risk for all users. |
| **Medium** | Ticket 2 – requires user account recovery. |
| **Low** | Ticket 3 – missing pricing detail but no functional outage. |
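
Because the model emits its chain of thought inside <think> ... </think> tags before the final answer, applications usually strip that block and keep only what follows. A minimal post-processing sketch:

# Keep only the text after the closing </think> tag (the final answer)
final_answer = generated_text.split("</think>")[-1].strip()
print(final_answer)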