A Simple Math Problem

Time Limit : 3000/1000ms (Java/Other)   Memory Limit : 32768/32768K (Java/Other)
Total Submission(s) : 24   Accepted Submission(s) : 16
Problem Description
Lele now is thinking about a simple function f(x).

If x < 10, f(x) = x.
If x >= 10, f(x) = a0 * f(x-1) + a1 * f(x-2) + a2 * f(x-3) + ... + a9 * f(x-10).
Each ai (0 <= i <= 9) can only be 0 or 1.

Now, given a0 ~ a9 and two positive integers k and m, could you help Lele calculate f(k) % m?
 

Input
The problem contains multiple test cases. Please process to the end of file.
In each case, there will be two lines.
In the first line, there are two positive integers k and m (k < 2*10^9, m < 10^5).
In the second line, there are ten integers representing a0 ~ a9.
 

Output
For each case, output f(k) % m in one line.
 

Sample Input
10 9999
1 1 1 1 1 1 1 1 1 1
20 500
1 0 1 0 1 0 1 0 1 0
 

Sample Output
45
104
 

Author
linle
 

Source
2007 Provincial Contest Training Team Practice (6), linle Special Round
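
Since k can be as large as 2*10^9, evaluating the recurrence term by term is far too slow. The solution below therefore keeps the ten most recent values f(n-1), ..., f(n-10) as a state vector and advances it with a 10x10 transition matrix, so that f(k) follows from the matrix power A^(k-9) computed by fast exponentiation in O(10^3 log k) multiplications. This is a sketch of the formulation the code relies on; the matrix A mirrors how main() fills the global array A:

```math
\begin{pmatrix} f(n) \\ f(n-1) \\ \vdots \\ f(n-9) \end{pmatrix}
=
\underbrace{\begin{pmatrix}
a_0 & a_1 & \cdots & a_8 & a_9 \\
1   & 0   & \cdots & 0   & 0   \\
0   & 1   & \cdots & 0   & 0   \\
\vdots &   & \ddots &    & \vdots \\
0   & 0   & \cdots & 1   & 0
\end{pmatrix}}_{A}
\begin{pmatrix} f(n-1) \\ f(n-2) \\ \vdots \\ f(n-10) \end{pmatrix},
\qquad
\begin{pmatrix} f(k) \\ f(k-1) \\ \vdots \\ f(k-9) \end{pmatrix}
= A^{k-9}
\begin{pmatrix} f(9) \\ f(8) \\ \vdots \\ f(0) \end{pmatrix}
\quad (k \ge 10).
```

As a quick check against the first sample: with all ai = 1 and k = 10, f(10) = f(9) + f(8) + ... + f(0) = 45, matching the expected output.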
 
#include <stdio.h>
#include <string.h>

/* 10x10 transition matrix A, accumulator B, and the modulus M. */
int A[10][10], B[10][10], M;

/* X = X * Y (mod M). The long long cast guards against overflow:
 * entries can be up to M-1 < 10^5, so their product exceeds int range. */
void mul(int X[10][10], int Y[10][10])
{
    int Z[10][10];
    int i, j, k;
    memset(Z, 0, sizeof(Z));
    for (i = 0; i < 10; i++)
        for (j = 0; j < 10; j++)
            for (k = 0; k < 10; k++)
                Z[i][j] = (int)((Z[i][j] + (long long)X[i][k] * Y[k][j]) % M);
    memcpy(X, Z, sizeof(Z));
}

/* Returns a value congruent to f(n) modulo M (the caller reduces it). */
int f(int n)
{
    int i;
    if (0 <= n && n <= 9)
        return n;                /* base case: f(x) = x for x < 10 */
    n -= 9;                      /* need A^(n-9) to go from (f(9),...,f(0)) to (f(n),...,f(n-9)) */
    memset(B, 0, sizeof(B));
    for (i = 0; i < 10; i++)     /* B starts as the identity matrix */
        B[i][i] = 1;
    while (n)                    /* fast exponentiation: B = A^(n-9) */
    {
        if (n & 1) mul(B, A);
        mul(A, A);
        n >>= 1;
    }
    /* First row of A^(n-9) times the initial state vector (f(9), f(8), ..., f(0)). */
    return B[0][0] * 9 + B[0][1] * 8 + B[0][2] * 7 + B[0][3] * 6 + B[0][4] * 5
         + B[0][5] * 4 + B[0][6] * 3 + B[0][7] * 2 + B[0][8] * 1 + B[0][9] * 0;
}

int main()
{
    int i, j, k;
    while (scanf("%d%d", &k, &M) != EOF)
    {
        for (i = 0; i < 10; i++)        /* first row of A holds a0..a9 */
            scanf("%d", &A[0][i]);
        for (i = 1; i < 10; i++)        /* sub-diagonal of 1s shifts the state window */
            for (j = 0; j < 10; j++)
                A[i][j] = (j == i - 1);
        printf("%d\n", f(k) % M);
    }
    return 0;
}


Please help me analyze and explain the following: Searching to scale. To consider both salient and non-salient weights, we choose to automatically search for an optimal (per input channel) scaling factor that minimizes the output difference after quantization for a certain layer. This scaling factor should minimize the difference in output that occurs when quantizing the weights for a given layer, helping to maintain accuracy after quantization. Since the quantization function is not differentiable, we are not able to directly optimize the problem with vanilla backpropagation. There are some techniques relying on approximated gradients, which we found still suffer from unstable convergence. To make the process more stable, we define a search space for the optimal scale by analyzing the factors that will affect the choice of scaling factor. As shown in the last section, the saliency of weight channels is actually determined by the activation scale. Therefore, we simply use a very simple search space: \( s = s_X^{\alpha}, \quad \alpha^* = \arg\min_{\alpha} \mathcal{L}(s_X^{\alpha}) \), where \( s_X \) is the average magnitude of activation (per channel), and we use a single hyperparameter α to balance between the protection of salient and non-salient channels. We can find the best α by a fast grid search over the interval [0, 1]. We further apply weight clipping to minimize the MSE error of quantization. One of the key advantages of AWQ is its simplicity and efficiency. Unlike methods that rely on back-propagation or complex reconstruction processes, AWQ does not require fine-tuning or extensive calibration data. This makes it particularly well-suited for quantizing large pre-trained models, including instruction-tuned LMs and multi-modal LMs.
<think>We are discussing the AWQ (Activation-aware Weight Quantization) method, which automatically searches for an optimal per-input-channel scaling factor so that both salient and non-salient weights are taken into account, minimizing the output difference after quantization. Since the quantization function is not differentiable, it cannot be optimized directly with backpropagation; AWQ therefore uses an analytically defined search space and a fast grid search over a single hyperparameter α.

### Core steps
1. **Goal**: for each input channel, find the optimal scaling factor s such that the output produced by the quantized weights differs as little as possible from the original output (i.e., minimize the output error).
2. **Challenge**: the quantization function is non-differentiable, so gradient descent cannot be applied directly.
3. **Solution**:
   - **Define the search space**: let the scaling factor be driven by the activation scale. Concretely,
     \( s = s_X^\alpha \)
     where \( s_X \) is the per-channel average activation magnitude (i.e., the mean of the absolute values of that channel's activations) and α is the hyperparameter to be optimized.
   - **Objective**: minimize the loss \( L(s) \), i.e., the difference (e.g., MSE) between the quantized and original outputs.
   - **Search method**: grid-search α over the interval [0, 1] and pick the α* that minimizes L.
4. **Weight clipping**: clip the weights before quantization to further reduce the quantization error (e.g., restrict the weights to [-c, c] and optimize the clipping threshold c to minimize the MSE).

### Why does it work?
- **Interpretation**: the scaling factor s is driven by activation magnitude, because salient channels usually correspond to large activations. With the form \( s = s_X^\alpha \), α balances:
  - α = 0 → s = 1: no channel is scaled (equivalent to RTN).
  - α = 1 → s = s_X: the scaling is fully determined by activation magnitude (protecting salient channels).
  - α between 0 and 1: a smooth trade-off that avoids over-scaling non-salient channels.
- **Efficiency**: a grid search over a single hyperparameter α is extremely cheap (e.g., 10 points in [0, 1]) and needs neither backpropagation nor large amounts of data.

### Advantages of AWQ
1. **Training-free**: no gradients or backpropagation, avoiding unstable convergence.
2. **Generality**: applicable to large models (LLMs, multi-modal models); since the weight distribution is not retrained, pre-trained knowledge is preserved.
3. **Hardware-friendly**: only the weights are scaled, and the scale can be pre-computed and fused into the quantized weights, adding no inference overhead.

### Example flow (pseudocode)
```python
for layer in model.layers:
    # per-channel activation magnitude s_X (shape: [C_in])
    s_X = average(|activations|, axis=0)  # averaged over the calibration tokens
    # grid-search the best alpha
    best_alpha, best_loss = None, float('inf')
    for alpha in [0, 0.1, 0.2, ..., 1.0]:
        s = s_X ** alpha                                      # scaling factor
        scaled_weights = weights * s                          # scale the weights
        quantized_weights = quantize(scaled_weights)          # quantize
        dequant_weights = dequantize(quantized_weights) / s   # de-quantize (at inference time)
        loss = compute_output_diff(layer(dequant_weights), original_output)
        if loss < best_loss:
            best_alpha = alpha
            best_loss = loss
    # scale and quantize this layer's weights with the best alpha
    s_opt = s_X ** best_alpha
    layer.quantized_weights = quantize(layer.weights * s_opt)
    layer.scale = s_opt  # store the scale for inference
```

§§Related Questions§§

1. Why use the activation magnitude (activation scale), rather than the weight magnitude, to drive the scaling factor?
2. How do we know the grid-search step for α (e.g., 0.1) is fine enough, and are there more efficient search methods?
3. How is weight clipping implemented and jointly optimized with the scale search?
4. Does AWQ apply to models with dynamic activation ranges (e.g., CNNs with ReLU)?
5. Does the search strategy need to be adjusted for ultra-low-bit quantization (e.g., 2-bit)?</think>

### AWQ (Activation-aware Weight Quantization): Core Mechanism

What you describe is exactly the core innovation of AWQ: **activation-aware, automatic search for the scaling factor**. The method turns the choice of the scale \( s \) into a systematic optimization that balances the quantization error of salient and non-salient channels. The key principles:

---

### 1. Core problem definition

| **Problem** | **Limitation of earlier approaches** | **AWQ's solution** |
|---------------------------|-------------------------------|------------------------------|
| **Choosing the scale \( s \)** | Set by hand (e.g., a fixed \( s = 2 \)) | **Automatic search for the optimal per-channel \( s \)** |
| **Optimization target** | Only the weight quantization error is minimized | **Minimize the difference in the network's output** |
| **Differentiability** | The quantization function is non-differentiable, so gradient optimization fails | **Bypass gradients with a search strategy** |

---

### 2. Search algorithm design

#### 1. Search space construction
- **Rationale**: channel saliency is determined by the **activation scale** (\( s_X = \) mean absolute activation of the channel)
- **Parameterization**:
  \( s = s_X^\alpha \)
  where \(\alpha\) is the hyperparameter to be optimized; tuning \(\alpha\) balances the two kinds of channels:
  - \(\alpha \to 1\): strong protection of salient channels (\( s \approx s_X \))
  - \(\alpha \to 0\): weak protection (\( s \approx 1 \))

#### 2. Objective function
```math
\alpha^* = \arg\min_{\alpha \in [0,1]} \mathcal{L}(s_X^\alpha)
```
- \(\mathcal{L}\): the **difference in network output** before vs. after quantization (e.g., MSE)
- **Key advantage**: it directly targets the final accuracy rather than an intermediate weight error

#### 3. Efficient search strategy

| **Method** | **Compute cost** | **Stability** | **Applicability** |
|----------------|-------------|-----------|------------------|
| Approximated gradients | high | poor (oscillates) | small models |
| **Grid search** | **very low** | **strong** | **large models** |

- Sample \(\alpha \in [0,1]\) uniformly (e.g., step 0.1)
- Evaluate \(\mathcal{L}\) for each \(\alpha\) and keep the best one

#### 4. Weight clipping
- **Auxiliary optimization**: constrain the weight range to \([-c, c]\) alongside the scale search
- **Target**: minimize the quantization MSE
- **Implementation**: optimize the clipping threshold \(c\) independently for each channel

---

### 3. Technical advantages of AWQ
1. **Training-free**: no backpropagation or fine-tuning, so the integrity of the pre-trained knowledge is preserved
2. **Linear computational complexity**: the grid search costs only \(O(1/\Delta\alpha)\), independent of model size
3. **Broad applicability**: already validated on
   - hundred-billion-scale LLMs (e.g., OPT, LLaMA)
   - multi-modal models (joint image-text encoding)
   - instruction-tuned models (conversational ability preserved)

---

### 4. Comparison with baseline methods

| **Method** | Needs backprop / reconstruction | Needs calibration data | Protects salient channels | Protects non-salient channels |
|----------------|-------------|-------------|-------------|---------------|
| RTN (naive rounding) | ❌ | ❌ | ❌ | ❌ |
| GPTQ | ✅ | ✅ | △ | △ |
| **AWQ** | **❌** | **❌** | **✅** | **✅** |

> △: partial protection, ✅: systematic protection

---

### 5. Implementation pseudocode
```python
def awq_quantize(layer_weights, activations):
    # per-channel activation scale s_X
    s_X = np.mean(np.abs(activations), axis=0)  # averaged over the calibration samples

    best_alpha, min_loss = None, float('inf')
    for alpha in np.linspace(0, 1, 11):         # grid search over alpha in [0, 1]
        s = s_X ** alpha                        # scaling factor
        scaled_weights = layer_weights * s
        quant_weights = quantize(scaled_weights)          # quantization step
        dequant_weights = dequantize(quant_weights) / s   # de-quantization
        # output-difference loss (measured on a small calibration batch)
        loss = compute_output_diff(dequant_weights, activations)
        if loss < min_loss:
            best_alpha = alpha
            min_loss = loss

    # apply the optimal alpha and store the quantization parameters
    s_opt = s_X ** best_alpha
    return quantize(layer_weights * s_opt), s_opt
```
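
Below is a minimal, self-contained NumPy sketch of the α grid search described above, so the pseudocode can actually be run end to end. The symmetric round-to-nearest quantizer, the random calibration data, and every name in it (`toy_quantize`, `awq_grid_search`, ...) are illustrative stand-ins chosen for this sketch, not AWQ's actual implementation or API.

```python
import numpy as np

def toy_quantize(w, n_bits=4):
    """Toy symmetric round-to-nearest quantization, one scale per output channel."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax + 1e-8
    return np.round(w / scale).clip(-qmax - 1, qmax) * scale   # quantize, then de-quantize

def awq_grid_search(weights, acts, n_grid=20, n_bits=4):
    """Search alpha in [0, 1] minimizing the MSE of the layer output acts @ weights.T."""
    s_x = np.abs(acts).mean(axis=0) + 1e-8        # per-input-channel activation scale
    ref_out = acts @ weights.T                    # full-precision reference output
    best_alpha, best_err = 0.0, np.inf
    for alpha in np.linspace(0.0, 1.0, n_grid + 1):
        s = s_x ** alpha                          # per-channel scaling factor
        w_q = toy_quantize(weights * s, n_bits)   # quantize the scaled weights
        out = (acts / s) @ w_q.T                  # fold 1/s into the activations
        err = np.mean((out - ref_out) ** 2)
        if err < best_err:
            best_alpha, best_err = alpha, err
    return best_alpha, best_err

rng = np.random.default_rng(0)
acts = rng.normal(size=(128, 64))
acts[:, :4] *= 20.0                               # make a few input channels "salient"
weights = rng.normal(size=(32, 64)) * 0.1
alpha, err = awq_grid_search(weights, acts)
print(f"best alpha = {alpha:.2f}, output MSE = {err:.6f}")
```

With α = 0 the sketch reduces to plain round-to-nearest; whenever a handful of input channels dominate the activations, the search can pick some α > 0 and re-balance the quantization error toward the non-salient channels, which is the trade-off the grid search above is meant to expose.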