Irrelevant Elements UVA - 1635

This post explains a solution to UVA 1635: using the unique factorization theorem together with the Pascal's-triangle recurrence, it decides whether each binomial coefficient is divisible by the given number m. A C++ implementation is included.


Problem link: Irrelevant Elements UVA - 1635

Translation:
The problem statement can be found at https://vjudge.net/problem/UVA-1635

Solution:

Key tools: the unique factorization theorem and the Pascal's-triangle recurrence C(n, k) = C(n, k-1) * (n - k + 1) / k.
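
To see the recurrence in action, here is a minimal sketch (with a hard-coded n = 10 chosen only for illustration; it is not part of the solution below) that prints one row of Pascal's triangle. For the real constraints these coefficients overflow every built-in integer type, which is exactly why the solution never computes them directly.

#include<cstdio>

int main()
{
    int n = 10;                  // illustration only; the real n can be much larger
    unsigned long long c = 1;    // C(n, 0)
    printf("%llu", c);
    for(int k = 1; k <= n; k++) {
        c = c * (n - k + 1) / k; // exact: k always divides C(n, k-1) * (n - k + 1)
        printf(" %llu", c);
    }
    printf("\n");
    return 0;
}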

With the Pascal's-triangle recurrence it is easy to obtain every coefficient of the final row, and whether a coefficient is divisible by m tells us whether the corresponding element affects the final result. The difficulty is that the coefficients are far too large for any built-in type and could only be stored with big-number arithmetic, and working modulo m directly does not help either, because the recurrence divides by k and that division is in general not valid modulo m. So we must use the unique factorization theorem: factor m into primes, and for each prime factor of m compute its exponent in each coefficient Ci (in effect applying unique factorization to Ci as well, through the multiplications and divisions of the recurrence). As soon as the exponent in Ci is smaller than the exponent in m for some prime, we can conclude that Ci is certainly not divisible by m.
From this the answer follows.
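
For example, if m = 12 = 2^2 * 3 and some coefficient factors as 2^3 * 5^2, then its exponent of 3 (zero) is smaller than the exponent of 3 in m (one), so the coefficient is not divisible by 12 even though it carries more than enough factors of 2.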

Notes:

1: When finally testing divisibility by m, having the exponent in Ci >= the exponent in m for one prime does not by itself prove that Ci is divisible by m, because the other prime factors still have to be checked.
2: This problem yields a general technique for testing divisibility by m via the unique factorization theorem, with no big-number arithmetic needed; see the sketch after this list.
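
As a rough sketch of that general method (the names FactorTracker, apply and divisible are made up here for illustration and do not appear in the solution): factor m once, keep one running exponent per prime factor while multiplying and dividing, and declare the tracked value divisible by m only when every exponent reaches the exponent required by m, which is exactly what note 1 warns about. The complete program for this problem follows.

#include<vector>
using namespace std;

struct FactorTracker {
    vector<int> primes, need, have;  // primes of m, exponent in m, running exponent

    FactorTracker(int m) {
        for(int p = 2; (long long)p * p <= m; p++) {
            if(m % p == 0) {
                int e = 0;
                while(m % p == 0) { m /= p; e++; }
                primes.push_back(p); need.push_back(e); have.push_back(0);
            }
        }
        if(m > 1) { primes.push_back(m); need.push_back(1); have.push_back(0); }
    }

    void apply(int x, int sign) {    // record multiplying (+1) or dividing (-1) by x
        for(int i = 0; i < (int)primes.size(); i++)
            while(x % primes[i] == 0) { x /= primes[i]; have[i] += sign; }
    }

    bool divisible() {               // true iff the tracked value is divisible by m
        for(int i = 0; i < (int)primes.size(); i++)
            if(have[i] < need[i]) return false;
        return true;
    }
};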

#include<iostream>
#include<string>
#include<cstdio>
#include<cstring>
#include<cmath>
#include<algorithm>
#include<vector>
#include<queue>
#include<stack>
#include<map>
#include<iomanip>
#define ll long long

using namespace std;


const int maxn = 100000 + 2;

int m, n;   // the n input numbers and the modulus m
bool flag[maxn];    // flag[k] == true: C(n-1, k) is NOT divisible by m, so element k+1 is relevant

void getPrimes(vector<int>& p) {    // collect the distinct prime factors of m
    int tmp = m;

    int t = floor(sqrt(tmp) + 0.5);

    for(int i = 2; i <= t; i++) {
        if(tmp % i == 0) {
            p.push_back(i);
            while(tmp % i == 0) tmp /= i;
        }
    }
    if(tmp > 1)  p.push_back(tmp);
}

//c(n, k) = c(n, k-1) * (n-k+1) / k
int main()
{
    while(scanf("%d%d",&n,&m) != EOF) {
        memset(flag, 0, sizeof(flag));
        vector<int> primes;

        getPrimes(primes);  // collect the distinct prime factors of m

        --n;    // the coefficient of element k+1 in the final value is C(n-1, k)

        for(int i = 0; i < primes.size(); i++) {
            int tp = primes[i];
            int me = 0, pe = 0;
            int tmp = m;

            while(tmp % tp == 0) { tmp /= tp; me++; }   // me = exponent of this prime in m

            for(int k = 1; k < n; k++) { // the first and last coefficients are 1 and need no check
                int up = n - k + 1;
                while(up % tp == 0) { up /= tp; pe++; }

                int down = k;
                while(down % tp == 0) { down /= tp; pe--; }

                if(pe < me)  flag[k] = true; // not divisible by m, so element k+1 is relevant (see note 1)
            }
        }

        vector<int> ans;
        for(int k = 1; k < n; k++)
            if(!flag[k]) ans.push_back(k+1); // elements are numbered starting from 1
        cout << ans.size() << "\n";
        if(!ans.empty()) {
            cout << ans[0];
            for(int i = 1; i < ans.size(); i++) cout << " " << ans[i];
        }
        cout<<endl;
    }
    return 0;
}
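
For example, with the input "3 2" the coefficients are the row 1 2 1; the middle coefficient C(2, 1) = 2 is divisible by m = 2, so only the second element is irrelevant, and the program above prints 1 on the first line and 2 on the second.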