Irrelevant Elements UVA - 1635

Problem: Given n numbers a1, a2, ..., an, take the sums of adjacent pairs in order to obtain a new sequence of n-1 numbers. Repeating this operation eventually leaves a single number. Which of the original numbers does this final number, taken modulo m, not depend on?

Approach: 1. In the final number the coefficient of ai is C(n-1, i-1) (for n = 4, for example, the result is a1 + 3a2 + 3a3 + a4). The problem therefore becomes deciding which of C(n-1, 0), C(n-1, 1), ..., C(n-1, n-1) are multiples of m, and these coefficients can be generated with the recurrence C(n, k) = C(n, k-1) * (n - k + 1) / k. (A brute-force check of this coefficient claim is sketched right after step 2.)

2. Since n <= 1e5, the binomial coefficients are far too large to compute directly. Instead, factor m into primes: fac[i] stores the i-th distinct prime factor of m and num_m[i] its exponent in m. For each C(n-1, i-1), maintain the exponent num_c[i] of every prime fac[i]; the corresponding element is irrelevant exactly when num_c[i] >= num_m[i] holds for every prime factor, i.e. when C(n-1, i-1) is a multiple of m.
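
As a quick sanity check of the coefficient claim in step 1, the following small program (an illustrative sketch, not part of the original solution) starts from the coefficient vectors of a1, ..., an and repeatedly applies the adjacent-sum operation; the single surviving vector is exactly C(n-1, 0), ..., C(n-1, n-1).

#include <bits/stdc++.h>
using namespace std;

int main()
{
    int n = 5;                                           // any small n
    // coef[i][j] = coefficient of a_{j+1} in the i-th element of the current sequence
    vector<vector<long long>> coef(n, vector<long long>(n, 0));
    for(int i = 0; i < n; i++) coef[i][i] = 1;           // initially element i is just a_{i+1}
    while((int)coef.size() > 1)                          // one adjacent-sum pass per iteration
    {
        vector<vector<long long>> nxt;
        for(size_t i = 0; i + 1 < coef.size(); i++)
        {
            vector<long long> row(n);
            for(int j = 0; j < n; j++) row[j] = coef[i][j] + coef[i + 1][j];
            nxt.push_back(row);
        }
        coef = nxt;
    }
    for(int j = 0; j < n; j++) printf("%lld ", coef[0][j]);  // prints 1 4 6 4 1 = C(4,k) for n = 5
    printf("\n");
    return 0;
}

The full solution below never builds these coefficients explicitly; it only tracks their prime-factor exponents.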

#include <bits/stdc++.h>
#define ll long long
#define ull unsigned long long
#define INF 0x3f3f3f3f
#define mod 1000000007
using namespace std;
//    freopen("in.txt","r",stdin);
//    freopen("out.txt","w",stdout);
const int maxn = 1e5 + 2;
int n,m,cnt,sum,ans[maxn];              // ans[1..sum] collects the irrelevant positions
int fac[maxn],num_m[maxn],num_c[maxn];  // prime factors of m, their exponents in m,
                                        // and their running exponents in C(n-1,k)
// Factorize m by trial division: fac[1..cnt] are its distinct prime factors and
// num_m[i] the exponent of fac[i] in m; also reset the running exponents num_c.
void init()
{
    cnt = 0;
    int tmp = m;
    memset(num_m,0,sizeof(num_m));
    memset(num_c,0,sizeof(num_c));
    for(int i = 2;i * i <= tmp;i++)
    {
        if(tmp % i == 0)
        {
            fac[++cnt] = i;
            while(tmp % i == 0)
            {
                tmp /= i;
                num_m[cnt]++;
            }
        }
    }
    if(tmp > 1) fac[++cnt] = tmp,num_m[cnt]++;   // leftover prime factor > sqrt(m)
}
// Advance from C(n-1, x-1) to C(n-1, x) by multiplying by (n - x) / x, updating
// the exponent num_c[i] of every prime fac[i]; return 1 iff m now divides C(n-1, x).
int test(int x)
{
    int a = n - x;    // numerator of the factor (n - x) / x
    int b = x;        // denominator
    for(int i = 1;i <= cnt;i++)
    {
        while(a % fac[i] == 0) a /= fac[i],num_c[i]++;
        while(b % fac[i] == 0) b /= fac[i],num_c[i]--;
    }
    for(int i = 1;i <= cnt;i++)
        if(num_m[i] > num_c[i]) return 0;
    return 1;
}
// Position i + 1 has coefficient C(n-1, i); it is irrelevant iff m divides C(n-1, i).
// Positions 1 and n have coefficient 1, so they can never be irrelevant.
void solve()
{
    sum = 0;
    for(int i = 1;i <= n - 1;i++)
    {
        if(test(i)) ans[++sum] = i + 1;
    }
    printf("%d\n",sum);
    for(int i = 1;i <= sum;i++)
    {
        if(i > 1) printf(" ");
        printf("%d",ans[i]);
    }
    printf("\n");
}
int main()
{
    while(scanf("%d%d",&n,&m) != EOF)
    {
        init();
        solve();
    }
    return 0;
}
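
On small inputs the answer can be cross-checked by brute force: changing ai by 1 changes the final value by C(n-1, i-1), so ai is irrelevant exactly when a unit change never affects the result modulo m. A minimal checker along these lines (an illustrative sketch, not part of the original submission; reduceMod is a hypothetical helper):

#include <bits/stdc++.h>
using namespace std;

// Reduce the sequence a to a single value, working modulo m throughout.
long long reduceMod(vector<long long> a, long long m)
{
    while(a.size() > 1)
    {
        vector<long long> b(a.size() - 1);
        for(size_t i = 0; i + 1 < a.size(); i++) b[i] = (a[i] + a[i + 1]) % m;
        a = b;
    }
    return a[0];
}

int main()
{
    int n = 3; long long m = 2;            // tiny example: n = 3, m = 2
    vector<long long> base(n, 0);
    for(int i = 0; i < n; i++)
    {
        vector<long long> changed = base;
        changed[i] = 1;                    // change only a_{i+1}
        // a_{i+1} is irrelevant iff this change does not affect the result mod m
        if(reduceMod(base, m) == reduceMod(changed, m)) printf("%d ", i + 1);
    }
    printf("\n");                          // prints "2" for n = 3, m = 2
    return 0;
}

For n = 3 and m = 2 the final value is a1 + 2a2 + a3, so only position 2 is irrelevant; on the same input the solution above prints 1 on the first line and 2 on the second.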

 
