Irrelevant Elements UVA - 1635

本文介绍了一种使用埃式筛法筛选素数的方法,并结合唯一分解定理实现组合数的快速计算。通过递推公式优化计算过程,提高了算法效率。

摘要生成于 C知道 ,由 DeepSeek-R1 满血版支持, 前往体验 >

Think:
1埃式筛法筛取素数
2唯一分解定理
3组合数公式(C(k, n) = ((n-k+1)/k)*C(k-1, n))

vjudge题目链接

以下为Accepted代码

#include <cstdio>
#include <cstring>
#include <cmath>

using namespace std;

int is_primes[104000], pr, primes[104000], im, pm[24], em[24];///pm数组记录m分解之后的质因子,em数组记录m分解之后的质因子的指数
int n, m, tp, link[104000];

void Primes();///筛法筛取素数
void Primes_factor(int u);///唯一分解定理
bool Judge(int x, int y);///组合数递推公式(C(k, n) = ((n-k+1)/k)*C(k-1, n))

int main(){
    Primes();
    while(scanf("%d %d", &n, &m) != EOF){
        Primes_factor(m);

        int d = n - 1;
        tp = 0;
        for(int k = 1; k <= d; k++){
            int x = d - k + 1;
            int y = k;
            if(Judge(x, y)){
                link[tp++] = k+1;
            }
        }
        printf("%d\n", tp);
        for(int i = 0; i < tp; i++)
            printf("%d%c", link[i], i == tp-1? '\n': ' ');
        if(tp == 0) printf("\n");
    }
    return 0;
}
void Primes(){///埃氏筛法筛取素数
    pr = 0;
    memset(is_primes, 0, sizeof(is_primes));
    is_primes[1] = 1, is_primes[2] = 0;
    for(int i = 2; i <= 32000; i++){///题意可知唯一分解定理最大需要表示1e9,只需筛取sqrt(1e9)?
        if(!is_primes[i]){
            primes[pr++] = i;
            for(int j = i*2; j <= 32000; j += i){
                is_primes[j] = 1;
            }
        }
    }
}
void Primes_factor(int u){///唯一分解定理
    im = 0;
    memset(em, 0, sizeof(em));
    for(int i = 0; i < pr; i++){
        if(u % primes[i] == 0){
            pm[im] = primes[i];
            while(u % primes[i] == 0){
                u /= primes[i];
                em[im]++;
            }
            im++;
        }
    }
    if(u != 1){///分解之后u != 1,则u肯定为大于32000的一个素数,因此需要记录
        pm[im] = u;
        em[im]++;
        im++;
    }

}
/*bool Judge(int x, int y){
    for(int i = 0; i < im; i++){

        while(x%pm[i] == 0 && (x /= pm[i])){
            em[i]--;
        }
        while(y%pm[i] == 0 && (y /= pm[i])){
            em[i]++;
        }
        if(em[i] > 0)
            return false;
    }
    return true;
}*/
bool Judge(int x,int y)
{
    bool flag = true;
    for(int i=0; i<im; ++i)
    {
        while((x%pm[i]==0)&&(x/=pm[i]))
            em[i]--;///除式分子部分
        while(y%pm[i]==0&&(y/=pm[i]))
            em[i]++;///除式分母部分
        if(em[i]>0)
            flag = false;
        /*为什么直接在em[i]上面做操作,难道不会影响后面的判断吗?
        答:本题目用到的组合数递推公式(C(k, n) = ((n-k+1)/k)*C(k-1, n))中前一项的系数会对后一项产生影响,因此如果要判断第r项,则必须要预处理前r项
        */

    }
    return flag;
}
### Self-Attention Mechanism in Neural Networks Introduction and Application In the context of neural networks, self-attention mechanisms allow models to weigh different parts of input data differently when processing sequences or other structured inputs. This approach enhances model performance by focusing on relevant elements while reducing noise from irrelevant ones. The core idea behind self-attention is that it enables each position (or token) within an input sequence to attend over all positions in previous layers, capturing dependencies between tokens regardless of their distance apart[^1]. For instance, in natural language processing tasks such as speech recognition where mel-frequency cepstral coefficients are used as features, this can help capture long-range dependencies more effectively than traditional recurrent architectures like LSTMs which suffer from vanishing gradient problems during training due to sequential nature [^2]. A typical implementation involves three matrices: Query(Q), Key(K), Value(V). These matrices transform embeddings into queries, keys, values respectively before computing dot products followed by softmax normalization across key vectors resulting in attention weights applied onto value vector producing final output representation per word/token after summing weighted contributions together: ```python import torch import torch.nn.functional as F def scaled_dot_product_attention(query, key, value): d_k = query.size(-1) scores = torch.matmul(query, key.transpose(-2,-1)) / math.sqrt(d_k) p_attn = F.softmax(scores,dim=-1) return torch.matmul(p_attn,value) class MultiHeadedAttention(nn.Module): def __init__(self,h,d_model): super().__init__() assert d_model % h ==0 self.d_k=d_model//h self.h=h self.linears=[nn.Linear(d_model,d_model)for _in range(4)] def forward(self,x): nbatches=x.shape[0] query,key,value=[ l(x).view(nbatches,-1,self.h,self.d_k).transpose(1,2) for l,x in zip(self.linears,(query,key,value)) ] x=self.scaled_dot_product_attention(query,key,value) ... ``` This method has been successfully utilized not only in NLP but also various domains including computer vision through transformers architecture variations designed specifically around these principles leading towards state-of-the-art results achieved recently.
评论
添加红包

请填写红包祝福语或标题

红包个数最小为10个

红包金额最低5元

当前余额3.43前往充值 >
需支付:10.00
成就一亿技术人!
领取后你会自动成为博主和红包主的粉丝 规则
hope_wisdom
发出的红包
实付
使用余额支付
点击重新获取
扫码支付
钱包余额 0

抵扣说明:

1.余额是钱包充值的虚拟货币,按照1:1的比例进行支付金额的抵扣。
2.余额无法直接购买下载,可以购买VIP、付费专栏及课程。

余额充值