Mathematically Hard (Euler totient sieve + prefix sums)

This post looks at how Euler's totient function and its properties can be used to solve a specific competitive-programming problem. With a precomputed totient table and prefix sums, the sum of squared totients over any queried range can be answered efficiently.


Mathematically, some problems look hard. But with the help of a computer, some of them can be solved easily.

In this problem, you will be given two integers a and b. You have to find the summation of the scores of the numbers from a to b (inclusive). The score of a number is defined as the following function.

score(x) = n², where n is the number of integers smaller than x that are relatively prime to x

For example,

For 6, the relatively prime numbers with 6 are 1 and 5. So, score(6) = 2² = 4.

For 8, the relatively prime numbers with 8 are 1, 3, 5 and 7. So, score(8) = 4² = 16.
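
As a quick sanity check of the definition (my own sketch, not part of the problem or the original post; score_bruteforce is a name introduced here), a direct gcd count reproduces both examples:

#include <stdio.h>

/* Brute force: count the integers in [1, x-1] coprime to x, then square the count. */
static long long score_bruteforce(int x)
{
    long long n = 0;
    for (int k = 1; k < x; k++)
    {
        int p = x, q = k;
        while (q) { int t = p % q; p = q; q = t; } /* p becomes gcd(x, k) */
        if (p == 1)
            n++;
    }
    return n * n;
}

int main(void)
{
    printf("%lld %lld\n", score_bruteforce(6), score_bruteforce(8)); /* prints: 4 16 */
    return 0;
}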

Now you have to solve this task.

Input
Input starts with an integer T (≤ 10⁵), denoting the number of test cases.

Each case will contain two integers a and b (2 ≤ a ≤ b ≤ 5 × 10⁶).

Output
For each case, print the case number and the summation of all the scores from a to b.

Sample Input

3
6 6
8 8
2 20

Sample Output

Case 1: 4
Case 2: 16
Case 3: 1237

Problem summary: given two numbers a and b that define a range, output the sum of the squares of the Euler totient values of the numbers in that range.
Euler's totient function: in number theory, for a positive integer n, φ(n) counts the positive integers less than or equal to n that are coprime to n.
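
For context (a standard identity, not stated in the original post), Euler's product formula is what the sieve update below implements:

    φ(n) = n · ∏ (1 − 1/p), taken over the distinct primes p dividing n.

For example, φ(12) = 12 · (1 − 1/2) · (1 − 1/3) = 4, matching {1, 5, 7, 11}. The sieve applies exactly this: whenever an index i is found unmarked (so i is prime), every multiple j of i is multiplied by (i − 1)/i.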

Building the totient table with a sieve (for someone like me who hasn't yet worked through the derivation, memorizing the code is a start):

    for (i = 0; i <= 5010000; i++)
        a[i] = 0;
    a[1] = 1;
    for (i = 2; i <= 5010000; i++)
    {
        if (!a[i])                         /* a[i] still 0 => never touched by a smaller prime, so i is prime */
        {
            for (j = i; j <= 5010000; j += i)
            {
                if (!a[j])
                    a[j] = j;              /* first touch: start phi(j) at j */
                a[j] = a[j] / i * (i - 1); /* apply the factor (1 - 1/i); i divides a[j] here, so this is exact */
            }
        }
    }

A crucial detail: the table array must be declared as unsigned long long; plain long long overflows on the largest prefix sums. The unsigned maximum, 18446744073709551615, has 20 digits, while the signed maximum, 9223372036854775807, has only 19.
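
The two limits can be confirmed with <limits.h> (a throwaway sketch; the digit counts are the point):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("LLONG_MAX  = %lld\n", LLONG_MAX);   /* 9223372036854775807  (19 digits) */
    printf("ULLONG_MAX = %llu\n", ULLONG_MAX);  /* 18446744073709551615 (20 digits) */
    return 0;
}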

Full code:

#include"stdio.h"
unsigned long long a[5100000];//注意类型
void init()
{
	int i,j;
	for(i=0;i<=5010000;i++)
	a[i]=0;
	a[1]=1;
	for(i=2;i<=5010000;i++)
	{
		if(!a[i])
		{
			for(j=i;j<=5010000;j+=i)
			{
				if(!a[j])
				a[j]=j;
				a[j]=a[j]/i*(i-1);
			}
		}
	}
	for(i=2;i<=5010000;i++)//前缀和,注意是平方的和
	a[i]=a[i]*a[i]+a[i-1];
}
int main()
{
	init();
	int t,k=1;
	scanf("%d",&t);
	while(t--)
	{
		int m,n;
		scanf("%d %d",&m,&n);
		printf("Case %d: %llu\n",k++,a[n]-a[m-1]);
	}
	return 0;
}
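
To double-check the table logic on a small range, a direct gcd-based recomputation of sample Case 3 (a = 2, b = 20) should print 1237. This harness is my own sketch, not part of the submitted solution:

#include <stdio.h>

/* Recompute the score sum over [2, 20] directly with gcds: O((b-a) * b), small ranges only. */
int main(void)
{
    unsigned long long sum = 0;
    for (int x = 2; x <= 20; x++)
    {
        unsigned long long n = 0; /* n = phi(x), counted the slow way */
        for (int k = 1; k < x; k++)
        {
            int p = x, q = k;
            while (q) { int t = p % q; p = q; q = t; }
            if (p == 1)
                n++;
        }
        sum += n * n;
    }
    printf("%llu\n", sum); /* expected output: 1237, matching sample Case 3 */
    return 0;
}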