B - Number of Containers

This post presents an efficient way to compute F(n): by analyzing the problem geometrically and exploiting the symmetry of the hyperbola x·y = n, most of the repeated work is eliminated, making the method viable for very large inputs.


For two integers m and k, k is said to be a container of m if k is divisible by m. Given 2 positive integers n and m (m < n), the function f(n, m) is defined to be the number of containers of m which are also no greater than n. For example, f(5, 1)=4, f(8, 2)=3, f(7, 3)=1, f(5, 4)=0…
Let us define another function F(n) by the following equation:
F(n) = f(n, 1) + f(n, 2) + … + f(n, n-1)
Now given a positive integer n, you are supposed to calculate the value of F(n).

Input
There are multiple test cases. The first line of input contains an integer T (T <= 200) indicating the number of test cases. Then T test cases follow.
Each test case contains a positive integer n (0 < n <= 2000000000) in a single line.

Output
For each test case, output the result F(n) in a single line.

Sample Input
2
1
4

Sample Output
0
4
Problem in short: compute the sum of ⌊n/i⌋ − 1 over 0 < i < n. Note that F(n) = Σ_{i=1}^{n-1} (⌊n/i⌋ − 1) = Σ_{i=1}^{n} ⌊n/i⌋ − 1 − (n − 1) = Σ_{i=1}^{n} ⌊n/i⌋ − n, since the i = n term contributes exactly 1. Because n can be as large as 2×10⁹, a brute-force O(n) loop per query will TLE.

Draw the curve y = n/x; it is symmetric about the line y = x.
For each i on the x-axis, draw a vertical line at x = i: the number of lattice points on that line with 1 ≤ y ≤ n/i is exactly ⌊n/i⌋.
So n/1 + n/2 + n/3 + … + n/(n-1) + n/n counts the lattice points in the region bounded by the hyperbola x·y = n and the coordinate axes.
Drawing the line y = x through that region shows the region is symmetric about y = x.
Therefore we only need to sum ⌊n/i⌋ up to k = ⌊√n⌋ (one "trapezoid"), double it (two trapezoids, which between them cover the region but count the k×k square of lattice points twice), and subtract the k² duplicated points.

The same trick computes (n/1 + n/2 + n/3 + … + n/n) quickly in general.

#include <iostream>
using namespace std;
#define ll long long

int main()
{
    int t;
    cin >> t;
    while (t--)
    {
        ll n, sum = 0;
        cin >> n;
        // c = floor(sqrt(n)), computed in integers: sqrt() on a double
        // can be off by one for n near 2e9.
        ll c = 1;
        while ((c + 1) * (c + 1) <= n) ++c;
        for (ll i = 1; i <= c; ++i)
            sum += n / i;        // first c terms of sum(n/i)
        sum *= 2;                // mirror across y = x
        sum -= c * c;            // the c x c square was counted twice
        sum -= n;                // F(n) = S(n) - n (subtract 1 per term,
                                 // and the i = n term contributes 1)
        cout << sum << '\n';
    }
    return 0;
}