CodeForces 469B Chat Online

This post works through an algorithm problem about when two friends can chat online. Given each friend's online intervals, the task is to count how many integer wake-up times in a given range allow the two of them to be online at the same moment.
B. Chat Online
time limit per test
1 second
memory limit per test
256 megabytes
input
standard input
output
standard output

Little X and Little Z are good friends. They always chat online. But both of them have schedules.

Little Z has a fixed schedule. He is always online at any moment of time between a1 and b1, between a2 and b2, ..., between ap and bp (all borders inclusive). But the schedule of Little X is quite strange: it depends on the time he gets up. If he gets up at time 0, he will be online at any moment of time between c1 and d1, between c2 and d2, ..., between cq and dq (all borders inclusive). But if he gets up at time t, these segments are shifted by t: they become [ci + t, di + t] (for all i).

If at some moment of time both Little X and Little Z are online simultaneously, they can chat online happily. You know that Little X can get up at an integer moment of time between l and r (both borders inclusive). You also know that Little X wants to get up at a moment of time that is suitable for chatting with Little Z (they must have at least one common moment of time in their schedules). How many integer moments of time from the segment [l, r] suit for that?

Input

The first line contains four space-separated integers p, q, l, r (1 ≤ p, q ≤ 50; 0 ≤ l ≤ r ≤ 1000).

Each of the next p lines contains two space-separated integers ai, bi (0 ≤ ai < bi ≤ 1000). Each of the next q lines contains two space-separated integers cj, dj (0 ≤ cj < dj ≤ 1000).

It's guaranteed that bi < ai+1 and dj < cj+1 for all valid i and j, i.e. the segments in each schedule are disjoint and given in increasing order.

Output

Output a single integer — the number of moments of time from the segment [l, r] which suit for online conversation.

Sample test(s)
Input
1 1 0 4
2 3
0 1
Output
3
Input
2 3 0 20
15 17
23 26
1 4
7 11
15 17
Output
20

Approach: table marking (brute force). Mark every moment at which Little Z is online in a boolean table; then for each candidate wake-up time t in [l, r], shift Little X's segments by t and check whether any shifted moment is marked. In the first sample, Little X's shifted segment [t, t+1] intersects Little Z's [2, 3] exactly for t = 1, 2, 3, hence the answer 3.

#include <cstring>
#include <iostream>
using namespace std;

int main(){
    int p, q, l, r;
    int c[60], d[60];
    bool z_online[2010];   // z_online[t] == true iff Little Z is online at moment t
    while (cin >> p >> q >> l >> r) {
        memset(z_online, 0, sizeof(z_online));
        // Mark every moment covered by one of Little Z's segments [a, b].
        for (int i = 0; i < p; i++) {
            int a, b;
            cin >> a >> b;
            for (int t = a; t <= b; t++) z_online[t] = true;
        }
        for (int j = 0; j < q; j++) cin >> c[j] >> d[j];
        int sum = 0;
        // Try every wake-up time i in [l, r]; the shifted segments are [c[j]+i, d[j]+i].
        // Since d[j] <= 1000 and i <= 1000, moments never exceed 2000, so size 2010 is safe.
        for (int i = l; i <= r; i++) {
            bool ok = false;
            for (int j = 0; j < q && !ok; j++)
                for (int t = c[j] + i; t <= d[j] + i && !ok; t++)
                    if (z_online[t]) ok = true;
            if (ok) sum++;
        }
        cout << sum << endl;
    }
    return 0;
}
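
For these constraints an even simpler variant works: skip the lookup table and test each of Little Z's segments against each shifted segment of Little X directly, using the fact that two closed intervals intersect exactly when the larger left endpoint does not exceed the smaller right endpoint. A minimal sketch of that idea (single test case; variable names are my own):

#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

int main(){
    int p, q, l, r;
    cin >> p >> q >> l >> r;
    vector<int> a(p), b(p), c(q), d(q);
    for (int i = 0; i < p; i++) cin >> a[i] >> b[i];
    for (int j = 0; j < q; j++) cin >> c[j] >> d[j];

    int count = 0;
    for (int t = l; t <= r; t++) {        // candidate wake-up time
        bool ok = false;
        for (int i = 0; i < p && !ok; i++)
            for (int j = 0; j < q && !ok; j++)
                // [a[i], b[i]] and [c[j]+t, d[j]+t] overlap iff
                // the larger left end is <= the smaller right end
                if (max(a[i], c[j] + t) <= min(b[i], d[j] + t))
                    ok = true;
        if (ok) count++;
    }
    cout << count << endl;
    return 0;
}

With p, q ≤ 50 and r ≤ 1000 this performs at most about 2.5 million interval checks, comfortably within the 1-second limit.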


 