I solved this with DP. Since everyone's code is visible, I downloaded the first-place contestant's solution and took a look; it is also DP.
Later I read the official analysis: it turns out a greedy approach combined with a hash set can achieve O(n).
rem's code
My own code
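A minimal sketch of the DP idea only (not the original submission; it assumes the usual setup of this problem: S search engines identified by indices 0..S-1, a sequence of queries each naming one engine, a query cannot be served by the engine of the same name, and we want the minimum number of engine switches):

```cpp
#include <bits/stdc++.h>
using namespace std;

// dp[j] = minimum number of switches needed to serve all queries so far
// while currently sitting on engine j.
int minSwitchesDP(int S, const vector<int>& queries) {
    vector<int> dp(S, 0);                    // any starting engine is free
    for (int q : queries) {
        // Engines other than q serve this query at no extra cost.
        // Being on engine q right after this query means we just switched
        // to it from the cheapest other engine, paying one switch.
        int best = INT_MAX;
        for (int k = 0; k < S; ++k)
            if (k != q) best = min(best, dp[k]);
        dp[q] = best + 1;
    }
    return *min_element(dp.begin(), dp.end());
}
```

As written this is O(Q·S) per test case; tracking the smallest and second-smallest entries of dp would bring it down to O(Q).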
Excerpt from the official analysis
Working in the opposite direction, (*) is all we need to achieve; as long as you can partition the list of queries into such segments, it corresponds to a plan of saving the universe. You don't even care about which engine is used for one segment; any engine not appearing as a query on that segment will do. However, you might sometimes pick the same engine for two consecutive segments, laughing at yourself when you realize it; why don't I just join the two segments into one? Because your task is to use as few segments as possible, it is obvious that you want to make each segment as long as possible.
This leads to the greedy solution: Starting from the first query, add one query at a time to the current segment until the names of all S search engines have been encountered. Then we continue this process in a new segment until all queries are processed.
Code for the greedy + hash approach
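A minimal sketch of the greedy + hash idea from the excerpt above (engine names kept as strings in an unordered_set; the function name and signature are for illustration only):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Scan the queries, collecting the engine names seen in the current segment.
// Once all S names have appeared, the latest query cannot belong to this
// segment, so it starts a new one and costs exactly one switch.
int minSwitchesGreedy(int S, const vector<string>& queries) {
    unordered_set<string> seen;      // engines queried in the current segment
    int switches = 0;
    for (const string& q : queries) {
        seen.insert(q);
        if ((int)seen.size() == S) { // every engine has been named
            ++switches;
            seen.clear();
            seen.insert(q);          // q opens the new segment
        }
    }
    return switches;
}
```

Each query triggers at most two hash insertions, so the scan is O(n) amortized. For S = 2 and the query sequence A, B, A, both sketches return 2, matching the intuition that two switches are forced.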
