Cube Stacking

This article analyzes the Cube Stacking problem in depth, solving the game with a union-find (disjoint-set) structure. It walks through the approach with concrete input/output examples and discusses the mathematical reasoning and algorithmic optimizations behind the problem.


Cube Stacking
Time Limit: 2000MS Memory Limit: 30000K
Total Submissions: 17959 Accepted: 6208
Case Time Limit: 1000MS

Description

Farmer John and Bessie are playing a game with N (1 <= N <= 30,000) identical cubes labeled 1 through N. They start with N stacks, each containing a single cube. Farmer John asks Bessie to perform P (1 <= P <= 100,000) operations. There are two types of operations: moves and counts.
* In a move operation, Farmer John asks Bessie to move the stack containing cube X on top of the stack containing cube Y.
* In a count operation, Farmer John asks Bessie to count the number of cubes in the stack containing cube X that are under cube X, and report that value.

Write a program that can verify the results of the game.

Input

* Line 1: A single integer, P

* Lines 2..P+1: Each of these lines describes a legal operation. Line 2 describes the first operation, etc. Each line begins with an 'M' for a move operation or a 'C' for a count operation. For move operations, the line also contains two integers: X and Y. For count operations, the line also contains a single integer: X.

Note that the value for N does not appear in the input file. No move operation will request moving a stack onto itself.

Output

Print the output from each of the count operations in the same order as the input file.

Sample Input

6
M 1 6
C 1
M 2 4
M 2 6
C 3
C 4

Sample Output

1
0
2

Code:

#include<iostream>
#include<cstdio>
#define Maxn 30010
using namespace std;

// fa[i]: parent of i in the disjoint-set forest
// dis[i]: number of cubes above cube i in its stack
// height[i]: total number of cubes in the stack (valid only when i is a root)
int fa[Maxn],dis[Maxn],height[Maxn];
int findset(int x){
    if(fa[x]!=x){
        int t=fa[x];
        fa[x]=findset(fa[x]);   // path compression
        dis[x]+=dis[t];         // accumulate the offset along the old parent chain
    }
    return fa[x];
}
void unionset(int x,int y){     // move the stack containing x onto the stack containing y
    x=findset(x),y=findset(y);
    if(x==y) return;
    fa[y]=x;                    // hang y's root under x's root
    dis[y]=height[x];           // x's entire stack now sits above y's old top
    height[x]+=height[y];
}
int main()
{
    int p,a,b;
    char s;
    cin>>p;
    for(int i=0;i<Maxn;i++) fa[i]=i,dis[i]=0,height[i]=1;
    for(int i=0;i<p;i++){
        cin>>s;
        if(s=='M') {cin>>a>>b;unionset(a,b);}
        else{
            cin>>a;
            int t=findset(a);
            // cubes below a = stack size - cubes above a - cube a itself
            cout<<height[t]-dis[a]-1<<endl;
        }
    }
    return 0;
}


Note: this is another weighted ("offset") union-find problem, similar in spirit to the Food Chain problem. The key insight is to track each element's distance to the root of its set; combined with the total number of cubes in each stack, that distance converts directly into the required answer. The distances are maintained during path compression, and the reason this works, as I see it, is that any query that needs dis[x] must call findset(x) first, and that call is exactly what brings dis[x] up to date. This kind of union-find has the feel of vector arithmetic, so calling it an "offset vector" is not a stretch. In this problem the distances are obviously additive; in the Food Chain problem it is far less obvious: the update formulas there have to be derived by enumerating all the cases and involve modular arithmetic, yet in the end they too reduce to vector-style addition.

I keep wondering whether this is a coincidence or a necessity... I still have not fully worked it out, but I lean toward coincidence. The Food Chain problem has only three kinds of relations, and the "same species" relation must be encoded as 0, otherwise the vector property breaks; it also happens that the three species form a closed vector triangle. To generalize to n relations, it would take more than forming a vector n-gon: most importantly, the offsets must be additive, a full cycle must return to the starting point, and in fact any m (m >= 2) of the n points would have to satisfy the same properties. Only then does an offset vector exist.

The above is purely my personal take; corrections from those who know better are welcome!
