Rumor

Vova promised himself that he would never play computer games... But recently Firestorm — a well-known game developing company — published their newest game, World of Farcraft, and it became really popular. Of course, Vova started playing it.

Now he tries to solve a quest. The task is to come to a settlement named Overcity and spread a rumor in it.

Vova knows that there are n characters in Overcity. Some characters are friends to each other, and they share information they get. Also Vova knows that he can bribe each character so that he or she starts spreading the rumor; the i-th character wants c_i gold in exchange for spreading the rumor. When a character hears the rumor, he tells it to all his friends, and they start spreading the rumor to their friends (for free), and so on.

The quest is finished when all n characters know the rumor. What is the minimum amount of gold Vova needs to spend in order to finish the quest?

Take a look at the notes if you think you haven't understood the problem completely.

Input

The first line contains two integer numbers n and m (1 ≤ n ≤ 10^5, 0 ≤ m ≤ 10^5) — the number of characters in Overcity and the number of pairs of friends.

The second line contains n integer numbers c_i (0 ≤ c_i ≤ 10^9) — the amount of gold the i-th character asks to start spreading the rumor.

Then m lines follow, each containing a pair of numbers (x_i, y_i) which represent that characters x_i and y_i are friends (1 ≤ x_i, y_i ≤ n, x_i ≠ y_i). It is guaranteed that each pair is listed at most once.

Output

Print one number — the minimum amount of gold Vova has to spend in order to finish the quest.

Examples
Input
5 2
2 5 3 4 8
1 4
4 5
Output
10
Input
10 0
1 2 3 4 5 6 7 8 9 10
Output
55
Input
10 5
1 6 2 7 3 8 4 9 5 10
1 2
3 4
5 6
7 8
9 10
Output
15
Note

In the first example the best decision is to bribe the first character (he will spread the rumor to the fourth character, and the fourth one will spread it to the fifth). Also Vova has to bribe the second and the third characters, so they know the rumor. The total cost is 2 + 5 + 3 = 10.

In the second example Vova has to bribe everyone.

In the third example the optimal decision is to bribe the first, the third, the fifth, the seventh and the ninth characters, paying 1 + 2 + 3 + 4 + 5 = 15.



Problem summary: spread a rumor so that all n people know it. Bribing the i-th person costs a[i] gold, and anyone who knows the rumor passes it on for free to everyone he knows; there are m relations, each saying that x and y know each other. Friends therefore form connected components, one bribe per component suffices, and the answer is the sum of the cheapest bribe in each component (in the first example the components are {1, 4, 5}, {2} and {3}, with minima 2, 5 and 3, giving 10).

Code:

#include<stdio.h>
#include<string.h>
#include<algorithm>
using namespace std;
int a[100005],pre[100005];
int find(int x)
{
    if(x==pre[x])
        return x;
    else    // Path compression: store the root in pre[x]. A bare "return find(pre[x])"
            // without storing it leaves the chains long and exceeds the limits.
    {
        pre[x]=find(pre[x]);
        return pre[x];
    }
}
void join(int x,int y)
{
    int fx=find(x);
    int fy=find(y);
    if(fx!=fy)          // merge only when the two roots differ
        pre[fy]=fx;
}
int main()
{
    int n,m;
    scanf("%d%d",&n,&m);
    for(int i=1;i<=n;i++)
    {
        scanf("%d",&a[i]);
        pre[i]=i;       // every character starts in its own set
    }
    int x,y;
    for(int i=0;i<m;i++)
    {
        scanf("%d%d",&x,&y);
        join(x,y);      // friends end up in the same component
    }
    for(int i=1;i<=n;i++)   // keep the cheapest bribe of each component at its root
        a[find(i)]=min(a[i],a[find(i)]);
    long long ans=0;    // int overflows here: the sum can reach 10^5 * 10^9
    for(int i=1;i<=n;i++)
        if(i==pre[i])   // i is a root (all paths were compressed by the loop above)
            ans+=a[i];
    printf("%lld\n",ans);
    return 0;
}
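
For comparison, the same answer can be computed without a disjoint-set structure: the friendship graph splits into connected components, and a BFS over each component finds its cheapest bribe. The sketch below is a minimal illustration under the same input/output format; the adjacency list g and the names vis and best are my own, not part of the original solution.

#include<cstdio>
#include<vector>
#include<queue>
#include<algorithm>
using namespace std;
int main()
{
    int n,m;
    scanf("%d%d",&n,&m);
    vector<long long> c(n+1);
    for(int i=1;i<=n;i++)
        scanf("%lld",&c[i]);
    vector<vector<int> > g(n+1);        // adjacency list of the friendship graph
    for(int i=0;i<m;i++)
    {
        int x,y;
        scanf("%d%d",&x,&y);
        g[x].push_back(y);
        g[y].push_back(x);
    }
    vector<bool> vis(n+1,false);
    long long ans=0;
    for(int s=1;s<=n;s++)
    {
        if(vis[s]) continue;            // s starts a new, unvisited component
        long long best=c[s];
        queue<int> q;
        q.push(s);
        vis[s]=true;
        while(!q.empty())               // BFS the whole component
        {
            int u=q.front(); q.pop();
            best=min(best,c[u]);        // track the cheapest bribe seen so far
            for(size_t j=0;j<g[u].size();j++)
            {
                int v=g[u][j];
                if(!vis[v]) { vis[v]=true; q.push(v); }
            }
        }
        ans+=best;                      // one bribe per component suffices
    }
    printf("%lld\n",ans);
    return 0;
}

Both versions run in essentially linear time in n + m; the union-find variant uses less memory since it never builds the adjacency list.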



