HDU 2842 Chinese Rings (matrix application)

This article presents a solution to the Chinese Rings (Baguenaudier) problem: model it mathematically and use matrix exponentiation to compute the minimum number of steps. The recurrence is derived in detail and a C++ implementation is provided.


Chinese Rings

Time Limit: 2000/1000 MS (Java/Others) Memory Limit: 32768/32768 K (Java/Others)
Total Submission(s): 53 Accepted Submission(s): 32
 
Problem Description
Dumbear likes to play the Chinese Rings (Baguenaudier). It’s a game played with nine rings on a bar. The rules of this game are very simple: At first, the nine rings are all on the bar.
The first ring can be taken off or taken on with one step.
If the first k rings are all off and the (k + 1)th ring is on, then the (k + 2)th ring can be taken off or taken on with one step. (0 ≤ k ≤ 7)

Now consider a game with N (N ≤ 1,000,000,000) rings on a bar, Dumbear wants to make all the rings off the bar with least steps. But Dumbear is very dumb, so he wants you to help him.
 
Input
Each line of the input file contains a number N indicates the number of the rings on the bar. The last line of the input file contains a number "0".
 
Output
For each line, output an integer S indicates the least steps. For the integers may be very large, output S mod 200907.
 
Sample Input
1
4
0
 
Sample Output
1
10
 
 
Source
2009 Multi-University Training Contest 3 - Host by WHU 
 
Nine Linked Rings ---> n rings; the rules are easy to look up.

Problem: given n rings and the rule that to take off the nth ring, the first n-2 rings must all be off while the (n-1)th ring is still on.

Approach: let f[n] be the minimum number of steps to take all n rings off. To remove the nth ring, the first n-2 rings must come off first, and then the nth ring comes off in one step, giving f[n-2] + 1 so far. That still leaves the (n-1)th ring; removing it requires the (n-2)th ring to be back on, so the first n-2 rings must be put back on, costing another f[n-2] steps (put back exactly the way they came off). Now the remaining n-1 rings come off in f[n-1] steps. Altogether: f[n] = f[n-1] + 2*f[n-2] + 1. Then construct the matrix:

It is not hard to derive:

| f[n]   |   | 1 2 1 |   | f[n-1] |
| f[n-1] | = | 1 0 0 | * | f[n-2] |
| 1      |   | 0 0 1 |   |   1    |


At first int got WA while long long got AC. It turns out the numbers do overflow: in the matrix multiply, a single product a.a[i][k] * b.a[k][j] can reach (200907 - 1)^2, roughly 4 * 10^10, which is far beyond 32-bit int range.

#include <iostream>
#include <cstring>   // memset
using namespace std;
const int mod = 200907;
int width;  // dimension of the (square) matrices actually used
struct Matrix {
    long long a[35][35];  // long long: products of two residues need 64 bits
    void init() {         // zero out the matrix
        memset(a, 0, sizeof(a));
    }
    // the compiler-generated copy and assignment already copy a[][]
    // element-wise, so no hand-written operator= is needed
};

// matrix product modulo mod (only the top-left width x width block is used)
Matrix operator * (const Matrix& a, const Matrix& b) {
    Matrix c;
    c.init();
    for (int i = 0; i < width; ++i) {
        for (int j = 0; j < width; ++j) {
            for (int k = 0; k < width; ++k) {
                c.a[i][j] = (c.a[i][j] + a.a[i][k] * b.a[k][j]) % mod;
            }
        }
    }
    return c;
}

// matrix power by binary exponentiation: returns a^k
Matrix operator ^ (Matrix a, int k) {
    Matrix c;
    c.init();
    for (int i = 0; i < width; ++i)
        c.a[i][i] = 1;  // start from the identity matrix
    while (k) {
        if (k&1)
            c = a * c;
        k >>= 1;
        a = a * a;
    }
    return c;
}


int main() {
    width = 3;
    int n;
    Matrix f;                    // transition matrix from the recurrence
    f.init();
    f.a[0][0] = f.a[0][2] = f.a[1][0] = f.a[2][2] = 1;
    f.a[0][1] = 2;
    Matrix ini;                  // initial column vector (f[2], f[1], 1)
    ini.init();
    ini.a[0][0] = 2; ini.a[1][0] = ini.a[2][0] = 1;
    while (cin >> n && n) {
        if (n <= 2) {
            cout << n << endl;   // base cases: f[1] = 1, f[2] = 2
        } else {
            Matrix a = (f ^ (n - 2)) * ini;  // advance the recurrence n-2 steps
            cout << a.a[0][0] % mod << endl;
        }
    }
}


