No. 15 - Fibonacci Sequences



Problem: Please implement a function which returns the nth number in the Fibonacci sequence for an input n. The Fibonacci sequence is defined as:

    f(0) = 0
    f(1) = 1
    f(n) = f(n - 1) + f(n - 2)    (for n > 1)

Analysis: Getting numbers in the Fibonacci sequence is a classic interview question. There are several solutions for it, and their performance varies a lot.

Solution 1: Inefficient recursive solution

Fibonacci sequences are used as examples to teach recursive functions in many C/C++ textbooks, so most candidates are familiar with the recursive solution. They feel confident and delighted when they meet this problem during interviews, because they can write the following code in a short time:

long long Fibonacci(unsigned int n)
{
    if(n == 0)
        return 0;

    if(n == 1)
        return 1;

    return Fibonacci(n - 1) + Fibonacci(n - 2);
}

That our textbooks take Fibonacci sequences as examples of recursive functions does not necessarily mean that recursion is a good solution for Fibonacci sequences. Interviewers may tell candidates that the performance of this recursive solution is quite bad, and ask them to analyze the root cause.

Let us take f(10) as an example to analyze the recursive process. We have to get f(9) and f(8) before we get f(10). Meanwhile, f(8) and f(7) are needed before we get f(9). The dependency can be visualized in a tree, as shown in Figure 1:

[Figure 1: The tree of recursive calls made to compute f(10)]

It is not difficult to notice that there are many duplicated nodes in the tree in Figure 1. The number of duplicated nodes increases dramatically as n increases. Readers may try the 100th number in the Fibonacci sequence to get an intuitive idea of how slow this recursive solution is.
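
To see how fast the duplication grows, here is a minimal sketch (added for illustration; the counter g_calls and the name FibonacciCounted are not from the original post) that counts how many times the naive recursion is invoked:

#include <cstdio>

long long g_calls = 0;

long long FibonacciCounted(unsigned int n)
{
    ++g_calls;    // count every invocation, including duplicated ones
    if(n == 0)
        return 0;
    if(n == 1)
        return 1;
    return FibonacciCounted(n - 1) + FibonacciCounted(n - 2);
}

int main()
{
    long long result = FibonacciCounted(30);
    printf("f(30) = %lld after %lld calls\n", result, g_calls);
    return 0;
}

This prints about 2.7 million calls for n = 30 alone, and the count roughly multiplies by 1.6 every time n increases by one.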

Solution 2: Practical solution with O(n) efficiency

Fortunately, it is easy to optimize performance if we calculate from the bottom up. That is to say, we get f(2) based on f(0) and f(1), get f(3) based on f(1) and f(2), and follow this pattern until we get f(n). Obviously its time complexity is O(n). Its corresponding code is shown below:

long long Fibonacci(unsigned int n)
{
    int result[2] = {0, 1};
    if(n < 2)
        return result[n];

    long long fibNMinusOne = 1;
    long long fibNMinusTwo = 0;
    long long fibN = 0;
    for(unsigned int i = 2; i <= n; ++i)
    {
        fibN = fibNMinusOne + fibNMinusTwo;

        fibNMinusTwo = fibNMinusOne;
        fibNMinusOne = fibN;
    }

    return fibN;
}
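
As a quick usage example (a small test driver added here, not part of the original post), the function can be checked against the first few values from the definition:

#include <cstdio>

// Assumes the O(n) Fibonacci(...) above is in the same translation unit.
int main()
{
    const long long expected[] = {0, 1, 1, 2, 3, 5, 8, 13, 21, 34};
    for(unsigned int n = 0; n < 10; ++n)
    {
        if(Fibonacci(n) != expected[n])
            printf("Mismatch at n = %u\n", n);
    }
    printf("f(50) = %lld\n", Fibonacci(50));    // prints 12586269025
    return 0;
}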

Solution 3: O(log n) solution

Usually interviewers expect the O(n) solution above. However, there is an O(log n) solution available, which is based on an uncommon equation as shown below:

    [ f(n+1)  f(n)   ]     [ 1  1 ] ^ n
    [ f(n)    f(n-1) ]  =  [ 1  0 ]

It is not difficult to prove this equation with mathematical induction. Interested readers may have a try.
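
For readers who want the outline, here is a brief sketch of the induction (added here; it is not part of the original post). When n = 1, the left side is

    [ f(2)  f(1) ]     [ 1  1 ]
    [ f(1)  f(0) ]  =  [ 1  0 ]

so the base case holds. Now assume the equation holds for n = k, and multiply both sides on the right by [1 1; 1 0]. On the left, the first row becomes [f(k+1) + f(k), f(k+1)] = [f(k+2), f(k+1)] and the second row becomes [f(k) + f(k-1), f(k)] = [f(k+1), f(k)], which is exactly the claimed equation for n = k + 1.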

Now the only problem is how to calculate the power of a matrix. We can calculate a power with exponent n in O(log n) time with the following equations:

    a^n = (a^(n/2))^2                 when n is even
    a^n = (a^((n-1)/2))^2 * a         when n is odd

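Before looking at the matrix version, it may help to see the same halving idea applied to plain integers. The following is a minimal sketch added for illustration (the function name Power is not from the original post):

long long Power(long long base, unsigned int n)
{
    if(n == 0)
        return 1;

    // Compute the half power once and square it, so only O(log n)
    // multiplications are needed in total.
    long long half = Power(base, n / 2);
    long long result = half * half;

    if(n % 2 == 1)
        result *= base;    // an odd exponent needs one extra factor of base

    return result;
}

For example, Power(2, 10) returns 1024 after only four recursive calls, instead of the nine multiplications of a naive loop.
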
The source code to compute the power of a matrix looks somewhat complicated; it is listed below:

#include <cassert>

struct Matrix2By2
{
    Matrix2By2
    (
        long long m00 = 0,
        long long m01 = 0,
        long long m10 = 0,
        long long m11 = 0
    )
    : m_00(m00), m_01(m01), m_10(m10), m_11(m11)
    {
    }

    long long m_00;
    long long m_01;
    long long m_10;
    long long m_11;
};

Matrix2By2 MatrixMultiply
(
    const Matrix2By2& matrix1,
    const Matrix2By2& matrix2
)
{
    return Matrix2By2(
        matrix1.m_00 * matrix2.m_00 + matrix1.m_01 * matrix2.m_10,
        matrix1.m_00 * matrix2.m_01 + matrix1.m_01 * matrix2.m_11,
        matrix1.m_10 * matrix2.m_00 + matrix1.m_11 * matrix2.m_10,
        matrix1.m_10 * matrix2.m_01 + matrix1.m_11 * matrix2.m_11);
}

Matrix2By2 MatrixPower(unsigned int n)
{
    assert(n > 0);

    Matrix2By2 matrix;
    if(n == 1)
    {
        matrix = Matrix2By2(1, 1, 1, 0);
    }
    else if(n % 2 == 0)
    {
        matrix = MatrixPower(n / 2);
        matrix = MatrixMultiply(matrix, matrix);
    }
    else if(n % 2 == 1)
    {
        matrix = MatrixPower((n - 1) / 2);
        matrix = MatrixMultiply(matrix, matrix);
        matrix = MatrixMultiply(matrix, Matrix2By2(1, 1, 1, 0));
    }

    return matrix;
}

long long Fibonacci(unsigned int n)
{
    int result[2] = {0, 1};
    if(n < 2)
        return result[n];

    // The top-left entry of [1 1; 1 0] raised to the power n - 1 is f(n).
    Matrix2By2 PowerNMinusOne = MatrixPower(n - 1);
    return PowerNMinusOne.m_00;
}
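
As a usage example (a small cross-check added here, not in the original post; it assumes the O(n) function from Solution 2 has been renamed FibonacciIterative so both versions can live in one file):

#include <cstdio>

int main()
{
    for(unsigned int n = 0; n < 20; ++n)
    {
        if(Fibonacci(n) != FibonacciIterative(n))
            printf("Mismatch at n = %u\n", n);
    }
    printf("The two solutions agree for n = 0 to 19.\n");
    return 0;
}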

Even though it costs only O(log n) time in theory, the hidden constant factor is quite large, so it is not treated as a practical solution in real software development. Additionally, it is not a recommended solution during interviews, since its implementation code is very complex.

The author Harry He owns all the rights of this post. If you are going to use part or all of this article in your blog or web pages, please add a reference to http://codercareer.blogspot.com/. If you are going to use it in your books, please contact me (zhedahht@gmail.com). Thanks.
