Datawhale AI Spring Camp, New Energy Track: a 0.3-score baseline

from netCDF4 import Dataset
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error
import lightgbm as lgb
from tqdm import tqdm

1. Data Exploration

Read and inspect the meteorological data in the training set.

nc_path = "./data/pre_train_dataset/nwp_data_train/1/NWP_1/20240101.nc"
dataset = Dataset(nc_path, mode='r')
dataset.variables.keys()
dict_keys(['time', 'channel', 'data', 'lat', 'lon', 'lead_time'])

Inspect the variables in channel; there are 8 in total:

channel = dataset.variables["channel"][:]
channel
array(['ghi', 'poai', 'sp', 't2m', 'tcc', 'tp', 'u100', 'v100'],
      dtype=object)

Inspect the shape of data: it is (1, 24, 8, 11, 11). Given the variable list above, this plausibly corresponds to (time, lead_time, channel, lat, lon).

data = dataset.variables["data"][:]
data.shape
(1, 24, 8, 11, 11)
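
The axis meanings are not documented in the file itself; a quick way to check is to look at the other coordinate variables. A sketch (the reading of lead_time as one step per hour is an assumption):

# Sketch: inspect the remaining coordinate variables to pin down what each axis of `data` means
print(dataset.variables["lat"][:].shape, dataset.variables["lon"][:].shape)  # expected: (11,) (11,)
print(dataset.variables["lead_time"][:])  # presumably 24 forecast lead times, one per hour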

2. Data Processing

From the output of the previous step, each hour of meteorological data for a station is 2-D (11×11) per feature, but the model only needs a single scalar per feature per time point. We therefore average over the 11×11 grid points, collapsing the 2-D field into one scalar.

Moreover, the meteorological data provided by the organizers has hourly resolution while the power data has 15-minute resolution: one day gives us 24 weather records against 96 (24×4) power records. We therefore keep only one of every four power records.

# Dates covering 2024
date_range = pd.date_range(start='2024-01-01', end='2024-12-30')
# Convert %Y-%m-%d format to %Y%m%d
date = [date.strftime('%Y%m%d') for date in date_range]
# Function that reads one day of the training/test set
def get_data(path_template, date):
    # Open that day's file
    dataset = Dataset(path_template.format(date), mode='r')
    # Column names
    channel = dataset.variables["channel"][:]
    # Data corresponding to those columns
    data = dataset.variables["data"][:]
    # for i in range(8) iterates over the third dimension (the 8 channels)
    # data[:, :, i, :, :][0] has shape (24, 11, 11)
    # np.mean(data[:, :, i, :, :][0], axis=(1, 2)) averages over the second and third dimensions (11, 11), giving a length-24 array
    # the loop runs 8 times, so the stacked array has shape 8x24
    # .T then transposes it to 24x8
    mean_values = np.array([np.mean(data[:, :, i, :, :][0], axis=(1, 2)) for i in range(8)]).T
    # Combine the values and column names into a DataFrame
    return pd.DataFrame(mean_values, columns=channel)
# Path template
train_path_template = "../new_energy_power_forecast/data/pre_train_dataset/nwp_data_train/1/NWP_1/{}.nc"
# e.g. new_energy_power_forecast/data/pre_train_dataset/nwp_data_train/1/NWP_1/20240101.nc

# List comprehension: each element of the returned list is one day of data
data = [get_data(train_path_template, i) for i in date]
# Concatenate the daily frames and reset the index
train = pd.concat(data, axis=0).reset_index(drop=True)
# Read the target values
target = pd.read_csv("./data/pre_train_dataset/fact_data/1_normalization_train.csv")
# Drop the first 96 rows (one day of 15-minute records), presumably to align the power series with the weather data
target = target[96:]
# Keep only the on-the-hour rows, i.e. drop 3 of every 4 power records
target = target[target['时间'].str.endswith('00:00')]
target = target.reset_index(drop=True)
# Merge the target into the training frame
train["power"] = target["功率(MW)"]


test_path_template = "./data/pre_test_dataset/nwp_data_test/1/NWP_1/{}.nc"

date_range_test = pd.date_range(start='2024-12-31', end='2025-02-27')
date_test = [date.strftime('%Y%m%d') for date in date_range_test]
date_test[:5]

data_test = [get_data(test_path_template, i) for i in date_test]
test = pd.concat(data_test, axis=0).reset_index(drop=True)
test.shape
(1416, 8)
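
A quick consistency check on the test span: 2024-12-31 through 2025-02-27 is 59 days, so we expect 59 × 24 hourly rows. A one-line sketch:

# Sketch: the test period should yield 59 days x 24 hours = 1416 hourly rows
print(len(date_test) * 24)  # 1416, matching test.shape[0]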

3. Data Visualization

From these distributions, we look for periodicity, trends, correlations, and anomalies among the features and between the features and the target.

import matplotlib.pyplot as plt
# The 24 hours of one day
hours = range(24)
# Create the figure
plt.figure(figsize=(20,10))
# Plot the eight features plus the target
for i in range(9):
    # Subplot i+1 of a 3x3 grid
    plt.subplot(3, 3, i+1)
    # x-axis: hour of day; y-axis: feature or target value
    plt.plot(hours, train.iloc[:24, i])
    # Use the column name as the title
    plt.title(train.columns.tolist()[i])
# Show the figure
plt.show()

[Figure: 3×3 grid of hourly line plots, one panel per weather feature plus power]

Based on the plots above, a few conjectures:

ghi and poai are roughly proportional.

sp, t2m, tcc, and v100 share broadly similar trends.

power is highest mainly in the early-morning hours.

power drops off sharply during the morning.

tp is one of the few monotonically increasing features, and its rise seems to coincide with the interval where power declines.

Can the total wind speed be derived from the wind components along the latitude and longitude directions? Would the combined speed correlate with power? (See the sketch after this list.)

These conjectures need more data to confirm, and examining that data will generate further ideas.
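
As a quick probe of the wind-speed conjecture, here is a minimal sketch (assuming the train frame built in section 2) that combines u100 and v100 into a speed and checks its linear correlation with power:

# Sketch: combine the u/v wind components and check correlation with power
# (plain Pearson correlation, so this only captures linear association)
speed = np.sqrt(train['u100']**2 + train['v100']**2)
print(speed.corr(train['power']))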

4. Feature Engineering

Feature engineering is the process of turning raw data into the data a model actually trains on; its goal is to obtain better training features. Good feature engineering improves model performance and can sometimes achieve solid results even with simple models.

def feature_combine(df):
    # Work on a copy
    df_copy = df.copy()
    # New wind-speed column: combine the two directional components into a total speed
    df_copy["wind_speed"] = np.sqrt(df_copy['u100']**2 + df_copy['v100']**2)
    # Hour-of-day feature
    df_copy["h"] = df_copy.index % 24
    return df_copy

train = feature_combine(train)
test = feature_combine(test)  # make sure the same feature engineering is applied to the test set
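
One possible extension, not part of this baseline: the integer hour feature h treats 23:00 and 00:00 as far apart, whereas a cyclic sin/cos encoding preserves the wrap-around. A minimal sketch:

# Sketch: cyclic encoding of the hour feature (not used in the baseline)
# sin/cos map hour 23 and hour 0 to nearby points, unlike the raw integer
def add_cyclic_hour(df):
    df = df.copy()
    df["h_sin"] = np.sin(2 * np.pi * df["h"] / 24)
    df["h_cos"] = np.cos(2 * np.pi * df["h"] / 24)
    return df

# train = add_cyclic_hour(train); test = add_cyclic_hour(test)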

5. Data Cleaning

Data and features determine the ceiling of machine learning; models and algorithms merely approach that ceiling. As the saying goes: garbage in, garbage out. After analyzing the data and before feature engineering, cleaning the data is an indispensable step.

Data cleaning turns dirty data into data that meets quality requirements, using techniques such as statistics, data mining, or predefined cleaning rules.

This baseline does no detailed cleaning; it only drops rows that contain missing values:

train = train.dropna().reset_index(drop=True)
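
Before dropping rows wholesale, it is worth knowing how much is actually missing. A small inspection sketch (assuming the train frame as built above):

# Sketch: inspect missingness before dropping rows
# A large count here would suggest imputation instead of dropna
print(train.isna().sum())              # NaNs per column
print(train.isna().any(axis=1).sum())  # number of rows that would be dropped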

6.1 Model Training and Validation

Feature engineering and data cleaning both serve the final model; how the model is built and tuned determines the final result. The choice of model sets the upper bound on performance, and tuning determines how closely we approach that bound.

Building the model requires a basic understanding of common linear and non-linear models; once it is built, we also need methods and techniques for validating its performance.

We use a LightGBM model and the classic K-fold cross-validation scheme for offline evaluation. The overall flow (a toy sketch of the index generation follows this list):

K-fold cross-validation randomly splits the samples into K folds;

in each round, K-1 folds are used as the training set and the remaining fold as the validation set;

the process repeats so that each fold serves as the validation set exactly once;

finally, the K folds' test-set predictions are averaged to form the submission.
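
A minimal sketch of the splitting mechanics on toy data (sklearn's KFold is what cv_model below relies on):

# Sketch: how KFold produces train/validation index pairs (toy example)
from sklearn.model_selection import KFold

toy_X = np.arange(10)
for fold, (trn_idx, val_idx) in enumerate(KFold(n_splits=5, shuffle=True, random_state=2024).split(toy_X)):
    print(fold, trn_idx, val_idx)  # each sample appears in exactly one validation fold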

from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error
import lightgbm as lgb

def cv_model(clf, train_x, train_y, test_x, seed=2024):
    # 5-fold cross-validation
    folds = 5
    # Generates the train/validation index pairs
    kf = KFold(n_splits=folds, shuffle=True, random_state=seed)
    # Out-of-fold predictions on the training set
    oof = np.zeros(train_x.shape[0])
    # Accumulated test-set predictions
    test_predict = np.zeros(test_x.shape[0])
    # Per-fold scores
    cv_scores = []
    # kf.split(train_x, train_y) returns a generator; each iteration yields a new (train_index, valid_index) pair
    # The loop runs five times
    for i, (train_index, valid_index) in enumerate(kf.split(train_x, train_y)):
        print('************************************ {} ************************************'.format(str(i+1)))
        # Training and validation data for the current fold
        trn_x, trn_y, val_x, val_y = train_x.iloc[train_index], train_y[train_index], train_x.iloc[valid_index], train_y[valid_index]
        
        # Wrap the training and validation data in LightGBM Datasets
        train_matrix = clf.Dataset(trn_x, label=trn_y)
        valid_matrix = clf.Dataset(val_x, label=val_y)
        # Model parameters
        params = {
            # Boosting type
            'boosting_type': 'gbdt',
            # Regression task
            'objective': 'regression',
            # Evaluation metric
            'metric': 'rmse',
            # Minimum sum of instance weight needed in a child
            'min_child_weight': 5,
            # Maximum number of leaves
            'num_leaves': 2 ** 8,
            # L2 regularization weight
            'lambda_l2': 10,
            # Randomly select 80% of the features on each iteration
            'feature_fraction': 0.8,
            # Randomly select 80% of the samples on each iteration
            'bagging_fraction': 0.8,
            # Perform bagging every 4 iterations
            'bagging_freq': 4,
            # Learning rate, i.e. the step size of model updates
            'learning_rate': 0.1,
            # Random seed
            'seed': 2023,
            # Number of threads
            'nthread' : 16,
            # Suppress training output
            'verbose' : -1,
        }
        # Train the model with these parameters
        model = clf.train(params, train_matrix, 3000, valid_sets=[train_matrix, valid_matrix])
        # Predict on the validation set
        val_pred = model.predict(val_x, num_iteration=model.best_iteration)
        # Predict on the test set
        test_pred = model.predict(test_x, num_iteration=model.best_iteration)
        # Store this fold's validation predictions
        oof[valid_index] = val_pred
        # Average the per-fold test predictions
        test_predict += test_pred / kf.n_splits
        # Score = 1 / (1 + RMSE), so a higher score means better performance
        score = 1/(1+np.sqrt(mean_squared_error(val_pred, val_y)))
        # Store the score
        cv_scores.append(score)
        # Print the scores so far
        print(cv_scores)
    # Return the out-of-fold and test-set predictions
    return oof, test_predict
# All training columns except power
cols = [f for f in train.columns if f not in ['power']]
# Run the cross-validation
lgb_oof, lgb_test = cv_model(lgb, train[cols], train["power"], test)
# Repeat each hourly prediction 4 times to reach 15-minute resolution
lgb_test = [item for item in lgb_test for _ in range(4)]
************************************ 1 ************************************
[np.float64(0.9321765039265354)]
************************************ 2 ************************************
[np.float64(0.9321765039265354), np.float64(0.9318322285078197)]
************************************ 3 ************************************
[np.float64(0.9321765039265354), np.float64(0.9318322285078197), np.float64(0.9342164658678296)]
************************************ 4 ************************************
[np.float64(0.9321765039265354), np.float64(0.9318322285078197), np.float64(0.9342164658678296), np.float64(0.928174460585625)]
************************************ 5 ************************************
[np.float64(0.9321765039265354), np.float64(0.9318322285078197), np.float64(0.9342164658678296), np.float64(0.928174460585625), np.float64(0.9289893514312805)]
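
The five per-fold scores can be condensed into a single out-of-fold score, which is usually a steadier offline estimate. A sketch reusing the same 1/(1+RMSE) transform from cv_model:

# Sketch: one overall score from the out-of-fold predictions
oof_rmse = np.sqrt(mean_squared_error(train["power"], lgb_oof))
print(1 / (1 + oof_rmse))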

6.2 Model Fusion

Here we train three gradient-boosted models (LightGBM, XGBoost, CatBoost) with a time-series split and blend their test predictions with fixed weights.

import numpy as np
import pandas as pd
from sklearn.model_selection import TimeSeriesSplit
from sklearn.metrics import mean_absolute_error
import lightgbm as lgb
import xgboost as xgb
from catboost import CatBoostRegressor

def cv_model(clf, train_x, train_y, test_x, clf_name, seed=2023):
    '''
    clf: the model module or class to use
    train_x: training data
    train_y: labels for the training data
    test_x: test data
    clf_name: name of the model to run
    seed: random seed
    '''
    folds = 5
    tscv = TimeSeriesSplit(n_splits=folds)
    oof = np.zeros(train_x.shape[0])
    test_predict = np.zeros(test_x.shape[0])
    cv_scores = []
    
    for i, (train_index, valid_index) in enumerate(tscv.split(train_x)):
        print('************************************ {} ************************************'.format(str(i+1)))
        trn_x, trn_y, val_x, val_y = train_x.iloc[train_index], train_y[train_index], train_x.iloc[valid_index], train_y[valid_index]
        
        if clf_name == "lgb":
            train_matrix = clf.Dataset(trn_x, label=trn_y, categorical_feature=[])
            valid_matrix = clf.Dataset(val_x, label=val_y, categorical_feature=[])
            params = {
                'boosting_type': 'gbdt',
                'objective': 'regression',
                'metric': 'mae',
                'min_child_weight': 6,
                'num_leaves': 2 ** 6,
                'lambda_l2': 10,
                'feature_fraction': 0.8,
                'bagging_fraction': 0.8,
                'bagging_freq': 4,
                'learning_rate': 0.1,
                'seed': 2023,
                'nthread': 16,
                'verbose': -1,  # suppress LightGBM's own log output
            }
            # use callbacks in place of the deprecated verbose_eval argument
            model = clf.train(
                params,
                train_matrix,
                num_boost_round=2000,
                valid_sets=[train_matrix, valid_matrix],
                valid_names=['train', 'valid'],
                callbacks=[
                    lgb.early_stopping(stopping_rounds=100),
                    lgb.log_evaluation(period=200)  # log every 200 iterations
                ]
            )
            val_pred = model.predict(val_x, num_iteration=model.best_iteration)
            test_pred = model.predict(test_x, num_iteration=model.best_iteration)
        
        elif clf_name == "xgb":
            xgb_params = {
                'booster': 'gbtree', 
                'objective': 'reg:squarederror',
                'eval_metric': 'mae',
                'max_depth': 5,
                'lambda': 10,
                'subsample': 0.7,
                'colsample_bytree': 0.7,
                'colsample_bylevel': 0.7,
                'eta': 0.1,
                'tree_method': 'hist',
                'seed': 520,
                'nthread': 16
            }
            train_matrix = clf.DMatrix(trn_x, label=trn_y)
            valid_matrix = clf.DMatrix(val_x, label=val_y)
            test_matrix = clf.DMatrix(test_x)
            
            watchlist = [(train_matrix, 'train'), (valid_matrix, 'eval')]
            
            model = clf.train(xgb_params, train_matrix, num_boost_round=2000, evals=watchlist,
                              verbose_eval=200, early_stopping_rounds=100)
            val_pred = model.predict(valid_matrix)
            test_pred = model.predict(test_matrix)
            
        elif clf_name == "cat":
            params = {
                'learning_rate': 0.1,
                'depth': 5,
                'bootstrap_type': 'Bernoulli',
                'od_type': 'Iter',
                'od_wait': 100,
                # the original dict set random_seed twice (2023, then 11); the later value is the one that took effect
                'random_seed': 11,
                'allow_writing_files': False
            }
            
            model = clf(iterations=2000, **params)
            model.fit(trn_x, trn_y, eval_set=(val_x, val_y),
                      metric_period=200,
                      use_best_model=True, 
                      cat_features=[],
                      verbose=1)
            
            val_pred = model.predict(val_x)
            test_pred = model.predict(test_x)
        
        oof[valid_index] = val_pred
        test_predict += test_pred / folds
        
        score = mean_absolute_error(val_y, val_pred)
        cv_scores.append(score)
        print(f"Fold {i+1} MAE: {score}")
    
    print(f"Average CV MAE: {np.mean(cv_scores)}")
    return oof, test_predict

# Make sure the features are consistent
cols = [f for f in train.columns if f not in ['power']]
assert train[cols].shape[1] == test.shape[1], "train and test have different numbers of features"
print("train features:", train[cols].columns.tolist())
print("test features:", test.columns.tolist())

# Train the three models
lgb_oof, lgb_test = cv_model(lgb, train[cols], train["power"], test, 'lgb')
xgb_oof, xgb_test = cv_model(xgb, train[cols], train["power"], test, 'xgb')
cat_oof, cat_test = cv_model(CatBoostRegressor, train[cols], train["power"], test, 'cat')

# Weighted-average fusion
lgb_weight, xgb_weight, cat_weight = 0.4, 0.3, 0.3
final_test = lgb_weight * lgb_test + xgb_weight * xgb_test + cat_weight * cat_test

# Build the submission
hourly_index = pd.date_range(start='2024-12-31 00:00', end='2025-02-27 23:00', freq='h')
hourly_pred = pd.DataFrame({'time': hourly_index, 'power': final_test})
hourly_pred = hourly_pred.set_index('time')
min15_index = pd.date_range(start='2024-12-31 00:00', end='2025-02-27 23:45', freq='15min')
# Note: the original reindex(method='ffill') left no NaNs for interpolate to fill (a step function);
# reindexing without a fill method lets interpolate work, and ffill covers the trailing 23:15-23:45 slots
min15_pred = hourly_pred.reindex(min15_index).interpolate(method='linear').ffill()
output = pd.read_csv("data/output/output1.csv")
output["power"] = min15_pred["power"].values
output = output.rename(columns={output.columns[0]: ''}).set_index('')[['power']]
output.to_csv('output/output1.csv')
train features: ['ghi', 'poai', 'sp', 't2m', 'tcc', 'tp', 'u100', 'v100', 'wind_speed', 'h']
test features: ['ghi', 'poai', 'sp', 't2m', 'tcc', 'tp', 'u100', 'v100', 'wind_speed', 'h']
************************************ 1 ************************************
Training until validation scores don't improve for 100 rounds
Early stopping, best iteration is:
[63]	train's l1: 0.0314686	valid's l1: 0.128275
Fold 1 MAE: 0.12827455136730476
************************************ 2 ************************************
Training until validation scores don't improve for 100 rounds
Early stopping, best iteration is:
[36]	train's l1: 0.0552484	valid's l1: 0.109052
Fold 2 MAE: 0.10905227760240692
************************************ 3 ************************************
Training until validation scores don't improve for 100 rounds
Early stopping, best iteration is:
[41]	train's l1: 0.0519096	valid's l1: 0.0666907
Fold 3 MAE: 0.06669074265705817
************************************ 4 ************************************
Training until validation scores don't improve for 100 rounds
Early stopping, best iteration is:
[43]	train's l1: 0.0496981	valid's l1: 0.0805869
Fold 4 MAE: 0.08058689527105856
************************************ 5 ************************************
Training until validation scores don't improve for 100 rounds
Early stopping, best iteration is:
[32]	train's l1: 0.0551059	valid's l1: 0.0915022
Fold 5 MAE: 0.09150221303533994
Average CV MAE: 0.09522133598663367
************************************ 1 ************************************
[0]	train-mae:0.17285	eval-mae:0.18011
[200]	train-mae:0.02179	eval-mae:0.13517
[206]	train-mae:0.02123	eval-mae:0.13553
Fold 1 MAE: 0.13561037363306977
************************************ 2 ************************************
[0]	train-mae:0.17759	eval-mae:0.15369
[140]	train-mae:0.04825	eval-mae:0.11801
Fold 2 MAE: 0.11801148292060508
************************************ 3 ************************************
[0]	train-mae:0.15877	eval-mae:0.11899
[147]	train-mae:0.05023	eval-mae:0.06830
Fold 3 MAE: 0.06830078409788293
************************************ 4 ************************************
[0]	train-mae:0.14161	eval-mae:0.19603
[200]	train-mae:0.04567	eval-mae:0.08691
[339]	train-mae:0.03760	eval-mae:0.08873
Fold 4 MAE: 0.08872760186934665
************************************ 5 ************************************
[0]	train-mae:0.15344	eval-mae:0.22340
[162]	train-mae:0.04909	eval-mae:0.10039
Fold 5 MAE: 0.1004034196848035
Average CV MAE: 0.10221073244114158
************************************ 1 ************************************
0:	learn: 0.2233502	test: 0.2253162	best: 0.2253162 (0)	total: 568us	remaining: 1.14s
200:	learn: 0.0493536	test: 0.1784290	best: 0.1775005 (161)	total: 73.2ms	remaining: 655ms
Stopped by overfitting detector  (100 iterations wait)

bestTest = 0.1775004688
bestIteration = 161

Shrink model to first 162 iterations.
Fold 1 MAE: 0.13396717857257054
************************************ 2 ************************************
0:	learn: 0.2190750	test: 0.1711251	best: 0.1711251 (0)	total: 506us	remaining: 1.01s
Stopped by overfitting detector  (100 iterations wait)

bestTest = 0.1210232065
bestIteration = 59

Shrink model to first 60 iterations.
Fold 2 MAE: 0.09958377892175246
************************************ 3 ************************************
0:	learn: 0.2008693	test: 0.1334355	best: 0.1334355 (0)	total: 541us	remaining: 1.08s


Warning: Overfitting detector is active, thus evaluation metric is calculated on every iteration. 'metric_period' is ignored for evaluation metric.


Stopped by overfitting detector  (100 iterations wait)

bestTest = 0.08326809493
bestIteration = 37

Shrink model to first 38 iterations.
Fold 3 MAE: 0.0654299814356548
************************************ 4 ************************************
0:	learn: 0.1858237	test: 0.2464091	best: 0.2464091 (0)	total: 679us	remaining: 1.36s
200:	learn: 0.0811772	test: 0.1280590	best: 0.1272817 (169)	total: 104ms	remaining: 928ms


Warning: Overfitting detector is active, thus evaluation metric is calculated on every iteration. 'metric_period' is ignored for evaluation metric.


Stopped by overfitting detector  (100 iterations wait)

bestTest = 0.1261758761
bestIteration = 296

Shrink model to first 297 iterations.
Fold 4 MAE: 0.07967392305611812
************************************ 5 ************************************
0:	learn: 0.1983496	test: 0.2805617	best: 0.2805617 (0)	total: 833us	remaining: 1.67s
Stopped by overfitting detector  (100 iterations wait)

bestTest = 0.1160635215
bestIteration = 67

Shrink model to first 68 iterations.
Fold 5 MAE: 0.08843953235735297
Average CV MAE: 0.09341887886868977
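
The 0.4/0.3/0.3 weights above are hand-picked; they could instead be chosen on the out-of-fold predictions. A small grid-search sketch (assumes the three *_oof arrays from cv_model; with TimeSeriesSplit the earliest training chunk is never validated, so we score only rows that received an OOF prediction):

# Sketch: pick fusion weights by minimizing OOF MAE instead of hand-tuning
# (rows with no OOF prediction are left at 0 by cv_model, so mask them out;
# a true prediction of exactly 0.0 would be wrongly masked, which we accept here)
import itertools

y = train["power"].values
mask = (lgb_oof != 0) | (xgb_oof != 0) | (cat_oof != 0)
best = None
for w1, w2 in itertools.product(np.arange(0, 1.05, 0.05), repeat=2):
    w3 = 1 - w1 - w2
    if w3 < 0:
        continue
    blend = w1 * lgb_oof + w2 * xgb_oof + w3 * cat_oof
    mae = mean_absolute_error(y[mask], blend[mask])
    if best is None or mae < best[0]:
        best = (mae, w1, w2, w3)
print(best)  # (OOF MAE, lgb weight, xgb weight, cat weight)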



7. Output Generation



"""
输出文件output1.csv预期包含15分钟级数据:59天 × 24小时 × 4(15分钟/小时)= 5664行。
lgb_test是小时级预测结果(1416行),需要插值到15分钟级(5664行)才能匹配output的长度。
"""

import os
import pandas as pd

# Create the output directory
if not os.path.exists("output"):
    os.makedirs("output")

# Use the fused predictions
# final_test was produced by the fusion code above:
# final_test = lgb_weight * lgb_test + xgb_weight * xgb_test + cat_weight * cat_test

# Build an hourly prediction DataFrame
hourly_index = pd.date_range(start='2024-12-31 00:00', end='2025-02-27 23:00', freq='h')
hourly_pred = pd.DataFrame({'time': hourly_index, 'power': final_test})
hourly_pred = hourly_pred.set_index('time')

# Interpolate to 15-minute resolution
# (reindex without method='ffill' so interpolate has NaNs to fill; the trailing
# 23:15-23:45 slots after the last hourly point are forward-filled)
min15_index = pd.date_range(start='2024-12-31 00:00', end='2025-02-27 23:45', freq='15min')
min15_pred = hourly_pred.reindex(min15_index).interpolate(method='linear').ffill()

# Read the output template
output = pd.read_csv("data/output/output1.csv").reset_index(drop=True)

# Check that the lengths match
assert len(output) == len(min15_pred), f"Output length ({len(output)}) does not match prediction length ({len(min15_pred)})"

# Fill in the predictions
output["power"] = min15_pred["power"].values

# Rename the time column dynamically
if 'Unnamed: 0' in output.columns:
    output = output.rename(columns={'Unnamed: 0': ''})
elif output.columns[0] != '':
    output = output.rename(columns={output.columns[0]: ''})

# Use the first column as the index
output = output.set_index('')

# Keep only the power column, dropping everything else
output = output[['power']]

# Write an output file for each of the 10 stations
# (a baseline shortcut: station 1's predictions are reused for all stations)
for i in range(1, 11):
    output.to_csv(f'output/output{i}.csv')
!zip -r output.zip output
updating: output/ (stored 0%)
updating: output/output1.csv (deflated 84%)
updating: output/output10.csv (deflated 84%)
updating: output/output2.csv (deflated 84%)
updating: output/output3.csv (deflated 84%)
updating: output/output4.csv (deflated 84%)
updating: output/output5.csv (deflated 84%)
updating: output/output6.csv (deflated 84%)
updating: output/output7.csv (deflated 84%)
updating: output/output8.csv (deflated 84%)
updating: output/output9.csv (deflated 84%)
  adding: output/.ipynb_checkpoints/ (stored 0%)
  adding: output/.ipynb_checkpoints/output1-checkpoint.csv (deflated 84%)