Building a Hadoop, Zookeeper, Kafka, and Flink Cluster on CentOS 7 VMware Virtual Machines

This article walks through the complete process of building a Hadoop, Zookeeper, Kafka, and Flink cluster on three servers, including installing Java and MySQL and configuring, starting, and interconnection-testing each component. It also covers key steps such as network configuration, passwordless SSH login, and time synchronization to keep the cluster running reliably.

Installation Plan

| Component                          | master | slave1 | slave2 |
|------------------------------------|--------|--------|--------|
| Java                               | ✓      | ✓      | ✓      |
| MySQL (Hive_MetaStore)             | ✓      |        |        |
| NameNode                           | ✓      |        |        |
| DataNode                           | ✓      | ✓      | ✓      |
| SecondaryNameNode                  |        | ✓      |        |
| ResourceManager                    |        |        | ✓      |
| NodeManager                        | ✓      | ✓      | ✓      |
| HiveServer2                        | ✓      |        |        |
| Zookeeper                          | ✓      | ✓      | ✓      |
| Kafka                              | ✓      | ✓      | ✓      |
| StandaloneSessionClusterEntrypoint | ✓      | ✓      |        |
| TaskManagerRunner                  | ✓      | ✓      | ✓      |

(The per-node assignments match the configuration files below: NameNode, Hive metastore and HiveServer2 on master, SecondaryNameNode on slave1, ResourceManager on slave2, and the worker daemons on all three nodes.)

Inter-node Connectivity

  1. Prepare the machines (three CentOS 7 virtual machines in VMware)

Tip: if you work only as the root user, choose manual partitioning during installation and allocate the /home partition's space to the / (root) partition.

  2. Set the hostnames (run the matching command on each machine)

    hostnamectl set-hostname master   # on master
    hostnamectl set-hostname slave1   # on slave1
    hostnamectl set-hostname slave2   # on slave2
    
  3. Stop and disable the firewall

    #check the firewall status
    systemctl status firewalld
    #stop the firewall
    systemctl stop firewalld
    #disable the firewall
    systemctl disable firewalld
    
  4. Stop and disable NetworkManager

    #check the NetworkManager status
    systemctl status NetworkManager
    #stop NetworkManager
    systemctl stop NetworkManager
    #disable NetworkManager
    systemctl disable NetworkManager
    
  5. Configure the network interface

    #open the configuration file
    vi /etc/sysconfig/network-scripts/ifcfg-ens33
    
    #inside vi, remove all double quotes first
    :1,$s/\"//g
    
    #change
    BOOTPROTO=static
    #add
    IPADDR=<your IP address>
    NETMASK=255.255.255.0
    GATEWAY=<your gateway>
    DNS1=8.8.8.8
    DNS2=114.114.114.114
    

    Restart the network service

    systemctl restart network
    
  6. Test network connectivity

    Ping each node from the others, and ping an external address.

  7. Map hostnames to IP addresses (the same mapping must exist on every node; see the check after this list)

    vi /etc/hosts
    
    #----------------
    192.168.158.241 master
    192.168.158.242 slave1
    192.168.158.243 slave2
    
  8. Configure passwordless SSH login

    #set the root password
    passwd root
    #install the required SSH packages
    yum -y install openssh-server openssh-clients
    #start the SSH service
    systemctl start sshd
    ssh-keygen -t rsa # press Enter three times
    ssh-copy-id master
    ssh-copy-id slave1
    ssh-copy-id slave2
    

    Test the connections between nodes

    ssh master
    exit 
    ssh slave1
    exit 
    ssh slave2
    exit
    
  9. Synchronize the server clocks

  #master
  yum -y install ntp ntpdate
  ntpdate cn.pool.ntp.org
  ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
  cp /etc/ntp.conf /etc/ntp.conf.bak
  cp /etc/sysconfig/ntpd /etc/sysconfig/ntpd.bak
  echo "restrict master mask 255.255.255.0 nomodify notrap" >> /etc/ntp.conf
  echo "SYNC_HWCLOCK=yes" >> /etc/sysconfig/ntpd
  systemctl restart ntpd
  
  #slave1 and slave2
  yum -y install ntpdate crontabs
  ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
  ntpdate master
  echo "*/30 * * * * /usr/sbin/ntpdate master" >> /var/spool/cron/root

Java Installation

Extract the archive, create a symlink, and add the system environment variables

tar zxvf /opt/download/jdk-8u171-linux-x64.tar.gz -C /opt/software/
ln -s /opt/software/jdk1.8.0_171 /opt/software/java
#-----------------------------------
export JAVA_HOME=/opt/software/java
export PATH=$JAVA_HOME/bin:$PATH
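
The two export lines belong in a profile script so they survive re-login. A minimal sketch, assuming the same /etc/profile.d/myenv.sh file that is used later for Hadoop:

cat >> /etc/profile.d/myenv.sh <<'EOF'
export JAVA_HOME=/opt/software/java
export PATH=$JAVA_HOME/bin:$PATH
EOF
source /etc/profile.d/myenv.sh
java -version   # should report 1.8.0_171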

MySQL Installation

Install the server and client packages

tar xvf MySQL-5.5.40-1.linux2.6.x86_64.rpm-bundle.tar
rpm -ivh MySQL-server-5.5.40-1.linux2.6.x86_64.rpm
rpm -ivh MySQL-client-5.5.40-1.linux2.6.x86_64.rpm
 #one of the packages may fail due to a conflict; remove the conflicting dependency with rpm -e xxxxxxxxxx --nodeps

Configuration

# start the service
systemctl start mysql
# set the MySQL root password
/usr/bin/mysqladmin -u root password 'root'
# log in to MySQL and open up access permissions
mysql -uroot -proot
update mysql.user set host='%' where host='localhost';
delete from mysql.user where host<>'%' or user='';
flush privileges;

Switch the server character set to UTF-8

cd /usr/share/mysql
cp my-small.cnf /etc/my.cnf
vim /etc/my.cnf
#----------------
[mysqld]
#add the following line under this section
character_set_server=utf8
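
Restart the service and confirm the setting took effect (a sketch, assuming the root/root credentials set above):

systemctl restart mysql
mysql -uroot -proot -e "show variables like 'character_set_server';"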

Hadoop Installation

Extract and install

tar zxvf hadoop-3.2.1.tar.gz -C /opt/software
ln -s /opt/software/hadoop-3.2.1/ /opt/software/hadoop

Configuration

Configure core-site.xml
vim core-site.xml
-------------------------------------------
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/software/hadoop/data</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>root</value>
    </property>
</configuration>
-------------------------------------------
Configure hdfs-site.xml
vim hdfs-site.xml
-------------------------------------------
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>slave1:9868</value>
    </property>
</configuration>
-------------------------------------------
Configure mapred-site.xml
vim mapred-site.xml
-------------------------------------------
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>
-------------------------------------------
Configure yarn-site.xml
vim yarn-site.xml
-------------------------------------------
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>slave2</value>
    </property>
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>512</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>4096</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>4096</value>
    </property>
    <property>
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>

    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>  
        <name>yarn.log.server.url</name>  
        <value>http://${yarn.timeline-service.webapp.address}/applicationhistory/logs</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>
    <property>
        <name>yarn.timeline-service.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.timeline-service.hostname</name>
        <value>${yarn.resourcemanager.hostname}</value>
    </property>
    <property>
        <name>yarn.timeline-service.http-cross-origin.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.system-metrics-publisher.enabled</name>
        <value>true</value>
    </property>
</configuration>
-------------------------------------------
Configure hadoop-env.sh
vim hadoop-env.sh
-------------------------------------------
export JAVA_HOME=/opt/software/java
-------------------------------------------
Configure mapred-env.sh
vim mapred-env.sh
-------------------------------------------
export JAVA_HOME=/opt/software/java
-------------------------------------------
Configure yarn-env.sh
vim yarn-env.sh
-------------------------------------------
export JAVA_HOME=/opt/software/java
-------------------------------------------
Configure workers
vim workers
-------------------------------------------
master
slave1
slave2
-------------------------------------------
Add environment variables
vim /etc/profile.d/myenv.sh
------------------------------------------------
export HADOOP_HOME=/opt/software/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
------------------------------------------------

#both the start and stop scripts need these settings
vim $HADOOP_HOME/sbin/start-dfs.sh
vim $HADOOP_HOME/sbin/stop-dfs.sh
------------------------------------------------
HDFS_NAMENODE_USER=root 
HDFS_DATANODE_USER=root 
HDFS_SECONDARYNAMENODE_USER=root 
YARN_RESOURCEMANAGER_USER=root 
YARN_NODEMANAGER_USER=root
------------------------------------------------

vim $HADOOP_HOME/sbin/start-yarn.sh
vim $HADOOP_HOME/sbin/stop-yarn.sh
------------------------------------------------
YARN_RESOURCEMANAGER_USER=root 
HADOOP_SECURE_DN_USER=yarn 
YARN_NODEMANAGER_USER=root
------------------------------------------------

Distribute to the slaves (also recreate the /opt/software/hadoop symlink on each slave and copy /etc/profile.d/myenv.sh so the environment variables are available there)

scp -r /opt/software/hadoop-3.2.1 slave1:/opt/software/
scp -r /opt/software/hadoop-3.2.1 slave2:/opt/software/

Format HDFS (run once, on master)

hdfs namenode -format

Start the Hadoop services

# start HDFS (run on master, the NameNode)
$HADOOP_HOME/sbin/start-dfs.sh
# start YARN (run on slave2, where the ResourceManager is configured)
$HADOOP_HOME/sbin/start-yarn.sh
# start the MapReduce JobHistory server (on master)
mapred --daemon start historyserver
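
To verify that the cluster came up as planned (ports shown are the Hadoop 3.x defaults):

# each node should show the daemons listed in the installation plan
jps
# overall HDFS health, including the three DataNodes
hdfs dfsadmin -report
# web UIs: NameNode at http://master:9870, YARN ResourceManager at http://slave2:8088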

Hive Installation

Extract and install

tar zxvf /opt/download/apache-hive-3.1.2-bin.tar.gz -C /opt/software/
ln -s /opt/software/apache-hive-3.1.2-bin/ /opt/software/hive

Configuration

# change to the configuration directory
cd /opt/software/hive/conf/
Edit hive-site.xml
cp hive-default.xml.template hive-site.xml
vim hive-site.xml
-------------------------------------------
<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://master:3306/metastore?createDatabaseIfNotExist=true&amp;useUnicode=true&amp;characterEncoding=UTF-8</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>root</value>
    </property>
    <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive/warehouse</value>
    </property>
    <property>
        <name>hive.metastore.schema.verification</name>
        <value>false</value>
    </property>
    <property>
        <name>hive.metastore.uris</name>
        <value>thrift://master:9083</value>
    </property>
    <property>
        <name>hive.server2.thrift.port</name>
        <value>10000</value>
    </property>
    <property>
        <name>hive.server2.thrift.bind.host</name>
        <value>master</value>
    </property>
    <property>
        <name>hive.metastore.event.db.notification.api.auth</name>
        <value>false</value>
    </property>
</configuration>
-------------------------------------------
Edit hive-env.sh
cp hive-env.sh.template hive-env.sh
vim hive-env.sh
-------------------------------------------
HADOOP_HOME=/opt/software/hadoop
-------------------------------------------
Add the MySQL JDBC driver
cp /opt/download/mysql-connector-java-5.1.31.jar /opt/software/hive/lib/
Add environment variables
vim /etc/profile.d/myenv.sh
------------------------------------------------
export HIVE_HOME=/opt/software/hive
export PATH=$HIVE_HOME/bin:$PATH
------------------------------------------------

Start the services

# initialize the metastore schema in MySQL
schematool -dbType mysql -initSchema
# start the metastore and HiveServer2 services in the background
nohup hive --service metastore>/dev/null 2>&1 &
nohup hive --service hiveserver2>/dev/null 2>&1 &
# If startup fails with: Exception in thread "main" java.lang.NoSuchMethodError
# the cause is a guava jar conflict; delete the lower-version jar shipped with Hive
# and replace it with the one bundled with Hadoop, then restart the services
rm -rf /opt/software/hive/lib/guava-19.0.jar
cp /opt/software/hadoop/share/hadoop/common/lib/guava-27.0-jre.jar /opt/software/hive/lib/

Connect to Hive

beeline -u jdbc:hive2://master:10000 -n root -p 0000 --color=true
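
A minimal smoke test through HiveServer2 (a sketch; the database and table names here are just examples):

beeline -u jdbc:hive2://master:10000 -n root \
  -e "create database if not exists test_db" \
  -e "create table if not exists test_db.t1 (id int, name string)" \
  -e "insert into test_db.t1 values (1, 'hello')" \
  -e "select * from test_db.t1"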

Zookeeper Installation

Extract and install

tar -zxvf apache-zookeeper-3.6.1-bin.tar.gz -C /opt/software
ln -s /opt/software/apache-zookeeper-3.6.1-bin /opt/software/zookeeper

Configuration

Create the zkdata and zklogs directories

#create both directories under /opt/software/zookeeper
mkdir zkdata zklogs

Configure zoo.cfg

cd conf/
mv zoo_sample.cfg zoo.cfg
vi zoo.cfg
#change:
dataDir=/opt/software/zookeeper/zkdata
#append at the end:
dataLogDir=/opt/software/zookeeper/zklogs/
server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888

Distribute to the slaves (remember to recreate the /opt/software/zookeeper symlink on each slave)

scp -r /opt/software/apache-zookeeper-3.6.1-bin slave1:/opt/software/
scp -r /opt/software/apache-zookeeper-3.6.1-bin slave2:/opt/software/

Create the myid file

cd /opt/software/zookeeper/zkdata
echo 1 > myid    #master: 1
#on slave1: echo 2 > myid
#on slave2: echo 3 > myid

The myid file holds the server's id, i.e. the 1 in server.1=master:2888:3888.
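
If the zkdata directory has already been copied to the slaves, the three files can also be written from master in one pass (a sketch; adjust the path if yours differs):

id=1
for h in master slave1 slave2; do
  ssh $h "echo $id > /opt/software/zookeeper/zkdata/myid"
  id=$((id + 1))
done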

Configure environment variables

export ZK_HOME=/opt/software/zookeeper
export PATH=$PATH:$ZK_HOME/bin

Start

#start (on every node)
zkServer.sh start
#check status
zkServer.sh status
#stop
zkServer.sh stop
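
To check all three nodes at once (a sketch; full paths are used because ZK_HOME may not be loaded in non-interactive SSH sessions):

# one node should report Mode: leader, the other two Mode: follower
for h in master slave1 slave2; do
  echo "== $h =="; ssh $h /opt/software/zookeeper/bin/zkServer.sh status
done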

Kafka Installation

Extract and install

tar -zxvf kafka_2.12-2.7.0.tgz -C /opt/software/
ln -s /opt/software/kafka_2.12-2.7.0 /opt/software/kafka

Configuration

Edit server.properties

broker.id=0
#unique per broker, counting from 0: 0, 1, 2
delete.topic.enable=true
#not present in the file by default; add it manually, otherwise topics cannot be deleted
listeners=PLAINTEXT://master:9092
#data directory; created automatically, no need to create it by hand
log.dirs=/opt/software/kafka/kafka-logs
zookeeper.connect=master:2181,slave1:2181,slave2:2181

Distribute to the slaves

scp -r /opt/software/kafka_2.12-2.7.0 slave1:/opt/software/
scp -r /opt/software/kafka_2.12-2.7.0 slave2:/opt/software/

On the other nodes, change broker.id and the hostname in listeners, for example as sketched below (also recreate the /opt/software/kafka symlink on each slave).
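
A sketch of making those two edits remotely with sed (assuming the paths above):

# slave1: broker.id=1, listeners points at slave1
ssh slave1 "sed -i 's/^broker.id=0/broker.id=1/; s|PLAINTEXT://master|PLAINTEXT://slave1|' /opt/software/kafka/config/server.properties"
# slave2: broker.id=2, listeners points at slave2
ssh slave2 "sed -i 's/^broker.id=0/broker.id=2/; s|PLAINTEXT://master|PLAINTEXT://slave2|' /opt/software/kafka/config/server.properties"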

Configure environment variables

export KAFKA_HOME=/opt/software/kafka
export PATH=$KAFKA_HOME/bin:$PATH

Start (run on every broker node)

#start in the foreground
kafka-server-start.sh /opt/software/kafka/config/server.properties
#start as a background daemon:
kafka-server-start.sh -daemon /opt/software/kafka/config/server.properties
#stop
kafka-server-stop.sh
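
A quick produce/consume smoke test (a sketch; the topic name is just an example):

# create a topic replicated across all three brokers
kafka-topics.sh --create --topic test --bootstrap-server master:9092 --partitions 3 --replication-factor 3
# type a few messages, Ctrl+C to quit
kafka-console-producer.sh --broker-list master:9092 --topic test
# read them back from another terminal
kafka-console-consumer.sh --bootstrap-server master:9092 --topic test --from-beginning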

Flink Installation

Extract and install

tar -zxvf flink-1.13.0-bin-scala_2.12.tgz -C /opt/software/
ln -s /opt/software/flink-1.13.0 /opt/software/flink

Configuration

Configure the masters file (vim masters)

master:8081
slave1:8081

Configure the workers file (vim workers)

master
slave1
slave2

Configure flink-conf.yaml

#change
jobmanager.rpc.address: master
#append at the end (HDFS listens on port 8020, as set in core-site.xml)
state.backend: filesystem
state.backend.fs.checkpointdir: hdfs://master:8020/flink-checkpoints
high-availability: zookeeper
high-availability.storageDir: hdfs://master:8020/flink/ha/
high-availability.zookeeper.quorum: master:2181,slave1:2181,slave2:2181

Distribute to the slaves

scp -r flink-1.13.0 slave1:$PWD
scp -r flink-1.13.0 slave2:$PWD

On slave1, change the JobManager RPC address

vim flink-conf.yaml
#--------------------- note: there must be a space after the colon!
jobmanager.rpc.address: slave1

Copy flink-shaded-hadoop-2-uber-2.7.5-10.0.jar into Flink's lib directory on every node, for example as sketched below.
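
A sketch, assuming the jar was downloaded to /opt/download on master and that flink-1.13.0 has already been copied to the slaves:

for h in master slave1 slave2; do
  scp /opt/download/flink-shaded-hadoop-2-uber-2.7.5-10.0.jar $h:/opt/software/flink-1.13.0/lib/
done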

Configure environment variables

export FLINK_HOME=/opt/software/flink
export PATH=$PATH:$FLINK_HOME/bin

Start the cluster

start-cluster.sh
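
To verify, the web UI should be reachable at http://master:8081, and the bundled example job can be submitted as a smoke test:

flink run $FLINK_HOME/examples/streaming/WordCount.jar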