Building a Hadoop 3.2 + ZooKeeper 3.5.5 + HBase 2.2 High-Availability Cluster with Docker on Windows (Part 2)

 

I. Environment plan

hostname | namenode | datanode | zookeeper | zkfc | journalnode | resourcemanager | nodemanager | HMaster | HRegionServer
hadoop1  | 1        |          |           | 1    |             | 1               |             | 1       |
hadoop2  | 1        |          |           | 1    |             | 1               |             | 1       |
hadoop3  |          | 1        | 1         |      | 1           |                 | 1           |         | 1
hadoop4  |          | 1        | 1         |      | 1           |                 | 1           |         | 1
hadoop5  |          | 1        | 1         |      | 1           |                 | 1           |         | 1

Install ZooKeeper on 3 or 5 nodes (an odd number).

JDK: 1.8+

ZooKeeper: 3.4.14

Hadoop: 3.2.0

HBase: 2.2.0

II. Basic environment setup

1. Start the Docker container

Allocate 3 GB of memory to the Oracle VM VirtualBox "default" machine (the Docker Toolbox VM).

For pulling the base image and other preliminary steps, please refer to my earlier posts.

      $ docker run -itd  --name hadoop -p 5001:22 centos /bin/bash

2. Install auxiliary packages

      # yum -y install net-tools.x86_64
      # yum install -y which
      # yum -y install psmisc
      # yum install openssh-server -y
      # yum install openssh-clients -y

3. Change the root password and generate SSH host keys (sshd needs them, plus /var/run/sshd, to run as the container's main process later):

# echo "123456" |passwd --stdin root
# echo "root   ALL=(ALL)       ALL" >> /etc/sudoers
# ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key
# ssh-keygen -t rsa -f /etc/ssh/ssh_host_ecdsa_key
# ssh-keygen -t rsa -f /etc/ssh/ssh_host_ed25519_key
# ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key
# mkdir /var/run/sshd

4. Passwordless SSH login
# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# chmod 0600 ~/.ssh/authorized_keys
Press Ctrl + P, then Ctrl + Q to detach from the container without stopping it.

Have the JDK, Hadoop, ZooKeeper, and HBase installation packages ready.

5. Copy the packages into the container

$ docker cp ./jdk1.8xxxxxx.tar.gz hadoop:/opt/

$ docker cp ./hbase-2.2.0-bin.tar.gz hadoop:/opt/

$ docker cp ./hadoop-3.2.0.tar.gz hadoop:/opt/

$ docker cp ./zookeeper-3.4.14.tar.gz hadoop:/opt/

6. Install the packages and configure environment variables

Enter the container:

$ docker attach hadoop

After extracting the archives, delete the corresponding tar.gz files and create symbolic links, for example:
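A minimal sketch of this step, assuming the archives copied to /opt above and the symlink names /opt/jdk, /opt/hadoop, /opt/zookeeper and /opt/hbase used in the rest of this guide (the JDK file name is a placeholder):

# cd /opt
# tar -zxvf hadoop-3.2.0.tar.gz && rm -f hadoop-3.2.0.tar.gz
# tar -zxvf zookeeper-3.4.14.tar.gz && rm -f zookeeper-3.4.14.tar.gz
# tar -zxvf hbase-2.2.0-bin.tar.gz && rm -f hbase-2.2.0-bin.tar.gz
# tar -zxvf jdk1.8xxxxxx.tar.gz && rm -f jdk1.8xxxxxx.tar.gz
# ln -s /opt/hadoop-3.2.0 /opt/hadoop
# ln -s /opt/zookeeper-3.4.14 /opt/zookeeper
# ln -s /opt/hbase-2.2.0 /opt/hbase
# ln -s /opt/jdk1.8xxxxxx /opt/jdk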


 

7. Configure HBase

Go to HBase's conf directory.

A symbolic link pointing to the HDFS configuration file needs to be added:

# ln -s /opt/hadoop/etc/hadoop/hdfs-site.xml ./hdfs-site.xml

Edit regionservers and replace its contents with:

hadoop3

hadoop4

hadoop5

Configure HMaster high availability:

Add a backup-masters file with the content below (example commands for creating both files follow the listing):

hadoop2
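For example, assuming the /opt/hbase symlink created earlier, both files can be written directly:

# printf 'hadoop3\nhadoop4\nhadoop5\n' > /opt/hbase/conf/regionservers
# echo hadoop2 > /opt/hbase/conf/backup-masters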

The related configuration files and environment-variable settings can be downloaded from the attachment; a minimal hbase-site.xml sketch follows for reference.
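This sketch assumes the HDFS nameservice is named mycluster (the name ZKFC creates under /hadoop-ha below) and that ZooKeeper runs on hadoop3–hadoop5; the attachment's version is authoritative:

# cat > /opt/hbase/conf/hbase-site.xml <<'EOF'
<configuration>
  <!-- Store HBase data on the HA nameservice rather than a single NameNode -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://mycluster/hbase</value>
  </property>
  <!-- Run HBase in fully distributed mode -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <!-- The external ZooKeeper ensemble -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop3,hadoop4,hadoop5</value>
  </property>
</configuration>
EOF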

Environment variables are configured in /root/.bashrc, and the /opt/datas subdirectories must be created manually according to the paths required by the configuration files (see the sketch below).
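A sketch of the environment variables and data directories, assuming the /opt symlinks created earlier and the /opt/datas/hadoop/ha-hadoop path that appears in the VERSION output later (the datanode and journalnode subdirectories are illustrative; follow the paths in the attachment's configuration):

# cat >> /root/.bashrc <<'EOF'
export JAVA_HOME=/opt/jdk
export HADOOP_HOME=/opt/hadoop
export ZOOKEEPER_HOME=/opt/zookeeper
export HBASE_HOME=/opt/hbase
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin:$HBASE_HOME/bin
EOF
# source /root/.bashrc
# mkdir -p /opt/datas/hadoop/ha-hadoop/hdfs/namenode
# mkdir -p /opt/datas/hadoop/ha-hadoop/hdfs/datanode
# mkdir -p /opt/datas/hadoop/ha-hadoop/journalnode
# mkdir -p /opt/zookeeper/data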

8. Once everything is configured, type exit to leave the container, then commit it as a new image (replace adc93bab571b below with your container's ID).

$ docker commit adc93bab571b hadoop3-ha

Start 5 containers based on this image:
docker run -itd --name hadoop1 --add-host hadoop1:172.17.0.2 --add-host hadoop2:172.17.0.3 --add-host hadoop3:172.17.0.4 --add-host hadoop4:172.17.0.5 --add-host hadoop5:172.17.0.6 -p 5002:22 -p 9870:9870 -p 8088:8088 -p 19888:19888 -p 16010:16010 hadoop3-ha /usr/sbin/sshd -D
docker run -itd --name hadoop2 --add-host hadoop1:172.17.0.2 --add-host hadoop2:172.17.0.3 --add-host hadoop3:172.17.0.4 --add-host hadoop4:172.17.0.5 -p 5003:22 --add-host hadoop5:172.17.0.6 -p 9871:9870 -p 8087:8088 -p 16011:16010 hadoop3-ha /usr/sbin/sshd -D
docker run -itd --name hadoop3 --add-host hadoop1:172.17.0.2 --add-host hadoop2:172.17.0.3 --add-host hadoop3:172.17.0.4 --add-host hadoop4:172.17.0.5 -p 5004:22 -p 16020:16020 --add-host hadoop5:172.17.0.6 hadoop3-ha /usr/sbin/sshd -D
docker run -itd --name hadoop4 --add-host hadoop1:172.17.0.2 --add-host hadoop2:172.17.0.3 --add-host hadoop3:172.17.0.4 --add-host hadoop4:172.17.0.5 -p 5005:22 -p 16021:16020 --add-host hadoop5:172.17.0.6 hadoop3-ha /usr/sbin/sshd -D
docker run -itd --name hadoop5 --add-host hadoop1:172.17.0.2 --add-host hadoop2:172.17.0.3 --add-host hadoop3:172.17.0.4 --add-host hadoop4:172.17.0.5 -p 5006:22 -p 16022:16020 --add-host hadoop5:172.17.0.6 hadoop3-ha /usr/sbin/sshd -D
Log in to each of these 5 containers with an SSH client, for example:
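Assuming the Docker Toolbox VM address 192.168.99.100 used for the web UIs later (password 123456 as set earlier; ports 5002–5006 map to hadoop1–hadoop5):

$ ssh root@192.168.99.100 -p 5002    # hadoop1
$ ssh root@192.168.99.100 -p 5003    # hadoop2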

III. First-startup steps

1. Start ZooKeeper on the 3 nodes (one leader and two followers means the ensemble started successfully)

Edit /opt/zookeeper/data/myid; the myid contents of the 3 containers are (see the example commands below):

hadoop3 → 1, hadoop4 → 2, hadoop5 → 3
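For example, run the matching line in each container (dataDir is /opt/zookeeper/data as referenced above; the zoo.cfg from the attachment is expected to contain matching server.1=hadoop3:2888:3888, server.2=hadoop4:2888:3888, server.3=hadoop5:2888:3888 entries):

# echo 1 > /opt/zookeeper/data/myid    # on hadoop3
# echo 2 > /opt/zookeeper/data/myid    # on hadoop4
# echo 3 > /opt/zookeeper/data/myid    # on hadoop5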

From the ZooKeeper root directory:
# bin/zkServer.sh start

1.1 Check ZooKeeper's status on each of the three nodes.

# bin/zkServer.sh status

ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower

1.2 Connect to one of the ZooKeeper servers:

# bin/zkCli.sh -server hadoop3:2181

1.3 Write some data:

WatchedEvent state:SyncConnected type:None path:null

[zk: hadoop3:2181(CONNECTED) 0]

create /test data

Created /test

1.4 Then connect to another ZooKeeper server:

# bin/zkCli.sh -server hadoop4:2181

1.5 If the data written above can be read back, the ZooKeeper cluster was installed successfully.

WatchedEvent state:SyncConnected type:None path:null

[zk: hadoop4:2181(CONNECTED) 0] get /test

data
cZxid = 0x100000002
ctime = Wed Jun 26 16:43:13 UTC 2019
mZxid = 0x100000002
mtime = Wed Jun 26 16:43:13 UTC 2019
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 4
numChildren = 0

2. Start the JournalNodes (run on hadoop3, hadoop4, and hadoop5)
# cd /opt/hadoop/
# bin/hdfs --daemon start journalnode
The following command also starts a JournalNode, but is not recommended (the daemon scripts are deprecated in Hadoop 3):
# sbin/hadoop-daemon.sh start journalnode
# jps
1553 Jps
993 QuorumPeerMain
1514 JournalNode
# jps
993 QuorumPeerMain
1514 JournalNode
1563 Jps
If JournalNode appears in the jps output, the JournalNode started successfully.
3. Format the NameNode (format only ONE node and sync the other one to it; if you format both, you have done it wrong!! Here it is done on the hadoop1 node.)

Enter container hadoop1:
# bin/hdfs namenode -format
If, roughly 8 lines from the end of the output, the following line appears, formatting succeeded:
INFO common.Storage: Storage directory /home/hadoop/tmp/dfs/name has been successfully formatted.
Start the NameNode:
# bin/hdfs --daemon start namenode
# jps
1681 NameNode
1747 Jps
# cat /opt/datas/hadoop/ha-hadoop/hdfs/namenode/current/VERSION
#Tue Jun 25 14:59:40 UTC 2019
namespaceID=1463663733
clusterID=CID-32938cd0-ed33-40f6-90c5-2326198e31bd
cTime=1561474780005
storageType=NAME_NODE
blockpoolID=BP-789586919-172.17.0.2-1561474780005
layoutVersion=-65
4. Sync hadoop1's metadata to hadoop2 (the NameNode on hadoop1 must be started first).
Run on the hadoop2 host:
# bin/hdfs namenode -bootstrapStandby
If the output contains: INFO common.Storage: Storage directory /home/hadoop/tmp/dfs/name has been successfully formatted.
then the sync succeeded.
# cat /opt/datas/hadoop/ha-hadoop/hdfs/namenode/current/VERSION
#Tue Jun 25 15:07:27 UTC 2019
namespaceID=1463663733
clusterID=CID-32938cd0-ed33-40f6-90c5-2326198e31bd
cTime=1561474780005
storageType=NAME_NODE
blockpoolID=BP-789586919-172.17.0.2-1561474780005
layoutVersion=-65
If hadoop1 and hadoop2 show identical information, the NameNode metadata sync succeeded.
5. Format ZKFC (run once, on hadoop1):
# bin/hdfs zkfc -formatZK
If, about 4 lines from the end, the output shows: INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.
then ZKFC was formatted successfully.
6. Start HDFS and YARN (run on hadoop1), then verify the HA state as shown below:
# sbin/start-dfs.sh
# sbin/start-yarn.sh
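A quick verification of the NameNode HA state, assuming the two NameNodes are registered as nn1 and nn2 in the attachment's hdfs-site.xml (adjust the names if yours differ):

# bin/hdfs haadmin -getServiceState nn1
# bin/hdfs haadmin -getServiceState nn2
# jps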

7. From the HBase root directory, start HBase (a quick smoke test follows):

# bin/start-hbase.sh
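A quick smoke test from the HBase shell (smoke_test is just an example table name):

# bin/hbase shell
hbase(main):001:0> status
hbase(main):002:0> create 'smoke_test', 'cf'
hbase(main):003:0> list
hbase(main):004:0> disable 'smoke_test'
hbase(main):005:0> drop 'smoke_test'
hbase(main):006:0> exit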

8. Access the web UIs:

Open http://192.168.99.100:9870 for the NameNode UI on hadoop1 (the standby NameNode on hadoop2 is mapped to http://192.168.99.100:9871).

yarn rmadmin -getServiceState rm1

yarn rmadmin -getServiceState rm2

Then access the node in active state through its mapped port, e.g. 8088 (hadoop1) or 8087 (hadoop2):
http://192.168.99.100:8087

Check the HBase node status:

http://192.168.99.100:16010

9. Start/stop JobHistory (to view MapReduce jobs):

/opt/hadoop/sbin/mr-jobhistory-daemon.sh start historyserver
/opt/hadoop/sbin/mr-jobhistory-daemon.sh stop historyserver
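Note that mr-jobhistory-daemon.sh is deprecated in Hadoop 3; the equivalent commands are:

/opt/hadoop/bin/mapred --daemon start historyserver
/opt/hadoop/bin/mapred --daemon stop historyserver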

http://192.168.99.100:19888

10. After the first startup, when stopping services: first stop HBase (hbase/bin/stop-hbase.sh), then stop Hadoop (hadoop/sbin/stop-all.sh on hadoop1), and finally stop ZooKeeper (zookeeper/bin/zkServer.sh stop on hadoop3, hadoop4, and hadoop5).

To restart later: first start ZooKeeper on hadoop3, hadoop4, and hadoop5 (zookeeper/bin/zkServer.sh start),

then run hadoop/sbin/start-all.sh on the hadoop1 node,

and finally start HBase with hbase/bin/start-hbase.sh.
