Offline Architecture: Hadoop/Hive/Spark Server-Side Environment

Services

Host  Services
01    NameNode, ResourceManager, Zookeeper, ZKFC, JournalNode
02    NameNode, ResourceManager, Zookeeper, ZKFC, JournalNode
03    Zookeeper, JournalNode, JobHistory, SparkHistory, Haproxy(history), balance, trash
04    HiveServer2, MetaStore, HaProxy(hs2)
05    HiveServer2, MetaStore, HaProxy(hs2)
06    DataNode, NodeManager
07    DataNode, NodeManager
08    DataNode, NodeManager
09    DataNode, NodeManager

Configuration

zookeeper

Edit the configuration file:
zoo.cfg

Create the data directory:
mkdir -p /home/hadoop/zkdata/zookeeper

Write the myid file, one command per node (note `>` rather than `>>`: myid must contain exactly one id, and appending on a re-run would corrupt it):

echo '1' > zkdata/zookeeper/myid   # on node 01
echo '2' > zkdata/zookeeper/myid   # on node 02
echo '3' > zkdata/zookeeper/myid   # on node 03

Start:

zookeeper-current/bin/zkServer.sh start
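The zoo.cfg referenced above is not shown in the original notes; a minimal sketch for this three-node ensemble might look like the following. The host names node01–node03 are placeholders, and dataDir matches the directory created above:

```properties
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/hadoop/zkdata/zookeeper
clientPort=2181
# server.<myid>=<host>:<quorum-port>:<election-port>
server.1=node01:2888:3888
server.2=node02:2888:3888
server.3=node03:2888:3888
```

The numeric suffix of each server.N entry must match the value written to that node's myid file.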

journalnode

Edit the configuration files:

core-site.xml, hdfs-site.xml

Create the data directory:

sudo mkdir -p /data/hadoopdata/journaldata
sudo chown -R hadoop:hadoop /data/hadoopdata

Start:

journalnode-current/bin/hdfs --daemon start journalnode   
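The JournalNode-related hdfs-site.xml entries are not listed above; a sketch might be the following, where the nameservice name yourNs is taken from the HDFS URI used later in these notes and node01–node03 are placeholder hosts for the three JournalNodes:

```xml
<!-- hdfs-site.xml: JournalNode storage (matches the directory created above)
     and the shared-edits quorum URI -->
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/data/hadoopdata/journaldata</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://node01:8485;node02:8485;node03:8485/yourNs</value>
</property>
```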

namenode

Edit the configuration files:
hadoop-env.sh: JAVA_HOME, heap sizes, and related settings
core-site.xml
hdfs-site.xml and related configs
slaves (the worker host list)

Do not start the NameNodes yet; set up ZKFC first.
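For reference, a sketch of the HA-related hdfs-site.xml entries this setup needs. The nameservice yourNs, the NameNode ids nn1/nn2, and the hosts are placeholders; the failover provider class is the stock Hadoop one:

```xml
<property>
  <name>dfs.nameservices</name>
  <value>yourNs</value>
</property>
<property>
  <name>dfs.ha.namenodes.yourNs</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.yourNs.nn1</name>
  <value>node01:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.yourNs.nn2</name>
  <value>node02:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.yourNs</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```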

ZKFC

Stop the NameNodes first.
Edit the configuration:
core-site.xml : ha.zookeeper.quorum
hdfs-site.xml:
hadoop-env.sh: JAVA_HOME, heap sizes, etc.
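A sketch of the ZKFC-related entries, assuming placeholder ZooKeeper hosts:

```xml
<!-- core-site.xml: ZK quorum used by ZKFC -->
<property>
  <name>ha.zookeeper.quorum</name>
  <value>node01:2181,node02:2181,node03:2181</value>
</property>
<!-- hdfs-site.xml: enable automatic failover -->
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
```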

Format the ZKFC znode (on the active NN):

/home/hadoop/zkfc-current/bin/hdfs zkfc -formatZK

Format HDFS (on the active NN):

/home/hadoop/hadoop-current/bin/hdfs namenode -format

Bootstrap the standby (on the standby NN):

/home/hadoop/hadoop-current/bin/hdfs namenode -bootstrapStandby

Start the NameNode:

/home/hadoop/hadoop-current/bin/hdfs --daemon start namenode

Start ZKFC:

/home/hadoop/zkfc-current/bin/hdfs --daemon start zkfc

DataNode

Edit the configuration files:
core-site.xml
hdfs-site.xml
hadoop-env.sh

sudo mkdir -p /var/lib/hadoop-hdfs
sudo chown -R hadoop:hadoop /var/lib/hadoop-hdfs

Note: the directory above is the domain-socket directory used for short-circuit reads on the DataNodes.
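The short-circuit read settings that use this directory would look roughly like this in hdfs-site.xml (the socket file name dn_socket is a conventional choice, not mandated):

```xml
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <name>dfs.domain.socket.path</name>
  <value>/var/lib/hadoop-hdfs/dn_socket</value>
</property>
```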

ResourceManager

Edit the configuration files:
core-site.xml
hadoop-env.sh
hdfs-site.xml
mapred-site.xml
yarn-site.xml, in particular:

yarn.resourcemanager.hostname.rm1, yarn.resourcemanager.hostname.rm2
yarn.resourcemanager.zk-address
yarn.resourcemanager.mapreduce.history.url
yarn.resourcemanager.spark.history.url
yarn.historyproxy.webapp.http.address
yarn.timeline-service.webapp.address
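The standard RM HA properties behind this list can be sketched as follows. The cluster id, rm ids, and hosts are placeholders; the history/proxy URL properties above appear to be non-standard additions and are omitted here:

```xml
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>yarn-cluster</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>node01</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>node02</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>node01:2181,node02:2181,node03:2181</value>
</property>
```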

Create the HDFS directory and set its ownership:

~/hadoop-current/bin/hdfs dfs -mkdir -p /user/yarn
~/hadoop-current/bin/hadoop fs -chown -R yarn:yarn hdfs://yourNs/user/yarn

Start command:

/home/yarn/hadoop-current/sbin/yarn-daemon.sh start resourcemanager

JobHistory

Edit the configuration files:
core-site.xml
hdfs-site.xml

Start command:

~/jobHistory-current/sbin/mr-jobhistory-daemon.sh start historyserver
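The JobHistory addresses usually live in mapred-site.xml; a sketch with the default ports and host 03 (per the host table above, as a placeholder name) as the history node:

```xml
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>node03:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>node03:19888</value>
</property>
```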

SparkHistory

Create the local event-log directory:

mkdir -p /tmp/spark-events

Start command:

sh /home/hadoop/sparkhistory-current/sbin/start-history-server.sh
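By default the history server reads file:/tmp/spark-events, matching the directory created above; making that explicit in spark-defaults.conf:

```properties
spark.eventLog.enabled           true
spark.eventLog.dir               file:/tmp/spark-events
spark.history.fs.logDirectory    file:/tmp/spark-events
```

If the event logs actually live in HDFS (as the Spark history-log paths under "Miscellaneous" suggest), point these two directory properties at that HDFS path instead.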

hsproxyser

Edit the configuration files:
core-site.xml
hdfs-site.xml
historyproxy-site.xml, in particular:

yarn.historyproxy.appstore.zk.addr

Start command:

/home/hadoop/hsproxyser-current/sbin/yarn-daemon.sh start historyproxy

NodeManager

Edit the configuration files:
core-site.xml
hdfs-site.xml
yarn-site.xml

Miscellaneous

  1. HDFS audit log

  2. fsimage table

  3. YARN audit log

  4. Spark history logs, HDFS paths:
    /tmp/spark/staging/historylog
    /tmp/spark/staging/historylog_archive/

  5. Hive history logs, HDFS paths:
    /tmp/hadoop-yarn/staging/history/done/xxx//.xml
    /tmp/hadoop-yarn/staging/history/done/xxx.jhist

  6. Hive MetaStore API: thrift://ip:port
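Clients reach the MetaStore through hive.metastore.uris in hive-site.xml; a sketch using the two MetaStore hosts from the table (placeholder names, default port 9083):

```xml
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://node04:9083,thrift://node05:9083</value>
</property>
```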
