Hive Remote Mode Startup: Commands and Notes

Before starting Hive, make sure the cluster's MySQL and Hadoop are already running. Once Hadoop is up, HDFS enters safe mode, and you need to wait for safe mode to end before starting Hive; while HDFS is in safe mode you can only read data, not delete, upload, or create anything. Hadoop is started with start-all.sh and stopped with stop-all.sh. MySQL stores Hive's metadata. To start Hive in remote mode, launch the metastore and hiveserver2 with nohup, then connect through beeline to jdbc:hive2://node01:10000 and work from there.


Before starting Hive, always make sure that the cluster's MySQL and Hadoop have started successfully.

Hadoop stores the actual data.
MySQL stores Hive's metadata (the mapping between tables and the data files).
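
You can see this mapping directly in the metastore database if you want to; a minimal sketch, assuming the metastore database in MySQL is called hive (the actual name depends on javax.jdo.option.ConnectionURL in hive-site.xml):

# List each Hive table together with the HDFS location it maps to
mysql -u root -p -e "
  SELECT d.NAME AS db_name, t.TBL_NAME, s.LOCATION
  FROM hive.TBLS t
  JOIN hive.DBS d ON t.DB_ID = d.DB_ID
  JOIN hive.SDS s ON t.SD_ID = s.SD_ID;"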

Hadoop start command: start-all.sh
Hadoop process names: NameNode, DataNode
Hadoop stop command: stop-all.sh

[root@node01 ~]# start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [node01]
node01: starting namenode, logging to /export/servers/hadoop-2.6.0-cdh5.14.0/logs/hadoop-root-namenode-node01.out
node01: starting datanode, logging to /export/servers/hadoop-2.6.0-cdh5.14.0/logs/hadoop-root-datanode-node01.out
node02: starting datanode, logging to /export/servers/hadoop-2.6.0-cdh5.14.0/logs/hadoop-root-datanode-node02.out
node03: starting datanode, logging to /export/servers/hadoop-2.6.0-cdh5.14.0/logs/hadoop-root-datanode-node03.out
Starting secondary namenodes [node02]
node02: starting secondarynamenode, logging to /export/servers/hadoop-2.6.0-cdh5.14.0/logs/hadoop-root-secondarynamenode-node02.out
starting yarn daemons
starting resourcemanager, logging to /export/servers/hadoop-2.6.0-cdh5.14.0/logs/yarn-root-resourcemanager-node01.out
node01: starting nodemanager, logging to /export/servers/hadoop-2.6.0-cdh5.14.0/logs/yarn-root-nodemanager-node01.out
node02: starting nodemanager, logging to /export/servers/hadoop-2.6.0-cdh5.14.0/logs/yarn-root-nodemanager-node02.out
node03: starting nodemanager, logging to /export/servers/hadoop-2.6.0-cdh5.14.0/logs/yarn-root-nodemanager-node03.out
[root@node01 ~]# jps
19504 DataNode
12002 RunJar
20085 Jps
19365 NameNode
9448 QuorumPeerMain
19865 NodeManager
11817 RunJar
2169 -- process information unavailable
19758 ResourceManager

After the Hadoop cluster has started successfully, HDFS enters safe mode; wait for safe mode to end before starting Hive.


Note: while HDFS is in safe mode you only have permission to read the data; delete, upload, and create operations are not allowed.
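
Rather than guessing, you can check (or, once you are sure the block reports are in, end) safe mode from the command line with the standard hdfs dfsadmin commands:

# Check whether the NameNode is still in safe mode
hdfs dfsadmin -safemode get     # prints "Safe mode is ON" or "Safe mode is OFF"

# Block until safe mode ends on its own (handy in startup scripts)
hdfs dfsadmin -safemode wait

# Force the NameNode to leave safe mode (only if you are sure the cluster is healthy)
hdfs dfsadmin -safemode leave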

Log in to MySQL with: mysql -u <your MySQL user name> -p
Press Enter, then type your MySQL password when prompted.
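
If you are not sure which MySQL instance and database Hive's metastore actually uses, the connection settings are in hive-site.xml; a quick sketch, assuming Hive is installed under /export/servers/hive as in the commands below:

# Show the JDBC URL, user name and driver configured for the metastore
grep -A1 -E "ConnectionURL|ConnectionUserName|ConnectionDriverName" \
    /export/servers/hive/conf/hive-site.xml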



Open the Hadoop web UI to check whether HDFS has left safe mode.
The Hadoop web UI port is 50070.
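
If you prefer the command line to a browser, the same information is exposed by the NameNode's JMX endpoint; a sketch assuming node01 is the NameNode host, as in the startup log above:

# The "Safemode" field in the JSON reply is empty once safe mode is off
curl -s "http://node01:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo" | grep -i safemode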


This time Hive is started in remote mode.
Use the first-generation client to launch the services in the background:
nohup /export/servers/hive/bin/hive --service metastore &
nohup /export/servers/hive/bin/hive --service hiveserver2 &
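
By default nohup appends everything to nohup.out; if you want separate log files and a quick check that both services really came up, a sketch along these lines (the log paths are just an example):

# Start the two services with their own log files
nohup /export/servers/hive/bin/hive --service metastore   > /tmp/metastore.log   2>&1 &
nohup /export/servers/hive/bin/hive --service hiveserver2 > /tmp/hiveserver2.log 2>&1 &

# Verify: each service shows up in jps as a RunJar process,
# and the default ports should be listening
jps | grep RunJar
netstat -nltp | grep -E "9083|10000"   # 9083 = metastore, 10000 = hiveserver2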

On whichever node acts as the client, run: /export/servers/hive/bin/beeline
Then connect with: ! connect jdbc:hive2://node01:10000
username: the user to connect as (root in the example below)
password: can be left empty, just press Enter

[root@node02 ~]# /export/servers/hive/bin/beeline 
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/export/servers/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/export/servers/hadoop-2.6.0-cdh5.14.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Beeline version 1.1.0-cdh5.14.0 by Apache Hive
beeline> ! connect jdbc:hive2://node01:10000
scan complete in 2ms
Connecting to jdbc:hive2://node01:10000
Enter username for jdbc:hive2://node01:10000: root
Enter password for jdbc:hive2://node01:10000: 
Connected to: Apache Hive (version 1.1.0-cdh5.14.0)
Driver: Hive JDBC (version 1.1.0-cdh5.14.0)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://node01:10000> 

Startup succeeded; you can now run your Hive operations directly.
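
For a quick sanity check (or for scripting), beeline can also connect and run a statement non-interactively; a sketch, again assuming the root user with an empty password:

# Connect, list the databases, and exit in one shot
/export/servers/hive/bin/beeline -u jdbc:hive2://node01:10000 -n root -e "show databases;"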
