Presto cluster startup

1. Start MySQL

Start the Docker daemon (MySQL runs in a Docker container)
[root@worker-presto scsi]# cd docker

[root@worker-presto docker]# dockerd &


Check that MySQL is running
[root@worker-presto docker]# ps -ef | grep mysql
systemd+  3116  3102  1 16:14 ?        00:00:01 mysqld
root      3212  2887  0 16:15 pts/0    00:00:00 grep --color=auto mysql

[root@worker-presto docker]# docker ps -s
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                     NAMES               SIZE
cb2f746f17ea        mysql:5.6           "docker-entrypoint.s…"   3 days ago          Up 3 minutes        0.0.0.0:33306->3306/tcp   mysql               643B (virtual 302MB)
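If the container ever needs to be recreated, a minimal sketch of how it could have been started (the container name, the 33306->3306 port mapping and the mysql:5.6 image match the docker ps output above; the root password is a placeholder):

[root@worker-presto docker]# docker run -d --name mysql -p 33306:3306 -e MYSQL_ROOT_PASSWORD=<password> mysql:5.6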

2. Start the Hadoop cluster

Check the Hadoop version
[root@coordinate-presto /]# hadoop version
Hadoop 2.7.3
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff
Compiled by root on 2016-08-18T01:41Z
Compiled with protoc 2.5.0
From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
This command was run using /scsi/soft/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar


Start Hadoop
[root@coordinate-presto etc]# cd /scsi/soft/hadoop-2.7.3/etc

[root@coordinate-presto etc]# start-dfs.sh 
Starting namenodes on [coordinate-presto]
root@coordinate-presto's password: 
coordinate-presto: starting namenode, logging to /scsi/soft/hadoop-2.7.3/logs/hadoop-root-namenode-coordinate-presto.out
worker3-presto: starting datanode, logging to /scsi/soft/hadoop-2.7.3/logs/hadoop-root-datanode-worker3-presto.out
worker2-presto: datanode running as process 2149. Stop it first.
worker1-presto: starting datanode, logging to /scsi/soft/hadoop-2.7.3/logs/hadoop-root-datanode-worker1-presto.out
Starting secondary namenodes [coordinate-presto]
root@coordinate-presto's password: 
coordinate-presto: starting secondarynamenode, logging to /scsi/soft/hadoop-2.7.3/logs/hadoop-root-secondarynamenode-coordinate-presto.out
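The password prompts above mean passwordless SSH from the coordinator is not set up; a sketch of how it could be configured so that start-dfs.sh and start-yarn.sh run without prompting (hostnames taken from the logs):

[root@coordinate-presto ~]# ssh-keygen -t rsa
[root@coordinate-presto ~]# ssh-copy-id root@coordinate-presto
[root@coordinate-presto ~]# ssh-copy-id root@worker1-presto
[root@coordinate-presto ~]# ssh-copy-id root@worker2-presto
[root@coordinate-presto ~]# ssh-copy-id root@worker3-presto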

[root@coordinate-presto etc]# start-yarn.sh 
starting yarn daemons
starting resourcemanager, logging to /scsi/soft/hadoop-2.7.3/logs/yarn-root-resourcemanager-coordinate-presto.out
worker1-presto: starting nodemanager, logging to /scsi/soft/hadoop-2.7.3/logs/yarn-root-nodemanager-worker1-presto.out
worker3-presto: starting nodemanager, logging to /scsi/soft/hadoop-2.7.3/logs/yarn-root-nodemanager-worker3-presto.out
worker2-presto: starting nodemanager, logging to /scsi/soft/hadoop-2.7.3/logs/yarn-root-nodemanager-worker2-presto.out

Check that the daemons started
[root@coordinate-presto etc]# jps
3510 ResourceManager
4266 Jps
3163 NameNode
3355 SecondaryNameNode
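jps on the coordinator only shows the master daemons; each worker should additionally be running a DataNode and a NodeManager, which can be checked remotely, for example (assuming jps is on the remote PATH):

[root@coordinate-presto etc]# ssh worker1-presto jps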

3. Start Hive

Leave HDFS safe mode
[root@coordinate-presto tmp]# hdfs dfsadmin -safemode leave
Safe mode is OFF

Check the Hive version
[root@coordinate-presto etc]# hive --version
which: no hbase in (/scsi/soft/apache-hive-2.1.1-bin/bin:/scsi/soft/apache-maven-3.6.2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/scsi/soft/jdk1.8.0_231/bin:/scsi/soft/hadoop-2.7.3/bin:/scsi/soft/hadoop-2.7.3/sbin:/root/bin)
Hive 2.1.1
Subversion git://jcamachorodriguez-rMBP.local/Users/jcamachorodriguez/src/workspaces/hive/HIVE-release2/hive -r 1af77bbf8356e86cabbed92cfa8cc2e1470a1d5c
Compiled by jcamachorodriguez on Tue Nov 29 19:46:12 GMT 2016
From source with checksum 569ad6d6e5b71df3cb04303183948d90

Start the Hive metastore service (listens on port 9083)
[root@coordinate-presto bin]# hive --service metastore &
[1] 5055
[root@coordinate-presto bin]# which: no hbase in (/scsi/soft/apache-hive-2.1.1-bin/bin:/scsi/soft/apache-maven-3.6.2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/scsi/soft/jdk1.8.0_231/bin:/scsi/soft/hadoop-2.7.3/bin:/scsi/soft/hadoop-2.7.3/sbin:/root/bin)
Starting Hive Metastore Server
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/scsi/soft/apache-hive-2.1.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/scsi/soft/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
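The metastore stores its metadata in a relational database, presumably the MySQL container from step 1; a minimal sketch of the relevant hive-site.xml entries (the metastore host, MySQL host/port, database name and credentials here are assumptions, not copied from the original configuration):

<property><name>hive.metastore.uris</name><value>thrift://coordinate-presto:9083</value></property>
<property><name>javax.jdo.option.ConnectionURL</name><value>jdbc:mysql://worker-presto:33306/hive?createDatabaseIfNotExist=true</value></property>
<property><name>javax.jdo.option.ConnectionDriverName</name><value>com.mysql.jdbc.Driver</value></property>
<property><name>javax.jdo.option.ConnectionUserName</name><value>root</value></property>
<property><name>javax.jdo.option.ConnectionPassword</name><value>***</value></property>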


Start the Hive CLI
[root@coordinate-presto tmp]# hive
which: no hbase in (/scsi/soft/apache-hive-2.1.1-bin/bin:/scsi/soft/apache-maven-3.6.2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/scsi/soft/jdk1.8.0_231/bin:/scsi/soft/hadoop-2.7.3/bin:/scsi/soft/hadoop-2.7.3/sbin:/root/bin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/scsi/soft/apache-hive-2.1.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/scsi/soft/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/scsi/soft/apache-hive-2.1.1-bin/lib/hive-common-2.1.1.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> 
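The mydb1.userinfo table queried later must already exist in the metastore; a purely hypothetical sketch of how such a table could be created (the column names and types are invented for illustration, not taken from the original setup):

hive> create database if not exists mydb1;
hive> create table mydb1.userinfo (id int, name string) row format delimited fields terminated by ',';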

4. Start Presto

Go to Presto's bin directory
[root@coordinate-presto bin]# cd /scsi/presto/presto-run/presto-server-0.225/bin

Start the server
[root@coordinate-presto bin]# ./launcher start
Started as 4895
[root@coordinate-presto bin]# ./launcher run
Already running as 4895
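The coordinator serves its HTTP interface on port 8090 (see the web UI address below), which corresponds to an etc/config.properties roughly like the following; the memory limits and the include-coordinator flag are assumptions, not the original values:

coordinator=true
node-scheduler.include-coordinator=true
http-server.http.port=8090
query.max-memory=2GB
query.max-memory-per-node=1GB
discovery-server.enabled=true
discovery.uri=http://192.168.133.129:8090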

View the cluster web UI at http://192.168.133.129:8090
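The --catalog hive option used below requires a Hive connector catalog file on every node; a minimal sketch of etc/catalog/hive.properties (assuming the metastore started above on the coordinator):

connector.name=hive-hadoop2
hive.metastore.uri=thrift://coordinate-presto:9083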


Query through the Presto CLI client
[root@coordinate-presto presto-server-0.225]# ./bin/presto --server 192.168.133.129:8090 --catalog hive
presto> 
presto> 
presto> show databases;
Query 20191021_093619_00000_ykdrm failed: line 1:6: mismatched input 'databases'. Expecting: 'CATALOGS', 'COLUMNS', 'CREATE', 'CURRENT', 'FUNCTIONS', 'GRANTS', 'ROLE', 'ROLES', 'SCHEMAS', 'SESSION', 'STATS', 'TABLES'
show databases
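Presto has no SHOW DATABASES statement (the error above lists the accepted keywords); the equivalent is SHOW SCHEMAS:

presto> show schemas;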

presto> show tables from mydb1;
  Table   
----------
 userinfo 
(1 row)

Query 20191021_093637_00001_ykdrm, FINISHED, 4 nodes
Splits: 70 total, 70 done (100.00%)
0:09 [1 rows, 23B] [0 rows/s, 2B/s]

presto> select * from userinfo;
Query 20191021_093701_00002_ykdrm failed: line 1:15: Schema must be specified when session schema is not set
select * from userinfo

presto> use mydb1;
USE
presto:mydb1> select * from userinfo;

Query 20191021_093737_00004_ykdrm, FAILED, 2 nodes
Splits: 17 total, 0 done (0.00%)
0:23 [0 rows, 0B] [0 rows/s, 0B/s]

Query 20191021_093737_00004_ykdrm failed: Could not obtain block: BP-1739518535-192.168.133.129-1571313173913:blk_1073741843_1019 file=/user/hive/warehouse/mydb1.db/userinfo/000000_0

presto:mydb1> select * from mydb1.userinfo;

Query 20191021_093819_00005_ykdrm, FAILED, 2 nodes
Splits: 17 total, 0 done (0.00%)
0:24 [0 rows, 0B] [0 rows/s, 0B/s]

Query 20191021_093819_00005_ykdrm failed: Could not obtain block: BP-1739518535-192.168.133.129-1571313173913:blk_1073741843_1019 file=/user/hive/warehouse/mydb1.db/userinfo/000000_0

presto:mydb1> 
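The "Could not obtain block" error comes from HDFS rather than Presto: the block backing /user/hive/warehouse/mydb1.db/userinfo/000000_0 cannot be read from any DataNode (for example because a DataNode is down or the block replica is missing or corrupt). One way to investigate from the Hadoop side:

[root@coordinate-presto ~]# hdfs fsck /user/hive/warehouse/mydb1.db/userinfo -files -blocks -locations
[root@coordinate-presto ~]# hdfs dfsadmin -report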

To deploy a Presto cluster from scratch, the basic steps are:

1. Make sure a Java runtime is available; Presto is written in Java. Download and install a JDK if needed.
2. Download the Presto server tarball from the official download page.
3. Extract the tarball into the directory where Presto should be installed.
4. Configure node information. In etc/node.properties under the installation directory, give each node a unique node.id, an environment name, and a data directory.
5. Configure the JVM. etc/jvm.config holds the command-line options (heap size, GC settings) for the Presto Java process.
6. Configure the server. etc/config.properties determines whether a node is the coordinator or a worker (coordinator=true/false) and sets the HTTP port, memory limits, and the discovery URI; there are no separate coordinator.properties or worker.properties files, both roles are configured through config.properties.
7. Configure catalogs. Each data source gets a properties file under etc/catalog/, for example hive.properties for the Hive connector used above.
8. Start Presto. Run ./bin/launcher start to start it as a daemon, or ./bin/launcher run to run it in the foreground.
9. Connect with the CLI, e.g. ./bin/presto --server <host>:<port> --catalog hive, and query interactively.

A sketch of the per-node configuration files, under stated assumptions, follows this list. These are only the basic deployment steps; adjust the configuration to your environment and read the official Presto documentation before deploying.
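A minimal sketch of the per-node files from steps 4-6 (every concrete value below is an assumption for illustration):

etc/node.properties:
node.environment=production
node.id=presto-coordinator-1
node.data-dir=/var/presto/data

etc/jvm.config:
-server
-Xmx4G
-XX:+UseG1GC

A worker's etc/config.properties looks like the coordinator example shown earlier, except coordinator=false, the discovery-server.enabled line is removed, and discovery.uri points at the coordinator, e.g. http://192.168.133.129:8090.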