002A - MySQL installation

This post covers installing MySQL and starting multiple instances. During installation, if the data directory or the whole installation directory is moved, the corresponding paths must be added to the configuration file. To run a second instance, copy the installation directory, adjust configuration parameters such as the port and socket, and then start the new instance.

MySQL installation

MySQL Community Server download page: https://dev.mysql.com/downloads/mysql/

[Screenshot: MySQL Community Server download page]

[root@mysql-host local ]#rz -e 

[root@mysql-host local ]#ll
total 838940
drwxr-xr-x. 2 root root         6 Apr 11  2018 bin
drwxr-xr-x. 2 root root         6 Apr 11  2018 etc
drwxr-xr-x. 2 root root         6 Apr 11  2018 games
drwxr-xr-x. 2 root root         6 Apr 11  2018 include
drwxr-xr-x. 2 root root         6 Apr 11  2018 lib
drwxr-xr-x. 2 root root         6 Apr 11  2018 lib64
drwxr-xr-x. 2 root root         6 Apr 11  2018 libexec
-rw-r--r--  1 root root 859071704 Nov 17 11:40 mysql-8.0.22-linux-glibc2.12-x86_64.tar.xz
drwxr-xr-x. 2 root root         6 Apr 11  2018 sbin
drwxr-xr-x. 5 root root        49 Aug 19  2018 share
drwxr-xr-x. 2 root root         6 Apr 11  2018 src
[root@mysql-host local ]#tar xf mysql-8.0.22-linux-glibc2.12-x86_64.tar.xz 
[root@mysql-host local ]#mv mysql-8.0.22-linux-glibc2.12-x86_64 mysql
[root@mysql-host local ]#groupadd mysql              #create the mysql group
[root@mysql-host local ]#useradd mysql -g mysql      #create the mysql user in the mysql group
[root@mysql-host local ]#chown -R mysql:mysql mysql  #change the owner and group of the mysql directory to mysql:mysql
[root@mysql-host local ]#cd mysql/
[root@mysql-host mysql ]#mkdir data   #create the directory that will hold the database files
[root@mysql-host mysql ]#chown mysql:mysql data
[root@mysql-host mysql ]#ll
total 384
drwx------  2 mysql mysql   4096 Sep 23 22:11 bin   #executable binaries
drwxr-xr-x  2 mysql mysql      6 Nov 17 14:40 data  #the database file directory created above
drwx------  2 mysql mysql     55 Sep 23 22:11 docs  #documentation
drwx------  3 mysql mysql    282 Sep 23 22:11 include #C header files
drwx------  6 mysql mysql    201 Sep 23 22:11 lib     #library files
-rw-r--r--  1 mysql mysql 378912 Sep 23 20:37 LICENSE #license file
drwx------  4 mysql mysql     30 Sep 23 22:11 man     #man pages
-rw-r--r--  1 mysql mysql    687 Sep 23 20:37 README  #readme file
drwx------ 28 mysql mysql   4096 Sep 23 22:11 share   #character sets, error messages, etc.
drwx------  2 mysql mysql     77 Sep 23 22:11 support-files  #support files, e.g. a ready-made mysql startup script

#Initialize the data directory. This populates data/ and prints a temporary password for root@localhost. To re-initialize, the files under the data directory must be removed first.
[root@mysql-host mysql ]#bin/mysqld --initialize --user=mysql --datadir=/usr/local/mysql/data
2020-11-17T06:42:12.688848Z 0 [Warning] [MY-011070] [Server] 'Disabling symbolic links using --skip-symbolic-links (or equivalent) is the default. Consider not using this option as it' is deprecated and will be removed in a future release.
2020-11-17T06:42:12.688933Z 0 [System] [MY-013169] [Server] /usr/local/mysql/bin/mysqld (mysqld 8.0.22) initializing of server in progress as process 3428
2020-11-17T06:42:12.698300Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2020-11-17T06:42:13.243660Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2020-11-17T06:42:14.487465Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: dz2e-V/D*p=H
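
If initialization has to be repeated (for example after a failed first attempt), the data directory must be emptied first. A minimal sketch, assuming the same /usr/local/mysql/data path as above:

#re-initialization sketch: stop mysqld if it is running, then clear data/ and initialize again
[root@mysql-host mysql ]#rm -rf /usr/local/mysql/data/*
[root@mysql-host mysql ]#bin/mysqld --initialize --user=mysql --datadir=/usr/local/mysql/data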

#Start the mysql service
[root@mysql-host mysql ]#bin/mysqld_safe --initialize --user=mysql --datadir=/usr/local/mysql/data
2020-11-17T06:42:41.272319Z mysqld_safe error: log-error set to '/var/log/mariadb/mariadb.log', however file don't exists. Create writable for user 'mysql'.
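
The error above comes from an old /etc/my.cnf (likely left behind by a mariadb-libs package) whose log-error points at /var/log/mariadb/mariadb.log. Instead of deleting the file as done next, one could create that log path and make it writable by the mysql user; a sketch:

#alternative fix (sketch): create the log file the old config points to and hand it to mysql
[root@mysql-host mysql ]#mkdir -p /var/log/mariadb
[root@mysql-host mysql ]#touch /var/log/mariadb/mariadb.log
[root@mysql-host mysql ]#chown -R mysql:mysql /var/log/mariadb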

#After deleting the old configuration file, mysql falls back to its default parameters
[root@mysql-host mysql ]#rm -f /etc/my.cnf
[root@mysql-host mysql ]#bin/mysqld_safe --initialize --user=mysql --datadir=/usr/local/mysql/data
Logging to '/usr/local/mysql/data/mysql-host.err'.
2020-11-17T06:43:01.525094Z mysqld_safe Starting mysqld daemon with databases from /usr/local/mysql/data
2020-11-17T06:43:01.565327Z mysqld_safe mysqld from pid file /usr/local/mysql/data/mysql-host.pid ended

#mysql can also be started via the service script, which by default reads parameters from /etc/my.cnf
[root@mysql-host mysql ]#./support-files/mysql.server start
Starting MySQL.Logging to '/usr/local/mysql/data/mysql-host.err'.
. SUCCESS! 
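
The mysql.server script also accepts stop, restart, reload and status; a sketch (not run in this session):

[root@mysql-host mysql ]#./support-files/mysql.server status
[root@mysql-host mysql ]#./support-files/mysql.server stop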

#One parent process (mysqld_safe) and one child (mysqld), two processes in total
[root@mysql-host mysql ]#
[root@mysql-host mysql ]#ps -ef |grep mysql
root       4489      1  0 15:26 pts/3    00:00:00 /bin/sh /usr/local/mysql/bin/mysqld_safe --datadir=/usr/local/mysql/data --pid-file=/usr/local/mysql/data/mysql-host.pid
mysql      4574   4489  0 15:26 pts/3    00:00:03 /usr/local/mysql/bin/mysqld --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data --plugin-dir=/usr/local/mysql/lib/plugin --user=mysql --log-error=mysql-host.err --pid-file=/usr/local/mysql/data/mysql-host.pid
root       5059   3239  0 15:45 pts/4    00:00:00 grep --color=auto mysql
[root@mysql-host mysql ]#

[root@mysql-host share ]#mysql -u root -p
bash: mysql: command not found...
[root@mysql-host share ]#vi /root/.bash_profile 
PATH=$PATH:$HOME/bin:/usr/local/mysql/bin
[root@mysql-host share ]#source /root/.bash_profile
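
An alternative to editing .bash_profile is to symlink the client into a directory that is already on PATH; a sketch:

[root@mysql-host share ]#ln -s /usr/local/mysql/bin/mysql /usr/local/bin/mysql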

[root@mysql-host share ]#mysql -u root -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.22

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 
mysql> show databases;
ERROR 1820 (HY000): You must reset your password using ALTER USER statement before executing this statement.
#The following syntax was supported in MySQL 5.7 but is no longer supported in 8.0
mysql> set password=password('mysql'); 
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'password('mysql')' at line 1
mysql> 
mysql> 
mysql> alter user user() identified by 'mysql';
Query OK, 0 rows affected (0.01 sec)

mysql> exit
Bye
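
The same password reset can also be written with an explicit account name instead of user(); a sketch:

mysql> alter user 'root'@'localhost' identified by 'mysql';
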
[root@mysql-host share ]#mysql -u root -pmysql
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 8.0.22 MySQL Community Server - GPL

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective

If the data directory under /usr/local/mysql/ is moved somewhere else, the mysql service will fail to start; edit /etc/my.cnf (vim /etc/my.cnf) and add the following line:
[mysqld]
datadir=<new data directory path>
This covers the case where the data files and the program files do not live in the same directory or on the same disk.

If the whole MySQL installation directory is moved from /usr/local/mysql to another location, edit /etc/my.cnf (vim /etc/my.cnf) and add the following:
[mysqld]
basedir=/usr/local/mysql2
datadir=/usr/local/mysql2/data
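
After editing /etc/my.cnf, restart the server so the new paths take effect; a sketch using the service script (whose own basedir/datadir defaults may also need adjusting once the installation directory has moved):

[root@mysql-host mysql2 ]#./support-files/mysql.server restart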

Starting multiple MySQL instances

[root@mysql-host local ]#cp -R mysql mysql2

[root@mysql-host mysql ]#vim /etc/my3307.cnf
[mysqld]
basedir=/usr/local/mysql2
datadir=/usr/local/mysql2/data
port=3307
socket=/tmp/mysql3307.sock
mysqlx_port=33070
mysqlx_socket=/tmp/mysqlx33070.sock
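
Because cp -R was run as root, the copied tree ends up owned by root; hand it back to the mysql user before starting the second instance (a sketch):

[root@mysql-host local ]#chown -R mysql:mysql /usr/local/mysql2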

For reference, the corresponding parameters of the first instance are shown below.

[root@mysql-host ~ ]#mysql -u root -pmysql
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 11
Server version: 8.0.22 MySQL Community Server - GPL

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show variables like "%sock%";
+-----------------------------------------+------------------+
| Variable_name                           | Value            |
+-----------------------------------------+------------------+
| mysqlx_socket                           | /tmp/mysqlx.sock |
| performance_schema_max_socket_classes   | 10               |
| performance_schema_max_socket_instances | -1               |
| socket                                  | /tmp/mysql.sock  |
+-----------------------------------------+------------------+
4 rows in set (0.01 sec)

#Check that 3306 and 33060 are listening
[root@mysql-host mysql ]#netstat -an | grep -i listen
tcp        0      0 127.0.0.1:6012          0.0.0.0:*               LISTEN     
tcp        0      0 127.0.0.1:6013          0.0.0.0:*               LISTEN     
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN     
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN     
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN     
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN     
tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN     
tcp        0      0 127.0.0.1:6011          0.0.0.0:*               LISTEN     
tcp        0      0 0.0.0.0:41051           0.0.0.0:*               LISTEN     
tcp6       0      0 ::1:6012                :::*                    LISTEN     
tcp6       0      0 ::1:6013                :::*                    LISTEN     
tcp6       0      0 :::32894                :::*                    LISTEN     
tcp6       0      0 :::33060                :::*                    LISTEN     
tcp6       0      0 :::3306                 :::*                    LISTEN     
tcp6       0      0 :::111                  :::*                    LISTEN  

Start mysql2

[root@mysql-host mysql2 ]#bin/mysqld --defaults-file=/etc/my3307.cnf --user=mysql &
[1] 5313
[root@mysql-host mysql2 ]#2020-11-17T07:57:13.281982Z 0 [System] [MY-010116] [Server] /usr/local/mysql2/bin/mysqld (mysqld 8.0.22) starting as process 5313
2020-11-17T07:57:13.291225Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2020-11-17T07:57:13.509495Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2020-11-17T07:57:13.633706Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33070, socket: /tmp/mysqlx33070.sock
2020-11-17T07:57:13.762323Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2020-11-17T07:57:13.762573Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2020-11-17T07:57:13.809469Z 0 [System] [MY-010931] [Server] /usr/local/mysql2/bin/mysqld: ready for connections. Version: '8.0.22'  socket: '/tmp/mysql3307.sock'  port: 3307  MySQL Community Server - GPL.
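
Once the second instance reports ready, its ports can be checked the same way as for the first instance; a sketch:

[root@mysql-host mysql2 ]#netstat -an | grep -i listen | grep 3307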


[root@mysql-host tmp ]#ll
total 20
srwxrwxrwx  1 mysql mysql   0 Nov 17 15:57 mysql3307.sock
-rw-------  1 mysql mysql   5 Nov 17 15:57 mysql3307.sock.lock
srwxrwxrwx  1 mysql mysql   0 Nov 17 15:26 mysql.sock
-rw-------  1 mysql mysql   5 Nov 17 15:26 mysql.sock.lock
srwxrwxrwx  1 mysql mysql   0 Nov 17 15:57 mysqlx33070.sock
-rw-------  1 mysql mysql   6 Nov 17 15:57 mysqlx33070.sock.lock
srwxrwxrwx  1 mysql mysql   0 Nov 17 15:26 mysqlx.sock
-rw-------  1 mysql mysql   6 Nov 17 15:26 mysqlx.sock.lock
drwx------  2 root  root   24 Nov 17 13:49 ssh-iMEk9ZHDIYBY
drwx------  3 root  root   17 Nov 17 13:49 systemd-private-ff268a8040cf4318ada7245b116d3f6d-chronyd.service-uaNw2E
drwx------  3 root  root   17 Nov 17 13:49 systemd-private-ff268a8040cf4318ada7245b116d3f6d-colord.service-JQ98i9
drwx------  3 root  root   17 Nov 17 13:49 systemd-private-ff268a8040cf4318ada7245b116d3f6d-cups.service-k35GeP
drwx------  3 root  root   17 Nov 17 13:49 systemd-private-ff268a8040cf4318ada7245b116d3f6d-rtkit-daemon.service-nHuV2F
drwx------. 2 root  root    6 Nov 17 15:31 tracker-extract-files.0
-rw-------  1 root  root  617 Nov 17 14:39 yum_save_tx.2020-11-17.14-39.V4Ue6y.yumtx
[root@mysql-host tmp ]#mysql -u root -p -S /tmp/mysql3307.sock
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.22 MySQL Community Server - GPL

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show variables like "%sock%";
+-----------------------------------------+-----------------------+
| Variable_name                           | Value                 |
+-----------------------------------------+-----------------------+
| mysqlx_socket                           | /tmp/mysqlx33070.sock |
| performance_schema_max_socket_classes   | 10                    |
| performance_schema_max_socket_instances | -1                    |
| socket                                  | /tmp/mysql3307.sock   |
+-----------------------------------------+-----------------------+
4 rows in set (0.02 sec)

mysql> 


#The second instance runs as a separate process
[root@mysql-host mysql2 ]#ps -ef |grep mysql
root       4489      1  0 15:26 pts/3    00:00:00 /bin/sh /usr/local/mysql/bin/mysqld_safe --datadir=/usr/local/mysql/data --pid-file=/usr/local/mysql/data/mysql-host.pid
mysql      4574   4489  0 15:26 pts/3    00:00:05 /usr/local/mysql/bin/mysqld --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data --plugin-dir=/usr/local/mysql/lib/plugin --user=mysql --log-error=mysql-host.err --pid-file=/usr/local/mysql/data/mysql-host.pid
mysql      5313   3200  0 15:57 pts/3    00:00:01 bin/mysqld --defaults-file=/etc/my3307.cnf --user=mysql
root       5381   3239  0 15:59 pts/4    00:00:00 mysql -u root -p -S /tmp/mysql3307.sock
root       5447   3200  0 16:03 pts/3    00:00:00 grep --color=auto mysql
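
To stop the second instance cleanly, point mysqladmin at its socket; a sketch:

[root@mysql-host mysql2 ]#mysqladmin -u root -p -S /tmp/mysql3307.sock shutdown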
