Hadoop configuration files -- hdfs-site.xml (updating)

This article examines the importance of the dfs.datanode.du.reserved parameter in HDFS and explains how it prevents problems caused by disks filling up. It recommends adjusting this parameter, together with the filesystem's reserved space, to match the actual system, and points out the replication factors to consider when setting dfs.replication. It also stresses that configuration changes should be applied uniformly across the whole cluster to keep it running stably.

dfs.datanode.du.reserved

dfs.datanode.du.reserved specifies how much disk space a DataNode leaves for non-HDFS use when writing data to a disk. Its main purpose is to prevent HDFS failures caused by the disk filling up.
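As a sketch, a corresponding hdfs-site.xml entry might look like the following (the 200 GB figure is illustrative; the property takes a byte count and applies per volume):

```xml
<!-- Reserve ~200 GiB per volume for non-HDFS use (value in bytes). -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>214748364800</value>
</property>
```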

We recently hit a problem where an HDFS disk filled up. The cause was simple: dfs.datanode.du.reserved had been set without accounting for the filesystem's reserved blocks.

For example:

A 4 TB disk has about 3.6 TB left after formatting; subtracting the 5% filesystem reserve leaves about 3.4 TB.

However, HBase calculates writable space from the disk's full 3.6 TB, while only 3.4 TB can actually be written. The difference between nominal and actual space is 0.2 TB, i.e. about 200 GB.

If dfs.datanode.du.reserved is set to less than 200 GB, HBase therefore risks filling the disk.

So when configuring HBase, we need to size dfs.datanode.du.reserved according to the space the system actually has available.
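The gap in the example can be checked with quick shell arithmetic (numbers taken from the scenario above, with 3.6 TB treated as 3600 GB for simplicity):

```shell
total_gb=3600        # capacity after formatting
reserve_pct=5        # ext4 default reserved-blocks percentage
fs_reserved_gb=$(( total_gb * reserve_pct / 100 ))
usable_gb=$(( total_gb - fs_reserved_gb ))
echo "fs_reserved=${fs_reserved_gb}GB usable=${usable_gb}GB"
# fs_reserved=180GB usable=3420GB -> roughly the 0.2 TB / 200 GB gap above
```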

Solutions:

1. Reduce the filesystem's reserved space (total space minus the system reserve):

tune2fs -m 1 /dev/diskname

    Note: the system reserves 5% by default, equivalent to tune2fs -m 5 /dev/diskname. The command above lowers the reserve to 1%.

2. Increase dfs.datanode.du.reserved.

Best practice:

HDFS data disks are usually terabyte-scale, so the default 5% reserve is quite wasteful. The best approach is to apply adjustments 1 and 2 together.

A filesystem reserve of 2% is recommended, with dfs.datanode.du.reserved adjusted to match.

For example: a 4 TB disk with 3.6 TB usable after formatting has a 2% filesystem reserve of about 75 GB, and dfs.datanode.du.reserved is set to 200 GB.
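The best-practice numbers can be verified the same way (assuming the 2% reserve and the 200 GB dfs.datanode.du.reserved from above):

```shell
total_gb=3600
fs_reserved_gb=$(( total_gb * 2 / 100 ))   # ~72 GB, close to the "about 75 GB" above
du_reserved_gb=200                         # dfs.datanode.du.reserved, in GB
writable_gb=$(( total_gb - fs_reserved_gb - du_reserved_gb ))
echo "fs_reserved=${fs_reserved_gb}GB writable=${writable_gb}GB"
# fs_reserved=72GB writable=3328GB
```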

dfs.replication

We have three settings for Hadoop replication, namely:

dfs.replication.max = 10
dfs.replication.min = 1
dfs.replication     = 2

So dfs.replication is the default replication factor for files in the Hadoop cluster unless a client sets it manually using "setrep", and a client can raise replication up to dfs.replication.max.
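As a sketch, these settings would live in hdfs-site.xml (note that in Hadoop 2 and later the minimum is spelled dfs.namenode.replication.min; the values below are copied from the listing above):

```xml
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
<property>
  <name>dfs.replication.max</name>
  <value>10</value>
</property>
<!-- dfs.replication.min in older releases -->
<property>
  <name>dfs.namenode.replication.min</name>
  <value>1</value>
</property>
```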

dfs.replication.min is used in two cases:

During safe mode, the NameNode checks whether each block's replication has reached dfs.replication.min.
During a write, dfs.replication.min replicas are written synchronously, and the remaining dfs.replication - dfs.replication.min replicas are written asynchronously.

So do we have to set these three settings on every node (NameNode + DataNodes), or only on the client node?

What if the three settings vary across DataNodes?

The replication factor can't be set for a specific node in the cluster; you set it for the entire cluster, a directory, or a file. dfs.replication can be updated in hdfs-site.xml on a running cluster.

Set the replication factor for a file: hadoop fs -setrep -w <rep-number> file-path
Or set it recursively for a directory or the entire cluster: hadoop fs -setrep -R -w 1 /
Use of the min and max replication factors:

While writing data to DataNodes, it is possible that some DataNodes fail. If at least dfs.namenode.replication.min replicas are written, the write operation succeeds. After the write, the blocks are replicated asynchronously until they reach the dfs.replication level.

The max replication factor dfs.replication.max caps the replication of blocks. A user can't set a file's replication above this limit when creating it.

You can set a high replication factor for the blocks of a popular file to distribute read load across the cluster.

So do I have to set these configuration parameters on (namenode+datanode), only on the namenode, or on all datanodes? We use Hive for data-loading activities, so how can we set a high replication factor for the blocks of a popular file to distribute read load across the cluster? – user2950086 May 22 '14 at 12:26
You can update the default block replication in hdfs-site.xml; that is a one-time, cluster-level configuration. If you need to update the replication of a Hive table, find the table's location in HDFS (default /user/hive/warehouse..) and change the replication with: hadoop fs -setrep -R -w rep-num /user/hive/warehouse/xyz_table – Rahul Sharma May 22 '14 at 13:04
Do we have a Hive configuration parameter that makes the replication for a table configurable? e.g. hive -e "some query" -hiveconf some.configuration.parameter – user2950086 May 23 '14 at 9:48
I haven't come across any such configuration in HiveConf. All Hadoop commands can be run from the Hive CLI, e.g. hive> dfs -setrep -w file-path.... or $ hive -e "dfs -setrep -w file-path" – Rahul Sharma May 23 '14 at 12:56
Isn't dfs.replication.min used ONLY during safe mode? – Marsellus Wallace May 31 '15 at 21:03
@Gevorg Yes, it is used during safe mode: once every file has its minimum replication, HDFS exits safe mode. – Rahul Sharma Oct 14 '17 at 0:12
@user2950086 You can set the replication for a directory and create all managed and external tables inside that directory to achieve the desired replication. –
