HDFS throws an exception (java.io.IOException: config())

This post analyzes an error that can appear when using the HDFS API: initializing Configuration logs a java.io.IOException: config() message. A detailed fix is given below.


Original post: http://www.myexception.cn/program/1029248.html


DEBUG [main] Configuration.<init>(211) | java.io.IOException: config()
  at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:211)
  at com.netqin.hdfs.MyHdfs.isExists(MyHdfs.java:20)
  at com.netqin.hdfs.MyHdfs.main(MyHdfs.java:41)

This error was reported on the client side when accessing HDFS through the API.
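For context, the client code named in the stack trace presumably looks something like the sketch below. The original MyHdfs source is not shown, so the method body and the path being checked are my assumptions:

  import java.io.IOException;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class MyHdfs {
    // Checks whether a path exists on HDFS. The Configuration
    // constructor below is where the DEBUG "config()" message
    // in the log above originates.
    public static boolean isExists(String path) throws IOException {
      Configuration conf = new Configuration();
      FileSystem fs = FileSystem.get(conf);
      return fs.exists(new Path(path));
    }

    public static void main(String[] args) throws IOException {
      System.out.println(isExists("/tmp"));  // hypothetical path
    }
  }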

 

Tracing the stack trace into the Hadoop source leads to this constructor:

 

 

  /** A new configuration where the behavior of reading from the default 
   * resources can be turned off.
   * 
   * If the parameter {@code loadDefaults} is false, the new instance
   * will not load resources from the default files. 
   * @param loadDefaults specifies whether to load from the default files
   */
  public Configuration(boolean loadDefaults) {
    this.loadDefaults = loadDefaults;
    if (LOG.isDebugEnabled()) {
      LOG.debug(StringUtils.stringifyException(new IOException("config()")));
    }
    synchronized(Configuration.class) {
      REGISTRY.put(this, null);
    }
  }

 

The culprit is the LOG.debug(...) line above (highlighted in red in the original post). When log4j is configured at DEBUG level, the Configuration constructor deliberately creates a throwaway IOException("config()") just to capture and log the call stack of whoever constructed it; no exception is ever thrown. The log output looks exactly like a real error, though, which makes it thoroughly confusing.
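To see that nothing is actually thrown, a minimal check (a sketch, assuming the same DEBUG-level log4j setup) is:

  import org.apache.hadoop.conf.Configuration;

  public class ConfigDebugDemo {
    public static void main(String[] args) {
      try {
        // With log4j.rootCategory=DEBUG this prints the scary
        // "java.io.IOException: config()" stack trace to the log...
        Configuration conf = new Configuration();
        // ...yet execution continues normally: the object is fine.
        System.out.println("Configuration created: " + conf);
      } catch (Exception e) {
        // Never reached because of the config() message alone.
        System.err.println("Real failure: " + e);
      }
    }
  }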

 

 

Solution:

In the log4j configuration file, change

# This file controls log output
log4j.rootCategory=DEBUG,stdout,file

to

log4j.rootCategory=ERROR,stdout,file
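If you still want DEBUG output from your own classes, a more surgical option (my suggestion, not from the original post) is to keep the root level and lower only Hadoop's configuration logger:

log4j.rootCategory=DEBUG,stdout,file
# Silence only the chatty Hadoop Configuration logger
log4j.logger.org.apache.hadoop.conf.Configuration=INFO

This suppresses the config() stack trace while leaving DEBUG logging intact everywhere else.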

 

 
