On the Flume error reported at startup: ERROR - org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:526)

2019-08-08 11:32:19,680 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:526)] Hit max consecutive under-replication rotations (30); will not continue rolling files under this path due to under-replication
2019-08-08 11:32:19,913 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2019-08-08 11:32:19,913 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /home/hyxy/flume/spooldir/1901 to /home/hyxy/flume/spooldir/1901.COMPLETED
2019-08-08 12:32:36,096 (pool-4-thread-1) [WARN - org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:239)] The channel is full, and cannot write data now. The source will try again after 250 milliseconds
2019-08-08 12:32:36,348 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:238)] Last read was never committed - resetting mark position.
2019-08-08 12:32:37,715 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2019-08-08 12:32:37,716 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /home/hyxy/flume/spooldir/1901 to /home/hyxy/flume/spooldir/1901.COMPLETED
2019-08-08 12:32:41,856 (conf-file-poller-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:133)] Reloading configuration file:/home/hyxy/apps/flume/conf/spooling-hdfs.conf

 

Translated, the error means: the sink has hit the maximum number of consecutive under-replication rotations (30); because the blocks are under-replicated, Flume will not continue rolling files under this path.

Once this error appears, the HDFS sink can no longer keep rolling and writing data to HDFS, because the replication factor cannot be satisfied. Two things to check:

1. Check whether any DataNode has gone down, or whether you forgot to start one of them (see the command sketch after this list).

2. Check the replication factor you set in hdfs-site.xml, and whether it is greater than the number of DataNodes you actually have.
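Both points can be verified quickly from the HDFS command line. A minimal sketch, assuming the hadoop client binaries are on the PATH of a node that can reach the NameNode:

# Show the NameNode's view of the cluster, including live and dead DataNodes
hdfs dfsadmin -report

# Only the live/dead DataNode summary lines
hdfs dfsadmin -report | grep -E 'Live datanodes|Dead datanodes'

# Show the replication factor configured in the hdfs-site.xml this client sees
hdfs getconf -confKey dfs.replication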

In my case the problem was exactly that: I had set the replication factor to 3, but I only had two DataNodes. Changing the replication factor to 2 fixed it.
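For reference, this is roughly what the change looks like in hdfs-site.xml; a minimal sketch assuming a two-DataNode cluster, so dfs.replication is lowered to 2:

<!-- hdfs-site.xml: default replication factor for newly created files -->
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>

Since dfs.replication is a client-side default applied when a file is created, make sure the hdfs-site.xml visible to the Flume agent's Hadoop client also carries the new value. Existing files keep their old replication factor; if needed it can be changed in place with hdfs dfs -setrep -w 2 /some/path (the path here is only an example).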

 

 
