HDFS quota: converting physical space to logical space

This post digs into the implementation details of the quota mechanism in Hadoop: how quotas are set and used, how the space quota is converted from physical to logical space, and how operations such as create, mv, and setrep behave under different quota limits. Concrete test cases then verify the effectiveness and the limitations of the modified mechanism.

1. Setting and using the existing quota

1. Main setQuota flow from the client to the NN

An example shell entry point for setQuota:

hdfs dfsadmin -D fs.defaultFS=DClusterNmg4 -setQuota  1819200 hdfs://ns1/user/prod_xxx
hdfs dfsadmin -D fs.defaultFS=DClusterNmg4 -setSpaceQuota  666T hdfs://ns1/user/prod_xxx

The main flow of this shell command from the client to the NN side is:
——1.DFSAdmin$SetSpaceQuotaCommand#run

——2.DistributedFileSystem#setQuota

——3.DFSClient#setQuota

——4.FSNamesystem#setQuota

——5.FSDirAttrOp#unprotectedSetQuota

——6.INodeDirectory#setQuota
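
For reference, a client-side sketch of the same entry point, done programmatically instead of through dfsadmin. It follows the DistributedFileSystem#setQuota step of the chain above; the path and values mirror the shell examples, and HdfsConstants.QUOTA_DONT_SET leaves the other quota untouched. The cast assumes fs.defaultFS points at an HDFS cluster:

// SetQuotaDemo.java -- programmatic counterpart of the dfsadmin commands above
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

public class SetQuotaDemo {
  public static void main(String[] args) throws Exception {
    // Assumes fs.defaultFS is an hdfs:// URI, so the cast is safe.
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(new Configuration());
    Path dir = new Path("/user/prod_xxx");
    // Set only the namespace quota (1819200 inodes), as in the first shell example.
    dfs.setQuota(dir, 1819200L, HdfsConstants.QUOTA_DONT_SET);
    // Set only the space quota (666 TB), as in the second shell example.
    dfs.setQuota(dir, HdfsConstants.QUOTA_DONT_SET, 666L << 40);
  }
}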

DirectoryWithQuotaFeature tracks two kinds of values: the quota limit and the usage:

private QuotaCounts quota;
private QuotaCounts usage;

When setting a quota, the long value passed in by the client is stored directly into the feature.

Therefore, when the quota switches from physical to logical space, the setQuota path needs no change.
The quota value is persisted to the fsimage, while usage is computed dynamically each time the image is loaded, so the usage computation logic is what must change.
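
A minimal sketch of that split, with names simplified from DirectoryWithQuotaFeature (an illustration, not the actual Hadoop class). Only the accounting side is sensitive to the physical-to-logical switch:

// QuotaFeatureSketch.java -- simplified illustration, not the real Hadoop class
class QuotaFeatureSketch {
  private long spaceQuota;  // the limit: a raw long from the client, persisted in fsimage
  private long spaceUsage;  // the consumption: cached in memory, recomputed at NN startup

  // setQuota just stores the client's number, so it needs no change
  // when the quota switches from physical to logical space.
  void setSpaceQuota(long quota) {
    this.spaceQuota = quota;
  }

  // Only the accounting changes: under a physical quota a file contributes
  // length * replication; under a logical quota it contributes just length.
  void addFile(long fileLength, short replication, boolean logicalQuota) {
    this.spaceUsage += logicalQuota ? fileLength : fileLength * replication;
  }
}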

2. Viewing quota with count -q / -u

The client-side code for hadoop fs -count -q and hadoop fs -count -u is as follows:

// Count.java
protected void processPath(PathData src) throws IOException {
  StringBuilder outputString = new StringBuilder();
  if (showQuotasAndUsageOnly || showQuotabyType) {
    // quota-only view (-u): served from the NN's cached usage
    QuotaUsage usage = src.fs.getQuotaUsage(src.path);
    outputString.append(usage.toString(
        isHumanReadable(), showQuotabyType, storageTypes));
  } else {
    // full summary (-q / default): recomputed on the NN per request
    ContentSummary summary = src.fs.getContentSummary(src.path);
    outputString.append(summary.toString(
        showQuotas, isHumanReadable(), excludeSnapshots));
  }
  if (displayECPolicy) {
    ContentSummary summary = src.fs.getContentSummary(src.path);
    if (!summary.getErasureCodingPolicy().equals("Replicated")) {
      outputString.append("EC:");
    }
    outputString.append(summary.getErasureCodingPolicy());
    outputString.append(" ");
  }
  outputString.append(src);
  out.println(outputString.toString());
}

The computation maps directly to the methods of the same names on the NN side. Two paths are possible:
- src.fs.getQuotaUsage(src.path): used when only the four quota-related columns QUOTA, REM_QUOTA, SPACE_QUOTA, REM_SPACE_QUOTA (physical space) are wanted
- src.fs.getContentSummary(src.path): additionally reports DIR_COUNT, FILE_COUNT, CONTENT_SIZE (logical space used)

Note that getQuotaUsage and getContentSummary take different paths on the NN:
- getQuotaUsage: reads the usage field of DirectoryWithQuotaFeature directly. This is a cached value held in memory; at NN startup it is computed by summing over all subtrees.
- getContentSummary: recomputed on every call
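
Both paths are also reachable from a plain client, which makes the difference easy to observe. A small demo, assuming a reachable cluster and the test directory used later in this post:

// QuotaReadPaths.java -- client-side demo of the two read paths
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.QuotaUsage;

public class QuotaReadPaths {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path dir = new Path("/test_quota/quota_1g"); // assumes this directory exists

    // Served from the cached usage in DirectoryWithQuotaFeature: cheap.
    QuotaUsage qu = fs.getQuotaUsage(dir);
    System.out.println("space quota    = " + qu.getSpaceQuota());
    System.out.println("space consumed = " + qu.getSpaceConsumed());

    // Recomputed by walking the subtree on every call; also carries the
    // logical CONTENT_SIZE plus file and directory counts.
    ContentSummary cs = fs.getContentSummary(dir);
    System.out.println("content size   = " + cs.getLength());
    System.out.println("files / dirs   = " + cs.getFileCount()
        + " / " + cs.getDirectoryCount());
  }
}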

 

2. How quota limits are enforced in Hadoop

 

When a quota limit is exceeded, the NameNode returns a DSQuotaExceededException, whose message is built as follows:

// DSQuotaExceededException
public String getMessage() {
  String msg = super.getMessage();
  if (msg == null) {
    return "The DiskSpace quota" + (pathName == null ? "" : " of " + pathName)
        + " is exceeded: quota = " + quota
        + " B = " + long2String(quota, "B", 2)
        + " but diskspace consumed = " + count
        + " B = " + long2String(count, "B", 2);
  } else {
    return msg;
  }
}

Searching for all call sites of this exception gives the following:
——1.DirectoryWithQuotaFeature

  • DirectoryWithQuotaFeature#verifyNamespaceQuota
  • DirectoryWithQuotaFeature#verifyStoragespaceQuota

——2.DFSOutputStream

  • DFSOutputStream#addBlock
    • dfsClient.namenode.addBlock
  • DFSOutputStream#newStreamForCreate
    • dfsClient.namenode.create

——3.DFSClient
DFSClient calls the corresponding NameNode methods directly:

  • DFSClient#createSymlink
    • namenode.createSymlink
  • DFSClient#callAppend
    • DFSOutputStream.newStreamForAppend
  • DFSClient#setReplication
    • namenode.setReplication
  • DFSClient#rename
    • namenode.rename
    • namenode.rename2
  • DFSClient#primitiveMkdir
    • namenode.mkdirs
  • DFSClient#setQuota
    • namenode.setQuota

From this, the NN performs quota checks for the following operations:

  • create
  • append
  • setReplication
  • rename
  • mkdirs
  • setQuota

The verification methods are:

  • DirectoryWithQuotaFeature#verifyNamespaceQuota
  • DirectoryWithQuotaFeature#verifyStoragespaceQuota

// DirectoryWithQuotaFeature
static boolean isViolated(final long quota, final long usage,
    final long delta) {
  // quota < 0 means "not set"; only a positive delta can violate
  return quota >= 0 && delta > 0 && usage > quota - delta;
}
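
Plugging in the numbers from the put and mv tests below makes the check concrete: with a 1 GB quota and 900 MB used, a put must reserve a full 128 MB block, so usage > quota - delta (900 > 896) and the write is rejected even though the file itself is only 100 MB, while a 100 MB mv fits.

// IsViolatedDemo.java -- the same check with the numbers from the tests below
public class IsViolatedDemo {
  static boolean isViolated(final long quota, final long usage, final long delta) {
    return quota >= 0 && delta > 0 && usage > quota - delta;
  }

  public static void main(String[] args) {
    long mb = 1024L * 1024;
    // put: delta is a full 128 MB block reservation -> violated (900 > 1024 - 128)
    System.out.println(isViolated(1024 * mb, 900 * mb, 128 * mb)); // true
    // mv: delta is the committed 100 MB file size -> fits (900 <= 1024 - 100)
    System.out.println(isViolated(1024 * mb, 900 * mb, 100 * mb)); // false
  }
}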

3. Changing SpaceQuota to logical space

1. Changes

There are two main changes:

  1. Operations such as create/mv/setrep compute a storage increment (delta); the check on this delta changes from physical space to logical space (see the sketch after this list).

  2. The initialization of the usage field in DirectoryWithQuotaFeature changes from physical space to logical space.
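
A minimal sketch of change 1, assuming a helper shaped like the storage delta computed per operation (the method names here are illustrative, not the actual Hadoop ones):

// DeltaSketch.java -- illustrative only; names are not the real Hadoop methods
class DeltaSketch {
  // Before: physical space, i.e. what verifyStoragespaceQuota effectively compared.
  static long physicalDelta(long fileLength, short replication) {
    return fileLength * replication;
  }

  // After: logical space. Replication drops out of the formula, which is also
  // why setrep is no longer limited by the quota (see the setrep test below).
  static long logicalDelta(long fileLength, short replication) {
    return fileLength;
  }
}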

2. Testing

The following tests exercise SpaceQuota after the switch to logical space.

# Create a directory and set a quota

[hadoop@cluster-host1 quota]$ hadoop fs -mkdir /test_quota/quota_1g

[hadoop@cluster-host1 quota]$ hadoop dfsadmin -setSpaceQuota 1g /test_quota/quota_1g

[hadoop@cluster-host1 quota]$ hadoop fs -count -q -v -h /test_quota/quota_1g

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA    DIR_COUNT   FILE_COUNT       CONTENT_SIZE PATHNAME

        none             inf             1 G             1 G            1            0                  0 /test_quota/quota_1g

# Create a 100 MB file

[hadoop@cluster-host1 quota]$ dd if=/dev/zero of=100m bs=1M count=100

# Upload the file

[hadoop@cluster-host1 quota]$ hadoop fs -put 100m /test_quota/quota_1g/100m_1

# View the quota in both -q and -u forms

[hadoop@cluster-host1 quota]$ hadoop fs -count -q -v -h /test_quota/quota_1g

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA    DIR_COUNT   FILE_COUNT       CONTENT_SIZE PATHNAME

        none             inf             1 G           924 M            1            1              100 M /test_quota/quota_1g

[hadoop@cluster-host1 quota]$ hadoop fs -count -u -v -h /test_quota/quota_1g

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA PATHNAME

        none             inf             1 G           924 M /test_quota/quota_1g

# Upload a second file

[hadoop@cluster-host1 quota]$ hadoop fs -put 100m /test_quota/quota_1g/100m_2

[hadoop@cluster-host1 quota]$ hadoop fs -count -q -v -h /test_quota/quota_1g

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA    DIR_COUNT   FILE_COUNT       CONTENT_SIZE PATHNAME

        none             inf             1 G           824 M            1            2              200 M /test_quota/quota_1g

[hadoop@cluster-host1 quota]$ hadoop fs -count -u -v -h /test_quota/quota_1g

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA PATHNAME

        none             inf             1 G           824 M /test_quota/quota_1g

# Upload several more files; intermediate commands omitted

[hadoop@cluster-host1 quota]$ hadoop fs -put 100m /test_quota/quota_1g/100m_8

[hadoop@cluster-host1 quota]$ hadoop fs -put 100m /test_quota/quota_1g/100m_9

 

put

# Uploading the 10th file exceeds the quota

[hadoop@cluster-host1 quota]$ hadoop fs -put 100m /test_quota/quota_1g/100m_10

put: The DiskSpace quota of /test_quota/quota_1g is exceeded: quota = 1073741824 B = 1 GB but diskspace consumed = 1077936128 B = 1.00 GB

# The quota still shows 124 MB of logical space remaining, yet uploading a 100 MB file fails, because a write must reserve at least one full block (128 MB in this test environment).

# 1077936128 B / 1024 / 1024 = 1028 MB, i.e. 900 MB used + one full 128 MB block reservation > the 1024 MB quota

# Check the quota at this point

[hadoop@cluster-host1 quota]$ hadoop fs -count -q -v -h /test_quota/quota_1g

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA    DIR_COUNT   FILE_COUNT       CONTENT_SIZE PATHNAME

        none             inf             1 G           124 M            1            9              900 M /test_quota/quota_1g

[hadoop@cluster-host1 quota]$ hadoop fs -count -u -v -h /test_quota/quota_1g

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA PATHNAME

        none             inf             1 G           124 M /test_quota/quota_1g

mv

# The first mv succeeds: only the committed file size is checked, with no full-block reservation

[hadoop@cluster-host1 quota]$ hadoop fs -mv /test/100m /test_quota/quota_1g/100m_10

[hadoop@cluster-host1 quota]$ hadoop fs -count -u -v -h /test_quota/quota_1g

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA PATHNAME

        none             inf             1 G            24 M /test_quota/quota_1g

[hadoop@cluster-host1 quota]$ hadoop fs -count -q -v -h /test_quota/quota_1g

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA    DIR_COUNT   FILE_COUNT       CONTENT_SIZE PATHNAME

        none             inf             1 G            24 M            1           10             1000 M /test_quota/quota_1g

# The second mv fails: it would need 1100 MB, but only 1024 MB is allowed

[hadoop@cluster-host1 quota]$ hadoop fs -mv /test/100m /test_quota/quota_1g/100m_11

mv: The DiskSpace quota of /test_quota/quota_1g is exceeded: quota = 1073741824 B = 1 GB but diskspace consumed = 1153433600 B = 1.07 GB

# 1153433600 B = 1100 MB (1000 MB already used + the incoming 100 MB file)

setrep

[hadoop@cluster-host1 quota]$ hadoop fs -setrep 10 /test_quota/quota_1g

Replication 10 set: /test_quota/quota_1g/100m_1

Replication 10 set: /test_quota/quota_1g/100m_10

Replication 10 set: /test_quota/quota_1g/100m_2

Replication 10 set: /test_quota/quota_1g/100m_3

Replication 10 set: /test_quota/quota_1g/100m_4

Replication 10 set: /test_quota/quota_1g/100m_5

Replication 10 set: /test_quota/quota_1g/100m_6

Replication 10 set: /test_quota/quota_1g/100m_7

Replication 10 set: /test_quota/quota_1g/100m_8

Replication 10 set: /test_quota/quota_1g/100m_9

[hadoop@cluster-host1 quota]$ hadoop fs -count -q -v -h /test_quota/quota_1g

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA    DIR_COUNT   FILE_COUNT       CONTENT_SIZE PATHNAME

        none             inf             1 G            24 M            1           10             1000 M /test_quota/quota_1g

[hadoop@cluster-host1 quota]$ hadoop fs -count -u -v -h /test_quota/quota_1g

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA PATHNAME

        none             inf             1 G            24 M /test_quota/quota_1g

# Increasing the replication factor is no longer limited by the quota: under logical accounting, replication only changes physical consumption. As expected.

du

[hadoop@cluster-host1 quota]$ hadoop fs -du -s -h /test_quota/quota_1g

1000 M  9.8 G  /test_quota/quota_1g

# 1000 M is the logical size; with replication 10 the physical footprint is about 9.8 G

rm

[hadoop@cluster-host1 quota]$ hadoop fs -rm /test_quota/quota_1g/100m_10

Deleted /test_quota/quota_1g/100m_10

[hadoop@cluster-host1 quota]$ hadoop fs -count -q -v -h /test_quota/quota_1g

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA    DIR_COUNT   FILE_COUNT       CONTENT_SIZE PATHNAME

        none             inf             1 G           124 M            1            9              900 M /test_quota/quota_1g

[hadoop@cluster-host1 quota]$ hadoop fs -count -u -v -h /test_quota/quota_1g

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA PATHNAME

        none             inf             1 G           124 M /test_quota/quota_1g

[hadoop@cluster-host1 quota]$ hadoop fs -rm /test_quota/quota_1g/100m_9

Deleted /test_quota/quota_1g/100m_9

[hadoop@cluster-host1 quota]$ hadoop fs -count -q -v -h /test_quota/quota_1g

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA    DIR_COUNT   FILE_COUNT       CONTENT_SIZE PATHNAME

        none             inf             1 G           224 M            1            8              800 M /test_quota/quota_1g

[hadoop@cluster-host1 quota]$ hadoop fs -count -u -v -h /test_quota/quota_1g

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA PATHNAME

        none             inf             1 G           224 M /test_quota/quota_1g

Note: after an NN restart, the usage inside the quota feature is recalculated. In testing of the previous version, the remaining space reported by hadoop fs -count -u after a restart was wrong (it had been recomputed as physical space), so this case must be retested.

Check after restart

[hadoop@cluster-host1 quota]$ hadoop fs -count -q -v -h /test_quota/quota_1g

2020-05-20 17:19:22,567 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA    DIR_COUNT   FILE_COUNT       CONTENT_SIZE PATHNAME

        none             inf             1 G           224 M            1            8              800 M /test_quota/quota_1g

[hadoop@cluster-host1 quota]$ hadoop fs -count -u -v -h /test_quota/quota_1g

2020-05-20 17:19:32,411 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA PATHNAME

        none             inf             1 G           224 M /test_quota/quota_1g

# Correct

cp

[hadoop@cluster-host1 quota]$ hadoop fs -cp /test/100m /test_quota/quota_1g/100m_9

[hadoop@cluster-host1 quota]$ hadoop fs -count -q -v -h /test_quota/quota_1g

2020-05-20 17:22:10,908 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA    DIR_COUNT   FILE_COUNT       CONTENT_SIZE PATHNAME

        none             inf             1 G           124 M            1            9              900 M /test_quota/quota_1g

[hadoop@cluster-host1 quota]$ hadoop fs -count -u -v -h /test_quota/quota_1g

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA PATHNAME

        none             inf             1 G           124 M /test_quota/quota_1g

[hadoop@cluster-host1 quota]$ hadoop fs -cp /test/100m /test_quota/quota_1g/100m_10

cp: The DiskSpace quota of /test_quota/quota_1g is exceeded: quota = 1073741824 B = 1 GB but diskspace consumed = 1077936128 B = 1.00 GB

Subdirectory test

[hadoop@cluster-host1 quota]$ hadoop fs -count -q -v -h /test_quota/quota_1g

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA    DIR_COUNT   FILE_COUNT       CONTENT_SIZE PATHNAME

        none             inf             1 G           224 M            1            8              800 M /test_quota/quota_1g

[hadoop@cluster-host1 quota]$ hadoop fs -put 100m /test_quota/quota_1g/a/100m_1

[hadoop@cluster-host1 quota]$ hadoop fs -count -q -v -h /test_quota/quota_1g

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA    DIR_COUNT   FILE_COUNT       CONTENT_SIZE PATHNAME

        none             inf             1 G           124 M            2            9              900 M /test_quota/quota_1g

[hadoop@cluster-host1 quota]$ hadoop fs -put 100m /test_quota/quota_1g/a/100m_2

put: The DiskSpace quota of /test_quota/quota_1g is exceeded: quota = 1073741824 B = 1 GB but diskspace consumed = 1077936128 B = 1.00 GB

EC test

[hadoop@cluster-host1 quota]$ hadoop dfsadmin -setSpaceQuota 1g /test_quota/quota_1g_2

# Set an EC policy on a directory

[hadoop@cluster-host1 quota]$ hdfs ec -setPolicy -path /test_quota/quota_1g_2/ec -policy RS-3-2-1024k

Set RS-3-2-1024k erasure coding policy on /test_quota/quota_1g_2/ec

[hadoop@cluster-host1 quota]$ hadoop fs -count -q -v -h /test_quota/quota_1g_2

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA    DIR_COUNT   FILE_COUNT       CONTENT_SIZE PATHNAME

        none             inf             1 G             1 G            2            0                  0 /test_quota/quota_1g_2

# Write an EC file

[hadoop@cluster-host1 quota]$ hadoop fs -put 200m /test_quota/quota_1g_2/ec/200m_1

[hadoop@cluster-host1 quota]$ hadoop fs -count -u -v -h /test_quota/quota_1g_2

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA PATHNAME

        none             inf             1 G           824 M /test_quota/quota_1g_2

[hadoop@cluster-host1 quota]$ hadoop fs -count -q -v -h /test_quota/quota_1g_2

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA    DIR_COUNT   FILE_COUNT       CONTENT_SIZE PATHNAME

        none             inf             1 G           824 M            2            1              200 M /test_quota/quota_1g_2

# Write a second EC file

[hadoop@cluster-host1 quota]$ hadoop fs -put 200m /test_quota/quota_1g_2/ec/200m_2

[hadoop@cluster-host1 quota]$ hadoop fs -count -q -v -h /test_quota/quota_1g_2

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA    DIR_COUNT   FILE_COUNT       CONTENT_SIZE PATHNAME

        none             inf             1 G           624 M            2            2              400 M /test_quota/quota_1g_2

[hadoop@cluster-host1 quota]$ hadoop fs -count -u -v -h /test_quota/quota_1g_2

       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA PATHNAME

        none             inf             1 G           624 M /test_quota/quota_1g_2
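
The 200 MB drop per file confirms logical accounting for EC as well. For contrast, a rough sketch of the physical-versus-logical arithmetic under RS-3-2, assuming full block groups (back-of-the-envelope only, not the exact NN striped-block accounting):

// EcSpaceSketch.java -- rough arithmetic for RS-3-2, not the exact NN accounting
public class EcSpaceSketch {
  public static void main(String[] args) {
    long mb = 1024L * 1024;
    long fileLength = 200 * mb;
    int dataUnits = 3, parityUnits = 2; // RS-3-2

    // Physical: data plus parity, roughly length * (d + p) / d.
    long physical = fileLength * (dataUnits + parityUnits) / dataUnits;
    // Logical: just the file length, matching the 200 MB quota drop above.
    long logical = fileLength;

    System.out.println("physical ~= " + physical / mb + " MB"); // ~333 MB
    System.out.println("logical   = " + logical / mb + " MB");  // 200 MB
  }
}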

Further testing

Replicated and EC files that are smaller than, equal to, and larger than one block (or block group) still need to be tested.

4. Potential issues

1. No fields in the fsimage need to change.

2. All existing quotas must be located and rewritten as logical-space values after the version upgrade; one hypothetical shape for this is sketched below.

3. The ratio between the namespace quota and the space quota.

4. Quota can also be enforced per storage type; this is out of scope for the internal version.
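
For issue 2, a hypothetical one-shot conversion: walk the directories that carry a space quota and rescale each quota by the subtree's observed physical-to-logical ratio. The rescaling policy and the way the directory list is fed in are assumptions, not part of the patch:

// QuotaConvertSketch.java -- hypothetical upgrade helper; the rescaling policy is an assumption
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

public class QuotaConvertSketch {
  public static void main(String[] args) throws Exception {
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(new Configuration());
    for (String arg : args) { // pass in the collected quota-carrying directories
      Path dir = new Path(arg);
      ContentSummary cs = dfs.getContentSummary(dir);
      long physicalQuota = cs.getSpaceQuota();
      if (physicalQuota < 0 || cs.getLength() == 0) {
        continue; // no space quota set, or empty directory: nothing to rescale
      }
      // Observed physical-to-logical ratio for this subtree (>= 1 for replicated data).
      double ratio = (double) cs.getSpaceConsumed() / cs.getLength();
      long logicalQuota = (long) (physicalQuota / ratio);
      dfs.setQuota(dir, HdfsConstants.QUOTA_DONT_SET, logicalQuota);
    }
  }
}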

 
