MaxCompute (ODPS) SQL error: Quota not enough (insufficient quota group resources)

This post covers the "Quota not enough" error raised by MaxCompute (ODPS) SQL. The cause was insufficient resources in the quota group: an insert ... select statement ran short of storage because project space usage had reached 97%. Two fixes are described: drop unused large tables and empty the recycle bin, or expand the project space.

ODPS SQL error: ODPS-0010000:System internal error - fuxi job failed, caused by: |Quota not enough

Cause: the quota group does not have enough resources.

Because the statement I was running was an insert ... select ... (it writes new data), I concluded that the shortfall was in storage. After asking the ops team to check the project space, we found that storage usage had already reached 97%.
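Before deleting anything, it is worth confirming which tables actually account for the bulk of the storage. A minimal sketch using standard MaxCompute client commands (my_big_table is a placeholder table name; the ops team's storage report can serve the same purpose):

    -- List the tables in the current project.
    show tables;

    -- Inspect a suspect table; the Size value in the output is its
    -- physical storage footprint in bytes.
    desc my_big_table;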

Solutions:

Option 1: Drop unused large tables from the project space, then empty the recycle bin (both commands are combined in the sketch after Option 2).

Step 1: Drop the tables in the project space that occupy the most storage.

     drop table table_name;

Step 2: After dropping, purge the table from the recycle bin.

     purge table table_name;

Option 2: Expand the project space (increase its storage quota).
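Putting the two commands from Option 1 together, a minimal cleanup sketch (my_big_table is a placeholder; repeat for each table you decide to remove):

    -- Drop the large, unused table.
    drop table if exists my_big_table;

    -- Purge its recycle-bin copy so the storage is actually released.
    purge table my_big_table;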

This article is from 博客园 (cnblogs), author: 业余砖家. Please credit the original post when reposting: MaxCompute-ODPS SQL报错:Quota not enough,配额组资源不足 - 业余砖家 - 博客园
