canOpenURL: failed for URL: "xx" - error:"This app is not allowed to query for scheme xx"

This article explains how to resolve the security warnings that appear in the console after upgrading to iOS 9, covering how to disable Bitcode, configure LSApplicationQueriesSchemes, and adjust the NSAppTransportSecurity settings.

Console output


The figure above shows the console output after launching an app built with Xcode 7 on iOS 9.

This did not happen with Xcode 6.4 + iOS 8. The reason: to enforce stronger data-access security, iOS 9 by default requires every HTTP request issued through NSURLConnection, CFURL, or NSURLSession to go over HTTPS, and when an app is compiled against the iOS 9 SDK, all of these requests must use the TLS 1.2 protocol.

Here are the solutions:

① If your console output reads -canOpenURL: failed for URL: "kindle://home" - error: "This app is not allowed to query for scheme kindle"


Set Enable Bitcode to NO


In your target's Build Settings, find Enable Bitcode and set it to NO. This will not necessarily stop the console from printing this message, but it will keep your app running normally.

② If your console output reads xxxx - error: "This app is not allowed to query for scheme xxxx"
(In my case, because the app integrates sharing to QQ, WeChat, and Weibo, the xxxx part showed several schemes such as mqq, wechat, and sinaweibosso.)


Info.plist


In Info.plist, create an entry of type Array named LSApplicationQueriesSchemes and add each scheme you saw in the xxxx part, one by one, until the console no longer prints any related output.
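
For reference, the resulting Info.plist entry might look like the sketch below; the three schemes are the ones mentioned above, and the exact list depends on which share SDKs your app integrates:

<key>LSApplicationQueriesSchemes</key>
<array>
    <!-- schemes observed in the console output; add whatever your app actually queries -->
    <string>mqq</string>
    <string>wechat</string>
    <string>sinaweibosso</string>
</array>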

③ For other console errors triggered by accessing HTTP URLs through a WebView


Set ATS in Info.plist
<key>NSAppTransportSecurity</key>
<dict>
    <!-- Include to allow all connections (DANGER) -->
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>

As mentioned earlier, Apple wants us to use the relatively secure HTTPS. So when you do need plain HTTP, although Apple discourages it, you can declare the keys shown above in Info.plist to fall back to insecure network requests; this lets the app reach specific HTTP addresses, or even arbitrary HTTP endpoints.
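
Rather than turning ATS off globally with NSAllowsArbitraryLoads, a narrower and generally safer alternative is to whitelist only the hosts you must reach over HTTP via NSExceptionDomains. The sketch below is not from the original article; example.com is a hypothetical placeholder for your actual server:

<key>NSAppTransportSecurity</key>
<dict>
    <key>NSExceptionDomains</key>
    <dict>
        <!-- example.com is a placeholder; replace it with the host you must reach over HTTP -->
        <key>example.com</key>
        <dict>
            <key>NSExceptionAllowsInsecureHTTPLoads</key>
            <true/>
            <key>NSIncludesSubdomains</key>
            <true/>
        </dict>
    </dict>
</dict>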

2025-07-02 11:15:25,551 INFO - task run command: sudo -u hadoop -E bash /tmp/dolphinscheduler/exec/process/hadoop/16836554651104/18167664743392_10/32581/56672/32581_56672.command 2025-07-02 11:15:25,552 INFO - process start, process id is: 1190 2025-07-02 11:15:26,553 INFO - -> /usr/lib/dolphinscheduler/worker-server/conf/dolphinscheduler_env.sh: line 23: export: `zookeeper.quorum=': not a valid identifier /usr/lib/dolphinscheduler/worker-server/conf/dolphinscheduler_env.sh: line 23: export: `dominos-usdp-fun01:2181,dominos-usdp-fun02:2181,dominos-usdp-fun03:2181': not a valid identifier 2025-07-02 11:15:31,554 INFO - -> SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/usr/lib/hive/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory] 2025-07-02 11:15:31,226 INFO [main] conf.HiveConf (HiveConf.java:findConfigFile(187)) - Found configuration file file:/etc/hive/conf/hive-site.xml 2025-07-02 11:15:32,554 INFO - -> 2025-07-02 11:15:32,428 main ERROR Cannot access RandomAccessFile java.io.FileNotFoundException: /data/log/hive/hive.log (Permission denied) java.io.FileNotFoundException: /data/log/hive/hive.log (Permission denied) at java.io.RandomAccessFile.open0(Native Method) at java.io.RandomAccessFile.open(RandomAccessFile.java:316) at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243) at java.io.RandomAccessFile.<init>(RandomAccessFile.java:124) at org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFileManager$RollingRandomAccessFileManagerFactory.createManager(RollingRandomAccessFileManager.java:232) at org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFileManager$RollingRandomAccessFileManagerFactory.createManager(RollingRandomAccessFileManager.java:204) at org.apache.logging.log4j.core.appender.AbstractManager.getManager(AbstractManager.java:114) at org.apache.logging.log4j.core.appender.OutputStreamManager.getManager(OutputStreamManager.java:100) at org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFileManager.getRollingRandomAccessFileManager(RollingRandomAccessFileManager.java:107) at org.apache.logging.log4j.core.appender.RollingRandomAccessFileAppender$Builder.build(RollingRandomAccessFileAppender.java:132) at org.apache.logging.log4j.core.appender.RollingRandomAccessFileAppender$Builder.build(RollingRandomAccessFileAppender.java:53) at org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:122) at org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:1120) at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:1045) at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:1037) at org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:651) at org.apache.logging.log4j.core.config.AbstractConfiguration.initialize(AbstractConfiguration.java:247) at org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:293) at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:626) at 
org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:302) at org.apache.logging.log4j.core.async.AsyncLoggerContext.start(AsyncLoggerContext.java:87) at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:242) at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:159) at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:131) at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:101) at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:210) at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jDefault(LogUtils.java:173) at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jCommon(LogUtils.java:106) at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jCommon(LogUtils.java:98) at org.apache.hadoop.hive.common.LogUtils.initHiveLog4j(LogUtils.java:81) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:699) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.RunJar.run(RunJar.java:323) at org.apache.hadoop.util.RunJar.main(RunJar.java:236) 2025-07-02 11:15:32,430 main ERROR Could not create plugin of type class org.apache.logging.log4j.core.appender.RollingRandomAccessFileAppender for element RollingRandomAccessFile: java.lang.IllegalStateException: ManagerFactory [org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFileManager$RollingRandomAccessFileManagerFactory@5ef6ae06] unable to create manager for [/data/log/hive/hive.log] with data [org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFileManager$FactoryData@55dfebeb] java.lang.IllegalStateException: ManagerFactory [org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFileManager$RollingRandomAccessFileManagerFactory@5ef6ae06] unable to create manager for [/data/log/hive/hive.log] with data [org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFileManager$FactoryData@55dfebeb] at org.apache.logging.log4j.core.appender.AbstractManager.getManager(AbstractManager.java:116) at org.apache.logging.log4j.core.appender.OutputStreamManager.getManager(OutputStreamManager.java:100) at org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFileManager.getRollingRandomAccessFileManager(RollingRandomAccessFileManager.java:107) at org.apache.logging.log4j.core.appender.RollingRandomAccessFileAppender$Builder.build(RollingRandomAccessFileAppender.java:132) at org.apache.logging.log4j.core.appender.RollingRandomAccessFileAppender$Builder.build(RollingRandomAccessFileAppender.java:53) at org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:122) at org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:1120) at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:1045) at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:1037) at org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:651) at 
org.apache.logging.log4j.core.config.AbstractConfiguration.initialize(AbstractConfiguration.java:247) at org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:293) at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:626) at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:302) at org.apache.logging.log4j.core.async.AsyncLoggerContext.start(AsyncLoggerContext.java:87) at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:242) at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:159) at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:131) at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:101) at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:210) at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jDefault(LogUtils.java:173) at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jCommon(LogUtils.java:106) at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jCommon(LogUtils.java:98) at org.apache.hadoop.hive.common.LogUtils.initHiveLog4j(LogUtils.java:81) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:699) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.RunJar.run(RunJar.java:323) at org.apache.hadoop.util.RunJar.main(RunJar.java:236) 2025-07-02 11:15:32,431 main ERROR Unable to invoke factory method in class org.apache.logging.log4j.core.appender.RollingRandomAccessFileAppender for element RollingRandomAccessFile: java.lang.IllegalStateException: No factory method found for class org.apache.logging.log4j.core.appender.RollingRandomAccessFileAppender java.lang.IllegalStateException: No factory method found for class org.apache.logging.log4j.core.appender.RollingRandomAccessFileAppender at org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.findFactoryMethod(PluginBuilder.java:236) at org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:134) at org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:1120) at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:1045) at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:1037) at org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:651) at org.apache.logging.log4j.core.config.AbstractConfiguration.initialize(AbstractConfiguration.java:247) at org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:293) at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:626) at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:302) at org.apache.logging.log4j.core.async.AsyncLoggerContext.start(AsyncLoggerContext.java:87) at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:242) at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:159) at 
org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:131) at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:101) at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:210) at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jDefault(LogUtils.java:173) at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jCommon(LogUtils.java:106) at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jCommon(LogUtils.java:98) at org.apache.hadoop.hive.common.LogUtils.initHiveLog4j(LogUtils.java:81) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:699) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.RunJar.run(RunJar.java:323) at org.apache.hadoop.util.RunJar.main(RunJar.java:236) 2025-07-02 11:15:32,432 main ERROR Null object returned for RollingRandomAccessFile in Appenders. 2025-07-02 11:15:32,432 main ERROR Unable to locate appender "DRFA" for logger config "root" Hive Session ID = 63fc22ae-87a3-4d13-b59e-6ea5a99a9941 2025-07-02 11:15:32,528 INFO [main] SessionState (SessionState.java:printInfo(1227)) - Hive Session ID = 63fc22ae-87a3-4d13-b59e-6ea5a99a9941 2025-07-02 11:15:33,555 INFO - -> Logging initialized using configuration in file:/etc/hive/conf/hive-log4j2.properties Async: true 2025-07-02 11:15:32,577 INFO [main] SessionState (SessionState.java:printInfo(1227)) - Logging initialized using configuration in file:/etc/hive/conf/hive-log4j2.properties Async: true 2025-07-02 11:15:34,556 INFO - -> 2025-07-02 11:15:33,630 INFO [main] session.SessionState (SessionState.java:createPath(790)) - Created HDFS directory: /tmp/hive/hadoop/63fc22ae-87a3-4d13-b59e-6ea5a99a9941 2025-07-02 11:15:33,652 INFO [main] session.SessionState (SessionState.java:createPath(790)) - Created local directory: /tmp/hadoop/63fc22ae-87a3-4d13-b59e-6ea5a99a9941 2025-07-02 11:15:33,659 INFO [main] session.SessionState (SessionState.java:createPath(790)) - Created HDFS directory: /tmp/hive/hadoop/63fc22ae-87a3-4d13-b59e-6ea5a99a9941/_tmp_space.db 2025-07-02 11:15:33,691 INFO [main] tez.TezSessionState (TezSessionState.java:openInternal(277)) - User of session id 63fc22ae-87a3-4d13-b59e-6ea5a99a9941 is hadoop 2025-07-02 11:15:33,714 INFO [main] tez.DagUtils (DagUtils.java:localizeResource(1159)) - Localizing resource because it does not exist: file:/usr/lib/hive/auxlib/hudi-hadoop-mr-bundle-0.13.0.jar to dest: hdfs://dominos-usdp-v3-fun/tmp/hive/hadoop/_tez_session_dir/63fc22ae-87a3-4d13-b59e-6ea5a99a9941-resources/hudi-hadoop-mr-bundle-0.13.0.jar 2025-07-02 11:15:34,365 INFO [main] tez.DagUtils (DagUtils.java:createLocalResource(842)) - Resource modification time: 1751426134293 for hdfs://dominos-usdp-v3-fun/tmp/hive/hadoop/_tez_session_dir/63fc22ae-87a3-4d13-b59e-6ea5a99a9941-resources/hudi-hadoop-mr-bundle-0.13.0.jar 2025-07-02 11:15:34,384 INFO [main] tez.DagUtils (DagUtils.java:localizeResource(1159)) - Localizing resource because it does not exist: file:/usr/lib/hive/auxlib/hudi-hive-sync-bundle-0.13.0.jar to dest: hdfs://dominos-usdp-v3-fun/tmp/hive/hadoop/_tez_session_dir/63fc22ae-87a3-4d13-b59e-6ea5a99a9941-resources/hudi-hive-sync-bundle-0.13.0.jar 2025-07-02 11:15:35,557 INFO - -> 2025-07-02 
11:15:34,776 INFO [main] tez.DagUtils (DagUtils.java:createLocalResource(842)) - Resource modification time: 1751426134737 for hdfs://dominos-usdp-v3-fun/tmp/hive/hadoop/_tez_session_dir/63fc22ae-87a3-4d13-b59e-6ea5a99a9941-resources/hudi-hive-sync-bundle-0.13.0.jar 2025-07-02 11:15:34,851 INFO [main] tez.TezSessionState (TezSessionState.java:openInternal(288)) - Created new resources: null 2025-07-02 11:15:34,854 INFO [main] tez.DagUtils (DagUtils.java:getHiveJarDirectory(1058)) - Jar dir is null / directory doesn't exist. Choosing HIVE_INSTALL_DIR - /user/hadoop/.hiveJars 2025-07-02 11:15:35,179 INFO [main] tez.TezSessionState (TezSessionState.java:getSha(854)) - Computed sha: 3420a6126cfea97266fe35b708da5d5f95a5b158cad390dc4124081a39cf906f for file: file:/usr/lib/hive/lib/hive-exec-3.1.3.jar of length: 40.36MB in 321 ms 2025-07-02 11:15:35,191 INFO [main] tez.DagUtils (DagUtils.java:createLocalResource(842)) - Resource modification time: 1715146950410 for hdfs://dominos-usdp-v3-fun/user/hadoop/.hiveJars/hive-exec-3.1.3-3420a6126cfea97266fe35b708da5d5f95a5b158cad390dc4124081a39cf906f.jar 2025-07-02 11:15:35,240 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.task.io.sort.mb, mr initial value=100, tez(original):tez.runtime.io.sort.mb=null, tez(final):tez.runtime.io.sort.mb=100 2025-07-02 11:15:35,241 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.reduce.shuffle.read.timeout, mr initial value=180000, tez(original):tez.runtime.shuffle.read.timeout=null, tez(final):tez.runtime.shuffle.read.timeout=180000 2025-07-02 11:15:35,241 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.job.speculative.minimum-allowed-tasks, mr initial value=10, tez(original):tez.am.minimum.allowed.speculative.tasks=null, tez(final):tez.am.minimum.allowed.speculative.tasks=10 2025-07-02 11:15:35,241 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.ifile.readahead.bytes, mr initial value=4194304, tez(original):tez.runtime.ifile.readahead.bytes=null, tez(final):tez.runtime.ifile.readahead.bytes=4194304 2025-07-02 11:15:35,241 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.shuffle.ssl.enabled, mr initial value=false, tez(original):tez.runtime.shuffle.ssl.enable=null, tez(final):tez.runtime.shuffle.ssl.enable=false 2025-07-02 11:15:35,241 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.map.sort.spill.percent, mr initial value=0.80, tez(original):tez.runtime.sort.spill.percent=null, tez(final):tez.runtime.sort.spill.percent=0.80 2025-07-02 11:15:35,241 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.job.speculative.speculative-cap-running-tasks, mr initial value=0.1, tez(original):tez.am.proportion.running.tasks.speculatable=null, tez(final):tez.am.proportion.running.tasks.speculatable=0.1 2025-07-02 11:15:35,241 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.job.speculative.speculative-cap-total-tasks, mr initial value=0.01, tez(original):tez.am.proportion.total.tasks.speculatable=null, tez(final):tez.am.proportion.total.tasks.speculatable=0.01 2025-07-02 11:15:35,241 INFO [main] tez.TezSessionState 
(TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.ifile.readahead, mr initial value=true, tez(original):tez.runtime.ifile.readahead=null, tez(final):tez.runtime.ifile.readahead=true 2025-07-02 11:15:35,241 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.reduce.shuffle.merge.percent, mr initial value=0.66, tez(original):tez.runtime.shuffle.merge.percent=null, tez(final):tez.runtime.shuffle.merge.percent=0.66 2025-07-02 11:15:35,241 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.reduce.shuffle.parallelcopies, mr initial value=50, tez(original):tez.runtime.shuffle.parallel.copies=null, tez(final):tez.runtime.shuffle.parallel.copies=50 2025-07-02 11:15:35,241 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.job.speculative.retry-after-speculate, mr initial value=15000, tez(original):tez.am.soonest.retry.after.speculate=null, tez(final):tez.am.soonest.retry.after.speculate=15000 2025-07-02 11:15:35,241 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.job.reduce.slowstart.completedmaps, mr initial value=0.95, tez(original):tez.shuffle-vertex-manager.min-src-fraction=null, tez(final):tez.shuffle-vertex-manager.min-src-fraction=0.95 2025-07-02 11:15:35,241 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.reduce.shuffle.memory.limit.percent, mr initial value=0.25, tez(original):tez.runtime.shuffle.memory.limit.percent=null, tez(final):tez.runtime.shuffle.memory.limit.percent=0.25 2025-07-02 11:15:35,241 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.job.speculative.retry-after-no-speculate, mr initial value=1000, tez(original):tez.am.soonest.retry.after.no.speculate=null, tez(final):tez.am.soonest.retry.after.no.speculate=1000 2025-07-02 11:15:35,241 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.task.io.sort.factor, mr initial value=100, tez(original):tez.runtime.io.sort.factor=null, tez(final):tez.runtime.io.sort.factor=100 2025-07-02 11:15:35,241 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.map.output.compress, mr initial value=false, tez(original):tez.runtime.compress=null, tez(final):tez.runtime.compress=false 2025-07-02 11:15:35,242 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.reduce.shuffle.connect.timeout, mr initial value=180000, tez(original):tez.runtime.shuffle.connect.timeout=null, tez(final):tez.runtime.shuffle.connect.timeout=180000 2025-07-02 11:15:35,242 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.reduce.input.buffer.percent, mr initial value=0.0, tez(original):tez.runtime.task.input.post-merge.buffer.percent=null, tez(final):tez.runtime.task.input.post-merge.buffer.percent=0.0 2025-07-02 11:15:35,242 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.map.output.compress.codec, mr initial value=org.apache.hadoop.io.compress.DefaultCodec, tez(original):tez.runtime.compress.codec=null, 
tez(final):tez.runtime.compress.codec=org.apache.hadoop.io.compress.DefaultCodec 2025-07-02 11:15:35,242 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.task.merge.progress.records, mr initial value=10000, tez(original):tez.runtime.merge.progress.records=null, tez(final):tez.runtime.merge.progress.records=10000 2025-07-02 11:15:35,242 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):map.sort.class, mr initial value=org.apache.hadoop.util.QuickSort, tez(original):tez.runtime.internal.sorter.class=null, tez(final):tez.runtime.internal.sorter.class=org.apache.hadoop.util.QuickSort 2025-07-02 11:15:35,242 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.reduce.shuffle.input.buffer.percent, mr initial value=0.70, tez(original):tez.runtime.shuffle.fetch.buffer.percent=null, tez(final):tez.runtime.shuffle.fetch.buffer.percent=0.70 2025-07-02 11:15:35,242 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.job.counters.max, mr initial value=120, tez(original):tez.counters.max=null, tez(final):tez.counters.max=120 2025-07-02 11:15:35,242 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.job.hdfs-servers, mr initial value=hdfs://dominos-usdp-v3-fun, tez(original):tez.job.fs-servers=null, tez(final):tez.job.fs-servers=hdfs://dominos-usdp-v3-fun 2025-07-02 11:15:35,242 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.job.queuename, mr initial value=default, tez(original):tez.queue.name=default, tez(final):tez.queue.name=default 2025-07-02 11:15:35,242 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.job.maxtaskfailures.per.tracker, mr initial value=3, tez(original):tez.am.maxtaskfailures.per.node=null, tez(final):tez.am.maxtaskfailures.per.node=3 2025-07-02 11:15:35,242 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.task.timeout, mr initial value=600000, tez(original):tez.task.timeout-ms=null, tez(final):tez.task.timeout-ms=600000 2025-07-02 11:15:35,242 INFO [main] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):yarn.app.mapreduce.am.job.task.listener.thread-count, mr initial value=30, tez(original):tez.am.task.listener.thread-count=null, tez(final):tez.am.task.listener.thread-count=30 2025-07-02 11:15:35,261 INFO [main] sqlstd.SQLStdHiveAccessController (SQLStdHiveAccessController.java:<init>(96)) - Created SQLStdHiveAccessController for session context : HiveAuthzSessionContext [sessionString=63fc22ae-87a3-4d13-b59e-6ea5a99a9941, clientType=HIVECLI] 2025-07-02 11:15:35,263 WARN [main] session.SessionState (SessionState.java:setAuthorizerV2Config(950)) - METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory. 
2025-07-02 11:15:35,328 INFO [main] metastore.HiveMetaStoreClient (HiveMetaStoreClient.java:open(441)) - Trying to connect to metastore with URI thrift://dc3-dominos-usdp-fun01:9083 2025-07-02 11:15:35,350 INFO [main] metastore.HiveMetaStoreClient (HiveMetaStoreClient.java:open(517)) - Opened a connection to metastore, current connections: 1 2025-07-02 11:15:35,358 INFO [main] metastore.HiveMetaStoreClient (HiveMetaStoreClient.java:open(570)) - Connected to metastore. 2025-07-02 11:15:35,358 INFO [main] metastore.RetryingMetaStoreClient (RetryingMetaStoreClient.java:<init>(97)) - RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=hadoop (auth:SIMPLE) retries=1 delay=1 lifetime=0 2025-07-02 11:15:36,558 INFO - -> 2025-07-02 11:15:35,864 INFO [main] counters.Limits (Limits.java:init(61)) - Counter limits initialized with parameters: GROUP_NAME_MAX=256, MAX_GROUPS=500, COUNTER_NAME_MAX=64, MAX_COUNTERS=1200 2025-07-02 11:15:35,864 INFO [main] counters.Limits (Limits.java:init(61)) - Counter limits initialized with parameters: GROUP_NAME_MAX=256, MAX_GROUPS=500, COUNTER_NAME_MAX=64, MAX_COUNTERS=120 2025-07-02 11:15:35,864 INFO [main] client.TezClient (TezClient.java:<init>(210)) - Tez Client Version: [ component=tez-api, version=0.10.2, revision=22f46fe39a7cf99b24275304e99867b9135caba2, SCM-URL=scm:git:https://gitbox.apache.org/repos/asf/tez.git, buildTime=2023-02-08T02:24:56Z, buildUser=jenkins, buildJavaVersion=1.8.0_362 ] 2025-07-02 11:15:35,864 INFO [main] tez.TezSessionState (TezSessionState.java:openInternal(363)) - Opening new Tez Session (id: 63fc22ae-87a3-4d13-b59e-6ea5a99a9941, scratch dir: hdfs://dominos-usdp-v3-fun/tmp/hive/hadoop/_tez_session_dir/63fc22ae-87a3-4d13-b59e-6ea5a99a9941) 2025-07-02 11:15:35,884 INFO [main] conf.HiveConf (HiveConf.java:getLogIdVar(5037)) - Using the default value passed in for log id: 63fc22ae-87a3-4d13-b59e-6ea5a99a9941 2025-07-02 11:15:35,884 INFO [main] session.SessionState (SessionState.java:updateThreadName(441)) - Updating thread name to 63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main 2025-07-02 11:15:35,954 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] metastore.HiveMetaStoreClient (HiveMetaStoreClient.java:isCompatibleWith(346)) - Mestastore configuration metastore.filter.hook changed from org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook to org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl 2025-07-02 11:15:35,958 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] metastore.HiveMetaStoreClient (HiveMetaStoreClient.java:close(600)) - Closed a connection to metastore, current connections: 0 2025-07-02 11:15:36,152 INFO [Tez session start thread] impl.TimelineReaderClientImpl (TimelineReaderClientImpl.java:serviceInit(97)) - Initialized TimelineReader URI=http://dc3-dominos-usdp-fun02:8198/ws/v2/timeline/, clusterId=dominos-usdp-v3-fun 2025-07-02 11:15:36,342 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] metastore.HiveMetaStoreClient (HiveMetaStoreClient.java:open(441)) - Trying to connect to metastore with URI thrift://dc3-dominos-usdp-fun01:9083 2025-07-02 11:15:36,344 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] metastore.HiveMetaStoreClient (HiveMetaStoreClient.java:open(517)) - Opened a connection to metastore, current connections: 1 2025-07-02 11:15:36,345 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] metastore.HiveMetaStoreClient (HiveMetaStoreClient.java:open(570)) - Connected to metastore. 
2025-07-02 11:15:36,345 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] metastore.RetryingMetaStoreClient (RetryingMetaStoreClient.java:<init>(97)) - RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=hadoop (auth:SIMPLE) retries=1 delay=1 lifetime=0 Hive Session ID = 4caadf81-0f27-469e-8de0-87e177d910e3 2025-07-02 11:15:36,366 INFO [pool-7-thread-1] SessionState (SessionState.java:printInfo(1227)) - Hive Session ID = 4caadf81-0f27-469e-8de0-87e177d910e3 2025-07-02 11:15:36,385 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] conf.HiveConf (HiveConf.java:getLogIdVar(5037)) - Using the default value passed in for log id: 63fc22ae-87a3-4d13-b59e-6ea5a99a9941 2025-07-02 11:15:36,386 INFO [pool-7-thread-1] session.SessionState (SessionState.java:createPath(790)) - Created HDFS directory: /tmp/hive/hadoop/4caadf81-0f27-469e-8de0-87e177d910e3 2025-07-02 11:15:36,412 INFO [pool-7-thread-1] session.SessionState (SessionState.java:createPath(790)) - Created local directory: /tmp/hadoop/4caadf81-0f27-469e-8de0-87e177d910e3 2025-07-02 11:15:36,420 INFO [pool-7-thread-1] session.SessionState (SessionState.java:createPath(790)) - Created HDFS directory: /tmp/hive/hadoop/4caadf81-0f27-469e-8de0-87e177d910e3/_tmp_space.db 2025-07-02 11:15:36,420 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:openInternal(277)) - User of session id 4caadf81-0f27-469e-8de0-87e177d910e3 is hadoop 2025-07-02 11:15:36,441 INFO [pool-7-thread-1] tez.DagUtils (DagUtils.java:localizeResource(1159)) - Localizing resource because it does not exist: file:/usr/lib/hive/auxlib/hudi-hadoop-mr-bundle-0.13.0.jar to dest: hdfs://dominos-usdp-v3-fun/tmp/hive/hadoop/_tez_session_dir/4caadf81-0f27-469e-8de0-87e177d910e3-resources/hudi-hadoop-mr-bundle-0.13.0.jar 2025-07-02 11:15:36,455 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] ql.Driver (Driver.java:compile(554)) - Compiling command(queryId=hadoop_20250702111536_a8fe6b15-57e0-4288-895c-6d4f8fd58503): ALTER TABLE ddp_dmo_dwd.DWD_OrdCusSrvDetail DROP IF EXISTS PARTITION(DT='') 2025-07-02 11:15:36,484 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] hooks.ATSHook (ATSHook.java:<init>(146)) - Created ATS Hook 2025-07-02 11:15:36,484 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] hooks.ATSHook (ATSHook.java:<init>(146)) - Created ATS Hook 2025-07-02 11:15:36,484 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] hooks.ATSHook (ATSHook.java:<init>(146)) - Created ATS Hook 2025-07-02 11:15:37,561 INFO - -> 2025-07-02 11:15:36,669 INFO [Tez session start thread] client.AHSProxy (AHSProxy.java:createAHSProxy(43)) - Connecting to Application History server at dc3-dominos-usdp-fun01/10.30.10.60:10200 2025-07-02 11:15:36,686 INFO [Tez session start thread] client.TezClient (TezClient.java:start(388)) - Session mode. Starting session. 
2025-07-02 11:15:36,727 INFO [Tez session start thread] client.ConfiguredRMFailoverProxyProvider (ConfiguredRMFailoverProxyProvider.java:performFailover(100)) - Failing over to rm-dc3-dominos-usdp-fun01 2025-07-02 11:15:36,809 INFO [Tez session start thread] client.TezClientUtils (TezClientUtils.java:setupTezJarsLocalResources(180)) - Using tez.lib.uris value from configuration: hdfs:////dominos-usdp-v3-fun/tez/tez.tar.gz 2025-07-02 11:15:36,809 INFO [Tez session start thread] client.TezClientUtils (TezClientUtils.java:setupTezJarsLocalResources(182)) - Using tez.lib.uris.classpath value from configuration: null 2025-07-02 11:15:36,880 INFO [Tez session start thread] client.TezClient (TezCommonUtils.java:createTezSystemStagingPath(123)) - Tez system stage directory hdfs://dominos-usdp-v3-fun/tmp/hive/hadoop/_tez_session_dir/63fc22ae-87a3-4d13-b59e-6ea5a99a9941/.tez/application_1740624029612_5078 doesn't exist and is created 2025-07-02 11:15:36,913 INFO [Tez session start thread] conf.Configuration (Configuration.java:getConfResourceAsInputStream(2845)) - resource-types.xml not found 2025-07-02 11:15:36,914 INFO [Tez session start thread] resource.ResourceUtils (ResourceUtils.java:addResourcesFileToConf(476)) - Unable to find 'resource-types.xml'. 2025-07-02 11:15:36,948 INFO [Tez session start thread] Configuration.deprecation (Configuration.java:logDeprecation(1441)) - yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled 2025-07-02 11:15:37,000 INFO [pool-7-thread-1] tez.DagUtils (DagUtils.java:createLocalResource(842)) - Resource modification time: 1751426136955 for hdfs://dominos-usdp-v3-fun/tmp/hive/hadoop/_tez_session_dir/4caadf81-0f27-469e-8de0-87e177d910e3-resources/hudi-hadoop-mr-bundle-0.13.0.jar 2025-07-02 11:15:37,005 INFO [pool-7-thread-1] tez.DagUtils (DagUtils.java:localizeResource(1159)) - Localizing resource because it does not exist: file:/usr/lib/hive/auxlib/hudi-hive-sync-bundle-0.13.0.jar to dest: hdfs://dominos-usdp-v3-fun/tmp/hive/hadoop/_tez_session_dir/4caadf81-0f27-469e-8de0-87e177d910e3-resources/hudi-hive-sync-bundle-0.13.0.jar 2025-07-02 11:15:38,562 INFO - -> 2025-07-02 11:15:37,601 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] metastore.HiveMetaStoreClient (HiveMetaStoreClient.java:isCompatibleWith(346)) - Mestastore configuration metastore.filter.hook changed from org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl to org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook 2025-07-02 11:15:37,602 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] metastore.HiveMetaStoreClient (HiveMetaStoreClient.java:close(600)) - Closed a connection to metastore, current connections: 0 2025-07-02 11:15:37,603 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] ql.Driver (Driver.java:checkConcurrency(285)) - Concurrency mode is disabled, not creating a lock manager 2025-07-02 11:15:37,610 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] metastore.HiveMetaStoreClient (HiveMetaStoreClient.java:open(441)) - Trying to connect to metastore with URI thrift://dc3-dominos-usdp-fun02:9083 2025-07-02 11:15:37,615 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] metastore.HiveMetaStoreClient (HiveMetaStoreClient.java:open(517)) - Opened a connection to metastore, current connections: 1 2025-07-02 11:15:37,620 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] metastore.HiveMetaStoreClient (HiveMetaStoreClient.java:open(570)) - Connected to metastore. 
2025-07-02 11:15:37,621 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] metastore.RetryingMetaStoreClient (RetryingMetaStoreClient.java:<init>(97)) - RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=hadoop (auth:SIMPLE) retries=1 delay=1 lifetime=0 2025-07-02 11:15:37,676 INFO [Tez session start thread] impl.YarnClientImpl (YarnClientImpl.java:submitApplication(338)) - Submitted application application_1740624029612_5078 2025-07-02 11:15:37,685 INFO [Tez session start thread] client.TezClient (TezClient.java:start(404)) - The url to track the Tez Session: http://dc3-dominos-usdp-fun01:8088/proxy/application_1740624029612_5078/ 2025-07-02 11:15:38,143 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] ql.Driver (Driver.java:compile(666)) - Semantic Analysis Completed (retrial = false) 2025-07-02 11:15:38,145 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] ql.Driver (Driver.java:getSchema(374)) - Returning Hive schema: Schema(fieldSchemas:null, properties:null) 2025-07-02 11:15:38,149 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] ql.Driver (Driver.java:compile(781)) - Completed compiling command(queryId=hadoop_20250702111536_a8fe6b15-57e0-4288-895c-6d4f8fd58503); Time taken: 1.723 seconds 2025-07-02 11:15:38,150 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] reexec.ReExecDriver (ReExecDriver.java:run(156)) - Execution #1 of query 2025-07-02 11:15:38,150 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] ql.Driver (Driver.java:checkConcurrency(285)) - Concurrency mode is disabled, not creating a lock manager 2025-07-02 11:15:38,150 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] ql.Driver (Driver.java:execute(2255)) - Executing command(queryId=hadoop_20250702111536_a8fe6b15-57e0-4288-895c-6d4f8fd58503): ALTER TABLE ddp_dmo_dwd.DWD_OrdCusSrvDetail DROP IF EXISTS PARTITION(DT='') 2025-07-02 11:15:38,153 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] hooks.ATSHook (ATSHook.java:setupAtsExecutor(115)) - Creating ATS executor queue with capacity 64 2025-07-02 11:15:38,177 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] impl.TimelineClientImpl (TimelineClientImpl.java:serviceInit(130)) - Timeline service address: dc3-dominos-usdp-fun01:8188 2025-07-02 11:15:38,295 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] ql.Driver (Driver.java:launchTask(2662)) - Starting task [Stage-0:DDL] in serial mode 2025-07-02 11:15:38,414 INFO [ATS Logger 0] hooks.ATSHook (ATSHook.java:createTimelineDomain(155)) - ATS domain created:hive_63fc22ae-87a3-4d13-b59e-6ea5a99a9941(hadoop,hadoop) 2025-07-02 11:15:38,528 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] ql.Driver (Driver.java:execute(2531)) - Completed executing command(queryId=hadoop_20250702111536_a8fe6b15-57e0-4288-895c-6d4f8fd58503); Time taken: 0.378 seconds OK 2025-07-02 11:15:38,528 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] ql.Driver (SessionState.java:printInfo(1227)) - OK 2025-07-02 11:15:38,529 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] ql.Driver (Driver.java:checkConcurrency(285)) - Concurrency mode is disabled, not creating a lock manager Time taken: 2.104 seconds 2025-07-02 11:15:38,529 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] CliDriver (SessionState.java:printInfo(1227)) - Time taken: 2.104 seconds 2025-07-02 11:15:38,530 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] conf.HiveConf (HiveConf.java:getLogIdVar(5037)) - Using the default value passed in for log id: 63fc22ae-87a3-4d13-b59e-6ea5a99a9941 2025-07-02 11:15:38,530 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 
main] session.SessionState (SessionState.java:resetThreadName(452)) - Resetting thread name to main 2025-07-02 11:15:38,530 INFO [main] conf.HiveConf (HiveConf.java:getLogIdVar(5037)) - Using the default value passed in for log id: 63fc22ae-87a3-4d13-b59e-6ea5a99a9941 2025-07-02 11:15:38,530 INFO [main] session.SessionState (SessionState.java:updateThreadName(441)) - Updating thread name to 63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main 2025-07-02 11:15:38,533 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] ql.Driver (Driver.java:compile(554)) - Compiling command(queryId=hadoop_20250702111538_f08ba63c-7b09-48c8-86fe-f29aa249329c): ALTER TABLE ddp_dmo_dwd.DWD_OrdCusSrvDetail ADD IF NOT EXISTS PARTITION(DT='') 2025-07-02 11:15:38,551 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] hooks.ATSHook (ATSHook.java:<init>(146)) - Created ATS Hook 2025-07-02 11:15:38,551 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] hooks.ATSHook (ATSHook.java:<init>(146)) - Created ATS Hook 2025-07-02 11:15:38,551 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] hooks.ATSHook (ATSHook.java:<init>(146)) - Created ATS Hook 2025-07-02 11:15:38,559 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] ql.Driver (Driver.java:checkConcurrency(285)) - Concurrency mode is disabled, not creating a lock manager 2025-07-02 11:15:39,286 INFO - process has exited. execute path:/tmp/dolphinscheduler/exec/process/hadoop/16836554651104/18167664743392_10/32581/56672, processId:1190 ,exitStatusCode:1 ,processWaitForStatus:true ,processExitValue:1 2025-07-02 11:15:39,287 INFO - Send task execute result to master, the current task status: TaskExecutionStatus{code=6, desc='failure'} 2025-07-02 11:15:39,287 INFO - Remove the current task execute context from worker cache 2025-07-02 11:15:39,287 INFO - The current execute mode isn't develop mode, will clear the task execute file: /tmp/dolphinscheduler/exec/process/hadoop/16836554651104/18167664743392_10/32581/56672 2025-07-02 11:15:39,288 INFO - Success clear the task execute file: /tmp/dolphinscheduler/exec/process/hadoop/16836554651104/18167664743392_10/32581/56672 2025-07-02 11:15:39,562 INFO - -> 2025-07-02 11:15:38,630 INFO [pool-7-thread-1] tez.DagUtils (DagUtils.java:createLocalResource(842)) - Resource modification time: 1751426138580 for hdfs://dominos-usdp-v3-fun/tmp/hive/hadoop/_tez_session_dir/4caadf81-0f27-469e-8de0-87e177d910e3-resources/hudi-hive-sync-bundle-0.13.0.jar 2025-07-02 11:15:38,630 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:openInternal(288)) - Created new resources: null 2025-07-02 11:15:38,644 INFO [pool-7-thread-1] tez.DagUtils (DagUtils.java:getHiveJarDirectory(1058)) - Jar dir is null / directory doesn't exist. 
Choosing HIVE_INSTALL_DIR - /user/hadoop/.hiveJars 2025-07-02 11:15:38,666 INFO [pool-7-thread-1] tez.DagUtils (DagUtils.java:createLocalResource(842)) - Resource modification time: 1715146950410 for hdfs://dominos-usdp-v3-fun/user/hadoop/.hiveJars/hive-exec-3.1.3-3420a6126cfea97266fe35b708da5d5f95a5b158cad390dc4124081a39cf906f.jar 2025-07-02 11:15:38,697 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] ql.Driver (Driver.java:compile(666)) - Semantic Analysis Completed (retrial = false) 2025-07-02 11:15:38,698 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] ql.Driver (Driver.java:getSchema(374)) - Returning Hive schema: Schema(fieldSchemas:null, properties:null) 2025-07-02 11:15:38,698 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] ql.Driver (Driver.java:compile(781)) - Completed compiling command(queryId=hadoop_20250702111538_f08ba63c-7b09-48c8-86fe-f29aa249329c); Time taken: 0.165 seconds 2025-07-02 11:15:38,698 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] reexec.ReExecDriver (ReExecDriver.java:run(156)) - Execution #1 of query 2025-07-02 11:15:38,698 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] ql.Driver (Driver.java:checkConcurrency(285)) - Concurrency mode is disabled, not creating a lock manager 2025-07-02 11:15:38,698 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] ql.Driver (Driver.java:execute(2255)) - Executing command(queryId=hadoop_20250702111538_f08ba63c-7b09-48c8-86fe-f29aa249329c): ALTER TABLE ddp_dmo_dwd.DWD_OrdCusSrvDetail ADD IF NOT EXISTS PARTITION(DT='') 2025-07-02 11:15:38,700 INFO [63fc22ae-87a3-4d13-b59e-6ea5a99a9941 main] ql.Driver (Driver.java:launchTask(2662)) - Starting task [Stage-0:DDL] in serial mode 2025-07-02 11:15:38,726 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.task.io.sort.mb, mr initial value=100, tez(original):tez.runtime.io.sort.mb=null, tez(final):tez.runtime.io.sort.mb=100 2025-07-02 11:15:38,726 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.reduce.shuffle.read.timeout, mr initial value=180000, tez(original):tez.runtime.shuffle.read.timeout=null, tez(final):tez.runtime.shuffle.read.timeout=180000 2025-07-02 11:15:38,726 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.job.speculative.minimum-allowed-tasks, mr initial value=10, tez(original):tez.am.minimum.allowed.speculative.tasks=null, tez(final):tez.am.minimum.allowed.speculative.tasks=10 2025-07-02 11:15:38,726 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.ifile.readahead.bytes, mr initial value=4194304, tez(original):tez.runtime.ifile.readahead.bytes=null, tez(final):tez.runtime.ifile.readahead.bytes=4194304 2025-07-02 11:15:38,726 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.shuffle.ssl.enabled, mr initial value=false, tez(original):tez.runtime.shuffle.ssl.enable=null, tez(final):tez.runtime.shuffle.ssl.enable=false 2025-07-02 11:15:38,726 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.map.sort.spill.percent, mr initial value=0.80, tez(original):tez.runtime.sort.spill.percent=null, tez(final):tez.runtime.sort.spill.percent=0.80 2025-07-02 11:15:38,726 INFO [pool-7-thread-1] tez.TezSessionState 
(TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.job.speculative.speculative-cap-running-tasks, mr initial value=0.1, tez(original):tez.am.proportion.running.tasks.speculatable=null, tez(final):tez.am.proportion.running.tasks.speculatable=0.1 2025-07-02 11:15:38,726 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.job.speculative.speculative-cap-total-tasks, mr initial value=0.01, tez(original):tez.am.proportion.total.tasks.speculatable=null, tez(final):tez.am.proportion.total.tasks.speculatable=0.01 2025-07-02 11:15:38,726 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.ifile.readahead, mr initial value=true, tez(original):tez.runtime.ifile.readahead=null, tez(final):tez.runtime.ifile.readahead=true 2025-07-02 11:15:38,726 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.reduce.shuffle.merge.percent, mr initial value=0.66, tez(original):tez.runtime.shuffle.merge.percent=null, tez(final):tez.runtime.shuffle.merge.percent=0.66 2025-07-02 11:15:38,726 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.reduce.shuffle.parallelcopies, mr initial value=50, tez(original):tez.runtime.shuffle.parallel.copies=null, tez(final):tez.runtime.shuffle.parallel.copies=50 2025-07-02 11:15:38,726 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.job.speculative.retry-after-speculate, mr initial value=15000, tez(original):tez.am.soonest.retry.after.speculate=null, tez(final):tez.am.soonest.retry.after.speculate=15000 2025-07-02 11:15:38,726 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.job.reduce.slowstart.completedmaps, mr initial value=0.95, tez(original):tez.shuffle-vertex-manager.min-src-fraction=null, tez(final):tez.shuffle-vertex-manager.min-src-fraction=0.95 2025-07-02 11:15:38,727 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.reduce.shuffle.memory.limit.percent, mr initial value=0.25, tez(original):tez.runtime.shuffle.memory.limit.percent=null, tez(final):tez.runtime.shuffle.memory.limit.percent=0.25 2025-07-02 11:15:38,727 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.job.speculative.retry-after-no-speculate, mr initial value=1000, tez(original):tez.am.soonest.retry.after.no.speculate=null, tez(final):tez.am.soonest.retry.after.no.speculate=1000 2025-07-02 11:15:38,727 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.task.io.sort.factor, mr initial value=100, tez(original):tez.runtime.io.sort.factor=null, tez(final):tez.runtime.io.sort.factor=100 2025-07-02 11:15:38,727 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.map.output.compress, mr initial value=false, tez(original):tez.runtime.compress=null, tez(final):tez.runtime.compress=false 2025-07-02 11:15:38,727 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: 
mr(unset):mapreduce.reduce.shuffle.connect.timeout, mr initial value=180000, tez(original):tez.runtime.shuffle.connect.timeout=null, tez(final):tez.runtime.shuffle.connect.timeout=180000 2025-07-02 11:15:38,727 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.reduce.input.buffer.percent, mr initial value=0.0, tez(original):tez.runtime.task.input.post-merge.buffer.percent=null, tez(final):tez.runtime.task.input.post-merge.buffer.percent=0.0 2025-07-02 11:15:38,727 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.map.output.compress.codec, mr initial value=org.apache.hadoop.io.compress.DefaultCodec, tez(original):tez.runtime.compress.codec=null, tez(final):tez.runtime.compress.codec=org.apache.hadoop.io.compress.DefaultCodec 2025-07-02 11:15:38,727 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.task.merge.progress.records, mr initial value=10000, tez(original):tez.runtime.merge.progress.records=null, tez(final):tez.runtime.merge.progress.records=10000 2025-07-02 11:15:38,727 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):map.sort.class, mr initial value=org.apache.hadoop.util.QuickSort, tez(original):tez.runtime.internal.sorter.class=null, tez(final):tez.runtime.internal.sorter.class=org.apache.hadoop.util.QuickSort 2025-07-02 11:15:38,727 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.reduce.shuffle.input.buffer.percent, mr initial value=0.70, tez(original):tez.runtime.shuffle.fetch.buffer.percent=null, tez(final):tez.runtime.shuffle.fetch.buffer.percent=0.70 2025-07-02 11:15:38,727 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.job.counters.max, mr initial value=120, tez(original):tez.counters.max=null, tez(final):tez.counters.max=120 2025-07-02 11:15:38,727 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.job.hdfs-servers, mr initial value=hdfs://dominos-usdp-v3-fun, tez(original):tez.job.fs-servers=null, tez(final):tez.job.fs-servers=hdfs://dominos-usdp-v3-fun 2025-07-02 11:15:38,727 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.job.queuename, mr initial value=default, tez(original):tez.queue.name=default, tez(final):tez.queue.name=default 2025-07-02 11:15:38,727 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.job.maxtaskfailures.per.tracker, mr initial value=3, tez(original):tez.am.maxtaskfailures.per.node=null, tez(final):tez.am.maxtaskfailures.per.node=3 2025-07-02 11:15:38,727 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):mapreduce.task.timeout, mr initial value=600000, tez(original):tez.task.timeout-ms=null, tez(final):tez.task.timeout-ms=600000 2025-07-02 11:15:38,727 INFO [pool-7-thread-1] tez.TezSessionState (TezSessionState.java:setupTezParamsBasedOnMR(562)) - Config: mr(unset):yarn.app.mapreduce.am.job.task.listener.thread-count, mr initial value=30, tez(original):tez.am.task.listener.thread-count=null, 