Original: Oracle error ORA-14300: partitioning key maps to a partition outside maximum permitted number of partitions
2024-02-28 16:37:40
843
Original: Hive's get_json_object function cannot parse JSON keys written in Chinese
2023-08-18 15:35:33
1656
Original: Spark SQL error: Task failed while writing rows.
2023-07-26 15:07:13
1948
Original: Oracle error ORA-14402: updating partition key column would cause a partition change
2023-05-12 10:19:00
7342
Original: Spark SQL error java.lang.StackOverflowError (stack overflow): cause and solution
2023-04-18 10:43:46
2317
Original: Common ClickHouse write issue: Too many parts (300). Merges are processing significantly slower than inserts.
ClickHouse exception, code: 1002, host: 10.129.170.80, port: 8123; Code: 252. DB::Exception: Too many parts (300). Merges are processing significantly slower than inserts. (TOO_MANY_PARTS) (version 22.3.2.1)
2023-02-02 17:56:36
7393
8
Original: Read timed out when DataX runs truncate table before inserting data into ClickHouse
The request takes too long because of the data volume (possibly also network issues), so the ClickHouse connection times out; the recommended fix is to append the parameter `?socket_timeout=600000` to the ClickHouse connection string (see the JDBC sketch below).
2023-01-11 10:52:14
2927
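A minimal JDBC sketch of that fix, reusing the host and port quoted elsewhere on this blog; the database and table names are placeholders, and in a DataX job the same parameter goes on the writer's jdbcUrl:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ClickHouseTruncateWithTimeout {
    public static void main(String[] args) throws Exception {
        // socket_timeout is in milliseconds: 600000 ms = 10 minutes,
        // long enough for a TRUNCATE on a large table to finish.
        String url = "jdbc:clickhouse://10.129.170.80:8123/default?socket_timeout=600000";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            stmt.execute("TRUNCATE TABLE ods.demo_table"); // placeholder table name
        }
    }
}
```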
Original: Controlling the number of output files in Spark SQL
Use the Coalesce and Repartition hints, or set spark.sql.adaptive.enabled and spark.sql.adaptive.coalescePartitions.enabled to true (see the sketch below).
2022-12-19 17:33:41
3311
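A sketch of both options through the Spark Java API; the table names are placeholders and the COALESCE hint assumes Spark 2.4 or later:

```java
import org.apache.spark.sql.SparkSession;

public class ControlOutputFileCount {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("control-output-file-count")
                .enableHiveSupport()
                // Option 1: let adaptive execution merge small shuffle partitions
                .config("spark.sql.adaptive.enabled", "true")
                .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
                .getOrCreate();

        // Option 2: force the number of output files with a hint
        spark.sql("INSERT OVERWRITE TABLE dw.target_table " +
                  "SELECT /*+ COALESCE(10) */ * FROM dw.source_table");

        spark.stop();
    }
}
```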
Original: Oracle error ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
2022-12-09 11:36:51
7346
Original: Building a date dimension table in Hive
First build a list of dates; for example, suppose we want every date in [2022-12-01, 2022-12-31]. Once that date list exists, deriving the individual date dimensions is fairly straightforward: the month, day, hour, minute and second, the week of the year, the day of the week, and the date of a given weekday in the following week via the function next_day(string start_date, string day_of_week). A sketch of the date list and a few of these dimensions follows below.
2022-12-07 18:15:26
2116
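A minimal sketch of that idea using Spark SQL's sequence and explode functions (the excerpt cuts off the name of the function the post itself uses, so this is one possible way to build the date list, not necessarily the post's); the column aliases are placeholders:

```java
import org.apache.spark.sql.SparkSession;

public class DateDimensionSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("date-dimension-sketch")
                .enableHiveSupport()
                .getOrCreate();

        // One row per day in [2022-12-01, 2022-12-31], plus a few derived dimensions.
        spark.sql(
            "SELECT d                     AS calendar_date, " +
            "       month(d)              AS month_of_year, " +
            "       day(d)                AS day_of_month,  " +
            "       weekofyear(d)         AS week_of_year,  " +
            "       dayofweek(d)          AS day_of_week,   " +
            "       next_day(d, 'MONDAY') AS next_monday    " +
            "FROM (SELECT explode(sequence(to_date('2022-12-01'), " +
            "                              to_date('2022-12-31'), " +
            "                              interval 1 day)) AS d) dates"
        ).show(31, false);

        spark.stop();
    }
}
```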
Original: Usage of the date_format() function in Hive
Formats a date/time value so it prints in whatever pattern you want. date_format(date, format): date is a valid date and format specifies the output pattern; the commonly used pattern letters are described under date_format() in the Hive documentation. Getting year-month-day hour:minute:second is shown in the sketch below.
2022-12-06 16:14:55
21424
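A small sketch via Spark SQL's date_format, which follows the same Java SimpleDateFormat-style pattern letters as Hive's:

```java
import org.apache.spark.sql.SparkSession;

public class DateFormatSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("date-format-sketch")
                .getOrCreate();

        // yyyy = year, MM = month, dd = day, HH = hour, mm = minute, ss = second
        spark.sql("SELECT date_format(current_timestamp(), 'yyyy-MM-dd HH:mm:ss') AS ts_str")
             .show(false);

        spark.stop();
    }
}
```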
Original: Error when the delimiter passed to Hive's split function is a semicolon: Error while compiling statement: FAILED: ParseException line 1:17 cannot recognize input near '' '' '' in select expression
2022-11-01 16:39:22
2133
1
Original: MySQL error ERROR 1118 (42000): Row size too large. or Row size too large (> 8126).
ERROR 1118 (42000): Row size too large. The maximum row size for the used table type, not counting BLOBs, is 65535. This includes storage overhead, check the manual. You have to change some columns to TEXT or BLOBs. Row size too large (> 8126).
2022-10-25 18:18:16
23864
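A sketch of the remedy the error text itself suggests, converting an oversized VARCHAR column to TEXT; the connection details, table, and column names are placeholders, and whether this alone is enough depends on how many wide columns the table has:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ShrinkRowSize {
    public static void main(String[] args) throws Exception {
        // Connection details are placeholders.
        String url = "jdbc:mysql://localhost:3306/demo_db";
        try (Connection conn = DriverManager.getConnection(url, "demo_user", "demo_pass");
             Statement stmt = conn.createStatement()) {
            // TEXT/BLOB values are stored mostly off-row, so they no longer
            // count against the 65535-byte row limit the way VARCHAR does.
            stmt.execute("ALTER TABLE demo_table MODIFY COLUMN long_remark TEXT");
        }
    }
}
```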
Original: Case sensitivity when querying data stored in MySQL
Every string operation ultimately depends on which collation (COLLATE) is in effect; MySQL decides whether string comparisons are case sensitive through the COLLATE value. `utf8_general_ci` is one concrete COLLATE value, and each COLLATE value belongs to exactly one character set, here `utf8`. The part that matters for case sensitivity is the suffix `_ci`, which the MySQL documentation explains as short for case insensitive, meaning comparisons ignore case (see the sketch below).
2022-10-20 18:24:23
6909
1
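A minimal JDBC sketch of the difference, assuming a hypothetical demo_users table whose name column uses utf8_general_ci; overriding the collation in the query (or declaring the column with utf8_bin) makes the comparison case sensitive:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CollationCaseSensitivity {
    public static void main(String[] args) throws Exception {
        // Connection details are placeholders.
        String url = "jdbc:mysql://localhost:3306/demo_db";
        try (Connection conn = DriverManager.getConnection(url, "demo_user", "demo_pass");
             Statement stmt = conn.createStatement()) {
            boolean ciMatch;
            // Under utf8_general_ci this matches 'abc', 'ABC', 'Abc', ...
            try (ResultSet ci = stmt.executeQuery(
                    "SELECT id FROM demo_users WHERE name = 'abc'")) {
                ciMatch = ci.next();
            }
            boolean csMatch;
            // Forcing a binary collation makes the comparison case sensitive.
            try (ResultSet cs = stmt.executeQuery(
                    "SELECT id FROM demo_users WHERE name COLLATE utf8_bin = 'abc'")) {
                csMatch = cs.next();
            }
            System.out.println("case-insensitive match found: " + ciMatch);
            System.out.println("case-sensitive match found:   " + csMatch);
        }
    }
}
```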
Original: DataX sync into ClickHouse takes extremely long, caused by: Too many partitions for single INSERT block (more than 100).
ClickHouse exception, code: 1002, host: 10.129.170.80, port: 8123; Code: 252. DB::Exception: Too many partitions for single INSERT block (more than 100). The limit is controlled by 'max_partitions_per_insert_block' setting.
2022-09-28 18:34:00
2767
Original: Spark error ERROR shuffle.RetryingBlockFetcher: Exception while beginning fetch of 1 outstanding blocks
ERROR shuffle.RetryingBlockFetcher: Exception while beginning fetch of 1 outstanding blocks java.io.IOException: Failed to connect to hostname/192.168.xx.xxx:50002 at
2022-09-20 17:47:03
3052
2
Original: Java error: Error occurred during initialization of VM java/lang/NoClassDefFoundError: java/lang/Object
2022-08-11 17:06:35
16113
Original: Fixing where Zookeeper writes its zookeeper.out log
Anyone who has used Zookeeper knows that its run log, zookeeper.out, is written by default to whatever directory the start script was launched from, so when the cluster fails to start it is hard to remember where the log ended up and inconvenient to inspect; changing the log output location and logging setup makes the log files much easier to find.
2022-08-04 17:01:55
2415
Original: Error when running a Spark job: org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of...
org.apache.spark.SparkException:Task failed while writing rows.Caused by: org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of /user/hive/warehouse/hs_data_odsdb.db is exceeded: quota = 13194139533312 B = 12 TB but diskspace ..
2022-07-26 17:13:04
2544
Original: Spark error resolved by running REFRESH TABLE tableName
Spark error: It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running ‘REFRESH TABLE tableName’ command in SQL or by recreating the Dataset/DataFrame involved. (A sketch of the command follows below.)
2022-07-06 17:58:54
7743
1
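A minimal sketch of the command the message asks for, via the Spark Java API; the table name is a placeholder:

```java
import org.apache.spark.sql.SparkSession;

public class RefreshTableExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("refresh-table-example")
                .enableHiveSupport()
                .getOrCreate();

        // Either form invalidates Spark's cached file listing for the table.
        spark.sql("REFRESH TABLE dw.demo_table");
        spark.catalog().refreshTable("dw.demo_table");

        spark.stop();
    }
}
```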
Original: Hive error FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
2022-07-05 16:21:25
2088
Original: Usage of MySQL's date_format() function and the INTERVAL keyword
2022-07-05 15:46:54
7616
2
Original: 2160. Minimum Sum of Four Digit Number After Splitting Digits
Minimum sum of four-digit number after splitting digits: given a four-digit positive integer num, use the digits of num to split it into two new integers new1 and new2. new1 and new2 may contain leading zeros, and every digit of num must be used. A solution sketch follows below.
2022-04-10 00:19:01
526
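One common greedy solution (not necessarily the exact approach in the post): sort the four digits and use the two smallest as the tens digits of new1 and new2.

```java
import java.util.Arrays;

public class MinimumSumSplit {
    // Greedy: the two smallest digits become tens digits, the two largest become units digits.
    public static int minimumSum(int num) {
        int[] d = new int[4];
        for (int i = 0; i < 4; i++) {
            d[i] = num % 10;
            num /= 10;
        }
        Arrays.sort(d);
        return (d[0] * 10 + d[2]) + (d[1] * 10 + d[3]);
    }

    public static void main(String[] args) {
        System.out.println(minimumSum(2932)); // 52, e.g. 29 + 23
    }
}
```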
Original: 237. Delete Node in a Linked List
Problem: 237. Delete Node in a Linked List. Write a function to delete a node in a singly-linked list. You will not be given access to the head of the list, instead you will be given access to the node to be deleted. A solution sketch follows below.
2022-04-05 21:40:27
867
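A minimal sketch of the standard trick, assuming LeetCode's usual ListNode shape: copy the successor's value into the given node and unlink the successor.

```java
public class DeleteNodeInLinkedList {
    // Assumed node definition, matching LeetCode's singly-linked list.
    static class ListNode {
        int val;
        ListNode next;
        ListNode(int val) { this.val = val; }
    }

    // Without access to the head, overwrite this node with its successor
    // and bypass the successor instead of the node itself.
    public static void deleteNode(ListNode node) {
        node.val = node.next.val;
        node.next = node.next.next;
    }

    public static void main(String[] args) {
        ListNode head = new ListNode(4);
        head.next = new ListNode(5);
        head.next.next = new ListNode(1);
        head.next.next.next = new ListNode(9);
        deleteNode(head.next); // remove the node holding 5
        for (ListNode p = head; p != null; p = p.next) {
            System.out.print(p.val + " "); // 4 1 9
        }
    }
}
```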
Original: Formatting dates in Java
Formatting dates in Java: public static void main(String[] args) throws ParseException { String str = "20220107"; SimpleDateFormat sdf1 = new SimpleDateFormat("yyyyMMdd"); // parse a string in the "yyyyMMdd" format into a Date Date date = sdf1.parse(str); System.out.println(date); // Fri Jan
2022-01-11 15:12:51
356
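A runnable completion of that truncated snippet; the re-format step at the end is an added illustration, not necessarily part of the original post:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class FormatDateExample {
    public static void main(String[] args) throws ParseException {
        String str = "20220107";
        // Parse the "yyyyMMdd" string into a Date
        SimpleDateFormat sdf1 = new SimpleDateFormat("yyyyMMdd");
        Date date = sdf1.parse(str);
        System.out.println(date); // e.g. Fri Jan 07 00:00:00 ... 2022

        // Re-format the Date into a different pattern
        SimpleDateFormat sdf2 = new SimpleDateFormat("yyyy-MM-dd");
        System.out.println(sdf2.format(date)); // 2022-01-07
    }
}
```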