[Original] Fixing java.util.ConcurrentModificationException: null when iterating a HashMap
2023-02-17 09:45:33
980
[Original] Java connecting to MongoDB fails: Command failed with error 18
2022-11-10 17:29:25
435
[Original] Redis Bloom filter error in a waterfall-flow recommendation: ERR exbloom is already exists
org.springframework.data.redis.RedisSystemException: Error in execution; nested exception is io.lettuce.core.RedisCommandExecutionException: ERR Error running script (call to f_7eccf151e55936985c918ea89cf06
2022-10-21 16:46:51
431
[Original] How to access an internal-network Redis service from a client?
For security, a production Redis on the internal network usually does not allow external access. As a developer, how can you still reach it from a client?
2022-10-18 10:49:11
909
[Original] Failed to load resource: the server responded with a status of 504
2022-07-07 18:49:13
6677
[Original] HQL error: cannot be cast to org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerDirectAccess
Job failed with java.lang.ClassCastException: org.apache.hadoop.hive.ql.exec.persistence.HashMapWrapper cannot be cast to org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerDirectAccess
2022-05-11 10:02:51
2827
[Original] Alibaba Cloud: Could not connect to SMTP host: smtp.163.com, port: 25
2022-04-13 11:50:24
7691
[Original] Flink CDC error: but this is no longer available on the server
Caused by: org.apache.kafka.connect.errors.ConnectException: The connector is trying to read binlog starting at GTIDs afc2c4d5-7061-11ec-a4a5-00163e35e020:1-1717327 and binlog file 'mysql-bin.000443', pos=6207230, skipping 0 events plus 0 rows, but this is
2022-04-13 09:52:44
4824
[Original] Sqoop import from MySQL to HDFS fails: SQLException in nextKeyValue
Caused by: java.sql.SQLException: YEAR; Caused by: java.lang.IllegalArgumentException: YEAR
2022-02-16 18:38:36
1945
[Original] MySQL database and table creation: the difference between utf8 and utf8mb4
1) Overview: MySQL added the utf8mb4 encoding in version 5.5.3; mb4 stands for "most bytes 4", and it exists specifically to support four-byte Unicode. Fortunately utf8mb4 is a superset of utf8, so apart from switching the encoding to utf8mb4 no other conversion is needed. Of course, to save space, plain utf8 is usually enough. 2) Details: if utf8 can already store most Chinese characters, why use utf8mb4 at all? Because MySQL's utf8 encoding allows at most 3 bytes per character, so inserting a 4-byte wide character fails. Three bytes...
2022-02-07 14:07:19
790
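The 3-byte limit described above is easy to verify by counting the UTF-8 bytes of different characters; a minimal shell sketch:

```shell
# An emoji is a 4-byte UTF-8 code point: it exceeds MySQL's legacy
# utf8 (max 3 bytes per character) and needs utf8mb4.
printf '😀' | wc -c   # 4 bytes
# A common CJK character fits in 3 bytes, so legacy utf8 can store it.
printf '中' | wc -c   # 3 bytes
```

When a table must store emoji or other supplementary-plane characters, declare it with DEFAULT CHARSET=utf8mb4 instead of utf8, as the post describes.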
[Original] Sqoop export of a Hive partitioned table column to MySQL fails: Can't parse input data: '0'
2022-01-14 18:13:51
1953
[Original] 2022 data warehouse [date dimension table]: year, week, holidays
Happy New Year! Here is a New Year's gift that everyone building a data warehouse needs, haha!!!
2022-01-13 09:54:01
2350
4
[Original] Recompiling Azkaban to fix: Could not connect to SMTP host: smtp.163.com, port: 465 [2022-01-10]
2022-01-10 18:15:07
8404
1
[Original] Azkaban error: azkaban.jobExecutor.utils.process.ProcessFailureException: Process exited with code 64
2022-01-05 11:14:37
3830
[Original] Azkaban error: azkaban.jobExecutor.utils.process.ProcessFailureException: Process exited with code 127
The error: 05-01-2022 10:02:54 CST ods_to_dwd_log ERROR - Job run failed! java.lang.RuntimeException: azkaban.jobExecutor.utils.process.ProcessFailureException: Process exited with code 127 at azkaban.jobExecutor.ProcessJob.run(ProcessJob.java:312) .
2022-01-05 10:22:32
1791
[Original] Azkaban error: java.lang.IllegalStateException: Process has not yet started.
2022-01-05 10:08:34
2440
[Original] Azkaban scheduled job fails: hive: command not found
The Azkaban job fails with hive: command not found; the log: 27-12-2021 15:06:26 CST hdfs_to_ods_log INFO - ================== log date 2021-12-25 ================== 27-12-2021 15:06:26 CST hdfs_to_ods_log INFO - /home/test/bin/hdfs_to_ods_log.sh: line 18: hive: comman
2021-12-27 15:19:50
1904
[Original] Hive on Spark setup error: Job failed with java.lang.ClassNotFoundException: org.apache.spark.AccumulatorPa
1. Running a SQL statement produces the error: hive> insert into table student values(1,'abc'); Query ID = atguigu_20200814150018_318272cf-ede4-420c-9f86-c5357b57aa11 Total jobs = 1 Launching Job 1 out of 1 In order to change the average load for a reducer (in bytes): set hive.exec
2021-12-27 15:13:07
4322
1
[Original] DolphinScheduler metadata initialization error: the cause was a "$" in the password
Run: sh ./script/create-dolphinscheduler.sh. It fails as follows: [xxxx@xxxx dolphinscheduler]$ sh ./script/create-dolphinscheduler.sh [output opens with the Spring Boot startup banner]
2021-12-23 11:45:38
3142
1
[Original] Sqoop import from MySQL to HDFS with lzop compression fails: NullPointerException
The error: Error: java.lang.NullPointerException at com.hadoop.mapreduce.LzoSplitRecordReader.initialize(LzoSplitRecordReader.java:63) at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:560) at org.ap
2021-12-21 15:25:25
1656
[Original] Running hive> show databases; fails: FAILED: HiveException java.lang.RuntimeException
hive> show databases;FAILED: HiveException java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
2021-12-20 11:58:13
4515
[Original] bin/hive startup error: Operation category READ is not supported in state standby
2021-12-20 10:46:18
1641
[Original] bin/bash: bad interpreter: No such file or directory
The script that triggers the error:
#!/bin/bash
for i in hadoop112 hadoop113
do
    echo "================== generating log data on $i ========================"
    ssh $i "cd /opt/module/applog; java -jar gmall2020-mock-log-2021-01-22.jar >/dev/null 2>&1 &"
done
2021-06-20 10:10:28
693
[Original] Shell: starting processes in the background with nohup
When to use nohup: when a process started from a shell window must keep running after the window is closed, launch it with the nohup command.
2021-06-20 09:43:05
3940
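The scenario above can be sketched with a minimal example (the command and the app.log filename are placeholders, not from the original post):

```shell
# Start a job that survives the shell window closing (nohup ignores SIGHUP);
# redirect stdout and stderr so output does not land in the default nohup.out.
nohup sh -c 'echo started; sleep 1' > app.log 2>&1 &
# After logging out, the job keeps running and its output accumulates in app.log.
```

Without the trailing &, nohup only shields the process from hangup; the & is what actually puts it in the background.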
[Original] Kafka fails to start after changing the broker id
Error: [2016-10-13 13:49:56,746] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable) kafka.common.InconsistentBrokerIdException: Configured broker.id 2 doesn't match stored broker.id 0 in meta.prope
2021-06-16 20:43:29
553
3
[Original] Error: Aggregation is not enabled
Error description: Aggregation is not enabled. Try the nodemanager at hadoop102:46139. Or see application log at http://hadoop102:8042/node/application/application_1622931290897_0065. Fix: analysis showed the history server's log aggregation had not been enabled, so the logs could not be viewed; configure the history server. Configuration steps: map
2021-06-10 13:14:58
1244
1
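The fix described above, enabling log aggregation for the history server, is a yarn-site.xml setting; a minimal sketch (the hadoop102 host comes from the error above, the 19888 history-server port is an assumption for this cluster):

```xml
<!-- yarn-site.xml: collect finished-application logs to HDFS so the
     history server can serve them (fixes "Aggregation is not enabled") -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<property>
  <!-- URL of the log server; hostname and port are placeholders -->
  <name>yarn.log.server.url</name>
  <value>http://hadoop102:19888/jobhistory/logs</value>
</property>
```

YARN and the history server must be restarted for the setting to take effect, and logs of applications that finished before aggregation was enabled remain unavailable.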
[Original] Compressing Hadoop output in the Map and Reduce phases (with hand-written compression and decompression code)
Hadoop compression in both the Map and Reduce phases is enabled through configuration; the post also demonstrates hand-written code for compressing and decompressing files. 1. Compress map output: // enable compression of map output conf.set("mapreduce.map.output.compress", "true"); // set the compression codec conf.set("mapreduce.map.output.compress.codec", "org.apache.hadoop.io.compress..
2021-06-09 20:45:23
271
2