- Blog (131)
Repost: Maxwell error java.lang.RuntimeException: error: unhandled character set 'utf8mb3' (bug log)
Cause: with MySQL 8.0 and above, the character set identifier in use is utf8mb4, which Maxwell does not support; it has to be set to the format named in the error. pwd=jzc1, extraction code: jzc1. Original link: https://blog.youkuaiyun.com/wsgz_0305/article/details/133613681. https://pan.baidu.com/s/1D24R3UKJPqCTeYaLt9Dctg, extraction code: kgim.
2025-02-10 13:44:34
55
1
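As a rough sketch of the kind of fix the post points at (the option name below is the standard MySQL server setting, but which value resolves the mismatch depends on your Maxwell version, so treat this as an assumption to verify):

```ini
# /etc/my.cnf — sketch; pins the server character set to a name
# that older Maxwell builds recognize (they do not know the
# utf8mb3 alias that MySQL 8 started reporting)
[mysqld]
character-set-server = utf8mb4
```

An alternative worth checking is simply upgrading Maxwell, since newer releases understand the utf8mb3 alias.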
Original: Hive SQL execution error: The value of property yarn.resourcemanager.zk-address must not be null
Add the following configuration to yarn-site.xml.
2025-01-17 11:35:44
89
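The missing property named in the error can be sketched like this (the ZooKeeper hostnames and port are placeholders for your own quorum):

```xml
<!-- yarn-site.xml — sketch; replace the hosts with your ZooKeeper quorum -->
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>node01:2181,node02:2181,node03:2181</value>
</property>
```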
Repost: org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Webapps failed to start. Ignoring for now
Under the YARN HA setup, the following configuration needs to be added to yarn-site.xml.
2025-01-17 11:23:21
31
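The excerpt does not show the properties themselves; a plausible sketch for an HA pair (the rm-ids and hostnames are placeholders that must match your yarn.resourcemanager.ha.rm-ids setting):

```xml
<!-- yarn-site.xml — sketch for rm-ids rm1/rm2; hosts are placeholders -->
<property>
  <name>yarn.resourcemanager.webapp.address.rm1</name>
  <value>node01:8088</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address.rm2</name>
  <value>node02:8088</value>
</property>
```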
Repost: Configuring VMware Fusion VM networking on a MacBook and updating the yum source
Configure VMware Fusion VM networking on a Mac with a static IP while keeping internet access; it is a bit more involved than on Windows. cd /etc/sysconfig/network-scripts/ and edit ifcfg-ens33. The line below the IP is the subnet mask, which must match the mask set when editing the networking settings. Note down the gateway IP (whether to change it is up to you). 3. Reopen VMware's network configuration. Original link: https://blog.youkuaiyun.com/weixin_44796239/article/details/118757059.
2024-12-12 15:14:42
342
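A minimal sketch of the ifcfg-ens33 edit described above (every address below is a placeholder; the mask and gateway must match what VMware Fusion's network settings show):

```ini
# /etc/sysconfig/network-scripts/ifcfg-ens33 — sketch with placeholder addresses
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.100.101
NETMASK=255.255.255.0
GATEWAY=192.168.100.2
DNS1=192.168.100.2
```

After editing, restart networking (e.g. `systemctl restart network` on CentOS 7) for the change to take effect.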
Repost: VMware black screen and unresponsive on the macOS Catalina system
Fixes the inability to add VMware under the Accessibility permissions: disable the Rootless (SIP) mechanism, then re-enable it afterwards. This resolves the VMware black-screen problem.
2023-07-29 10:02:25
524
Original: Dinky: Problem Roundup
1. Specify the Flink version at startup, because Dinky itself bundles parts of Flink; don't take the official site at face value, it only gives a bare example (the original post includes an official-site screenshot here). 2. The MySQL URL when adding a data source under data source management.
2023-06-30 10:39:35
526
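For point 2, a sketch of a typical MySQL 8 JDBC URL (host, port, database name, and the exact parameter set Dinky expects are assumptions to verify against your deployment):

```text
jdbc:mysql://node01:3306/dinky?useSSL=false&serverTimezone=UTC&characterEncoding=utf8
```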
Original: Kafka: spark.rdd.MapPartitionsRDD cannot be cast to streaming.kafka010.HasOffsetRange
(1) When KafkaUtils.createDirectStream returns Kafka records as a JavaInputDStream, the data coming out of any subsequent transformation operator is typed JavaDStream. (2) If several later operators need the JavaInputDStream-typed data, split the code into separate parts; otherwise this error is thrown.
2023-06-20 10:43:53
175
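The rule in (1)-(2) can be sketched as Java-style pseudocode against the spark-streaming-kafka-010 API (not runnable without a Spark cluster; names follow that library): the cast to HasOffsetRanges only works on RDDs of the original input stream, before any transformation wraps them in a MapPartitionsRDD.

```java
// pseudocode sketch — spark-streaming-kafka-010 integration
JavaInputDStream<ConsumerRecord<String, String>> stream =
    KafkaUtils.createDirectStream(ssc, PreferConsistent(), Subscribe(topics, kafkaParams));

// OK: RDDs of the *input* stream still carry offset metadata
stream.foreachRDD(rdd -> {
    OffsetRange[] ranges = ((HasOffsetRanges) rdd.rdd()).offsetRanges();
    // ... process rdd here, then commit `ranges`
});

// NOT OK: after map() the RDD is a MapPartitionsRDD, so this cast throws
JavaDStream<String> mapped = stream.map(r -> r.value());
// mapped.foreachRDD(rdd -> ((HasOffsetRanges) rdd.rdd()).offsetRanges()); // ClassCastException
```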
Original: Spark: failed to launch: nice -n 0 /opt/spark/bin/spark-class org.apache.spark.deploy.worker.
node03: failed to launch: nice -n 0 /opt/spark/bin/spark-class org.apache.spark.deploy.worker.Worker --webui-port 8081 spark://node01:7077
node03: full log in /opt/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-node03.out
2022-09-03 16:28:03
2771
Repost: Solution for "xxx is not in the sudoers file. This incident will be reported."
Letting a non-root user run commands with sudo.
2022-07-26 10:48:31
604
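The usual fix is a one-line sudoers entry; a sketch (the username is a placeholder, and the file should be edited via visudo, not directly):

```text
# run `visudo` as root and add, below the existing "root ALL=(ALL) ALL" line:
username    ALL=(ALL)       ALL
```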
Original: Spark: Installing Spark 2.4.0
Software preparation: Index of /dist/spark, choosing the build integrated with your Hadoop. 1. Extract: tar -zxvf spark-2.4.0-bin-hadoop2.6.tgz; mv spark-2.4.0-bin-hadoop2.6 spark; vim /etc/profile.d/bigdata-etc.sh; export SPARK_HOME=/opt/spark; export PATH=$PATH:$SPARK_HOME/bin:$S......
2022-05-31 16:44:19
1402
Original: Hive: Task failed; Job failed as tasks failed. failedMaps:1
beeline errors out when inserting a large batch, while the hive CLI can insert it. Running a query first before the insert surfaces the real error: GC overhead limit exceeded. Much clearer: the JVM ran out of memory. Since the hive CLI can insert, there is no need to change hive-site.xml; editing hive-env.sh is enough: uncomment HADOOP_HEAPSIZE and raise it, and adjust the if branches accordingly: if [ "$SERVICE" = "cli" ]; then if [ -z "$DEBUG" ]; then export HADOOP_
2022-05-16 15:06:29
1558
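A sketch of the hive-env.sh change described above (2048 MB is an illustrative value, not taken from the post; size it to your batch):

```sh
# hive-env.sh — sketch; uncomment and raise the heap given to Hive's launched JVMs
export HADOOP_HEAPSIZE=2048
```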
Repost: Linux: As root, create a new user, grant it ownership of a directory, and set permissions so only that user can access it
1. Create the user. (1) Switch to root to get user-creation privileges: peng@ubuntu:~$ sudo su (2) Add a new user (e.g. xyz): root@ubuntu:/home/peng# adduser xyz then follow the prompts: enter a password, confirm it, keep pressing Enter, and finish by typing y to complete the creation of user xyz. 2. Grant the user the directory: the command is chown -R <user> <directory>, e.g. sudo chown -R xyz /mnt/ssd1/yuyu (note: 1. the directory /mnt/
2022-05-12 18:17:24
7780
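The "only that user can access it" part, which the excerpt cuts off, usually comes down to chmod 700; a minimal runnable sketch (the path is a placeholder):

```shell
#!/bin/sh
# Sketch: make a directory readable/writable/traversable only by its owner.
dir=/tmp/private_demo       # placeholder path
mkdir -p "$dir"
chmod 700 "$dir"            # rwx for owner, nothing for group/others
stat -c '%a' "$dir"         # prints 700 on Linux
```

Combined with the chown above, the directory then belongs to the new user and is invisible to everyone else except root.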
Repost: Flink: Deploying Flink 1.12.0 on YARN
1. Download the package: Index of /dist/flink. 2. Upload flink-1.12.0-bin-scala_2.12.tgz to the target directory on node01. 3. Extract: tar -zxvf flink-1.12.0-bin-scala_2.12.tgz. 4. Rename: mv flink-1.12.0-bin-scala_2.12 flink. 5. Add the environment variables and source them: export FLINK_HOME=/opt/flink export PATH=$PATH:$......
2022-05-12 18:16:02
840
Original: Kafka: The Cluster ID 6OgYsn2hRUm5KnEHt9XXoA doesn't match stored clusterId Some(mAzhR9xTQBeC7BNhiWVS
[2022-05-10 09:15:36,223] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2022-05-10 09:15:36,360] INFO Cluster ID = 6OgYsn2hRUm5KnEHt9XXoA (kafka.server.KafkaServer)
[2022-05-10 09:15:36,366] ERROR Fatal error during Kaf.
2022-05-10 09:29:05
414
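The usual cause (an assumption here, since the excerpt is only the log) is that the broker's log directory still holds a meta.properties written under an older cluster id, commonly after ZooKeeper data was wiped; a sketch of the two common remedies:

```text
# Option A (dev setups): remove the stale file so the broker re-registers
#   rm <your-log.dirs>/meta.properties
# Option B: edit meta.properties so its cluster.id matches the ID
#   currently stored in ZooKeeper, then restart the broker
```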
Original: Hadoop: HA pitfall - all namenodes are standby
Symptom: all namenodes are standby, yet ZooKeeper starts normally; in other words the ZK service is not taking effect, so HA failover cannot happen. Attempt 1: manually force one namenode to active. Operation: on one of the namenodes, run hdfs haadmin -transitionToActive --forcemanual nn1 (nn1 being one of your nameservice-ids). Result: nn1 was successfully switched to active, but after stop-dfs.sh followed by another start-dfs.sh, all namenodes are stan
2022-05-09 15:52:26
621
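The excerpt breaks off before the resolution; in standard HDFS HA, "all standby with healthy ZooKeeper" usually means the ZKFC daemons are missing or their znode was never formatted (an assumption here, not stated in the post). A sketch of the usual follow-up, using the stock HDFS tooling:

```text
hdfs zkfc -formatZK           # one-time: create the HA znode in ZooKeeper
hdfs --daemon start zkfc      # run DFSZKFailoverController beside each namenode
                              # (hadoop-daemon.sh start zkfc on Hadoop 2.x)
hdfs haadmin -getServiceState nn1
```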
Original: Flink: Flink Problem Roundup
Problem 1: Caused by: org.apache.flink.configuration.IllegalConfigurationException: Sum of configured Framework Heap Memory (128.000mb (134217728 bytes)), Framework Off-Heap Memory (128.000mb (134217728 bytes)), Task Off-Heap Memory (0 bytes), Managed Memory (5
2022-04-27 09:20:31
2962
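That exception means the configured memory components add up to more than the TaskManager's total; a sketch of the usual knob (the 1728m value is illustrative, not from the post; size it so the components listed in the exception fit):

```yaml
# flink-conf.yaml — sketch; raise the total memory of each TaskManager process
taskmanager.memory.process.size: 1728m
```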
Repost: Flink: Problem and fix: ClassNotFoundException: org.apache.hadoop.security.UserGroupInformation
This problem is caused by the relevant jars missing from the Flink runtime environment; the following jars can be added under lib: aws-java-sdk-s3-1.11.1030.jar, flink-shaded-hadoop-2-uber-3.1.1.3.0.1.0-187-10.0.jar, hadoop-aws-3.1.0.jar (3.1.0 being your own Hadoop version number)...
2022-04-26 10:44:04
3241
Original: Hive: Schema version 1.2.0 does not match metastore's schema version 2.1.0
Disable the metastore schema verification in hive-site.xml: <property> <name>hive.metastore.schema.verification</name> <value>false</value> </property>
2022-04-25 13:41:47
1939
Original: Hive: Schema initialization FAILED Metastore state would be inconsistent
[root@node3 hive-2.3.4]# schematool -dbType mysql -initSchema
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive-2.3.4/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found bindin
2022-04-02 14:00:18
4696
1
Repost: Hive: beeline startup: Found class jline.Terminal, but interface was expected...
Found class jline.Terminal, but interface was expected...
2022-04-02 13:53:40
199
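The standard workaround for this jline clash (stated from general experience, not from the reposted article) is to make Hive's newer jline win over the old one bundled with Hadoop:

```text
# either export this before starting hive/beeline:
export HADOOP_USER_CLASSPATH_FIRST=true
# or replace the old jline jar under $HADOOP_HOME/share/hadoop/yarn/lib
# with the jline-2.x jar shipped in $HIVE_HOME/lib
```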
Repost: Spark: java.nio.channels.ClosedChannelException in a Spark on YARN job
Cause: the nodes were allocated too little memory, so YARN killed the Spark application. Fix: configure yarn-site.xml: <property> <name>yarn.nodemanager.pmem-check-enabled</name> <value>false</value> </property> <property> <name>yarn.nodemanager.vmem-che......
2022-04-02 13:47:42
434
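The excerpt is truncated; both memory checks it is toggling are standard YARN properties, so the full fragment can be sketched as:

```xml
<!-- yarn-site.xml — sketch completing the truncated excerpt: disable
     YARN's physical- and virtual-memory checks for containers -->
<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```

Disabling the checks stops YARN from killing containers that overshoot their allocation; raising the container memory limits instead is the gentler alternative.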
Repost: CentOS 7: Installing MySQL
wget \
https://cdn.mysql.com/archives/mysql-5.7/mysql-community-client-5.7.32-1.el7.x86_64.rpm \
https://cdn.mysql.com/archives/mysql-5.7/mysql-community-common-5.7.32-1.el7.x86_64.rpm \
https://cdn.mysql.com/archives/mysql-5.7/mysql-community-libs-5.7.32-
2022-03-30 11:03:00
281
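A sketch of the usual follow-up once the rpms are downloaded (assuming the server rpm was fetched alongside the client/common/libs ones shown above):

```text
rpm -ivh mysql-community-*.rpm
systemctl start mysqld
grep 'temporary password' /var/log/mysqld.log   # MySQL 5.7 writes the initial root password here
```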