
hadoop
bbdlinden
A big data engineer who keeps working hard
-
The current failed datanode replacement policy is DEFAULT error
hadoop The current failed datanode replacement policy is DEFAULT, and a client may configure this via ‘dfs.client.block.write.replace-datanode-on-failure.policy’ in its configuration error. Solution: modify the hdfs-site.xml file and add <property> <name>dfs.client.blo… · Original · 2020-05-17 17:49:49 · 1263 views · 0 comments
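For reference, a minimal sketch of the hdfs-site.xml addition this kind of fix usually involves, built from the property named in the excerpt. The values shown (true and NEVER) are assumptions suited to a small cluster, not necessarily what the original post set:

<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <!-- Assumed: keep the feature enabled and only relax the policy below. -->
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <!-- Assumed value: NEVER skips replacing a failed datanode during a
       pipeline write, which avoids this error on clusters with ~3 or
       fewer datanodes. DEFAULT is the shipped setting the error names. -->
  <value>NEVER</value>
</property>

-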
Problems when setting up a Hadoop cluster
INFO ipc.Client: Retrying connect to server: linden10004/192.168.174.104:43012. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS) … · Original · 2020-02-18 15:13:10 · 193 views · 0 comments
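The excerpt ends before the post's actual solution, so only a hedged note: this log usually means nothing is listening at linden10004/192.168.174.104:43012 (the daemon is down, /etc/hosts maps the hostname to the wrong address, or a firewall blocks the port). The retry cadence in the message itself is governed by client-side IPC settings in core-site.xml; a minimal sketch with illustrative values, not the post's fix:

<property>
  <name>ipc.client.connect.max.retries</name>
  <!-- Illustrative: how many times the IPC client re-dials before failing. -->
  <value>10</value>
</property>
<property>
  <name>ipc.client.connect.retry.interval</name>
  <!-- Milliseconds between attempts; 1000 matches the sleepTime in the log. -->
  <value>1000</value>
</property>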