HDFS Rolling Upgrade

This article takes a close look at HDFS Rolling Upgrade, a mechanism designed to avoid service interruption when upgrading large clusters. Starting from the upgrade principles, the related commands, and the actual operational steps, it walks through the execution flow of the prepare, query, and finalize commands, and analyzes the Upgrade, Downgrade, and Rollback procedures for HA, non-HA, and federated clusters. A rolling upgrade keeps the cluster serving requests throughout the upgrade, which greatly reduces the cost of upgrading.

Preface


Hadoop releases new versions at a fairly rapid pace, and each release is something to look forward to: it means new features to use, or major performance improvements in one module or another. The catch is that to use a new Hadoop version, we must upgrade the existing deployment. To a large extent, upgrading Hadoop really means upgrading HDFS, that is, upgrading the data layer. With the traditional upgrade approach, we usually have to stop the cluster to perform the operation, and on a very large cluster that cost is simply too high. To solve this problem, HDFS introduced the concept of a Rolling Upgrade in version 2.4.0 and later. The most important property of a rolling upgrade is that the cluster can keep serving requests while the upgrade is in progress. This article covers both how rolling upgrade works and the concrete steps for performing one.

Rolling Upgrade: Related Commands and Principles


Before walking through the actual steps of a Rolling Upgrade, we should first understand the commands involved, what they do, and how they work. There are six of them (a sketch of where they fall in a typical upgrade follows the list):

  • 1. Subcommands under hdfs dfsadmin
    • 1.1 hdfs dfsadmin -rollingUpgrade prepare
    • 1.2 hdfs dfsadmin -rollingUpgrade query
    • 1.3 hdfs dfsadmin -rollingUpgrade finalize
  • 2. Subcommands under hdfs namenode
    • 2.1 hdfs namenode -rollingUpgrade rollback
    • 2.2 hdfs namenode -rollingUpgrade started
  • 3. Subcommands under hdfs datanode
    • 3.1 hdfs datanode -rollback
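
Before analyzing each command, here is a rough sketch of where they fall in a typical HA rolling upgrade, following the sequence described in the Apache rolling-upgrade documentation (the DataNode host/port below is a placeholder):

  # 1. Create a rollback image while the cluster keeps serving
  hdfs dfsadmin -rollingUpgrade prepare
  hdfs dfsadmin -rollingUpgrade query      # repeat until the rollback image is ready

  # 2. Restart each upgraded NameNode with the rolling-upgrade startup option
  hdfs namenode -rollingUpgrade started

  # 3. Upgrade DataNodes in batches: shut each down for upgrade, then restart it
  hdfs dfsadmin -shutdownDatanode <DATANODE_HOST:IPC_PORT> upgrade

  # 4. After verifying the new version, make the upgrade permanent
  hdfs dfsadmin -rollingUpgrade finalize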

Let's now analyze each of these commands in turn.

Upgrade commands under hdfs dfsadmin


First, the three commands under hdfs dfsadmin. The official documentation describes them as follows:

query: Query the current rolling upgrade status.
prepare: Prepare a new rolling upgrade.
finalize: Finalize the current rolling upgrade.

These one-line descriptions may still leave us a bit puzzled. No matter; let's go straight to the relevant code to understand what actually happens behind these commands.

The entry class for hdfs dfsadmin is DFSAdmin. Following it, we can locate the corresponding server-side handler in the NameNodeRpcServer class:

  public RollingUpgradeInfo rollingUpgrade(RollingUpgradeAction action) throws IOException {
    checkNNStartup();
    LOG.info("rollingUpgrade " + action);
    // Each hdfs dfsadmin -rollingUpgrade subcommand maps to a different action
    switch(action) {
    case QUERY:
      // corresponds to hdfs dfsadmin -rollingUpgrade query
      return namesystem.queryRollingUpgrade();
    case PREPARE:
      // corresponds to hdfs dfsadmin -rollingUpgrade prepare
      return namesystem.startRollingUpgrade();
    case FINALIZE:
      // corresponds to hdfs dfsadmin -rollingUpgrade finalize
      return namesystem.finalizeRollingUpgrade();
    default:
      throw new UnsupportedActionException(action + " is not yet supported.");
    }
  }
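
For completeness on the client side: the DFSAdmin tool reaches this handler through the ClientProtocol RPC, which is also exposed as DistributedFileSystem.rollingUpgrade(). Below is a minimal sketch of issuing the query action programmatically; the hdfs://nn:8020 URI is a placeholder for your NameNode address:

  import java.net.URI;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.hdfs.DistributedFileSystem;
  import org.apache.hadoop.hdfs.protocol.HdfsConstants.RollingUpgradeAction;
  import org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo;

  public class RollingUpgradeQuery {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // hdfs://nn:8020 is a placeholder; point it at your NameNode
      DistributedFileSystem dfs = (DistributedFileSystem)
          FileSystem.get(URI.create("hdfs://nn:8020"), conf);
      // Same RPC that "hdfs dfsadmin -rollingUpgrade query" triggers
      RollingUpgradeInfo info = dfs.rollingUpgrade(RollingUpgradeAction.QUERY);
      if (info == null || !info.isStarted()) {
        System.out.println("No rolling upgrade in progress.");
      } else {
        System.out.println("Rolling upgrade started at " + info.getStartTime()
            + ", rollback image created: " + info.createdRollbackImages());
      }
    }
  }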

NameNodeRpcServer's rollingUpgrade method in turn calls into FSNamesystem to do the real work. Let's look at the handling of prepare first:

  RollingUpgradeInfo startRollingUpgrade() throws IOException {
    checkSuperuserPrivilege();
    checkOperation(OperationCategory.WRITE);
    writeLock();
    try {
      checkOperation(OperationCategory.WRITE);
      // If a rolling upgrade is already in progress, return its info directly
      if (isRollingUpgrade()) {
        return rollingUpgradeInfo;
      }
      // Otherwise start the rolling upgrade; HA and non-HA modes are handled differently
      long startTime = now();
      if (!haEnabled) { // for non-HA, we require NN to be in safemode
        startRollingUpgradeInternalForNonHA(startTime);
      } else { // for HA, NN cannot be in safemode
        checkNameNodeSafeMode("Failed to start rolling upgrade");
        startRollingUpgradeInternal(startTime);
      }
      // Record the start of the rolling upgrade in the edit log
      getEditLog().logStartRollingUpgrade(rollingUpgradeInfo.getStartTime());
      if (haEnabled) {
        // roll the edit log to make sure the standby NameNode can tail
        getFSImage().rollEditLog();
      }
    } finally {
      writeUnlock();
    }
    ...
    // Return the rolling upgrade info
    return rollingUpgradeInfo;
  }
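
Note that in the HA branch no image is saved inline here: the start of the rolling upgrade is written to the edit log (and the log is rolled so the standby can tail it), and the rollback fsimage is produced later by the standby NameNode at its next checkpoint. In either mode, once prepare has taken effect you would expect an image file carrying the NameNodeFile.IMAGE_ROLLBACK prefix (fsimage_rollback) under the NameNode storage directory. A quick check, assuming /data/nn stands in for your dfs.namenode.name.dir:

  ls /data/nn/current/ | grep fsimage_rollback
  # expect something like fsimage_rollback_0000000000000012345 (the txid will differ)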

The main job of startRollingUpgrade is to create the image file that will later be used for rollback, but the implementation differs between HA and non-HA modes. Taking non-HA mode as an example:

  private void startRollingUpgradeInternalForNonHA(long startTime)
      throws IOException {
    Preconditions.checkState(!haEnabled);
    if (!isInSafeMode()) {
      throw new IOException("Safe mode should be turned ON "
          + "in order to create namespace image.");
    }
    checkRollingUpgrade("start rolling upgrade");
    getFSImage().checkUpgrade();
    // take an extra checkpoint here to generate a new image for rollback
    // in non-HA setup, we do an extra checkpoint to generate a rollback image
    getFSImage().saveNamespace(this, NameNodeFile.IMAGE_ROLLBACK, null);
    LOG.info("Successfully saved namespace for preparing rolling upgrade.");
    ...
  }