Hadoop startup: hadoop-daemon.sh explained in detail

This article dissects the Hadoop startup script hadoop-daemon.sh: what it does, how to use it to start individual Hadoop daemons on different machines, and a commented walkthrough of the code.

Today I read through the startup script hadoop-daemon.sh and worked out what it does: it starts and stops a single Hadoop daemon on the local machine. That means you can start the namenode on machine A, the secondarynamenode on machine B, and the datanode and tasktracker on machine C. The start commands are:

 ./hadoop-daemon.sh start namenode
 ./hadoop-daemon.sh start secondarynamenode
 ./hadoop-daemon.sh start jobtracker
 ./hadoop-daemon.sh start datanode
 ./hadoop-daemon.sh start tasktracker

To stop them, run the corresponding stop commands:

 ./hadoop-daemon.sh stop namenode
 ./hadoop-daemon.sh stop secondarynamenode
 ./hadoop-daemon.sh stop jobtracker
 ./hadoop-daemon.sh stop datanode
 ./hadoop-daemon.sh stop tasktracker
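
After starting a daemon you can check that it actually came up with jps, the JDK tool that lists running Java processes. Illustrative output on the machine running the namenode (the PIDs will differ):

 jps
 4528 NameNode
 4683 Jps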


Below is the code analysis. The main parts are annotated with comments; the script is short and fairly easy to follow. The one piece that puzzled me at first, the HADOOP_MASTER rsync block, is explained in its comment below.

# Runs a Hadoop command as a daemon.
#
# Environment Variables
#
#   HADOOP_CONF_DIR  Alternate conf dir. Default is ${HADOOP_HOME}/conf.
#   HADOOP_LOG_DIR   Where log files are stored.  PWD by default.
#   HADOOP_MASTER    host:path where hadoop code should be rsync'd from
#   HADOOP_PID_DIR   The pid files are stored. /tmp by default.
#   HADOOP_IDENT_STRING   A string representing this instance of hadoop. $USER by default
#   HADOOP_NICENESS The scheduling priority for daemons. Defaults to 0.
##

usage="Usage: hadoop-daemon.sh [--config <conf-dir>] [--hosts hostlistfile] (start|stop) <hadoop-command> <args...>"
# The first argument ($startStop below) is "start" or "stop".
# The second argument is the hadoop-command to run as a daemon:
# namenode | datanode | secondarynamenode | jobtracker | tasktracker

# if no args specified, show usage
if [ $# -le 1 ]; then
  echo $usage
  exit 1
fi

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

. "$bin"/hadoop-config.sh

# get arguments
startStop=$1
shift
command=$1
shift
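
# Example: for "./hadoop-daemon.sh start datanode", after the two shifts
# startStop="start", command="datanode", and "$@" holds any remaining
# arguments, which are passed through to bin/hadoop below.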

# Log rotation: by default five rotated copies are kept (file.1 ... file.5).
# On each start the current file becomes file.1, each existing file.N is
# renamed to file.(N+1), and the oldest copy, file.5, is overwritten and lost.
hadoop_rotate_log ()
{
    log=$1;
    num=5;
    if [ -n "$2" ]; then
        num=$2
    fi
    if [ -f "$log" ]; then # rotate logs
        while [ $num -gt 1 ]; do
            prev=`expr $num - 1`
            [ -f "$log.$prev" ] && mv "$log.$prev" "$log.$num"
            num=$prev
        done
        mv "$log" "$log.$num";
    fi
}
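
# A quick standalone check of the rotation logic (hypothetical file names):
#   touch demo.out demo.out.1; hadoop_rotate_log demo.out
#   afterwards demo.out.1 has become demo.out.2, and demo.out has become demo.out.1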

if [ -f "${HADOOP_CONF_DIR}/hadoop-env.sh" ]; then
  . "${HADOOP_CONF_DIR}/hadoop-env.sh"
fi

# get log directory
if [ "$HADOOP_LOG_DIR" = "" ]; then
  export HADOOP_LOG_DIR="$HADOOP_HOME/logs"
fi
mkdir -p "$HADOOP_LOG_DIR"

if [ "$HADOOP_PID_DIR" = "" ]; then
  HADOOP_PID_DIR=/tmp
fi

if [ "$HADOOP_IDENT_STRING" = "" ]; then
  export HADOOP_IDENT_STRING="$USER"
fi

# some variables
export HADOOP_LOGFILE=hadoop-$HADOOP_IDENT_STRING-$command-$HOSTNAME.log
export HADOOP_ROOT_LOGGER="INFO,DRFA"
log=$HADOOP_LOG_DIR/hadoop-$HADOOP_IDENT_STRING-$command-$HOSTNAME.out
pid=$HADOOP_PID_DIR/hadoop-$HADOOP_IDENT_STRING-$command.pid
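
# Example expansion for USER=hadoop, command=namenode, HOSTNAME=master:
#   HADOOP_LOGFILE = hadoop-hadoop-namenode-master.log   (used by log4j)
#   log = $HADOOP_LOG_DIR/hadoop-hadoop-namenode-master.out   (captures stdout/stderr)
#   pid = $HADOOP_PID_DIR/hadoop-hadoop-namenode.pid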

# Set default scheduling priority
if [ "$HADOOP_NICENESS" = "" ]; then
    export HADOOP_NICENESS=0
fi
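
# Example: run "export HADOOP_NICENESS=10" before invoking this script to
# start the daemon at a lower CPU scheduling priority (see nice(1)).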

case $startStop in

  (start)

    mkdir -p "$HADOOP_PID_DIR"

    # If this daemon is already running, tell the user to stop it first and exit.
    if [ -f $pid ]; then
  # If $command is, say, "namenode", first check whether it is already running:
  # $pid stores the process id recorded at the last start, and
  # kill -0 `cat $pid` (which sends no signal) succeeds only if that process
  # still exists.
      if kill -0 `cat $pid` > /dev/null 2>&1; then
        echo $command running as process `cat $pid`.  Stop it first.
        exit 1
      fi
    fi

    # If HADOOP_MASTER is set (host:path), rsync the Hadoop installation from
    # that master before starting, so this node runs the same code as the
    # master (logs and .svn metadata are excluded from the sync).
    if [ "$HADOOP_MASTER" != "" ]; then
      echo rsync from $HADOOP_MASTER
      rsync -a -e ssh --delete --exclude=.svn --exclude='logs/*' --exclude='contrib/hod/logs/*' $HADOOP_MASTER/ "$HADOOP_HOME"
    fi

# rotate the old .out files before starting
    hadoop_rotate_log $log
    echo starting $command, logging to $log
    cd "$HADOOP_HOME"

# nice runs a command at an adjusted scheduling priority (HADOOP_NICENESS here).
# The next line is the core of the script: it launches the actual daemon through
# bin/hadoop, detached with nohup and backgrounded with &, with stdout and
# stderr redirected into the .out file and stdin detached via /dev/null.
    nohup nice -n $HADOOP_NICENESS "$HADOOP_HOME"/bin/hadoop --config $HADOOP_CONF_DIR $command "$@" > "$log" 2>&1 < /dev/null &
    # record the PID of the background process just started ($!) in the $pid file
    echo $! > $pid
    sleep 1; head "$log"
    ;;

  (stop)

    if [ -f $pid ]; then
      if kill -0 `cat $pid` > /dev/null 2>&1; then
        echo stopping $command
        # kill the daemon using the process id stored in the $pid file
        kill `cat $pid`
      else
        echo no $command to stop
      fi
    else
      echo no $command to stop
    fi
    ;;

  (*)
    echo $usage
    exit 1
    ;;

esac
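
The script decides whether a daemon is alive purely from its pid file plus kill -0, and the same check works by hand. A minimal sketch, assuming the script's own defaults HADOOP_PID_DIR=/tmp and HADOOP_IDENT_STRING=$USER:

 cat /tmp/hadoop-$USER-namenode.pid        # the PID recorded by the start branch
 kill -0 `cat /tmp/hadoop-$USER-namenode.pid` 2>/dev/null && echo "namenode running" || echo "namenode not running"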
