PRVF-0002 : could not retrieve local node name

This article describes how to resolve the PRVF-0002 error encountered during Oracle installation. The error is usually caused by a failed reverse lookup of the host name. It can be fixed by configuring a DNS service or by adding the host name and fully qualified domain name to the /etc/hosts file.

When installing Oracle, launching ./runInstaller fails with the error: PRVF-0002 : could not retrieve local node name

 

This error occurs because the OUI (Oracle Universal Installer) attempts a reverse lookup of your host name. Therefore, you either need to set up a DNS service that can handle both forward and reverse resolution, or you can edit the /etc/hosts file and add the host name and fully qualified domain name, for example:

[root@12c01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.100.25.16 12c01
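
Before re-running the installer, it can help to confirm that both forward and reverse resolution of the host name now succeed. A minimal check, assuming the same host name (12c01) and IP (10.100.25.16) as in the example above:

[root@12c01 ~]# hostname
12c01
[root@12c01 ~]# getent hosts 12c01        # forward lookup via /etc/hosts or DNS
10.100.25.16    12c01
[root@12c01 ~]# getent hosts 10.100.25.16 # reverse lookup, which OUI relies on
10.100.25.16    12c01

If either lookup returns nothing, the /etc/hosts entry (or DNS record) is still missing or misspelled.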

After making this change, the problem is resolved.
