Fixing MySQL Error 1042: Can't get hostname for your address

MySQL error 1042 - "Can't get hostname for your address" - is raised when the server tries to reverse-resolve the connecting client's IP address to a hostname and the DNS lookup fails or times out, so the connection is rejected before authentication.

Solution:

Edit the MySQL configuration file and add skip-name-resolve under the [mysqld] section (this tells MySQL to skip hostname resolution and match client accounts by IP address only), then restart the MySQL service.
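The change itself is a single line under the server section of the configuration file:

    [mysqld]
    skip-name-resolve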

For example, if the MySQL configuration file is located at /etc/my.cnf:

edit that file, add skip-name-resolve under the [mysqld] section, and then run service mysql restart to apply the change.
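Put together, the steps look roughly like the sketch below. Editing the file by hand works just as well; the sed one-liner is only a convenience, and the restart command depends on your init system (service mysql restart on SysV-style setups, systemctl restart mysqld or mysql on systemd distributions), so use whichever matches your environment:

    # Append skip-name-resolve right after the [mysqld] section header (GNU sed)
    sudo sed -i '/^\[mysqld\]/a skip-name-resolve' /etc/my.cnf

    # Restart MySQL so the option takes effect
    sudo service mysql restart          # SysV-style systems
    # sudo systemctl restart mysqld     # systemd-based systems

After the restart, you can confirm the option is active from a MySQL client with SHOW VARIABLES LIKE 'skip_name_resolve'; it should report ON. Note that with name resolution disabled, account entries in the grant tables whose Host part is a hostname (rather than an IP address or localhost) will no longer match, so such grants should be redefined by IP.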
