MongoDB High Availability Replica Set + Sharding Configuration

This article walks through a MongoDB sharded cluster deployment: creating the directories, configuring the replica sets, setting up the config servers, starting the router process, enabling sharding, and verifying that sharding works. Concrete commands are given for each step so readers can reproduce the setup.


This configuration was done on a test environment; without further ado, straight to the commands.

Steps:

1. Create directories
Data directories:
mkdir -p /mongodata/shard1_1    # data dir for shard1 replica member 1
mkdir -p /mongodata/shard1_2    # data dir for shard1 replica member 2
mkdir -p /mongodata/shard1_3    # data dir for shard1 replica member 3
mkdir -p /mongodata/shard2_1    # data dir for shard2 replica member 1
mkdir -p /mongodata/shard2_2    # data dir for shard2 replica member 2
mkdir -p /mongodata/shard2_3    # data dir for shard2 replica member 3
Three config server directories:
mkdir -p /mongodata/config/rep_1    # data dir for config server 1
mkdir -p /mongodata/config/rep_2    # data dir for config server 2
mkdir -p /mongodata/config/rep_3    # data dir for config server 3
Directory for all log files:
mkdir -p /mongodata/log
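
Equivalently, bash brace expansion creates the whole layout in one command (a convenience sketch, same paths as above):
mkdir -p /mongodata/shard{1,2}_{1,2,3} /mongodata/config/rep_{1,2,3} /mongodata/log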


2. Configure the replica sets
nohup mongod --shardsvr --replSet shard1 --dbpath /mongodata/shard1_1 --port 11111 --logpath /mongodata/log/mongod1_1.log --logappend &
nohup mongod --shardsvr --replSet shard1 --dbpath /mongodata/shard1_2 --port 22222 --logpath /mongodata/log/mongod1_2.log --logappend &
nohup mongod --shardsvr --replSet shard1 --dbpath /mongodata/shard1_3 --port 33333 --logpath /mongodata/log/mongod1_3.log --logappend &
nohup mongod --shardsvr --replSet shard2 --dbpath /mongodata/shard2_1 --port 44444 --logpath /mongodata/log/mongod2_1.log --logappend &
nohup mongod --shardsvr --replSet shard2 --dbpath /mongodata/shard2_2 --port 55555 --logpath /mongodata/log/mongod2_2.log --logappend &
nohup mongod --shardsvr --replSet shard2 --dbpath /mongodata/shard2_3 --port 60000 --logpath /mongodata/log/mongod2_3.log --logappend &


Log in with mongo --port 11111 and run:
config={_id:'shard1',members:[{_id:0,host:'127.0.0.1:11111'},{_id:1,host:'127.0.0.1:22222'},{_id:2,host:'127.0.0.1:33333'}]};
rs.initiate(config);

rs.status();




Log in with mongo --port 44444 and run:
config={_id:'shard2',members:[{_id:0,host:'127.0.0.1:44444'},{_id:1,host:'127.0.0.1:55555'},{_id:2,host:'127.0.0.1:60000'}]};
rs.initiate(config);

rs.status();
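
Before moving on, make sure each replica set has elected a primary: rs.status() should show one member with stateStr "PRIMARY". A minimal polling sketch for the mongo shell (a convenience helper, not part of the original steps):
while (rs.status().members.filter(function(m){ return m.stateStr === "PRIMARY"; }).length === 0) {
    print("waiting for primary election...");
    sleep(1000);  // mongo shell built-in, argument in milliseconds
}
print("primary elected");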




3. Configure the config servers
nohup mongod --configsvr --dbpath /mongodata/config/rep_1 --port 20000 --logpath /mongodata/log/config1.log --logappend &
nohup mongod --configsvr --dbpath /mongodata/config/rep_2 --port 30000 --logpath /mongodata/log/config2.log --logappend &
nohup mongod --configsvr --dbpath /mongodata/config/rep_3 --port 40000 --logpath /mongodata/log/config3.log --logappend &
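
Note: the three standalone config servers above follow the legacy mirrored (SCCC) layout, which MongoDB supports only through version 3.2. From 3.4 onward the config servers must themselves form a replica set (CSRS). A sketch of the equivalent CSRS setup, assuming the same ports and paths and a hypothetical set name configRS:
nohup mongod --configsvr --replSet configRS --dbpath /mongodata/config/rep_1 --port 20000 --logpath /mongodata/log/config1.log --logappend &
nohup mongod --configsvr --replSet configRS --dbpath /mongodata/config/rep_2 --port 30000 --logpath /mongodata/log/config2.log --logappend &
nohup mongod --configsvr --replSet configRS --dbpath /mongodata/config/rep_3 --port 40000 --logpath /mongodata/log/config3.log --logappend &

Then connect with mongo --port 20000 and initiate the set with configsvr:true:
rs.initiate({_id:'configRS', configsvr:true, members:[{_id:0,host:'127.0.0.1:20000'},{_id:1,host:'127.0.0.1:30000'},{_id:2,host:'127.0.0.1:40000'}]});

In that case mongos in step 4 would be pointed at the set as --configdb configRS/127.0.0.1:20000,127.0.0.1:30000,127.0.0.1:40000.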


4. Configure the sharding router (mongos route process)
nohup mongos --configdb 127.0.0.1:20000,127.0.0.1:30000,127.0.0.1:40000 --port 50000 --chunkSize 2 --logpath /mongodata/log/mongos.log --logappend &
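
--chunkSize 2 deliberately sets a tiny 2 MB chunk size so that the 20,000-row test in step 7 triggers splits and migrations quickly; it is not a production setting (the default is 64 MB on older releases). On versions where mongos no longer accepts --chunkSize, the same setting can be applied through the config database (a sketch, value in MB):
mongo --port 50000
use config;
db.settings.updateOne({_id:"chunksize"}, {$set:{value:2}}, {upsert:true});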


5. Configure the sharding cluster
mongo --port 50000
use admin;
sh.addShard("shard1/127.0.0.1:11111");
sh.addShard("shard2/127.0.0.1:44444");
sh.status();
Note that only one member of each replica set needs to be listed in addShard; mongos discovers the remaining members automatically.




6. Enable sharding

This test uses the test database and the test collection. These, like addShard, are admin commands, so keep running them from the admin database through mongos:
db.runCommand({enablesharding:"test"});
db.runCommand({shardCollection:"test.test",key:{"_id":1}});
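
One caveat: {_id:1} is a monotonically increasing shard key, so new inserts always target the chunk with the highest key range and initially land on one shard, leaving the balancer to redistribute chunks afterwards. Where even write distribution matters, a hashed shard key is the usual alternative. A sketch (this would replace, not follow, the shardCollection command above, since a collection's shard key cannot simply be changed once set):
db.runCommand({shardCollection:"test.test", key:{"_id":"hashed"}});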


7. Verify sharding
use test;
for(var i=1; i<=20000; i++){
    db.test.insert({_id:i,name:"test",age:i,uid:i+1});
}  // insert 20,000 rows of data
db.test.stats();
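
To see how the 20,000 documents ended up spread across the two shards, the shell helper below is handy (in addition to the per-shard chunk counts shown by sh.status()):
db.test.getShardDistribution();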





8. Useful commands
db.printShardingStatus();    // view detailed sharding status; printShardingStatus(db.getSisterDB("config"), 1) prints the fully verbose form
db.runCommand({listShards:1});    // view shard information (run against the admin database)

{"t":{"$date":"2025-06-22T16:40:00.492+08:00"},"s":"I", "c":"NETWORK", "id":4915701, "ctx":"main","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":17},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":17},"outgoing":{"minWireVersion":6,"maxWireVersion":17},"isInternalClient":true}}} {"t":{"$date":"2025-06-22T16:40:00.493+08:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"} {"t":{"$date":"2025-06-22T16:40:00.493+08:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."} {"t":{"$date":"2025-06-22T16:40:00.494+08:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService","namespace":"config.tenantMigrationDonors"}} {"t":{"$date":"2025-06-22T16:40:00.494+08:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationRecipientService","namespace":"config.tenantMigrationRecipients"}} {"t":{"$date":"2025-06-22T16:40:00.494+08:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"ShardSplitDonorService","namespace":"config.tenantSplitDonors"}} {"t":{"$date":"2025-06-22T16:40:00.494+08:00"},"s":"I", "c":"CONTROL", "id":5945603, "ctx":"main","msg":"Multi threading initialized"} {"t":{"$date":"2025-06-22T16:40:00.494+08:00"},"s":"I", "c":"CONTROL", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":12245,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"ADM01A"}} {"t":{"$date":"2025-06-22T16:40:00.494+08:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"6.0.24","gitVersion":"1b052b94a23863fd12be97aaa4e4b1d96456e5cc","openSSLVersion":"OpenSSL 1.0.2o-fips 27 Mar 2018","modules":[],"allocator":"tcmalloc","environment":{"distmod":"amazon2","distarch":"aarch64","target_arch":"aarch64"}}}} {"t":{"$date":"2025-06-22T16:40:00.494+08:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Kylin","version":"V10"}}} {"t":{"$date":"2025-06-22T16:40:00.494+08:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{}}} {"t":{"$date":"2025-06-22T16:40:00.494+08:00"},"s":"E", "c":"CONTROL", "id":20568, "ctx":"initandlisten","msg":"Error setting up listener","attr":{"error":{"code":9001,"codeName":"SocketException","errmsg":"Address already in use"}}} {"t":{"$date":"2025-06-22T16:40:00.494+08:00"},"s":"I", "c":"REPL", "id":4784900, "ctx":"initandlisten","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":15000}} {"t":{"$date":"2025-06-22T16:40:00.495+08:00"},"s":"I", "c":"REPL", "id":4794602, "ctx":"initandlisten","msg":"Attempting to enter quiesce mode"} {"t":{"$date":"2025-06-22T16:40:00.495+08:00"},"s":"I", "c":"-", "id":6371601, "ctx":"initandlisten","msg":"Shutting down the FLE Crud thread pool"} {"t":{"$date":"2025-06-22T16:40:00.495+08:00"},"s":"I", "c":"COMMAND", "id":4784901, "ctx":"initandlisten","msg":"Shutting down the MirrorMaestro"} {"t":{"$date":"2025-06-22T16:40:00.495+08:00"},"s":"I", 
"c":"SHARDING", "id":4784902, "ctx":"initandlisten","msg":"Shutting down the WaitForMajorityService"} {"t":{"$date":"2025-06-22T16:40:00.495+08:00"},"s":"I", "c":"NETWORK", "id":4784905, "ctx":"initandlisten","msg":"Shutting down the global connection pool"} {"t":{"$date":"2025-06-22T16:40:00.495+08:00"},"s":"I", "c":"NETWORK", "id":4784918, "ctx":"initandlisten","msg":"Shutting down the ReplicaSetMonitor"} {"t":{"$date":"2025-06-22T16:40:00.495+08:00"},"s":"I", "c":"SHARDING", "id":4784921, "ctx":"initandlisten","msg":"Shutting down the MigrationUtilExecutor"} {"t":{"$date":"2025-06-22T16:40:00.495+08:00"},"s":"I", "c":"ASIO", "id":22582, "ctx":"MigrationUtil-TaskExecutor","msg":"Killing all outstanding egress activity."} {"t":{"$date":"2025-06-22T16:40:00.495+08:00"},"s":"I", "c":"COMMAND", "id":4784923, "ctx":"initandlisten","msg":"Shutting down the ServiceEntryPoint"} {"t":{"$date":"2025-06-22T16:40:00.495+08:00"},"s":"I", "c":"CONTROL", "id":4784928, "ctx":"initandlisten","msg":"Shutting down the TTL monitor"} {"t":{"$date":"2025-06-22T16:40:00.495+08:00"},"s":"I", "c":"CONTROL", "id":6278511, "ctx":"initandlisten","msg":"Shutting down the Change Stream Expired Pre-images Remover"} {"t":{"$date":"2025-06-22T16:40:00.495+08:00"},"s":"I", "c":"CONTROL", "id":4784929, "ctx":"initandlisten","msg":"Acquiring the global lock for shutdown"} {"t":{"$date":"2025-06-22T16:40:00.495+08:00"},"s":"I", "c":"-", "id":4784931, "ctx":"initandlisten","msg":"Dropping the scope cache for shutdown"} {"t":{"$date":"2025-06-22T16:40:00.495+08:00"},"s":"I", "c":"CONTROL", "id":20565, "ctx":"initandlisten","msg":"Now exiting"} {"t":{"$date":"2025-06-22T16:40:00.495+08:00"},"s":"I", "c":"CONTROL", "id":8423404, "ctx":"initandlisten","msg":"mongod shutdown complete","attr":{"Summary of time elapsed":{"Statistics":{"Enter terminal shutdown":"0 ms","Step down the replication coordinator for shutdown":"1 ms","Time spent in quiesce mode":"0 ms","Shut down FLE Crud subsystem":"0 ms","Shut down MirrorMaestro":"0 ms","Shut down WaitForMajorityService":"0 ms","Shut down the global connection pool":"0 ms","Shut down the replica set monitor":"0 ms","Shut down the migration util executor":"0 ms","Shut down the TTL monitor":"0 ms","Shut down expired pre-images remover":"0 ms","Shut down full-time data capture":"0 ms","shutdownTask total elapsed time":"1 ms"}}}} {"t":{"$date":"2025-06-22T16:40:00.495+08:00"},"s":"I", "c":"CONTROL", "id":23138, "ctx":"initandlisten","msg":"Shutting down","attr":{"exitCode":48}}
06-23
<think>我们正在分析MongoDB启动失败的问题,具体错误是"SocketException:Addressalreadyinuse"。这个错误表示MongoDB试图绑定的端口(默认27017)已经被其他进程占用。以下将逐步分析原因并提供解决方案。###原因分析1.**端口冲突**:MongoDB默认使用27017端口,如果该端口已被其他应用程序占用,则启动失败。这是最常见的原因。2.**MongoDB进程未完全退出**:可能之前运行的MongoDB实例没有正确关闭,导致端口仍然被占用。3.**其他应用程序占用端口**:某些其他服务或应用程序可能正在使用27017端口。###解决步骤####步骤1:确认端口占用情况在KylinV10(基于Linux)系统中,可以使用以下命令检查27017端口的占用情况:```bashsudolsof-i:27017```或者```bashsudonetstat-tulnp|grep27017```####步骤2:终止占用端口的进程如果发现端口被占用,可以终止该进程。假设占用进程的PID为12345(实际替换为查询到的PID):```bashsudokill-912345```####步骤3:重新启动MongoDB终止占用端口的进程后,尝试重新启动MongoDB服务:```bashsudosystemctlstartmongod#如果使用systemd管理服务```或者直接运行mongod(根据你的安装方式):```bashmongod--config/etc/mongod.conf#指定配置文件启动```####步骤4:更改MongoDB端口(可选)如果必须保留占用27017端口的服务,可以更改MongoDB的监听端口。编辑MongoDB配置文件(通常位于`/etc/mongod.conf`),修改`net.port`选项:```yamlnet:port:27018#更改为其他可用端口```然后重启MongoDB服务。###注意事项-**权限问题**:确保MongoDB有权限访问指定的端口(尤其是1024以下的端口需要root权限)。-**SELinux/防火墙**:如果更改了端口,需确保防火墙允许新端口通过。###总结端口占用是MongoDB启动失败的常见原因。通过检查并释放端口,或更改MongoDB配置,可以解决该问题[^3][^5]。§§相关问题§§1.如何检查Linux系统中某个端口的占用情况?2.如何配置MongoDB使用非默认端口?3.除了端口占用,还有哪些常见原因导致MongoDB启动失败?</think>### MongoDB启动失败分析:端口占用问题 (SocketException: Address already in use) #### 问题原因分析 当MongoDB启动时出现 `SocketException: Address already in use` 错误,根本原因是**27017端口已被其他进程占用**。在Kylin V10系统上,可能的具体原因包括: 1. **残留MongoDB进程未关闭**: - 之前运行的MongoDB实例未正常终止 - 服务崩溃后进程仍在后台运行 2. **其他服务占用端口**: - 其他数据库服务(如Redis、MySQL)可能使用了相同端口 - 安全软件或容器服务占用端口 3. **配置问题**: - 多实例冲突(多个MongoDB实例尝试绑定相同端口) - 未正确关闭先前服务就启动新实例[^3][^5] #### 解决方案步骤 ##### 1. 确认端口占用情况 在终端执行: ```bash sudo lsof -i :27017 # 或 sudo netstat -tulnp | grep 27017 ``` 输出示例: ``` COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME mongod 1234 mongodb 11u IPv4 12345 0t0 TCP *:27017 (LISTEN) ``` ##### 2. 终止占用进程 ```bash sudo kill -9 <PID> # 替换<PID>为实际进程ID sudo killall mongod # 终止所有MongoDB进程 ``` ##### 3. 清理锁文件(重要) ```bash sudo rm /var/lib/mongodb/mongod.lock # 默认锁文件路径 ``` ##### 4. 更换启动端口(可选) 修改配置文件 `/etc/mongod.conf`: ```yaml net: port: 27018 # 改为空闲端口 ``` ##### 5. 重新启动服务 ```bash sudo systemctl start mongod sudo systemctl status mongod # 检查状态 ``` #### 特殊情况处理 - **如果占用进程是其他MongoDB实例**:确保使用 `--dbpath` 指定不同数据目录 - **系统服务冲突**:检查是否有其他服务配置为使用27017端口 - **防火墙限制**:检查防火墙是否允许端口访问: ```bash sudo firewall-cmd --list-ports sudo firewall-cmd --add-port=27017/tcp --permanent ``` #### 预防措施 1. 使用服务管理命令停止MongoDB而非直接终止进程: ```bash sudo systemctl stop mongod ``` 2. 定期检查服务状态: ```bash sudo systemctl is-active mongod ``` 3. 为测试环境配置专用端口范围 > **注意**:Kylin V10作为国产系统,其安全机制可能更严格,需确保SELinux或安全策略未阻止端口访问[^3][^5]。