1. Start the mongod instances as a replica set
1.1 Create the replica set directories
[mgousr01@vm1 ~]$ mkdir -p mongorep/{mg17/{bin,conf,data,logs,pid},mg27/{bin,conf,data,logs,pid},mg37/{bin,conf,data,logs,pid}}
[mgousr01@vm1 ~]$ ls -R mongorep/
mongorep/:
mg17 mg27 mg37
mongorep/mg17:
bin conf data logs pid
mongorep/mg27:
bin conf data logs pid
mongorep/mg37:
bin conf data logs pid
1.2 Configure the replica set (every instance needs its own config file; mg17 shown as the example)
[mgousr01@vm1 ~]$ vi /appbase/users/mgousr01/mongorep/mg17/conf/mg17.conf
dbpath=/appbase/users/mgousr01/mongorep/mg17/data/
logpath=/appbase/users/mgousr01/mongorep/mg17/logs/mg17.log
pidfilepath=/appbase/users/mgousr01/mongorep/mg17/pid/mg17.pid
directoryperdb=true
logappend=true
bind_ip=192.168.157.128
port=47017
oplogSize=10240
fork=true
replSet=rstl
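Since the three config files differ only in instance name and port, they can be generated in one pass instead of edited by hand. A minimal sketch (for illustration it writes under $PWD/mongorep rather than the /appbase/users/mgousr01 prefix above; BASE and BIND_IP are assumptions to adjust for your environment):

```shell
#!/bin/sh
# Sketch: write one config file per replica-set member.
# BASE and BIND_IP are assumptions -- adjust for your environment.
BASE="$PWD/mongorep"
BIND_IP="192.168.157.128"

for port in 47017 47027 47037; do
    name="mg${port#470}"            # 47017 -> mg17, 47027 -> mg27, 47037 -> mg37
    mkdir -p "$BASE/$name/bin" "$BASE/$name/conf" "$BASE/$name/data" \
             "$BASE/$name/logs" "$BASE/$name/pid"
    cat > "$BASE/$name/conf/$name.conf" <<EOF
dbpath=$BASE/$name/data/
logpath=$BASE/$name/logs/$name.log
pidfilepath=$BASE/$name/pid/$name.pid
directoryperdb=true
logappend=true
bind_ip=$BIND_IP
port=$port
oplogSize=10240
fork=true
replSet=rstl
EOF
done
```

Each generated file matches the mg17.conf shown above, with only the port and paths substituted.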
1.3 Start the mongod instances
[mgousr01@vm1 ~]$ vi /appbase/users/mgousr01/mongorep/mg17/bin/start_mongodb.sh
mongod -f /appbase/users/mgousr01/mongorep/mg17/conf/mg17.conf
[mgousr01@vm1 ~]$ chmod +x /appbase/users/mgousr01/mongorep/mg17/bin/start_mongodb.sh
[mgousr01@vm1 ~]$ /appbase/users/mgousr01/mongorep/mg17/bin/start_mongodb.sh
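The per-instance start script above has to be repeated for mg27 and mg37. A sketch of a combined helper (it assumes mongod is on PATH and the configs live under $HOME/mongorep; this snippet only writes start_all.sh, run it yourself afterwards):

```shell
#!/bin/sh
# Sketch: generate a helper that starts all three members in turn.
# Assumes mongod is on PATH and configs live under $HOME/mongorep.
cat > start_all.sh <<'EOF'
#!/bin/sh
for name in mg17 mg27 mg37; do
    mongod -f "$HOME/mongorep/$name/conf/$name.conf"
done
EOF
chmod +x start_all.sh
```

Because each config sets fork=true, the loop returns as soon as each mongod has daemonized.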
1.4 Check the listening ports of the mongod instances
[mgousr01@vm1 ~]$ ps -ef|grep mongod|grep -v grep
mgousr01 3688 1 0 11:04 ? 00:00:01 mongod -f /appbase/users/mgousr01/mongorep/mg17/conf/mg17.conf
mgousr01 3707 1 0 11:05 ? 00:00:00 mongod -f /appbase/users/mgousr01/mongorep/mg27/conf/mg27.conf
mgousr01 3726 1 1 11:05 ? 00:00:00 mongod -f /appbase/users/mgousr01/mongorep/mg37/conf/mg37.conf
[mgousr01@vm1 ~]$ netstat -tnpl|grep mongod
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 192.168.157.128:47027 0.0.0.0:* LISTEN 3707/mongod
tcp 0 0 192.168.157.128:47037 0.0.0.0:* LISTEN 3726/mongod
tcp 0 0 192.168.157.128:47017 0.0.0.0:* LISTEN 3688/mongod
Note: the output shows that all replica set members have started, listening on ports 47017, 47027, and 47037.
2. Connect to one of the replica set members
[mgousr01@vm1 ~]$ mongo 192.168.157.128:47017
MongoDB shell version: 3.0.3
connecting to: 192.168.157.128:47017/test
>
3. Initialize the replica set (one primary, two secondaries)
> cfg = {
    "_id" : "rstl",
    "version" : 1,
    "members" : [
        { "_id" : 0, "host" : "192.168.157.128:47017" },
        { "_id" : 1, "host" : "192.168.157.128:47027" },
        { "_id" : 2, "host" : "192.168.157.128:47037" }
    ]
}
> rs.initiate(cfg)
{ "ok" : 1 }
rstl:PRIMARY> rs.status()
{
    "set" : "rstl",
    "date" : ISODate("2015-12-03T09:14:07.046Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.157.128:47017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 4447,
            "optime" : Timestamp(1449131360, 1),
            "optimeDate" : ISODate("2015-12-03T08:29:20Z"),
            "electionTime" : Timestamp(1449131364, 1),
            "electionDate" : ISODate("2015-12-03T08:29:24Z"),
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "192.168.157.128:47027",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 2686,
            "optime" : Timestamp(1449131360, 1),
            "optimeDate" : ISODate("2015-12-03T08:29:20Z"),
            "lastHeartbeat" : ISODate("2015-12-03T09:14:06.765Z"),
            "lastHeartbeatRecv" : ISODate("2015-12-03T09:14:06.899Z"),
            "pingMs" : 0,
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "192.168.157.128:47037",
            "health" : 1,
            "state" : 5,
            "stateStr" : "STARTUP2",
            "uptime" : 2686,
            "optime" : Timestamp(0, 0),
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2015-12-03T09:14:06.723Z"),
            "lastHeartbeatRecv" : ISODate("2015-12-03T09:14:06.852Z"),
            "pingMs" : 0,
            "configVersion" : 1
        }
    ],
    "ok" : 1
}
Note: the third member is stuck in "STARTUP2". STARTUP2 is the state a member reports during initial sync, so a member that never leaves it means initialization did not complete; here the cause was insufficient disk space for the preallocated oplog. The fix is to lower the oplogSize startup parameter or add disk space. After that, a healthy member list looks like this:
rstl:PRIMARY> rs.status();
{
    "set" : "rstl",
    "date" : ISODate("2015-12-04T06:31:40.931Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.157.128:47017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 71,
            "optime" : Timestamp(1449210685, 1),
            "optimeDate" : ISODate("2015-12-04T06:31:25Z"),
            "electionTime" : Timestamp(1449210687, 1),
            "electionDate" : ISODate("2015-12-04T06:31:27Z"),
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "192.168.157.128:47027",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 14,
            "optime" : Timestamp(1449210685, 1),
            "optimeDate" : ISODate("2015-12-04T06:31:25Z"),
            "lastHeartbeat" : ISODate("2015-12-04T06:31:39.964Z"),
            "lastHeartbeatRecv" : ISODate("2015-12-04T06:31:39.973Z"),
            "pingMs" : 0,
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "192.168.157.128:47037",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 14,
            "optime" : Timestamp(1449210685, 1),
            "optimeDate" : ISODate("2015-12-04T06:31:25Z"),
            "lastHeartbeat" : ISODate("2015-12-04T06:31:39.964Z"),
            "lastHeartbeatRecv" : ISODate("2015-12-04T06:31:39.973Z"),
            "pingMs" : 0,
            "configVersion" : 1
        }
    ],
    "ok" : 1
}
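A quick sanity check on why the disk fills up (assuming oplogSize is given in megabytes, which is how mongod interprets it): with three members on one host, the preallocated oplogs alone need about 30 GB before any user data is stored.

```shell
# With oplogSize=10240 each member preallocates ~10 GB of oplog.
oplog_mb=10240
members=3
echo "total oplog preallocation: $((oplog_mb * members)) MB"
# prints: total oplog preallocation: 30720 MB
```

This is why the note above suggests either shrinking oplogSize or adding disk when several members share one machine.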
4. Verify that data replication works
rstl:PRIMARY> use soho
switched to db soho
rstl:PRIMARY> db.food.insert({name:"egg",price:38})
WriteResult({ "nInserted" : 1 })
rstl:PRIMARY> show collections
food
system.indexes
rstl:PRIMARY> db.food.find()
{ "_id" : ObjectId("5664fdfe1830846a3331ce02"), "name" : "egg", "price" : 38 }
[mgousr01@vm1 ~]$ mongo 192.168.157.128:47027
MongoDB shell version: 3.0.3
connecting to: 192.168.157.128:47027/test
rstl:SECONDARY> show dbs;
2015-12-07T11:34:05.753+0800 E QUERY Error: listDatabases failed:{ "note" : "from execCommand", "ok" : 0, "errmsg" : "not master" }
at Error ()
at Mongo.getDBs (src/mongo/shell/mongo.js:47:15)
at shellHelper.show (src/mongo/shell/utils.js:630:33)
at shellHelper (src/mongo/shell/utils.js:524:36)
at (shellhelp2):1:1 at src/mongo/shell/mongo.js:47
rstl:SECONDARY> rs.slaveOk()
rstl:SECONDARY> show dbs;
local 0.203GB
soho 0.078GB
rstl:SECONDARY> use soho
switched to db soho
rstl:SECONDARY> show collections
food
system.indexes
rstl:SECONDARY> db.food.find();
{ "_id" : ObjectId("5664fdfe1830846a3331ce02"), "name" : "egg", "price" : 38 }
The MongoDB replica set is now up and replicating. A follow-up post will examine the auto-failover behavior of the two common architectures (one primary + two secondaries, and one primary + one secondary + one arbiter) when the primary goes down.
Source: ITPUB blog, http://blog.itpub.net/20801486/viewspace-1853447/ (please credit the source when reprinting).