1. Configure the replica set used by shard1:
1) 172.16.0.127
vi shard11.conf
replSet=shard1
port=11721
dbpath=/home/mongodb/data/shard11
logpath=/home/mongodb/log/shard11.log
logappend=true
fork=true
oplogSize=100
shardsvr=true
directoryperdb=true
./mongod -f ../conf/shard11.conf
2) 172.16.0.115
vi shard12.conf
replSet=shard1
port=11721
dbpath=/home/mongodb/data/shard12
logpath=/home/mongodb/log/shard12.log
logappend=true
fork=true
oplogSize=100
shardsvr=true
directoryperdb=true
./mongod -f ../conf/shard12.conf
3) 172.16.0.124
vi shard13.conf
replSet=shard1
port=11721
dbpath=/home/mongodb/data/shard13
logpath=/home/mongodb/log/shard13.log
logappend=true
fork=true
oplogSize=100
shardsvr=true
directoryperdb=true
./mongod -f ../conf/shard13.conf
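At this point it can help to confirm that each mongod started correctly and is reachable; a minimal check against the first member (host and port taken from the configs above, repeat for the other members if desired):
[root@localhost bin]# ./mongo 172.16.0.127:11721
> db.runCommand({ ping: 1 })    // returns { "ok" : 1 } if the mongod is up and responding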
2. Configure the other servers in the same way
shard2:172.16.0.103:11722 172.16.0.122:11722 172.16.0.125:11722
shard3:172.16.0.121:11723 172.16.0.123:11723 172.16.0.114:11723
3. Explanation of the parameters:
dbpath: directory where the data files are stored
logpath: path of the log file
pidfilepath: PID file, convenient for stopping mongodb
directoryperdb: store each database in its own sub-directory, named after the database
logappend: append to the log file instead of overwriting it
replSet: name of the replica set
bind_ip: IP address that mongodb binds to
port: port used by the mongodb process, 27017 by default
oplogSize: maximum size of the mongodb oplog, in MB; defaults to 5% of the free disk space
fork: run the process in the background
noprealloc: do not preallocate data files
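If you want to confirm which of these options a running mongod actually picked up, you can ask it from the mongo shell; a quick sanity check (the output layout varies by version):
> db.adminCommand({ getCmdLineOpts: 1 })    // shows the config file path and the parsed startup options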
4. Initialize the replica set
Configure the primary, secondary, and arbiter nodes. You can connect to mongodb from any client, or simply pick one of the three nodes and connect to it directly.
Connect to one of the mongod instances with mongo and run:
- [root@localhost bin]# ./mongo 172.16.0.115:11721
- MongoDB shell version: 2.4.9
- connecting to: 172.16.0.115:11721/test
- > use admin
- switched to db admin
- > config={_id:'shard1',members:[{_id:0,host:"172.16.0.127:11721",priority:2},{_id:1,host:"172.16.0.124:11721",priority:1},{_id:2,host:"172.16.0.115:11721",arbiterOnly:true}]}
- {
- "_id" : "shard1",
- "members" : [
- {
- "_id" : 0,
- "host" : "172.16.0.127:11721",
- "priority" : 2
- },
- {
- "_id" : 1,
- "host" : "172.16.0.124:11721",
- "priority" : 1
- },
- {
- "_id" : 2,
- "host" : "172.16.0.115:11721",
- "arbiterOnly" : true
- }
- ]
- }
- > rs.initiate(config)    // apply the configuration
- {
- "info" : "Config now saved locally. Should come online in about a minute.",
- "ok" : 1
- }
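To confirm that the members reach the expected states after rs.initiate(), you can run rs.status() on any member; a quick check (the primary election may take up to a minute):
> rs.status()    // 172.16.0.127 should become PRIMARY, 172.16.0.124 SECONDARY, and 172.16.0.115 ARBITER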
5. Initialize shard2 and shard3 in the same way
Problem encountered:
We may find that the node that should be the primary shows UNKNOWN and the arbiter shows STARTUP2, with the error:
- Can't take a write lock while out of disk space
How do we solve this? After a round of searching on Baidu, the fix is to delete the lock file:
rm /var/lib/mongodb/mongod.lock
(mongod.lock lives under the dbpath, e.g. /home/mongodb/data/shard11/mongod.lock in this setup.)
It is also best to delete the journal files, which take up a lot of disk space, and then restart the mongodb service.
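After restarting, the member states can be checked again from the mongo shell; a compact way to print just the state of each member:
> rs.status().members.forEach(function (m) { print(m.name + " : " + m.stateStr); })    // all members should leave UNKNOWN/STARTUP2 and show PRIMARY/SECONDARY/ARBITER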
6. Configure the three config servers and start the config nodes
1)172.16.0.115
vi config.conf
dbpath=/usr/local/mongodb/data/config
configsvr = true
port = 40000
logpath =/usr/local/mongodb/log/config.log
logappend = true
fork = true
./mongod -f ../conf/config.conf
2) Start config servers on 172.16.0.103 and 172.16.0.114 in the same way
7. Start the router nodes (mongos)
1)172.16.0.115
vi mongos_config.conf
configdb=172.16.0.103:40000,172.16.0.114:40000,172.16.0.115:40000
port = 50000
chunkSize = 5
logpath =/usr/local/mongodb/log/mongos.log
logappend=true
fork = true
./mongos -f ../conf/mongos_config.conf
2) Start mongos on 172.16.0.103 and 172.16.0.114 in the same way
8. Configure sharding
Connect to one of the mongos processes and switch to the admin database, then do the following configuration.
1. Connect to mongos and switch to admin
./mongo 172.16.0.115:50000/admin    // you must connect to a router (mongos) node here
>db
admin
2. Add the shards
If a shard is a single server, add it with a command of the form > db.runCommand( { addshard : "<serverhostname>[:<port>]" } ); if the shard is a replica set, use the format replicaSetName/<serverhostname>[:port][,serverhostname2[:port],...]. For this example, run:
- mongos> db.runCommand( { addshard : "shard1/172.16.0.127:11721,172.16.0.115:11721,172.16.0.124:11721",name:"shard1",maxsize:20480});
- { "shardAdded" : "shard1", "ok" : 1 }
- mongos> db.runCommand( { addshard : "shard2/172.16.0.103:11722,172.16.0.122:11722,172.16.0.125:11722",name:"shard2",maxsize:20480});
- { "shardAdded" : "shard2", "ok" : 1 }
- mongos> db.runCommand( { addshard : "shard3/172.16.0.121:11723,172.16.0.123:11723,172.16.0.114:11723",name:"shard3",maxsize:20480});
- { "shardAdded" : "shard3", "ok" : 1 }
3. Check:
mongos> db.runCommand({listshards:1})    // each shard has an arbiter node, which is why only two hosts are listed per shard
{
"shards" : [
{
"_id" : "shard1",
"host" : "shard1/172.16.0.124:11721,172.16.0.127:11721"
},
{
"_id" : "shard2",
"host" : "shard2/172.16.0.122:11722,172.16.0.125:11722"
},
{
"_id" : "shard3",
"host" : "shard3/172.16.0.121:11723,172.16.0.123:11723"
}
],
"ok" : 1
}
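Besides listshards, the shell helper sh.status() (equivalently db.printShardingStatus()) prints a friendlier summary of the cluster; a quick way to review the shards from mongos:
mongos> sh.status()    // lists the three shards and, once sharding is enabled, the sharded databases and collections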
9. Database sharding and collection sharding
1. Enable sharding for a database
Command:
> db.runCommand( { enablesharding : "<dbname>" } );
Running this command allows the database to span shards; if you skip this step, the database stays on a single shard. Once sharding is enabled for a database, its collections can be placed on different shards, but each individual collection is still stored on a single shard. To shard a single collection as well, some extra work is needed on the collection itself.
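For example, to enable sharding on the test database that is used in the example at the end of this article (assuming that is the database you want to shard):
mongos> db.runCommand( { enablesharding : "test" } )    // must be run against admin on a mongos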
2. Shard a collection
To shard a single collection, you must specify a shard key for it, using the following command:
> db.runCommand( { shardcollection : "<namespace>", key : <shardkeypatternobject> });
Notes:
a. The system automatically creates an index on the shard key of a sharded collection (the user can also create it ahead of time; see the example below).
b. A sharded collection can have only one unique index, and it must be on the shard key; no other unique indexes are allowed on the collection.
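For example, if test.c1 will be sharded on {id: 1} as in the next section, the shard-key index can optionally be created ahead of time (a sketch; in the 2.4-era shell this is ensureIndex):
mongos> use test
mongos> db.c1.ensureIndex({ id: 1 })    // plain index on the shard key; make it unique only if id really is unique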
10. Collection sharding example
> db.runCommand( { shardcollection : "test.c1", key : {id: 1} } )
> for (var i = 1; i <= 200003; i++) db.c1.save({id:i,value1:"1234567890",value2:"1234567890",value3:"1234567890",value4:"1234567890"});
> db.c1.stats()    // this command shows the storage status of the collection
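To see how the inserted documents are spread across the shards, the shell helper getShardDistribution() gives a per-shard breakdown and is a convenient complement to db.c1.stats():
mongos> db.c1.getShardDistribution()    // shows data size, document count, and estimated chunk count per shard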