【12.19】MongoDB (Part 2)

This article walks through MongoDB sharding: what it is, how to build a sharded cluster (config servers, shard replica sets and mongos routers), how to test sharding, and how to back up and restore data. Sharding spreads a large data set across multiple servers to improve storage capacity and access efficiency. A MongoDB sharded cluster consists of mongos, config servers and shards, providing both high availability and horizontal scalability.


21.36 MongoDB sharding overview

  • Sharding splits a database up, distributing large collections across multiple servers. For example, 100 GB of data can be split into 10 pieces stored on 10 servers, so each machine holds only about 10 GB.
  • A mongos process (the router) handles all reads and writes against the sharded data, which makes mongos the core of the sharding architecture. Clients do not know (or care) whether the data is sharded; they simply send their read and write operations to mongos.
  • Although sharding spreads the data across many servers, every shard still needs a standby role (a replica set), so that the data stays highly available.
  • When the system needs more space or resources, sharding lets us scale out on demand: just add more machines running mongod to the sharded cluster.

MongoDB sharding architecture diagram: (image not reproduced here)
MongoDB sharding concepts:

  • mongos: the entry point for all requests to the sharded cluster. Every request is coordinated through mongos, so the application does not need its own routing layer; mongos itself is the request dispatcher and forwards each data request to the appropriate shard. In production there are usually several mongos instances serving as entry points, so that all MongoDB traffic does not stop when one of them goes down. (A minimal connection sketch follows this list.)
  • config server: stores the cluster metadata (routing and shard configuration) for all databases. mongos does not persist the shard and routing information itself, it only caches it in memory; the config servers hold the authoritative copy. When mongos starts for the first time, or is restarted, it loads the configuration from the config servers, and whenever the configuration changes the config servers notify every mongos so it can update its state and keep routing correctly. In production there are usually multiple config servers, because they hold the sharding metadata and must not lose it!
  • shard: a MongoDB instance that stores part of a collection's data. Each shard is a standalone mongod or a replica set; in production, every shard should be a replica set.
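As a quick illustration of the point above (a sketch only; the host and port follow the layout used later in this article, where mongos listens on 20000), a client connects to mongos exactly as it would to a single mongod and can inspect the shards behind it:

[root@arslinux-01 ~]# mongo --host 192.168.194.130 --port 20000
mongos> sh.status()                            // overview of shards, databases and chunks
mongos> db.adminCommand({ listShards: 1 })     // list the shard replica sets behind this mongos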

21.37/21.38/21.39 MongoDB sharding setup

1. Preparation
Three machines: A, B, C
A runs: mongos, config server, shard1 replica set primary, shard2 arbiter, shard3 secondary
B runs: mongos, config server, shard1 secondary, shard2 primary, shard3 arbiter
C runs: mongos, config server, shard1 arbiter, shard2 secondary, shard3 primary
Port allocation: mongos 20000, config server 21000, shard1 27001, shard2 27002, shard3 27003
On all three machines, either stop firewalld and disable SELinux, or add rules for the ports above (example commands below)
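One way to do this on each machine (a sketch; either disable the services entirely, as in this lab setup, or open just the ports listed above):

[root@arslinux-01 ~]# systemctl stop firewalld && systemctl disable firewalld
[root@arslinux-01 ~]# setenforce 0        # disables SELinux until the next reboot
[root@arslinux-01 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# alternatively, keep firewalld running and open only the required ports:
[root@arslinux-01 ~]# firewall-cmd --permanent --add-port={20000,21000,27001,27002,27003}/tcp
[root@arslinux-01 ~]# firewall-cmd --reload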

2. On each of the three machines, create the directories needed by each role

[root@arslinux-01 ~]# mkdir -p /data/mongodb/mongos/log
[root@arslinux-01 ~]# mkdir -p /data/mongodb/config/{data,log}
[root@arslinux-01 ~]# mkdir -p /data/mongodb/shard1/{data,log}
[root@arslinux-01 ~]# mkdir -p /data/mongodb/shard2/{data,log}
[root@arslinux-01 ~]# mkdir -p /data/mongodb/shard3/{data,log}

3. Sharding setup — config server
1) Create the replica set for the config servers by adding a config file (do this on all three machines, changing bind_ip to each machine's own IP)

[root@arslinux-01 ~]# mkdir /etc/mongod/
[root@arslinux-01 ~]# vim /etc/mongod/config.conf
pidfilepath = /var/run/mongodb/configsrv.pid
dbpath = /data/mongodb/config/data
logpath = /data/mongodb/config/log/configsrv.log
logappend = true
bind_ip = 192.168.194.130
port = 21000
fork = true
configsvr = true 		# marks this instance as a config server
replSet=configs 		# replica set name
maxConns=20000 		# maximum number of connections

2) Start the config server on each of the three machines (a quick check that it is up follows the output below)

[root@arslinux-01 ~]# mongod -f /etc/mongod/config.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 15501
child process started successfully, parent exiting
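A simple way to confirm the config server came up (a sketch; the bracketed grep pattern just avoids matching the grep process itself):

[root@arslinux-01 ~]# ps aux | grep '[c]onfig.conf'      # the mongod config-server process
[root@arslinux-01 ~]# netstat -lntp | grep 21000         # confirm it is listening on port 21000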

3) Log in on port 21000 of any one of the machines and initialize the replica set

[root@arslinux-01 ~]# mongo --host 192.168.194.130 --port 21000
MongoDB shell version v3.4.21
connecting to: mongodb://192.168.194.130:21000/
MongoDB server version: 3.4.21
Server has startup warnings: 
2019-12-19T10:45:47.344+0800 I CONTROL  [initandlisten] 
2019-12-19T10:45:47.344+0800 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-12-19T10:45:47.344+0800 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2019-12-19T10:45:47.344+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2019-12-19T10:45:47.344+0800 I CONTROL  [initandlisten] 
2019-12-19T10:45:47.345+0800 I CONTROL  [initandlisten] 
2019-12-19T10:45:47.345+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-12-19T10:45:47.345+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-12-19T10:45:47.345+0800 I CONTROL  [initandlisten] 
2019-12-19T10:45:47.345+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-12-19T10:45:47.345+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-12-19T10:45:47.345+0800 I CONTROL  [initandlisten] 
> config={_id:"configs",members:[{_id:0,host:"192.168.194.130:21000"},{_id:1,host:"192.168.194.132:21000"},{_id:2,host:"192.168.194.133:21000"}]}
{
	"_id" : "configs",
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.194.130:21000"
		},
		{
			"_id" : 1,
			"host" : "192.168.194.132:21000"
		},
		{
			"_id" : 2,
			"host" : "192.168.194.133:21000"
		}
	]
}
> rs.initiate(config)
{ "ok" : 1 }
rs.status()
{
	"set" : "configs",
	"date" : ISODate("2019-12-19T02:53:52.177Z"),
	"myState" : 1,
	"term" : NumberLong(1),
	"syncingTo" : "",
	"syncSourceHost" : "",
	"syncSourceId" : -1,
	"configsvr" : true,
	"heartbeatIntervalMillis" : NumberLong(2000),
	"optimes" : {
		"lastCommittedOpTime" : {
			"ts" : Timestamp(1563677627, 1),
			"t" : NumberLong(1)
		},
		"readConcernMajorityOpTime" : {
			"ts" : Timestamp(1563677627, 1),
			"t" : NumberLong(1)
		},
		"appliedOpTime" : {
			"ts" : Timestamp(1563677627, 1),
			"t" : NumberLong(1)
		},
		"durableOpTime" : {
			"ts" : Timestamp(1563677627, 1),
			"t" : NumberLong(1)
		}
	},
	"members" : [
		{
			"_id" : 0,
			"name" : "192.168.194.130:21000",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 486,
			"optime" : {
				"ts" : Timestamp(1563677627, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2019-12-19T02:53:47Z"),
			"syncingTo" : "",
			"syncSourceHost" : "",
			"syncSourceId" : -1,
			"infoMessage" : "could not find member to sync from",
			"electionTime" : Timestamp(1563677587, 1),
			"electionDate" : ISODate("2019-12-19T02:53:07Z"),
			"configVersion" : 1,
			"self" : true,
			"lastHeartbeatMessage" : ""
		},
		{
			"_id" : 1,
			"name" : "192.168.194.132:21000",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 55,
			"optime" : {
				"ts" : Timestamp(1563677627, 1),
				"t" : NumberLong(1)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1563677627, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2019-12-19T02:53:47Z"),
			"optimeDurableDate" : ISODate("2019-12-19T02:53:47Z"),
			"lastHeartbeat" : ISODate("2019-12-19T02:53:51.665Z"),
			"lastHeartbeatRecv" : ISODate("2019-12-19T02:53:50.695Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncingTo" : "192.168.194.130:21000",
			"syncSourceHost" : "192.168.194.130:21000",
			"syncSourceId" : 0,
			"infoMessage" : "",
			"configVersion" : 1
		},
		{
			"_id" : 2,
			"name" : "192.168.194.133:21000",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 55,
			"optime" : {
				"ts" : Timestamp(1563677627, 1),
				"t" : NumberLong(1)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1563677627, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2019-12-19T02:53:47Z"),
			"optimeDurableDate" : ISODate("2019-12-19T02:53:47Z"),
			"lastHeartbeat" : ISODate("2019-12-19T02:53:51.684Z"),
			"lastHeartbeatRecv" : ISODate("2019-12-19T02:53:50.756Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncingTo" : "192.168.194.130:21000",
			"syncSourceHost" : "192.168.194.130:21000",
			"syncSourceId" : 0,
			"infoMessage" : "",
			"configVersion" : 1
		}
	],
	"ok" : 1
}
configs:PRIMARY> 

4. Sharding setup — shard configuration
1) Add the shard1 config file (on all three machines)

[root@arslinux-01 ~]# vim /etc/mongod/shard1.conf
pidfilepath = /var/run/mongodb/shard1.pid
dbpath = /data/mongodb/shard1/data
logpath = /data/mongodb/shard1/log/shard1.log
logappend = true
bind_ip = 192.168.194.130		# could be 0.0.0.0, but binding each machine's own IP is safer
port = 27001
fork = true
httpinterface=true		# enable the web monitoring interface
rest=true
replSet=shard1		# replica set name
shardsvr = true		# marks this instance as a shard member
maxConns=20000		# maximum number of connections

2) Add the shard2 config file (on all three machines)

[root@arslinux-01 ~]# vim /etc/mongod/shard2.conf
pidfilepath = /var/run/mongodb/shard2.pid
dbpath = /data/mongodb/shard2/data
logpath = /data/mongodb/shard2/log/shard2.log
logappend = true
bind_ip = 192.168.194.130		# each machine uses its own IP here
port = 27002
fork = true
httpinterface=true		# enable the web monitoring interface
rest=true
replSet=shard2		# replica set name
shardsvr = true		# marks this instance as a shard member
maxConns=20000		# maximum number of connections

3) Add the shard3 config file (on all three machines)

[root@arslinux-01 ~]# vim /etc/mongod/shard3.conf
pidfilepath = /var/run/mongodb/shard3.pid
dbpath = /data/mongodb/shard3/data
logpath = /data/mongodb/shard3/log/shard3.log
logappend = true
bind_ip = 192.168.194.130		# each machine uses its own IP here
port = 27003
fork = true
httpinterface=true		# enable the web monitoring interface
rest=true
replSet=shard3		# replica set name
shardsvr = true		# marks this instance as a shard member
maxConns=20000		# maximum number of connections

4) Start shard1 (on all three machines)

[root@arslinux-01 ~]# mongod -f /etc/mongod/shard1.conf 
[root@arslinux-02 ~]# mongod -f /etc/mongod/shard1.conf 
[root@arslinux-03 ~]# mongod -f /etc/mongod/shard1.conf 

5) Initialize the shard1 replica set
Log in on machine A or B to initialize the replica set, because machine C is the arbiter. (A quick role check follows the session below.)

[root@arslinux-01 ~]# mongo --host 192.168.194.130 --port 27001
> use admin
switched to db admin
> config = { _id: "shard1", members: [ {_id : 0, host : "192.168.194.130:27001"}, {_id: 1,host : "192.168.194.132:27001"},{_id : 2, host : "192.168.194.133:27001",arbiterOnly:true}] }
{
	"_id" : "shard1",
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.194.130:27001"
		},
		{
			"_id" : 1,
			"host" : "192.168.194.132:27001"
		},
		{
			"_id" : 2,
			"host" : "192.168.194.133:27001",
			"arbiterOnly" : true
		}
	]
}
> rs.initiate(config)
{ "ok" : 1 }
shard1:OTHER>
shard1:PRIMARY>
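To double-check the roles each member ended up with (a sketch, run from the same shell):

shard1:PRIMARY> rs.isMaster().ismaster     // true on the primary
shard1:PRIMARY> rs.status().members.forEach(function (m) { print(m.name + "  " + m.stateStr) })
// expected: one PRIMARY, one SECONDARY and one ARBITER, matching the layout above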

6) Start shard2 (on all three machines)

[root@arslinux-01 ~]# mongod -f /etc/mongod/shard2.conf
[root@arslinux-02 ~]# mongod -f /etc/mongod/shard2.conf
[root@arslinux-03 ~]# mongod -f /etc/mongod/shard2.conf

7) Initialize the shard2 replica set
Log in on machine B or C to initialize the replica set, because machine A is the arbiter

[root@arslinux-02 ~]# mongo --host 192.168.194.132 --port 27002
> use admin
switched to db admin
> config = { _id: "shard2", members: [ {_id : 0, host : "192.168.194.130:27002" ,arbiterOnly:true},{_id : 1, host : "192.168.194.132:27002"},{_id : 2, host : "192.168.194.133:27002"}] }
{
	"_id" : "shard2",
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.194.130:27002",
			"arbiterOnly" : true
		},
		{
			"_id" : 1,
			"host" : "192.168.194.132:27002"
		},
		{
			"_id" : 2,
			"host" : "192.168.194.133:27002"
		}
	]
}
> rs.initiate(config)
{ "ok" : 1 }
shard2:OTHER>
shard2:PRIMARY>

8) Start shard3 (on all three machines)

[root@arslinux-01 ~]# mongod -f /etc/mongod/shard3.conf
[root@arslinux-02 ~]# mongod -f /etc/mongod/shard3.conf
[root@arslinux-03 ~]# mongod -f /etc/mongod/shard3.conf

9) Initialize the shard3 replica set
Log in on machine A or C to initialize the replica set, because machine B is the arbiter

[root@arslinux-01 ~]# mongo --host 192.168.194.130 --port 27003
> use admin
switched to db admin
> config = { _id: "shard3", members: [ {_id : 0, host : "192.168.194.130:27003" },{_id : 1, host : "192.168.194.132:27003",arbiterOnly:true},{_id : 2, host : "192.168.194.133:27003"}] }
{
	"_id" : "shard3",
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.194.130:27003"
		},
		{
			"_id" : 1,
			"host" : "192.168.194.132:27003",
			"arbiterOnly" : true
		},
		{
			"_id" : 2,
			"host" : "192.168.194.133:27003"
		}
	]
}
> rs.initiate(config)
{ "ok" : 1 }
shard3:OTHER> 
shard3:PRIMARY>

5. Sharding setup — configure the mongos router
1) Edit the config file (on all three machines)

[root@arslinux-01 ~]# vim /etc/mongod/mongos.conf
pidfilepath = /var/run/mongodb/mongos.pid
logpath = /data/mongodb/mongos/log/mongos.log
logappend = true
bind_ip = 0.0.0.0		# binding the machine's own IP is safer than 0.0.0.0
port = 20000
fork = true
configdb = configs/192.168.194.130:21000,192.168.194.132:21000,192.168.194.133:21000
# the config servers to connect to (there can only be 1 or 3); "configs" is the config server replica set name
maxConns=20000		# maximum number of connections

2) Start mongos (on all three machines; a quick check follows the commands below)

[root@arslinux-01 ~]# mongos -f /etc/mongod/mongos.conf
[root@arslinux-02 ~]# mongos -f /etc/mongod/mongos.conf
[root@arslinux-03 ~]# mongos -f /etc/mongod/mongos.conf
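A minimal check that each mongos is up (a sketch):

[root@arslinux-01 ~]# ps aux | grep '[m]ongos'        # the mongos router process
[root@arslinux-01 ~]# netstat -lntp | grep 20000      # the routing port is listening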

6. Sharding setup — enable sharding
1) Log in on port 20000 of any machine

[root@arslinux-01 ~]# mongo --host 192.168.194.130 --port 20000
mongos>

2) Register all the shards with the router (no spaces are allowed between the IPs)

mongos> sh.addShard("shard1/192.168.194.130:27001,192.168.194.132:27001,192.168.194.133:27001")
{ "shardAdded" : "shard1", "ok" : 1 }
mongos> sh.addShard("shard2/192.168.194.130:27002,192.168.194.132:27002,192.168.194.133:27002")
{ "shardAdded" : "shard2", "ok" : 1 }
mongos> sh.addShard("shard3/192.168.194.130:27003,192.168.194.132:27003,192.168.194.133:27003")
{ "shardAdded" : "shard3", "ok" : 1 }

3) Check the cluster status

mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("5d33d395fb77650f834a9fef")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/192.168.194.130:27001,192.168.194.132:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/192.168.194.132:27002,192.168.194.133:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/192.168.194.130:27003,192.168.194.133:27003",  "state" : 1 }
  active mongoses:
        "3.4.21" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:

Cluster created successfully!

21.40 MongoDB sharding test

1. Log in on port 20000 of any machine

use admin
db.runCommand({ enablesharding : "testdb" })    // or:
sh.enableSharding("testdb")                     // specify the database to shard
db.runCommand( { shardcollection : "testdb.table1", key : {id: 1} } )    // or:
sh.shardCollection("testdb.table1", {"id": 1})  // specify the collection to shard and its shard key

[root@arslinux-01 ~]# mongo --host 192.168.194.130 --port 20000
mongos> use admin
switched to db admin
mongos> sh.enableSharding("testdb")
{ "ok" : 1 }
mongos> sh.shardCollection("testdb.table1",{"id":1} )
{ "collectionsharded" : "testdb.table1", "ok" : 1 }
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("5d33d395fb77650f834a9fef")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/192.168.194.130:27001,192.168.194.132:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/192.168.194.132:27002,192.168.194.133:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/192.168.194.130:27003,192.168.194.133:27003",  "state" : 1 }
  active mongoses:
        "3.4.21" : 2
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "testdb",  "primary" : "shard2",  "partitioned" : true }
                testdb.table1
                        shard key: { "id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard2	1
                        { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0) 

2) Insert test data (a quick distribution check follows the insert below)

mongos> use testdb
switched to db testdb
mongos> for (var i = 1; i <= 10000; i++) db.table1.save({id:i,"test1":"testval1"})
WriteResult({ "nInserted" : 1 })
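To see how the sharded collection is laid out (a sketch; getShardDistribution reports how many documents and chunks each shard holds):

mongos> db.table1.stats().sharded            // true once the collection is sharded
mongos> db.table1.getShardDistribution()     // documents and chunks held by each shard
mongos> db.table1.find({id: 100}).count()    // reads go through mongos as usual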

3) Create a few more sharded databases

mongos> sh.enableSharding("db2")
{ "ok" : 1 }
mongos> sh.shardCollection("db2.cl2",{"id":1} )
{ "collectionsharded" : "db2.cl2", "ok" : 1 }
mongos> sh.enableSharding("db3")
{ "ok" : 1 }
mongos> sh.shardCollection("db3.cl3",{"id":1} )
{ "collectionsharded" : "db3.cl3", "ok" : 1 }
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("5d33d395fb77650f834a9fef")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/192.168.194.130:27001,192.168.194.132:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/192.168.194.132:27002,192.168.194.133:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/192.168.194.130:27003,192.168.194.133:27003",  "state" : 1 }
  most recently active mongoses:
        "3.4.21" : 2
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "testdb",  "primary" : "shard2",  "partitioned" : true }
                testdb.table1
                        shard key: { "id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard2	1
                        { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0) 
        {  "_id" : "db2",  "primary" : "shard3",  "partitioned" : true }
                db2.cl2
                        shard key: { "id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard3	1
                        { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard3 Timestamp(1, 0) 
        {  "_id" : "db3",  "primary" : "shard3",  "partitioned" : true }
                db3.cl3
                        shard key: { "id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard3	1
                        { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard3 Timestamp(1, 0)

As you can see, the databases were placed on shard2 and shard3.
Data only spreads evenly across the shards once the volume becomes large; with a small test set everything stays in a single chunk. (A sketch for lowering the chunk size to observe migrations with little data follows.)
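If you want to watch chunks split and migrate with only a little test data, one documented knob is the cluster chunk size (default 64 MB), stored in the config database; a sketch:

mongos> use config
switched to db config
mongos> db.settings.save({ _id: "chunksize", value: 1 })    // chunk size in MB; insert more data afterwards and re-run sh.status()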

21.41 MongoDB backup and restore

1. Back up a specific database
mongodump --host <host> --port <port> -d <database> -o <output directory>

[root@arslinux-01 ~]# mongodump --host 192.168.194.130 --port 20000 -d testdb -o /tmp/mongobak/
2019-12-19T16:23:27.581+0800	writing testdb.table1 to 
2019-12-19T16:23:27.769+0800	done dumping testdb.table1 (10000 documents)
[root@arslinux-01 ~]# ls /tmp/mongobak/
testdb
[root@arslinux-01 ~]# ls /tmp/mongobak/testdb/
table1.bson  table1.metadata.json
[root@arslinux-01 ~]# du -sh /tmp/mongobak/testdb/*
528K	/tmp/mongobak/testdb/table1.bson
4.0K	/tmp/mongobak/testdb/table1.metadata.json
[root@arslinux-01 ~]# cat /tmp/mongobak/testdb/table1.metadata.json 
{"options":{},"indexes":[{"v":2,"key":{"_id":1},"name":"_id_","ns":"testdb.table1"},{"v":2,"key":{"id":1.0},"name":"id_1","ns":"testdb.table1"}]}[root@arslinux-01 ~]#

2. Back up all databases
mongodump --host <host> --port <port> -o <output directory>

[root@arslinux-01 ~]# mongodump --host 192.168.194.130 --port 20000 -o /tmp/mongobak2/
[root@arslinux-01 ~]# ll /tmp/mongobak2/
total 0
drwxrwxr-x 2 root root  80 Dec 19 16:31 admin
drwxrwxr-x 2 root root 480 Dec 19 16:31 config
drwxrwxr-x 2 root root  80 Dec 19 16:31 db2
drwxrwxr-x 2 root root  80 Dec 19 16:31 db3
drwxrwxr-x 2 root root  80 Dec 19 16:31 testdb

3. Back up a specific collection
mongodump --host <host> --port <port> -d <database> -c <collection> -o <output directory>

[root@arslinux-01 ~]# mongodump --host 192.168.194.130 --port 20000 -d testdb -c table1 -o /tmp/mongobak3/
2019-12-19T16:34:17.219+0800	writing testdb.table1 to 
2019-12-19T16:34:17.414+0800	done dumping testdb.table1 (10000 documents)
[root@arslinux-01 ~]# ll /tmp/mongobak3/
total 0
drwxrwxr-x 2 root root 80 Dec 19 16:34 testdb

4. Export a collection to a JSON file

[root@arslinux-01 ~]# mongoexport --host 192.168.194.130 --port 20000 -d testdb -c table1 -o /tmp/table1.json 
2019-12-19T16:38:59.255+0800	connected to: 192.168.194.130:20000
2019-12-19T16:38:59.581+0800	exported 10000 records

table1.json contains the documents we inserted, one per line (a quick way to verify is shown below).
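For example (a sketch; output not shown), each line of the export is one JSON document carrying the "id" and "test1" fields inserted earlier:

[root@arslinux-01 ~]# head -1 /tmp/table1.json     # first exported document
[root@arslinux-01 ~]# wc -l /tmp/table1.json       # should report 10000 lines, one per document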
5. Restore all databases
1) First drop a few databases

[root@arslinux-01 ~]# mongo --host 192.168.194.130 --port 20000 
mongos> use testdb
switched to db testdb
mongos> db.dropDatabase()
{ "dropped" : "testdb", "ok" : 1 }
mongos> use db2
switched to db db2
mongos> db.dropDatabase()
{ "dropped" : "db2", "ok" : 1 }
mongos> use db3
switched to db db3
mongos> db.dropDatabase()
{ "dropped" : "db3", "ok" : 1 
mongos> show databases
admin   0.000GB
config  0.001GB

2) The config database cannot be restored this way, so remove the config and admin dumps first

[root@arslinux-01 ~]# rm -rf /tmp/mongobak2/admin/
[root@arslinux-01 ~]# rm -rf /tmp/mongobak2/config/

3) Restore

[root@arslinux-01 ~]# mongorestore --host 192.168.194.130 --port 20000 --drop /tmp/mongobak2/
[root@arslinux-01 ~]# mongo --host 192.168.194.130 --port 20000
mongos> show databases
admin   0.000GB
config  0.001GB
db2     0.000GB
db3     0.000GB
testdb  0.000GB

6. Restore a specific database

[root@arslinux-01 ~]# mongorestore --host 192.168.194.130 --port 20000 -d testdb --drop /tmp/mongobak/testdb/

7. Restore a collection (the .bson file must be specified here)

[root@arslinux-01 ~]# mongorestore --host 192.168.194.130 --port 20000 -d testdb -c table1 --drop /tmp/mongobak/testdb/table1.bson

8. Import a collection

[root@arslinux-01 ~]# mongoimport --host 192.168.194.130 --port 20000 -d testdb -c table1 --file /tmp/mongobak/testdb/table1.metadata.json
2019-12-19T16:55:31.703+0800	connected to: 192.168.194.130:20000
2019-12-19T16:55:31.756+0800	imported 1 document
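Note that the command above points --file at the metadata file, which is why only one document was imported. To import the actual collection data, the JSON file produced by mongoexport in step 4 would be used instead; a sketch:

[root@arslinux-01 ~]# mongoimport --host 192.168.194.130 --port 20000 -d testdb -c table1 --drop --file /tmp/table1.json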

Further reading
MongoDB security settings: http://www.mongoing.com/archives/631
Running JS scripts with MongoDB: http://www.jianshu.com/p/6bd8934bd1ca
