MongoDB Replica Set + Sharding
MongoDB's sharding is built on top of replica sets, so we first try configuring a replica set.
Use Docker to start three containers from an image that already has MongoDB installed:
# docker run -idt --name mongodb_01 mongodb_master:v2 /bin/bash
# docker run -idt --name mongodb_02 mongodb_master:v2 /bin/bash
# docker run -idt --name mongodb_03 mongodb_master:v2 /bin/bash
Check each container's IP:
# docker inspect mongodb_01 | grep IP
The IPs of the three containers are:
172.17.0.4,172.17.0.5,172.17.0.6
Enter each container, create MongoDB's data and log directories, and edit the config file:
# docker exec -it mongodb_01 /bin/bash
# mkdir -p /opt/mongodb/rs0/data
# mkdir -p /opt/mongodb/rs0/log
# vi /usr/local/mongodb/conf/rs0.conf
dbpath=/opt/mongodb/rs0/data              # data directory
logpath=/opt/mongodb/rs0/log/rs0.log      # log file
pidfilepath=/opt/mongodb/rs0/log/rs0.pid  # pid file
logappend=true
replSet=rs0         # replica set name
bind_ip=172.17.0.4  # this container's IP
port=27617
fork=true
maxConns=2000

Then start mongod with this config file:
# mongod -f /usr/local/mongodb/conf/rs0.conf
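The same config file has to be created in each of the three containers, differing only in bind_ip and port. As a sketch, the three files can be generated from one template; here they are written to a local rs0-confs/ directory (the output directory and per-node file naming are illustrative assumptions — the IPs and ports match the values above):

```shell
#!/bin/sh
# Sketch: generate the rs0 config for each node from one template.
# CONF_DIR is a local directory for illustration; the article puts the
# file at /usr/local/mongodb/conf/rs0.conf inside each container.
CONF_DIR="${CONF_DIR:-./rs0-confs}"
mkdir -p "$CONF_DIR"

# node list as "ip:port" pairs, matching the article's values
for node in 172.17.0.4:27617 172.17.0.5:27618 172.17.0.6:27619; do
  ip=${node%:*}
  port=${node#*:}
  cat > "$CONF_DIR/rs0_${ip}.conf" <<EOF
dbpath=/opt/mongodb/rs0/data
logpath=/opt/mongodb/rs0/log/rs0.log
pidfilepath=/opt/mongodb/rs0/log/rs0.pid
logappend=true
replSet=rs0
bind_ip=$ip
port=$port
fork=true
maxConns=2000
EOF
done
```

Each generated file would then be copied into the matching container as /usr/local/mongodb/conf/rs0.conf.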
Once mongod is running in all three containers, connect to any one of them:
# mongo --host 172.17.0.4 --port 27617
> rs.initiate()  // initialize the replica set
> rs.conf()      // confirm the change
> rs.add({host: "172.17.0.5:27618", priority: 6})  // add the other two mongod instances to the replica set
> rs.add("172.17.0.6:27619")
> rs.conf()      // confirm the change
> rs.status()    // check replica set status
priority is the member's election priority; the higher the value, the more likely it is to become primary. rs.status() shows the current state of the replica set, and stateStr gives each member's role: here 172.17.0.5 is currently PRIMARY and the other two are SECONDARY. If we now stop the mongod on 172.17.0.5 and check again a little later, one of 0.4 and 0.6 will have become PRIMARY; restarting mongod on 0.5 eventually restores the original state.
"members" : [
    {
        "_id" : 0,
        "name" : "172.17.0.6:27619",
        "health" : 1,
        "state" : 2,
        "stateStr" : "SECONDARY",
        "uptime" : 264637,
        "optime" : { "ts" : Timestamp(1539406655, 1), "t" : NumberLong(4) },
        "optimeDate" : ISODate("2018-10-13T04:57:35Z"),
        "syncingTo" : "172.17.0.4:27617",
        "syncSourceHost" : "172.17.0.4:27617",
        "syncSourceId" : 2,
        "infoMessage" : "",
        "configVersion" : 3,
        "self" : true,
        "lastHeartbeatMessage" : ""
    },
    {
        "_id" : 1,
        "name" : "172.17.0.5:27618",
        "health" : 1,
        "state" : 1,
        "stateStr" : "PRIMARY",
        "uptime" : 263943,
        "optime" : { "ts" : Timestamp(1539406655, 1), "t" : NumberLong(4) },
        "optimeDurable" : { "ts" : Timestamp(1539406655, 1), "t" : NumberLong(4) },
        "optimeDate" : ISODate("2018-10-13T04:57:35Z"),
        "optimeDurableDate" : ISODate("2018-10-13T04:57:35Z"),
        "lastHeartbeat" : ISODate("2018-10-13T04:57:38.894Z"),
        "lastHeartbeatRecv" : ISODate("2018-10-13T04:57:38.892Z"),
        "pingMs" : NumberLong(0),
        "lastHeartbeatMessage" : "",
        "syncingTo" : "",
        "syncSourceHost" : "",
        "syncSourceId" : -1,
        "infoMessage" : "",
        "electionTime" : Timestamp(1539142727, 1),
        "electionDate" : ISODate("2018-10-10T03:38:47Z"),
        "configVersion" : 3
    },
    {
        "_id" : 2,
        "name" : "172.17.0.4:27617",
        "health" : 1,
        "state" : 2,
        "stateStr" : "SECONDARY",
        "uptime" : 264390,
        "optime" : { "ts" : Timestamp(1539406655, 1), "t" : NumberLong(4) },
        "optimeDurable" : { "ts" : Timestamp(1539406655, 1), "t" : NumberLong(4) },
        "optimeDate" : ISODate("2018-10-13T04:57:35Z"),
        "optimeDurableDate" : ISODate("2018-10-13T04:57:35Z"),
        "lastHeartbeat" : ISODate("2018-10-13T04:57:38.893Z"),
        "lastHeartbeatRecv" : ISODate("2018-10-13T04:57:38.893Z"),
        "pingMs" : NumberLong(0),
        "lastHeartbeatMessage" : "",
        "syncingTo" : "172.17.0.5:27618",
        "syncSourceHost" : "172.17.0.5:27618",
        "syncSourceId" : 1,
        "infoMessage" : "",
        "configVersion" : 3
    }
],
Now that we understand replica set configuration, let's move on to sharding.
Sharding distributes data across different shards according to some algorithm, but by itself this creates a problem: if any one shard fails, the whole dataset becomes unavailable.
So the data placed on each shard is itself stored as a replica set, which gives us fault tolerance along with the sharding; conceptually this is quite similar to RAID.
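To make the partitioning idea concrete, here is a toy sketch that routes keys to one of three "shards" by hashing. This uses cksum purely for illustration; it is not MongoDB's real hashed-index function, and the `route` helper and key names are made up for the example:

```shell
#!/bin/sh
# Toy illustration of hash partitioning: each key is hashed and the
# hash decides which of 3 "shards" stores it. Deterministic: the same
# key always routes to the same shard.
route() {
  key="$1"
  # cksum prints "<crc> <bytes>"; take the CRC as our hash value
  hash=$(printf '%s' "$key" | cksum | cut -d' ' -f1)
  echo "shard$(( hash % 3 + 1 ))"
}

for id in user1 user2 user3 user4; do
  echo "$id -> $(route "$id")"
done
```

Each key lands on exactly one shard, and in the real cluster that shard is a replica set, so losing one machine loses neither that shard's data nor the rest of the dataset.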
A MongoDB sharded cluster involves the following roles:
Config Server: stores the cluster's sharding metadata
Shard: where the sharded data is actually stored
mongos: the shard router, and the instance that clients actually connect to
Here is a diagram of this architecture, borrowed from the web:
Still using the same three containers, create the directories we need:
conf: the Config Server's directory
mongos: the mongos directory
shard: we split the data into 3 shards, with directories shard1, shard2 and shard3
# mkdir -p /opt/mongodb/conf/data
# mkdir -p /opt/mongodb/conf/log
# mkdir -p /opt/mongodb/mongos/data
# mkdir -p /opt/mongodb/mongos/log
# mkdir -p /opt/mongodb/shard1/data
# mkdir -p /opt/mongodb/shard1/log
# mkdir -p /opt/mongodb/shard2/data
# mkdir -p /opt/mongodb/shard2/log
# mkdir -p /opt/mongodb/shard3/data
# mkdir -p /opt/mongodb/shard3/log
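The ten mkdir calls follow one pattern, so they can also be expressed as a loop. The sketch defaults to a local ./mongodb root so it can be tried safely (the article's actual root is /opt/mongodb):

```shell
#!/bin/sh
# Create data/ and log/ directories for every role in one pass.
# BASE defaults to a local directory for safe experimentation;
# the article uses /opt/mongodb (override with BASE=/opt/mongodb).
BASE="${BASE:-./mongodb}"
for role in conf mongos shard1 shard2 shard3; do
  mkdir -p "$BASE/$role/data" "$BASE/$role/log"
done
```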
Edit the config file for each role.
Config Server
# vi /usr/local/mongodb/conf/conf.conf
dbpath=/opt/mongodb/conf/data
logpath=/opt/mongodb/conf/log/conf.log
pidfilepath=/opt/mongodb/conf/log/conf.pid
logappend=true
replSet=configs
bind_ip=172.17.0.6
port=27019
fork=true
maxConns=2000
configsvr=true  # add this line on Config Server nodes
mongos
# vi /usr/local/mongodb/conf/mongos.conf
logpath=/opt/mongodb/mongos/log/mongos.log
pidfilepath=/opt/mongodb/mongos/log/mongos.pid
logappend=true
bind_ip=172.17.0.6
port=27419
fork=true
maxConns=2000
configdb=configs/172.17.0.4:27017,172.17.0.5:27018,172.17.0.6:27019  # the Config Server replica set's addresses
shard
# vi /usr/local/mongodb/conf/shard1.conf
pidfilepath=/opt/mongodb/shard1/log/shard1.pid
dbpath=/opt/mongodb/shard1/data
logpath=/opt/mongodb/shard1/log/shard1.log
logappend=true
bind_ip=172.17.0.6
port=27119
fork=true
replSet=shard1
shardsvr=true
maxConns=20000
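shard2 and shard3 use the same config as shard1 except for the replica set name, the directories, and the port (27219 and 27319 on this node, matching the sh.addShard commands later). A sketch that generates all three files on one node; writing to a local shard-confs/ directory is an assumption for illustration:

```shell
#!/bin/sh
# Generate shard1-3 configs; only the replSet name, paths and port
# differ between them. OUT is a local directory for illustration
# (the article keeps configs under /usr/local/mongodb/conf).
OUT="${OUT:-./shard-confs}"
mkdir -p "$OUT"
for i in 1 2 3; do
  port=$(( 27019 + i * 100 ))   # 27119, 27219, 27319 on this container
  cat > "$OUT/shard$i.conf" <<EOF
pidfilepath=/opt/mongodb/shard$i/log/shard$i.pid
dbpath=/opt/mongodb/shard$i/data
logpath=/opt/mongodb/shard$i/log/shard$i.log
logappend=true
bind_ip=172.17.0.6
port=$port
fork=true
replSet=shard$i
shardsvr=true
maxConns=20000
EOF
done
```

The same generation would be repeated on the other two containers with their own bind_ip and port block.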
With the configs in place, start the mongod instances for the Config Server, shard1, shard2 and shard3.
Following the replica set procedure introduced above, configure a replica set for each of the Config Server, shard1, shard2 and shard3.
Finally, start the mongos instance; note that the command is mongos, not mongod:
# mongos -f /usr/local/mongodb/conf/mongos.conf
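Bring-up order matters: the config server comes up first, then the shard replica sets, and mongos last (it needs the config servers to be reachable). A dry-run sketch that only prints the commands in the required order, using the /usr/local/mongodb/conf layout above; nothing is executed:

```shell
#!/bin/sh
# Dry run: print the bring-up sequence for one node in the required
# order (config server -> shards -> mongos). Commands are echoed,
# not executed, so the order can be inspected or fed to a launcher.
bringup_plan() {
  CONF=/usr/local/mongodb/conf
  echo "mongod -f $CONF/conf.conf"      # 1. config server
  for i in 1 2 3; do
    echo "mongod -f $CONF/shard$i.conf" # 2. shard replica set members
  done
  echo "mongos -f $CONF/mongos.conf"    # 3. router: mongos, not mongod!
}
bringup_plan
```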
Connect to the mongos instance and enable sharding:
# mongo --host 172.17.0.6 --port 27419
> sh.addShard("shard1/172.17.0.4:27117,172.17.0.5:27118,172.17.0.6:27119")
> sh.addShard("shard2/172.17.0.4:27217,172.17.0.5:27218,172.17.0.6:27219")
> sh.addShard("shard3/172.17.0.4:27317,172.17.0.5:27318,172.17.0.6:27319")
> db.runCommand({ enablesharding: "testshard" })  // enable sharding on the database
> db.runCommand({ shardcollection: "testshard.test", key: { id: "hashed" } })  // enable sharding on the collection and set the shard key
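The three addShard calls differ only in the shard number and port block, so they can be generated rather than hand-edited (which avoids the easy copy-paste mistake of repeating "shard1/" three times). A sketch that emits the commands; the port scheme matches the cluster above:

```shell
#!/bin/sh
# Emit the sh.addShard() calls for shard1-3. Each shard's replica set
# has one member on each container; ports follow the 271x7/271x8/271x9
# scheme used throughout the article.
addshard_cmds() {
  for i in 1 2 3; do
    base=$(( 27000 + i * 100 ))   # 27100, 27200, 27300
    echo "sh.addShard(\"shard$i/172.17.0.4:$(( base + 17 )),172.17.0.5:$(( base + 18 )),172.17.0.6:$(( base + 19 ))\")"
  done
}
addshard_cmds
```

The output could then be piped into the shell, e.g. `addshard_cmds | mongo --host 172.17.0.6 --port 27419`.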
At this point the MongoDB sharded cluster is fully configured. For high availability, you can run mongos instances in several containers and pair them with keepalived.