MongoDB Replication and Sharding

(1) MongoDB replication (replica sets)
MongoDB replication is the process of synchronizing data across multiple servers.
Replication provides redundancy: keeping copies of the data on multiple servers improves availability and protects the data.
Replication lets you recover from hardware failure and service interruption, guarding against data loss and machine damage.
Replication also improves read capacity: clients can read from servers in different places, spreading the load across the whole system.

1. The characteristics of replication:
        Keeps data safe
        24/7 high availability of data
        Disaster recovery
        Maintenance without downtime
        Distributed read scaling
2. Principle of replication
A MongoDB replica set consists of a group of mongod instances (processes). Replication requires at least two nodes: a primary node that handles client requests, and one or more secondary nodes that replicate the data on the primary.
The primary records all of its writes in an operation log, the oplog; the secondaries periodically poll the primary for new oplog entries and apply them to their own databases to stay consistent with it.
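To see this mechanism for yourself, the oplog can be queried like any other collection once the replica set below is running; a small sketch (run in the mongo shell on any data-bearing member):

use local
db.oplog.rs.find().sort({ $natural: -1 }).limit(1).pretty()   // the most recent operation
db.printReplicationInfo()   // configured oplog size and the time window it covers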
Clients read from the primary node; when a client writes to the primary, the data exchange between the primary and the secondaries keeps all copies consistent.
Since MongoDB 3.2, the legacy master-slave mode has been officially deprecated in favor of replica sets.

3. Replica sets. A replica set contains exactly one primary node and one or more secondary nodes. The primary and the secondaries use heartbeats to determine whether each node is healthy and alive. All read and write operations go to the primary by default; achieving read/write separation requires extra handling on the client side, which is covered at the end. Secondary nodes replicate the primary's data by applying its oplog (operation log).

Principle: in a MongoDB replica set, any data-bearing node in the cluster can become the primary. Once the primary fails, a new primary is elected from the remaining nodes, and the other members are directed to connect to it. This process is transparent to the application, as sketched below.
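In practice the transparency comes from the client driver: the application lists several seed hosts and names the replica set, and the driver discovers the current primary and re-routes operations after a failover. A sketch using the hosts configured later in this article (root/root are the credentials created in section 3.3):

mongo "mongodb://root:root@192.168.4.203:27017,192.168.4.97:27017/admin?replicaSet=RS"
# after a failover the driver reconnects to whichever member is now primary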

In a production environment, a replica set consists of at least three nodes: one primary, one secondary, and one arbiter. Each node is an instance of the mongod process, and the nodes check each other's state via heartbeats.
primary node: handles database reads and writes.
secondary node: keeps a backup of the data on the primary; there can be more than one.
arbiter node: holds no data; it only votes so that, when the primary fails, a new primary can be elected from the remaining data-bearing members. (A config sketch for this topology follows below.)
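Taken together, such a three-member topology can be expressed in a single replica set config document; a sketch of what section 6 below builds step by step (hosts taken from the table in section (2).1, with arbiterOnly marking the voting-only member):

{
    "_id" : "RS",
    "members" : [
        { "_id" : 0, "host" : "192.168.4.203:27017" },                       // primary candidate
        { "_id" : 1, "host" : "192.168.4.97:27017" },                        // secondary
        { "_id" : 2, "host" : "192.168.4.200:27017", "arbiterOnly" : true }  // arbiter, no data
    ]
}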

(2) Configuring the MongoDB environment
1. Host configuration
192.168.4.203 db1 ##primary node
192.168.4.97 db2 ##secondary node
192.168.4.200 db3 ##arbiter node
2. Installation (omitted); refer to https://blog.51cto.com/liqingbiao/2401942

3. Operations on the primary node.

3.1. Primary node configuration
[root@otrs ~]# vim /usr/local/mongodb/conf/master.conf 
bind_ip = 0.0.0.0
port    = 27017
dbpath  = /data/mongodb/db
logpath = /data/mongodb/mongodb.log
logappend = true
fork    = true
journal=true
maxConns=1000
auth  = true
keyFile = /usr/local/mongodb/mongodb-keyfile
storageEngine=wiredTiger
oplogSize=2048
replSet=RS

keyFile = /usr/local/mongodb/mongodb-keyfile   ##### Full path to the cluster's key file; only meaningful for the replica set topology (leave this out when auth is disabled, i.e. noauth = true)
3.2. Generate a key file on the server, set its permissions to 600, and scp it to the other servers in the set (every member, including the arbiter, must use the same key file).
[root@otrs ~]# openssl rand -base64 745 > /usr/local/mongodb/mongodb-keyfile
[root@otrs ~]# chmod 600 /usr/local/mongodb/mongodb-keyfile 
[root@otrs ~]# cat /usr/local/mongodb/mongodb-keyfile 
sJy/0RquRGAu7Qk1xT5P7VqDVjHKGdFIu0EQSRa98+pxAEfD43Ix+hrKVhmfk6ag
X8SAwl/2wkgeMFQBznKQNzE/EBFKos6VJgzi47RkUcXI3XV6igXbJNLzsjYsktkZ
ipKDLtfpQrvse4nZy9PRQusg9HpYLlr3tVKYA9TNmAJtUXA36NDOGBAEbHzfEvvc
sh4vmfxFAB+qtMwEer01MC11mKzXGN1nmL9D3hbmqCgC2F8d8RFeqTY5A73b81jT
j16wqQw2PuAPHncy6MaQX0ytNO5uWiYDcOxUwOA/LVbTaP8jOHwcEfpl6FY8NT66
P2GXINkfKMjaTMIrhXJVgMGkJz0O4aJv8RYZaKCpLmiMpNsyxbMLyngvx5AmDWgP
qAHkuQf8O6HcA676hzhBSdDoB8Rr6Yx4NvzQorKq5g/hjmk+9IpDixuI+qjZAwWV
uvPceiONigJqwZnryIkvGm3pwl2SmfieKdTRJ5lbpaEz3N5JVgBlM2L6jxj3egnL
Hn0V+1GH81Iwkw9AXpbn+I9KLrfivI6iuVT6xKu0Zu0ERtUZ442lgIpPIGiiY2HR
M3MgyOLU0SWBcI0/t3+N4L2Kxkm0806Nl3/LdtxaPkGTqcSdJl39i96c8qmZThsn
UPMQrIA7QHtBhal5e2rRQ7N5gbC+aFXCnEfNqbfPN13ljZfvMj+pzRDwfLutXpMF
KSHaAkpF29wYL5nlbnN0CKxKBZDD1gJncR0aYWt2s4z3IP5TOgYER+zVFfhUlS6Y
5JsSgM57wrUDkF3VGvkwGQMs+8g5/3WxgEOzwcJV32QO98HLQR5QE0md108KWpy3
8LZYUgGzADcYepEeqGj/BPspnuQy7n4GzKyWZWK7Q4Sl9TLdVQR8XDUAl8lOtnDk
ar/qYfEHb/Bt7tZb/ANZQyvpyTvEIHZvyPZ5xzAtoDduV+cQRyx+G1X/smHagm1o
yo0HNr25CIaTjk2atQq4USnN2daq5f/OEw==
[root@otrs ~]# scp /usr/local/mongodb/mongodb-keyfile root@192.168.4.97:/usr/local/mongodb/
mongodb-keyfile                                                                               100% 1012     1.0KB/s   00:00   

3.3. Create an administrative user. Since the configuration file enables authentication, the user must first be created while the server is running in non-authenticated mode.
############Start the database in unauthenticated mode first
[root@otrs ~]# /usr/local/mongodb/bin/mongod --fork --dbpath=/data/mongodb/db --logpath=/data/mongodb/mongodb.log
about to fork child process, waiting until server is ready for connections.
forked process: 28389
child process started successfully, parent exiting
[root@otrs ~]# netstat -lntp|grep mongod
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 28389/mongod
################# Create a root user and configure administrator privileges
[root@otrs ~]# mongo
MongoDB shell version v4.0.9
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("4df82588-c03c-49e5-8839-92dafd7ff8ce") }
MongoDB server version: 4.0.9
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
Server has startup warnings:
2019-06-14T16:02:56.903+0800 I CONTROL [initandlisten]
2019-06-14T16:02:56.903+0800 I CONTROL [initandlisten] WARNING: Access control is not enabled for the database.
2019-06-14T16:02:56.903+0800 I CONTROL [initandlisten] Read and write access to data and configuration is unrestricted.
2019-06-14T16:02:56.903+0800 I CONTROL [initandlisten]

show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
use admin
switched to db admin
db.createUser({ user: "root", pwd: "root", roles: [ { role: "root", db: "admin" } ] });
Successfully added user: {
    "user" : "root",
    "roles" : [
        {
            "role" : "root",
            "db" : "admin"
        }
    ]
}

3.4. Start mongod with the configuration file to verify that authentication is enforced

[root@otrs ~]# mongod --config /usr/local/mongodb/conf/master.conf
about to fork child process, waiting until server is ready for connections.
forked process: 28698
child process started successfully, parent exiting
[root@otrs ~]# ps -ef|grep mongod
root 28698 1 19 16:19 ? 00:00:01 mongod --config /usr/local/mongodb/conf/master.conf
root 28731 28040 0 16:19 pts/0 00:00:00 grep --color=auto mongod
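A quick check that authentication is now enforced (a sketch; root/root are the credentials created in step 3.3):

[root@otrs ~]# mongo
> show dbs                  // now fails: listDatabases requires authentication
> use admin
> db.auth("root", "root")   // returns 1 on success
> show dbs                  // lists admin, config, local again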

4. Configure the secondary node
4.1. Create the data directory and edit the configuration file

[root@otrs004097 opt]# mkdir /data/mongodb/standard
[root@otrs004097 opt]# cat /usr/local/mongodb/conf/standard.conf
bind_ip = 0.0.0.0
port = 27017
dbpath = /data/mongodb/standard
logpath = /data/mongodb/mongodb.log
logappend = true
fork = true
journal=true
maxConns=1000
auth = true
keyFile = /usr/local/mongodb/mongodb-keyfile
storageEngine=wiredTiger
oplogSize=2048
replSet=RS

4.2. Start the service

[root@otrs004097 opt]# /usr/local/mongodb/bin/mongod --fork --config /usr/local/mongodb/conf/standard.conf
[root@otrs004097 opt]# netstat -lntp|grep mongod
tcp 0 0 0.0.0.0:27017 0.0.0.0:* LISTEN 1045/mongod

5. Configure the arbiter node

[root@NginxServer01 mongodb]# cat /usr/local/mongodb/conf/arbiter.conf
bind_ip = 0.0.0.0
port = 27017
dbpath = /data/mongodb/arbiter
logpath = /data/mongodb/mongodb.log
logappend = true
fork = true
journal=true
maxConns=1000
auth = true
keyFile = /usr/local/mongodb/mongodb-keyfile
storageEngine=wiredTiger
oplogSize=2048
replSet=RS
[root@NginxServer01 mongodb]# /usr/local/mongodb/bin/mongod --config /usr/local/mongodb/conf/arbiter.conf
about to fork child process, waiting until server is ready for connections.
forked process: 26321
child process started successfully, parent exiting
[root@NginxServer01 mongodb]# netstat -lntp|grep mongod
tcp 0 0 0.0.0.0:27017 0.0.0.0:* LISTEN 26321/mongod

6. On the primary node's server, initialize the replica set, then add the secondary and arbiter nodes.
6.1. Initialize the primary node

[root@otrs ~]# mongo -uroot -p
MongoDB shell version v4.0.9
Enter password:
############### Check the status of the replica set; it has not been initialized yet, so initialize it.
rs.status()
{
    "ok" : 0,
    "errmsg" : "no replset config has been received",
    "code" : 94,
    "codeName" : "NotYetInitialized"
}

Define the config document:
config={"_id":"RS","members":[ {"_id":0,"host":"192.168.4.203:27017"},{"_id":1,"host":"192.168.4.97:27017"}]}
{
    "_id" : "RS",
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.4.203:27017"
        },
        {
            "_id" : 1,
            "host" : "192.168.4.97:27017"
        }
    ]
}

rs.initiate(config); ############Initialization
{ "ok" : 1 }

6.2. View the status of the replica set

RS:SECONDARY> rs.status()
{
    "set" : "RS",
    "date" : ISODate("2019-06-14T09:01:19.722Z"),
    "myState" : 2,
    "term" : NumberLong(0),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(0, 0),
            "t" : NumberLong(-1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1560502874, 1),
            "t" : NumberLong(-1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1560502874, 1),
            "t" : NumberLong(-1)
        }
    },
    "lastStableCheckpointTimestamp" : Timestamp(0, 0),
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.4.203:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 2525,
            "optime" : {
                "ts" : Timestamp(1560502874, 1),
                "t" : NumberLong(-1)
            },
            "optimeDate" : ISODate("2019-06-14T09:01:14Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "could not find member to sync from",
            "configVersion" : 1,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1,
            "name" : "192.168.4.97:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 5,
            "optime" : {
                "ts" : Timestamp(1560502874, 1),
                "t" : NumberLong(-1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1560502874, 1),
                "t" : NumberLong(-1)
            },
            "optimeDate" : ISODate("2019-06-14T09:01:14Z"),
            "optimeDurableDate" : ISODate("2019-06-14T09:01:14Z"),
            "lastHeartbeat" : ISODate("2019-06-14T09:01:19.645Z"),
            "lastHeartbeatRecv" : ISODate("2019-06-14T09:01:19.489Z"),
            "pingMs" : NumberLong(1),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "configVersion" : 1
        }
    ],
    "ok" : 1
}

6.3. In the initialization result above, both members still report SECONDARY because the election has not completed yet; once the members have synchronized, one of them is elected PRIMARY. Wait a couple of minutes and check again.

RS:PRIMARY> rs.status()
{
    "set" : "RS",
    "date" : ISODate("2019-06-14T09:11:17.382Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1560503477, 1),
            "t" : NumberLong(1)
        },
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1560503477, 1),
            "t" : NumberLong(1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1560503477, 1),
            "t" : NumberLong(1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1560503477, 1),
            "t" : NumberLong(1)
        }
    },
    "lastStableCheckpointTimestamp" : Timestamp(1560503427, 1),
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.4.203:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 3123,
            "optime" : {
                "ts" : Timestamp(1560503477, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2019-06-14T09:11:17Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "electionTime" : Timestamp(1560502885, 1),
            "electionDate" : ISODate("2019-06-14T09:01:25Z"),
            "configVersion" : 1,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1,
            "name" : "192.168.4.97:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 603,
            "optime" : {
                "ts" : Timestamp(1560503467, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1560503467, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2019-06-14T09:11:07Z"),
            "optimeDurableDate" : ISODate("2019-06-14T09:11:07Z"),
            "lastHeartbeat" : ISODate("2019-06-14T09:11:15.995Z"),
            "lastHeartbeatRecv" : ISODate("2019-06-14T09:11:16.418Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.4.203:27017",
            "syncSourceHost" : "192.168.4.203:27017",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 1
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1560503477, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1560503477, 1),
        "signature" : {
            "hash" : BinData(0,"iR63R/X7QanbrWvuDJNkpdPgcVY="),
            "keyId" : NumberLong("6702308864978583553")
        }
    }
}
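When you only need to know which member is primary, the full rs.status() document is overkill; two lighter checks (sketch):

rs.isMaster().primary     // e.g. "192.168.4.203:27017"
db.isMaster().ismaster    // true on the primary, false on a secondary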

6.4. View synchronization logs on the secondary server

 [root@otrs004097 opt]# tail  -f /data/mongodb/mongodb.log 
 2019-06-14T17:01:15.870+0800 I REPL     [replexec-0] This node is 192.168.4.97:27017 in the config
2019-06-14T17:01:15.870+0800 I REPL     [replexec-0] transition to STARTUP2 from STARTUP
2019-06-14T17:01:15.871+0800 I REPL     [replexec-0] Starting replication storage threads
2019-06-14T17:01:15.872+0800 I REPL     [replexec-2] Member 192.168.4.203:27017 is now in state SECONDARY
2019-06-14T17:01:15.872+0800 I STORAGE  [replexec-0] createCollection: local.temp_oplog_buffer with generated UUID: 2e5c6683-a67b-4a16-bd9b-8672ee4db900
2019-06-14T17:01:15.880+0800 I REPL     [replication-0] Starting initial sync (attempt 1 of 10)
2019-06-14T17:01:15.881+0800 I STORAGE  [replication-0] Finishing collection drop for local.temp_oplog_buffer (2e5c6683-a67b-4a16-bd9b-8672ee4db900).
2019-06-14T17:01:15.882+0800 I STORAGE  [replication-0] createCollection: local.temp_oplog_buffer with generated UUID: c25fa3cf-cae9-430b-b514-f13c3ab1e247
2019-06-14T17:01:15.889+0800 I REPL     [replication-0] sync source candidate: 192.168.4.203:27017
2019-06-14T17:01:15.889+0800 I REPL     [replication-0] Initial syncer oplog truncation finished in: 0ms
2019-06-14T17:01:15.889+0800 I REPL     [replication-0] ******
2019-06-14T17:01:15.889+0800 I REPL     [replication-0] creating replication oplog of size: 2048MB...
2019-06-14T17:01:15.889+0800 I STORAGE  [replication-0] createCollection: local.oplog.rs with generated UUID: f35caf0a-f2da-4cf7-b16b-34a94f1e25a7
2019-06-14T17:01:15.892+0800 I STORAGE  [replication-0] Starting OplogTruncaterThread local.oplog.rs
2019-06-14T17:01:15.892+0800 I STORAGE  [replication-0] The size storer reports that the oplog contains 0 records totaling to 0 bytes
2019-06-14T17:01:15.892+0800 I STORAGE  [replication-0] Scanning the oplog to determine where to place markers for truncation
2019-06-14T17:01:15.905+0800 I REPL     [replication-0] ******
2019-06-14T17:01:15.905+0800 I STORAGE  [replication-0] dropAllDatabasesExceptLocal 1
6.5. Add the arbiter node
RS:PRIMARY> rs.addArb("192.168.4.45:27017")
{
    "ok" : 1,
    "operationTime" : Timestamp(1560504144, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1560504144, 1),
        "signature" : {
            "hash" : BinData(0,"um2WmD60Gh9q/43qUff8yN2abIw="),
            "keyId" : NumberLong("6702308864978583553")
        }
    }
}
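rs.addArb() is just the shorthand for adding a voting-only member. A normal data-bearing secondary would be added and removed with the related helpers; a sketch (192.168.4.98 is a hypothetical host, not part of this setup):

rs.add("192.168.4.98:27017")       // add a hypothetical new secondary
rs.remove("192.168.4.98:27017")    // remove it from the set again
rs.conf()                          // inspect the resulting configuration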

View the status of the replica set
RS:PRIMARY> rs.status()
{
    "set" : "RS",
    "date" : ISODate("2019-06-14T09:40:04.334Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1560505197, 1),
            "t" : NumberLong(1)
        },
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1560505197, 1),
            "t" : NumberLong(1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1560505197, 1),
            "t" : NumberLong(1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1560505197, 1),
            "t" : NumberLong(1)
        }
    },
    "lastStableCheckpointTimestamp" : Timestamp(1560505167, 1),
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.4.203:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 4850,
            "optime" : {
                "ts" : Timestamp(1560505197, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2019-06-14T09:39:57Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "electionTime" : Timestamp(1560502885, 1),
            "electionDate" : ISODate("2019-06-14T09:01:25Z"),
            "configVersion" : 2,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1,
            "name" : "192.168.4.97:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 2330,
            "optime" : {
                "ts" : Timestamp(1560505197, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1560505197, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2019-06-14T09:39:57Z"),
            "optimeDurableDate" : ISODate("2019-06-14T09:39:57Z"),
            "lastHeartbeat" : ISODate("2019-06-14T09:40:02.717Z"),
            "lastHeartbeatRecv" : ISODate("2019-06-14T09:40:02.767Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.4.203:27017",
            "syncSourceHost" : "192.168.4.203:27017",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 2
        },
        {
            "_id" : 2,
            "name" : "192.168.4.45:27017",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 11,
            "lastHeartbeat" : ISODate("2019-06-14T09:40:02.594Z"),
            "lastHeartbeatRecv" : ISODate("2019-06-14T09:40:04.186Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "configVersion" : 2
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1560505197, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1560505197, 1),
        "signature" : {
            "hash" : BinData(0,"Ff00RyXUvxDPc5nzFQYXGZIlnBc="),
            "keyId" : NumberLong("6702308864978583553")
        }
    }
}

Remarks:
"health" : 1 ---- 1 indicates the member is healthy

"stateStr" : "PRIMARY" ---- the primary (master) node

"stateStr" : "SECONDARY" ---- a secondary (slave) node

"stateStr" : "ARBITER" ---- the arbiter node

7. View how data changes on the primary, secondary, and arbiter nodes
7.1. On the primary node, create a collection and insert data.
RS:PRIMARY> use lqb
switched to db lqb
RS:PRIMARY> db.object.insert([{"language":"C"},{"language":"C++"}])
BulkWriteResult({
    "writeErrors" : [ ],
    "writeConcernErrors" : [ ],
    "nInserted" : 2,
    "nUpserted" : 0,
    "nMatched" : 0,
    "nModified" : 0,
    "nRemoved" : 0,
    "upserted" : [ ]
})
RS:PRIMARY> db.object.find()
{ "_id" : ObjectId("5d0370424e7535b767bb7098"), "language" : "C" }
{ "_id" : ObjectId("5d0370424e7535b767bb7099"), "language" : "C++" }

7.2. Operations on the secondary node. Verify the data has been synchronized over.
RS:SECONDARY> use lqb;
switched to db lqb
RS:SECONDARY> rs.slaveOk();
RS:SECONDARY> db.object.find()
{ "_id" : ObjectId("5d0370424e7535b767bb7099"), "language" : "C++" }
{ "_id" : ObjectId("5d0370424e7535b767bb7098"), "language" : "C" }

7.3. The arbiter node. The arbiter does not store data:
RS:ARBITER> use lqb;
switched to db lqb
RS:ARBITER> show tables
Warning: unable to run listCollections, attempting to approximate collection names by parsing connectionStatus
RS:ARBITER> db.object.find()
Error: error: {
    "ok" : 0,
    "errmsg" : "not authorized on lqb to execute command { find: \"object\", filter: {}, lsid: { id: UUID(\"d2d7e624-8f30-468a-a3b0-79728b0cabbd\") }, $readPreference: { mode: \"secondaryPreferred\" }, $db: \"lqb\" }",
    "code" : 13,
    "codeName" : "Unauthorized"
}

8. Conclusions:
Conclusion 1: When the primary goes down, a secondary is elected as the new primary; when the original primary recovers, it rejoins as a secondary and the current primary does not change back.
Conclusion 2: Only the primary accepts writes; secondaries cannot be written to.
Conclusion 3: When the primary/secondary roles change, reads and writes follow the new roles accordingly. A quick way to watch this happen is sketched below.
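Conclusion 1 is easy to verify without killing any process: ask the primary to step down and watch the roles swap (a sketch; 60 is the number of seconds the old primary will refuse to seek re-election):

RS:PRIMARY> rs.stepDown(60)
// the shell may briefly report a dropped connection; reconnect and the
// prompt reads RS:SECONDARY, while rs.status() shows the other data-bearing
// member as PRIMARY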
