Mongo Personal Notes

MongoDB is a document database (BSON = binary-serialized JSON; large files are stored on disk via GridFS).
Features: the mongo shell's execution engine is a JavaScript interpreter. Documents are stored as BSON; when queried they come back as JSON-like objects, so they can be manipulated with ordinary JavaScript syntax.

2. The biggest difference between MongoDB and a traditional database:
1. Traditional database: structured; the table schema is defined up front, every row has the same columns and each column has a fixed data type.
2. Document database: documents in the same collection need no fixed structure; each document can carry its own fields and values (see the sketch below).
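For example, a minimal sketch (using a hypothetical demo collection): two documents with completely different shapes can live side by side in one collection.
db.demo.insert({_id:1, title:'book', price:10});
db.demo.insert({_id:2, title:'film', tags:['action','2019']});   // no price field, extra tags array
db.demo.find();   // both documents come back, each with its own fields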

III. Installation
Download: http://fastdl.mongodb.org/linux/mongodb-linux-x86_64-2.4.8.tgz
Unzip: tar -zxvf mongodb-linux-x86_64-2.4.8.tgz
No compilation needed: copy the unpacked directory to /usr/local/
Directory introduction: bin
bsondump      dumps BSON files into readable form
mongo         client shell
mongod        server
mongodump     exports a whole database (binary)
mongoexport   exports JSON and CSV
mongorestore  imports a whole database dump (binary)
mongos        sharding router

Start the service: /usr/local/mongodb/bin/mongod --dbpath=/data/mongod17 --logpath=/data/mongod17/mongod17.log --fork --smallfiles --port 27017
--dbpath=/data/mongodb                     data directory
--logpath=/data/mongodb/logs/mongodb.log   log file
--auth          require authentication
--port=27017    listening port
--fork          run in the background
--smallfiles    use smaller initial data files (about 400 MB minimum; without it preallocation takes 3-4 GB of disk, check with du -h)
killall mongod  stop the process

Common commands
show dbs;           list databases
use <dbname>;       switch to a database
show collections;   list collections in the current database
db.help();          show help

How to create a collection (creation is implicit: inserting into a collection that does not exist creates it automatically)
db.createCollection('user');

Insert documents into a collection
db.user.insert({_id:1,name:'lisi',age:22});
db.user.insert({_id:2,name:'wangbaojian',hobby:['computer','study']});
db.user.find();

Delete databases and collections
 db.collectionName.drop();   drop a collection
 db.dropDatabase();          drop the current database (switch to it first with use)

5. CRUD commands in detail
Add a document
db.stu.insert({_id:'001',sn:'001',name:'xiaoming'});

Adding multiple documents
db.user.insert([
{_id:1,name:'zhangsan',sex:'1',xueli:'dazhuan'},
{_id:2,name:'lisi',sex:'1',xueli:'dazhuan'},
{_id:3,name:'wangwu',sex:'0',xueli:'benke'}
]);

Remove documents
db.user.remove({});
Note: 1. The query expression is still a JSON object;
      2. Documents matching the query expression are deleted;
      3. With an empty query expression, all documents in the collection are deleted.
db.user.remove({sex:'1'}, true);   // if the second parameter (justOne) is true, only one matching document is deleted

Modify documents
 db.user.update({name:'zhangsan'},{name:'zhangsantongxue'});   // with no operators the new document replaces the old one entirely (except _id)
db.user.update({name:'lisi'},{$set:{name:'lisitongxue'}});
db.user.update({name:'lisi'},{$set:{}, $unset:{}, $rename:{}, $inc:{}});   // the common update operators
$set modifies the value of a field: db.user.update({name:'lisi'},{$set:{name:'lisitongxue'}}); if several documents match, only one is changed unless the multi option is set to true.
$unset removes a field: db.stu.update({_id:3},{$unset:{school:'zhejiang'}});
$rename renames a field: db.stu.update({_id:3},{$rename:{name:'xname'}});
$inc increments a field: db.stu.update({_id:4},{$inc:{sex:10}});
db.user.update({name:'lisi'},{$set:{name:'lisitongxue'}},{upsert:true});   // upsert: update if a document matches, otherwise insert a new one
$setOnInsert sets extra fields only when the upsert actually inserts a new document (see the sketch below).
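A minimal sketch of $setOnInsert combined with an upsert (the created_at field name is illustrative): it is written only when the upsert inserts a new document and ignored when an existing document matches.
db.user.update(
    {name:'zhaoliu'},                                          // assume no such document exists yet
    {$set:{age:30}, $setOnInsert:{created_at:new Date()}},
    {upsert:true}
);   // inserts {name:'zhaoliu', age:30, created_at:...}; a later matching upsert only applies $set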

Query documents
 db.user.find();   query all documents
 db.user.find(queryExpression, projection);   // in the projection, 1 includes a field and 0 excludes it
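For example, a sketch against the user documents inserted above: return only name and age, and hide _id.
db.user.find({age:{$gte:20}}, {name:1, age:1, _id:0});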

6. Query expression
db.shop.find().count();

> Comparison operators

Goods whose primary key (goods_id) is 13
db.shop.find({goods_id:13});

All goods whose goods_id is not 3 ($ne)
db.shop.find({goods_id:{$ne:3}},{goods_id:1,_id:0,goods_name:1})

Other comparison keywords
 $ne   not equal to
 $gt   greater than
 $lt   less than
 $gte  greater than or equal to
 $lte  less than or equal to

Value is in a given set
$in   in     
db.shop.find({goods_id:{$in:[10,11]}},{_id:0,goods_id:1,goods_name:1})

Value is not in a given set
$nin  not in
db.shop.find({goods_id:{$nin:[1,3]}},{_id:0,goods_id:1,goods_name:1});

The array field contains all of the listed values
$all
db.user.find({hobby:{$all:[1]}},{_id:0,name:1,passwd:1,hobby:1});

> Logical operators

and
$and  and
db.shop.find({$and:[
{goods_id:{$gte:1}},
{goods_id:{$lte:10}}
]},{_id:0,goods_id:1,goods_name:1}) 

db.shop.find({$and:[
{goods_id:{$ne:1}},
{goods_id:{$ne:3}}
]},{_id:0,goods_id:1,goods_name:1})

or
$or   or
db.shop.find({$or:[
{goods_id:12},
{goods_id:1}
]},{_id:0,goods_id:1,goods_name:1})

not
$not  negates another operator expression
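$not wraps an operator expression rather than a plain value; a sketch against the shop collection used above:
db.shop.find({goods_id:{$not:{$gt:10}}},{_id:0,goods_id:1,goods_name:1});   // goods_id is NOT greater than 10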

None of the listed conditions is true
$nor  
db.shop.find({$nor:[ 
{goods_id:12}, 
{goods_id:1} 
]},{_id:0,goods_id:1,goods_name:1})

The field exists on the document
$exists
db.shop.find(
{goods_id:{$exists:1}},
{_id:0,goods_id:1,goods_name:1})

The remainder condition holds ($mod:[divisor, remainder]; here goods_id % 5 == 0)
$mod
db.shop.find({goods_id:
{$mod:[5,0]}
},{_id:0,goods_id:1,goods_name:1})

The field is of a given BSON type ($type:2 is String in the legacy type numbering)
$type
db.shop.find({goods_name:
{$type:2}
},{_id:0,goods_id:1,goods_name:1})

7. Cursor operation
insert data
for(var i=0;i<10000;i++){db.bar.insert({_id:i,title:'hellow',content:'word'+i})};

declare cursor
var mycusor = db.bar.find({_id:{$lte:5}});
print(mycusor.next());

Printing with a while loop
while(mycusor.hasNext()){
      printjson(mycusor.next());
}

for loop
for(var mycusor = db.bar.find({_id:{$lte:5}}); mycusor.hasNext(); ){
     printjson(mycusor.next());
}

forEach loop with a callback
var mycusor = db.stu.find();
mycusor.forEach(function(obj){printjson(obj)});

//Paging with skip() and limit()
//n rows per page, current page = page: skip((page-1)*n).limit(n); a small helper wrapping this is sketched after the loop below
var mycusor = db.stu.find().skip(800).limit(10);
while(mycusor.hasNext()){
      printjson(mycusor.next());
}
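As a sketch, the skip/limit formula can be wrapped in a small shell helper (the function name is illustrative):
// return page number `page` with `n` rows per page
function getPage(coll, page, n){
    return coll.find().skip((page - 1) * n).limit(n);
}
getPage(db.stu, 81, 10).forEach(printjson);   // same range as skip(800).limit(10)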

//Return all results at once as an array (memory-intensive)
var mycusor = db.stu.find().skip(800).limit(10);
mycusor.toArray();

VII. Index Operations
Explain the query plan
db.collection.find(query).explain();   e.g. db.stu.find().skip(0).limit(10).explain();
"cursor" : "BasicCursor"    means no index was used
"nscannedObjects" : 10      number of documents scanned
View the current index
db.stu.getIndexes();

Create an index
 db.stu.ensureIndex({title:1});   // 1 ascending, -1 descending
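A quick way to verify an index is used, against the bar collection filled in the cursor section (in the 2.4-era explain output the cursor type changes):
db.bar.ensureIndex({title:1});
db.bar.find({title:'hellow'}).explain();   // "cursor" changes from "BasicCursor" to "BtreeCursor title_1"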

Delete index
 db.stu.dropIndex({title:1});   drop the specified index
 db.stu.dropIndexes(); delete all

Multi-column index
db.stu.ensureIndex({a:1,b:1...});
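With a compound index the leftmost prefix matters: queries on a, or on a and b, can use the index, while a query on b alone cannot. A sketch with the field names above:
db.stu.ensureIndex({a:1, b:1});
db.stu.find({a:5});          // can use the {a:1,b:1} index (prefix match)
db.stu.find({a:5, b:'x'});   // can use it as well
db.stu.find({b:'x'});        // cannot use it efficiently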

Subdocument index and query
db.user.insert({_id:3,name:'lili',hobby:{yundong:'yumaoqiu',yule:'kandianshi'}})
db.user.find({'hobby.yule':'kandianshi'});   query on a subdocument field
db.user.ensureIndex({'hobby.yule':1});

Index properties
 Normal index (indexes documents even when the field is null or missing)

unique index
db.user.ensureIndex({name:1},{unique:true});

Sparse index (only indexes documents where the field exists)
db.user.ensureIndex({name:1},{sparse:true});

Hashed index (in theory a single hash lookup, but not usable for range queries; added in 2.4)
db.user.ensureIndex({name:'hashed'});

Rebuild an index (many updates and deletes leave holes in the index file)
db.stu.reIndex();

Delete index
 db.stu.dropIndexes(); delete all

VIII. User Management

1,Switch to the admin database
   use admin

2,Add a super administrator
   db.addUser('sa','shunjian',false);   // username, password, readOnly
   //To add users to other databases, switch to that database as the administrator first
   db.addUser('web','123456',false);

3,Stop all mongod processes: killall mongod

4,Restart with authentication enabled: --auth
/usr/local/mongodb/bin/mongod --dbpath=/data/mongod17/ --logpath=/data/mongod17/mongod17.log --auth --port=27017 --fork --smallfiles

5,Login Authentication
   use admin
   db.auth('sa','shunjian');

6,Modify user password
   db.changeUserPassword('sa','newpasswd');

7,Remove a user
   db.removeUser('sa');

8,User rights (fine-grained role management is normally the DBA's job)
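For reference, a sketch of the 2.4-era document form of addUser with an explicit role list (the web/report user names are illustrative; read and readWrite are built-in roles):
use test
db.addUser({user:'web',    pwd:'123456', roles:['readWrite']});   // read-write on the test database only
db.addUser({user:'report', pwd:'123456', roles:['read']});        // read-only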

9. Import and Export
Common connection parameters
-h      host
--port  port
-u      username
-p      password

 mongoexport -d [db] -c [collection] -f field1,field2 -q [condition] -o ./stu.json
 -d  database name
 -c  collection name
 -f  field names
 -q  query condition
 -o  output path
 --csv  export in CSV format (easy to exchange data with traditional databases)

 Export json
 ./mongoexport -h 192.168.0.174 --port 27017 -u www -p 321 -d test -c stu -f _id,title,content -q '{_id:{$lte:1000}}' -o ./stu.json

 > Import json
 ./mongoimport -h 192.168.0.174 --port 27017 -u www -p 321 -d test -c stu --type json --file ./stu.json

 Export csv
 ./mongoexport -h 192.168.0.174 --port 27017 -u www -p 321 -d test -c stu -f _id,title,content -q '{_id:{$lte:1000}}' --csv -o ./stu.csv

 Import csv
 ./mongoimport -h 192.168.0.174 --port 27017 -u www -p 321 -d test -c stu -f _id,title,content --headerline --type csv --file ./stu.csv

 Export binary (mongodump)
 ./mongodump -h 192.168.0.174 --port 27017 -u www -p 321 -d test -c stu -o ./

 Import binary
 ./mongorestore -d test -c stu --directoryperdb /usr/local/mongodb/bin/test/

10. Replica set
1. Multiple servers maintain identical copies of the data
2. Schematic diagram:
                 primary (write)
                /               \
   secondary (read)  <-->  secondary (read)

3,Create directories(Data, Logs)
   rm -rf /data/mongodb* && mkdir -p /data/mongodb17 /data/mongodb18 /data/mongodb19 /data/mongodb17/logs/ /data/mongodb18/logs/ /data/mongodb19/logs/

4,Start three instances (declare which replica set each instance belongs to): --replSet
   /usr/local/mongodb/bin/mongod --dbpath=/data/mongodb17 --logpath=/data/mongodb17/logs/mongodb17.log --port=27017 --fork --smallfiles --replSet rs2
   /usr/local/mongodb/bin/mongod --dbpath=/data/mongodb18 --logpath=/data/mongodb18/logs/mongodb18.log --port=27018 --fork --smallfiles --replSet rs2
   /usr/local/mongodb/bin/mongod --dbpath=/data/mongodb19 --logpath=/data/mongodb19/logs/mongodb19.log --port=27019 --fork --smallfiles --replSet rs2

5,Configure the set
var rsconf = {
_id:'rs2',
members:[
    {
        _id:0,
        host:'192.168.0.174:27017'
    },
    {
        _id:1,
        host:'192.168.0.174:27018'
    },
    {
        _id:2,
        host:'192.168.0.174:27019'
    }
]}
6,Initialization according to configuration
rs.initiate(rsconf);

7,View status
rs.status(); 

8,Delete Nodes
rs.remove('192.168.0.174:27018');

9,Adding Nodes
//Re-run the configuration from step 5 (with the node included in members), then:
rs.reconfig(rsconf);

10,Allow reads on a secondary
rs.slaveOk();
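A minimal sketch of reading from a secondary (connect to one of the secondary ports from the setup above, e.g. 27018):
// ./mongo 192.168.0.174:27018/test
rs.slaveOk();     // allow reads on this secondary
db.user.find();   // without slaveOk() this fails with "not master and slaveOk=false"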

11,Close the server
db.shutdownServer();

12,Help: rs.help();

13,Script
#!/bin/bash
NA="rs2"
IP="192.168.0.174"

if [ "$1" == "reset" ]; then
    killall mongod
    rm -rf /data/m*
    mkdir -p /data/mongodb17 /data/mongodb18 /data/mongodb19 /data/mongodb17/logs/ /data/mongodb18/logs/ /data/mongodb19/logs/
elif [ "$1" == "install" ]; then
    /usr/local/mongodb/bin/mongod --dbpath=/data/mongodb17 --logpath=/data/mongodb17/logs/mongodb17.log --port=27017 --fork --smallfiles --replSet ${NA}
    /usr/local/mongodb/bin/mongod --dbpath=/data/mongodb18 --logpath=/data/mongodb18/logs/mongodb18.log --port=27018 --fork --smallfiles --replSet ${NA}
    /usr/local/mongodb/bin/mongod --dbpath=/data/mongodb19 --logpath=/data/mongodb19/logs/mongodb19.log --port=27019 --fork --smallfiles --replSet ${NA}
    /usr/local/mongodb/bin/mongo << EOF
use admin
var rsconf = {
     _id:"rs2",
     members:[
             {_id:0,host:"${IP}:27017"},
             {_id:1,host:"${IP}:27018"},
             {_id:2,host:"${IP}:27019"}
]}
rs.initiate(rsconf);
EOF
else
     echo "this is error"
fi 

11. Sharding (mongos)

>>Schematic diagram
                              shard1 (shard node 1)
                             /
   mongos --- configsvr ----
                             \
                              shard2 (shard node 2)
 // mongos = router (stores no data); configsvr stores the metadata about which data lives on which shard
>>Explanation
1,You need N (N>=2) mongod shard nodes.
2,You need a configsvr to maintain the metadata.
3,Start the mongos router.
4,Set the sharding rules (kept by the configsvr); see the script below.
5,Data first goes into one shard; when chunks become unbalanced they are migrated to other shards, which increases disk IO.
#!/bin/bash
IP="192.168.0.174"
if [ "$1" == "reset" ]; then
killall mongod
rm -rf /data/m*
mkdir -p /data/mongodb17/logs /data/mongodb18/logs /data/mongodb99/logs /data/mongodb30000/logs
elif [ "$1" == "install" ]; then
/usr/local/mongodb/bin/mongod --dbpath=/data/mongodb17 --logpath=/data/mongodb17/logs/mongodb17.log --port 27017 --smallfiles --fork
/usr/local/mongodb/bin/mongod --dbpath=/data/mongodb18 --logpath=/data/mongodb18/logs/mongodb18.log --port 27018 --smallfiles --fork
/usr/local/mongodb/bin/mongod --dbpath=/data/mongodb99 --logpath=/data/mongodb99/logs/mongodb99.log --port 27099 --smallfiles --configsvr --fork
/usr/local/mongodb/bin/mongos --logpath=/data/mongodb30000/logs/mongodb30000.log --port 30000 --fork --configdb ${IP}:27099
/usr/local/mongodb/bin/mongo --port 30000 << EOF
     sh.addShard("${IP}:27017")
     sh.addShard("${IP}:27018")
     sh.enableSharding('test')
     sh.shardCollection('test.user',{userid:1})
     use config
     db.settings.save({_id:'chunksize',value:2})
EOF
else
     echo "this is error"
fi   
 5,Add sharding rules for a collection
 sh.enableSharding("test")
 sh.shardCollection("test.user",{userid:1})

 6,Modify the chunk size (the default is 64 MB)
 use config
 db.settings.find();
 db.settings.save({_id:'chunksize',value:1})

12. Manual pre-splitting of chunks
sh.enableSharding("test")
sh.shardCollection("test.user",{userid:1})
for(var i=1;i<=79;i++){ sh.splitAt('test.user',{userid:i*1000}) }
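After pre-splitting, the resulting chunk layout can be checked (a sketch run through the mongos):
use test
db.user.getShardDistribution();   // per-shard document count and data size
sh.status();                      // lists each chunk's {userid} range and which shard owns it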

Day 8 of the course

I. Combining replica sets with sharding
mongos (A)
port: 30000
    |
configsvr (A)
port: 27020
    |
    +-- shard 1: replica set rs3 on host B (ports 27017, 27018, 27019)
    +-- shard 2: replica set rs4 on host C (ports 27017, 27018, 27019)
Each shard is itself a replica set (see the addShard sketch below).
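To register a whole replica set as one shard, addShard takes the 'setName/host:port' form; a sketch using the set names from the diagram (hostB and hostC are placeholders for the two machines):
// run on the mongos (port 30000)
sh.addShard('rs3/hostB:27017');   // one seed member is enough, the remaining members are discovered
sh.addShard('rs4/hostC:27017');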
