Thursday, June 2, 2016

Point in time recovery - MongoDB

Start the mongod server


mongod -f /home/m202/week2/mongod.conf
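The config file itself is not shown in this post; a minimal sketch of what /home/m202/week2/mongod.conf could contain (only port 30001 and the replica set name BackupTest are confirmed by the shell prompts below; the paths are assumptions):

# Hypothetical mongod.conf (YAML format); port and replSetName
# are inferred from the sessions below, the paths are assumed.
storage:
  dbPath: /home/m202/week2/data
net:
  port: 30001
replication:
  replSetName: BackupTest
systemLog:
  destination: file
  path: /home/m202/week2/mongod.log
  logAppend: true
processManagement:
  fork: true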
A mongodump backup of the collection, taken before the logical corruption, is already on disk:

donghua@vmxdb01:~/week2$ ls -l backupDB/
BlogColl.bson           BlogColl.metadata.json  system.indexes.bson


Restore the backup


donghua@vmxdb01:~/week2$ mongorestore --collection BlogColl --db backupDB backupDB/BlogColl.bson --port 30001
2016-06-02T13:38:15.341+0100    checking for collection data in backupDB/BlogColl.bson
2016-06-02T13:38:15.345+0100    reading metadata file from backupDB/BlogColl.metadata.json
2016-06-02T13:38:15.346+0100    restoring backupDB.BlogColl from file backupDB/BlogColl.bson
2016-06-02T13:38:18.343+0100    [#########...............]  backupDB.BlogColl  20.8 MB/52.5 MB  (39.7%)
2016-06-02T13:38:21.344+0100    [############............]  backupDB.BlogColl  28.0 MB/52.5 MB  (53.4%)
2016-06-02T13:38:24.356+0100    [################........]  backupDB.BlogColl  36.4 MB/52.5 MB  (69.4%)
2016-06-02T13:38:27.352+0100    [#####################...]  backupDB.BlogColl  46.9 MB/52.5 MB  (89.3%)
2016-06-02T13:38:29.801+0100    restoring indexes for collection backupDB.BlogColl from metadata
2016-06-02T13:38:29.802+0100    finished restoring backupDB.BlogColl (604800 documents)
2016-06-02T13:38:29.802+0100    done
Connect with the mongo shell to confirm the oplog is intact and the collection was restored:

BackupTest:PRIMARY> use local
switched to db local
BackupTest:PRIMARY> show collections
me
oplog.rs
startup_log
system.indexes
system.replset
BackupTest:PRIMARY> use backupDB
switched to db backupDB
BackupTest:PRIMARY> show collections
BlogColl
system.indexes
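You can also confirm the document count matches what mongorestore reported (604800 documents in the restore output above):

BackupTest:PRIMARY> db.BlogColl.count()  // expect 604800, per the restore output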


Create a mongodump of the oplog collection with this command:

donghua@vmxdb01:~/week2$ mongodump -d local -c oplog.rs -o oplogD --port 30001
2016-06-02T13:48:57.159+0100    writing local.oplog.rs to oplogD/local/oplog.rs.bson
2016-06-02T13:49:00.161+0100    [###############.........]  local.oplog.rs  736424/1111069  (66.3%)
2016-06-02T13:49:01.845+0100    writing local.oplog.rs metadata to oplogD/local/oplog.rs.metadata.json
2016-06-02T13:49:01.847+0100    done dumping local.oplog.rs (1111069 documents)
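Before going further, it is worth checking that the oplog window still covers the point of corruption; a quick sanity check with standard oplog queries (not specific to this exercise):

BackupTest:PRIMARY> use local
BackupTest:PRIMARY> db.oplog.rs.find().sort({$natural: 1}).limit(1)   // oldest entry in the oplog
BackupTest:PRIMARY> db.oplog.rs.find().sort({$natural: -1}).limit(1)  // newest entry in the oplog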


Find the logical corruption point in time

mongorestore --oplogReplay expects the oplog as a file named oplog.bson in the top level of the directory it is given, hence the rename:

donghua@vmxdb01:~/week2$ mkdir oplogR
donghua@vmxdb01:~/week2$ mv oplogD/local/oplog.rs.bson oplogR/oplog.bson
donghua@vmxdb01:~/week2$ bsondump oplogR/oplog.bson | grep drop
{"h":{"$numberLong":"-4262957146204779874"},"ns":"backupDB.$cmd","o":{"drop":"BlogColl"},"op":"c","ts":{"$timestamp":{"t":1398778745,"i":1}},"v":2}
2016-06-02T13:55:11.407+0100    1111069 objects found
// Alternatively you can query against the original oplog
donghua@vmxdb01:~/week2$ mongo localhost:30001/local
MongoDB shell version: 3.0.5
connecting to: localhost:30001/local
BackupTest:PRIMARY> db.oplog.rs.find({'o.drop':"BlogColl"})
{ "ts" : Timestamp(1398778745, 1), "h" : NumberLong("-4262957146204779874"), "v" : 2, "op" : "c", "ns" : "backupDB.$cmd", "o" : { "drop" : "BlogColl" } }


Replay this oplog on the restored server, stopping just before the offending drop operation. The --oplogLimit is exclusive: mongorestore applies only the entries with a timestamp strictly before 1398778745:1.


donghua@vmxdb01:~/week2$ mongorestore -h localhost --port 30001 --oplogReplay --oplogLimit 1398778745:1 oplogR


2016-06-02T13:58:00.996+0100    building a list of dbs and collections to restore from oplogR dir
2016-06-02T13:58:00.998+0100    replaying oplog
2016-06-02T13:58:04.016+0100    [##......................] oplog        16.0 MB/174.0 MB (9.2%)
2016-06-02T13:58:07.039+0100    [####....................] oplog        32.0 MB/174.0 MB (18.4%)
2016-06-02T13:58:10.012+0100    [#####...................] oplog        40.0 MB/174.0 MB (23.0%)
2016-06-02T13:58:13.059+0100    [#######.................] oplog        56.0 MB/174.0 MB (32.2%)
2016-06-02T13:58:16.071+0100    [#########...............] oplog        65.7 MB/174.0 MB (37.8%)
2016-06-02T13:58:19.013+0100    [#########...............] oplog        72.0 MB/174.0 MB (41.4%)
2016-06-02T13:58:21.586+0100    done

donghua@vmxdb01:~/week2$ mongo localhost:30001/backupDB
MongoDB shell version: 3.0.5
connecting to: localhost:30001/backupDB
BackupTest:PRIMARY> db.BlogColl.find().count()
614800

The count has grown from the 604,800 documents restored from the backup to 614,800: the 10,000 writes made after the backup but before the drop were recovered by the oplog replay.
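As an optional final check, the indexes rebuilt from the metadata file can be listed with the standard getIndexes() helper:

BackupTest:PRIMARY> db.BlogColl.getIndexes()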
