...
I am running MongoDB v3.4.6:

2019-03-01T15:14:52.797+0800 I CONTROL [main] ***** SERVER RESTARTED *****
2019-03-01T15:14:52.810+0800 I CONTROL [initandlisten] MongoDB starting : pid=22336 port=22222 dbpath=/home/mongodb/datas/shard1 64-bit host=NODE-101
2019-03-01T15:14:52.810+0800 I CONTROL [initandlisten] db version v3.4.6
2019-03-01T15:14:52.810+0800 I CONTROL [initandlisten] git version: c55eb86ef46ee7aede3b1e2a5d184a7df4bfb5b5
2019-03-01T15:14:52.810+0800 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
2019-03-01T15:14:52.810+0800 I CONTROL [initandlisten] allocator: tcmalloc
2019-03-01T15:14:52.810+0800 I CONTROL [initandlisten] modules: none
2019-03-01T15:14:52.810+0800 I CONTROL [initandlisten] build environment:
2019-03-01T15:14:52.810+0800 I CONTROL [initandlisten]     distmod: rhel70
2019-03-01T15:14:52.810+0800 I CONTROL [initandlisten]     distarch: x86_64
2019-03-01T15:14:52.810+0800 I CONTROL [initandlisten]     target_arch: x86_64
2019-03-01T15:14:52.810+0800 I CONTROL [initandlisten] options: { config: "/home/mongodb/confs/shard1.conf", net: { bindIp: "DB-NODE1", port: 22222 }, operationProfiling: { mode: "slowOp" }, processManagement: { fork: true, pidFilePath: "/home/mongodb/tmp/shard1.pid" }, replication: { replSetName: "shard1" }, security: { authorization: "enabled", keyFile: "/home/mongodb/confs/mongod.key" }, setParameter: { enableLocalhostAuthBypass: "false" }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/home/mongodb/datas/shard1", engine: "wiredTiger", journal: { enabled: true }, wiredTiger: { engineConfig: { cacheSizeGB: 2.0 } } }, systemLog: { destination: "file", logAppend: true, logRotate: "reopen", path: "/home/mongodb/logs/shard1.log" } }
2019-03-01T15:14:52.810+0800 W -       [initandlisten] Detected unclean shutdown - /home/mongodb/datas/shard1/mongod.lock is not empty.
2019-03-01T15:14:52.825+0800 W STORAGE [initandlisten] Recovering data from the last clean checkpoint.
2019-03-01T15:14:52.825+0800 I STORAGE [initandlisten]
2019-03-01T15:14:52.825+0800 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-03-01T15:14:52.825+0800 I STORAGE [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2019-03-01T15:14:52.825+0800 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=2048M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2019-03-01T15:14:52.828+0800 E STORAGE [initandlisten] WiredTiger error (-31802) [1551424492:828557][22336:0x7fb1ecc0de40], file:WiredTiger.wt, connection: unable to read root page from file:WiredTiger.wt: WT_ERROR: non-specific WiredTiger error
2019-03-01T15:14:52.828+0800 E STORAGE [initandlisten] WiredTiger error (0) [1551424492:828572][22336:0x7fb1ecc0de40], file:WiredTiger.wt, connection: WiredTiger has failed to open its metadata
2019-03-01T15:14:52.828+0800 E STORAGE [initandlisten] WiredTiger error (0) [1551424492:828577][22336:0x7fb1ecc0de40], file:WiredTiger.wt, connection: This may be due to the database files being encrypted, being from an older version or due to corruption on disk
2019-03-01T15:14:52.828+0800 E STORAGE [initandlisten] WiredTiger error (0) [1551424492:828582][22336:0x7fb1ecc0de40], file:WiredTiger.wt, connection: You should confirm that you have opened the database with the correct options including all encryption and compression options
2019-03-01T15:14:52.828+0800 I -       [initandlisten] Assertion: 28595:-31802: WT_ERROR: non-specific WiredTiger error src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp 269
2019-03-01T15:14:52.829+0800 I STORAGE [initandlisten] exception in initAndListen: 28595 -31802: WT_ERROR: non-specific WiredTiger error, terminating
2019-03-01T15:14:52.829+0800 I NETWORK [initandlisten] shutdown: going to close listening sockets...
2019-03-01T15:14:52.829+0800 I NETWORK [initandlisten] shutdown: going to flush diaglog...
2019-03-01T15:14:52.829+0800 I CONTROL [initandlisten] now exiting
2019-03-01T15:14:52.829+0800 I CONTROL [initandlisten] shutting down with code:100
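For readers hitting the same "unable to read root page from file:WiredTiger.wt" failure: a minimal first-response sketch, not taken from this ticket, is to copy the dbpath aside and try mongod's built-in repair. With the WiredTiger.wt metadata file itself damaged, as here, the repair may abort with the same -31802 error, which is why the thread below turns to a manual fix.

# Preserve a byte-for-byte copy of the damaged dbpath before any repair attempt.
cp -a /home/mongodb/datas/shard1 /home/mongodb/datas/shard1.bak
# Built-in repair (run as the user that owns the data files); with a corrupt
# WiredTiger.wt it can fail with the same WT_ERROR seen in the log above.
mongod --dbpath /home/mongodb/datas/shard1 --repair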
daniel.hatcher commented on Fri, 8 Mar 2019 21:36:59 +0000:
Hello, I see that these are the same files as in SERVER-40007, so I attached the repair to that ticket. Thank you, Danny

hexiaojie commented on Fri, 8 Mar 2019 06:28:27 +0000:
That's great. It's OK. Can you kindly help me to repair another one (repair-other.rar)?

daniel.hatcher commented on Thu, 7 Mar 2019 15:42:53 +0000:
Hello, Unfortunately, I repaired the files via an internal script that is not currently available to the public. In order to avoid these problems in the future, we recommend configuring replica sets across multiple servers and taking regular backups of your data. Thank you, Danny

hexiaojie commented on Thu, 7 Mar 2019 05:30:21 +0000:
Thank you, the problem has been solved. Can you share the fix?

daniel.hatcher commented on Wed, 6 Mar 2019 19:22:45 +0000:
Hello, Please extract repair-attempt.tar and replace these files in your $dbpath. Thank you, Danny

hexiaojie commented on Wed, 6 Mar 2019 08:53:06 +0000:
Attachments: WiredTiger.wt, WiredTiger.turtle
Hello, the members of the replica set were not all on the same server, but it happened anyway. So I want to know how to repair this file. Can you share the method? Thank you

daniel.hatcher commented on Tue, 5 Mar 2019 20:52:13 +0000:
Hello, If you provide the WiredTiger.wt and WiredTiger.turtle files I can attempt a repair. However, having a replica set should avoid the need for this. Were the members of the replica set all on the same server, and did that server crash? Thank you, Danny

hexiaojie commented on Tue, 5 Mar 2019 01:25:58 +0000:
This is correct, but other nodes in my database have the same problem, so I can't sync from a healthy node in the replica set. I want to repair the WiredTiger.wt file, but I don't know how to fix it, so I need some guidance.

daniel.hatcher commented on Mon, 4 Mar 2019 17:14:37 +0000:
Hello, It appears that your database files may have become corrupted. I see that this node is part of a replica set; the best solution would be to perform an initial sync from a healthy node in the replica set. Would this be possible? Thank you, Danny
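For reference, the initial sync Danny suggests is triggered simply by restarting the member with an empty dbpath; a minimal sketch, assuming a healthy member still exists (it did not in this ticket) and reusing the paths from the log above:

# Stop the affected member if it is still running.
mongod --shutdown --dbpath /home/mongodb/datas/shard1
# Keep the old files aside in case they are needed for a manual repair.
mv /home/mongodb/datas/shard1 /home/mongodb/datas/shard1.corrupt
mkdir -p /home/mongodb/datas/shard1
# On restart with an empty dbPath and the same replSetName, the member
# performs an automatic initial sync from a healthy node.
mongod --config /home/mongodb/confs/shard1.conf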
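The file swap from the 6 Mar comment amounts to the following sketch; it assumes repair-attempt.tar contains repaired copies of the two files exchanged in this ticket, WiredTiger.wt and WiredTiger.turtle:

# mongod must not be running while the metadata files are replaced.
tar -xf repair-attempt.tar
cp WiredTiger.wt WiredTiger.turtle /home/mongodb/datas/shard1/
mongod --config /home/mongodb/confs/shard1.conf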