...
Hello,

We're facing a similar issue, and neither adding the index nor upgrading from 2.6.3 to 2.6.8 seems to have fixed the problem. The errors are of the following type:

2015-05-27T18:18:50.192-0500 [conn35] warning: ClientCursor::staticYield can't unlock b/c of recursive lock ns: top: { opid: 14571, active: true, secs_running: 0, microsecs_running: 823961, op: "query", ns: "henson_qa", query: { findandmodify: "expirationwork", query: { $and: [ { s: 0 }, { $or: [ { r.w: { $exists: false } }, { r.w: 58 } ] }, { r.b: { $ne: 58 } } ] }, sort: { p: 1 }, update: { $set: { s: 1, lut: 1432768729365 } }, new: true }, client: "10.x.x.x:58980", desc: "conn35", threadId: "0x7f4326c63700", connectionId: 35, locks: { ^: "w", ^henson_qa: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { r: 0, w: 6 } } }

2015-05-27T18:18:50.193-0500 [conn35] warning: ClientCursor::staticYield can't unlock b/c of recursive lock ns: top: { opid: 14571, active: true, secs_running: 0, microsecs_running: 824538, op: "query", ns: "henson_qa", query: { findandmodify: "expirationwork", query: { $and: [ { s: 0 }, { $or: [ { r.w: { $exists: false } }, { r.w: 58 } ] }, { r.b: { $ne: 58 } } ] }, sort: { p: 1 }, update: { $set: { s: 1, lut: 1432768729365 } }, new: true }, client: "10.x.x.x:58980", desc: "conn35", threadId: "0x7f4326c63700", connectionId: 35, locks: { ^: "w", ^henson_qa: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { r: 0, w: 6 } } }

We tried a number of different index combinations, including the one you suggested. The only index the planner actually honors is { "r.w": 1 }, and no hint is required for it. When we added the index { s: 1, "r.w": 1 } and tried to force it with a hint, the query plan turned bad, and it never resolved the original problem anyway.

This is the plan for that query now (which looks good, but those warnings are still being logged):

PRIMARY> db.expirationwork.find({ $and: [ { s: 0 }, { $or: [ { "r.w": { $exists: false } }, { "r.w": 60 } ] }, { "r.b": { $ne: 60 } } ] }).sort({ p : 1 }).explain()
{
    "cursor" : "BtreeCursor r.w_1",
    "isMultiKey" : true,
    "n" : 0,
    "nscannedObjects" : 1,
    "nscanned" : 2,
    "nscannedObjectsAllPlans" : 1,
    "nscannedAllPlans" : 2,
    "scanAndOrder" : true,
    "indexOnly" : false,
    "nYields" : 0,
    "nChunkSkips" : 0,
    "millis" : 0,
    "indexBounds" : {
        "r.w" : [
            [ null, null ],
            [ 60, 60 ]
        ]
    },
    "server" : "xxx-mongo6:27056",
    "filterSet" : false
}

This is the plan when I tried to force the composite index with a hint:

PRIMARY> db.expirationwork.find({ $and: [ { s: 0 }, { $or: [ { "r.w": { $exists: false } }, { "r.w": 60 } ] }, { "r.b": { $ne: 60 } } ] }).sort({ p : 1 }).hint({ "s" : 1, "r.w" : 1 }).explain()
{
    "cursor" : "BtreeCursor s_1_r.w_1",
    "isMultiKey" : true,
    "n" : 1,
    "nscannedObjects" : 317212,
    "nscanned" : 958844,
    "nscannedObjectsAllPlans" : 317212,
    "nscannedAllPlans" : 958844,
    "scanAndOrder" : true,
    "indexOnly" : false,
    "nYields" : 7491,
    "nChunkSkips" : 0,
    "millis" : 23827,
    "indexBounds" : {
        "s" : [
            [ 0, 0 ]
        ],
        "r.w" : [
            [ { "$minElement" : 1 }, { "$maxElement" : 1 } ]
        ]
    },
    "server" : "xxx-mongo6:27056",
    "filterSet" : false
}

PRIMARY> db.version()
2.6.8

Note that at this point the MongoDB version has also been upgraded to 2.6.8 from the 2.6.3 we were running when we first encountered this issue. Should I create a separate JIRA ticket?
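For reference, a shell equivalent of the logged findandmodify command, together with the { "r.w": 1 } index discussed in the comments below. This is a reconstruction from the log sample above, not the application's actual code; the worker id 58 is taken from the log and is arbitrary here, and new Date().getTime() stands in for the logged epoch-millisecond lut value:

// The index the planner ends up using (per the comments below).
db.expirationwork.ensureIndex({ "r.w": 1 })

// Shell equivalent of the findandmodify in the warnings above.
db.expirationwork.findAndModify({
    query: { $and: [ { s: 0 },
                     { $or: [ { "r.w": { $exists: false } }, { "r.w": 58 } ] },
                     { "r.b": { $ne: 58 } } ] },
    sort: { p: 1 },
    update: { $set: { s: 1, lut: new Date().getTime() } },
    new: true
})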
Any suggestions would be greatly appreciated, as the logs for this instance alone are filling up disk space at roughly 5 GB/hour. Attached is a sample of the log from the primary server. Please suggest a fix. Thank you.
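Since the disk pressure is the immediate pain, one stopgap (a sketch, assuming mongod is logging to a file and you can run commands against the admin database) is to rotate the log on a schedule so older segments can be compressed or deleted:

// Ask mongod to close the current log file and open a fresh,
// timestamp-suffixed one; sending SIGUSR1 to the mongod process
// triggers the same rotation.
db.adminCommand({ logRotate: 1 })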
manan@indeed.com commented on Thu, 18 Jun 2015 17:53:16 +0000:

Hi Ramon,

For now there's a workaround in place. Down the line we're going to upgrade this to 3.0 WT. Thank you very much for your support.

ramon.fernandez commented on Thu, 18 Jun 2015 17:43:49 +0000:

manan@indeed.com, apologies for the long delay. 3.0 eliminates this problem, as findAndModify() queries can now yield, so you may want to consider upgrading to 3.0.4 (the latest release at the moment). I understand changing the storage engine is a big commitment, so you can upgrade to v3.0 with MMAPv1 now and evaluate the move to WiredTiger later.

As far as I can tell, if your queries are already using the best index then the warnings in the log are unavoidable. The good news is that SERVER-18620 reduces the frequency of these warnings, so if you're stuck on 2.6 things will get better once 2.6.11 is out. As per the SERVER versions page, 2.6.11 is tentatively scheduled for July 7th.

If neither upgrading nor waiting for 2.6.11 is an option, you can consider more frequent log rotation, or logging to syslog and using filters to drop these messages.

I'm going to close this ticket now. I'd invite you to consider testing 3.0.4 with MMAPv1 and upgrading if it meets your needs.

Regards,
Ramón.

manan@indeed.com commented on Fri, 29 May 2015 22:10:44 +0000:

Hi Ramon,

Thanks so much for responding. As I mentioned, the optimizer is already using the best index without a hint, the best index in this case being { "r.w": 1 }. Note that we added this index on production only yesterday, and we're going to keep it as it seems to be helping the findAndModify query. As you can see in the second plan output, forcing a different index only worsened the plan, and I don't see any reason to use that other composite index.

The only other alternative I can think of is to eliminate findAndModify and write the query differently, if that helps with the logging of the warnings (a sketch of one possible rewrite appears at the end of this thread). Down the line we shall upgrade to MongoDB 3.0 with the WT engine, but this may take some time (I'm not sure whether 3.0 would address this issue at all).

ramon.fernandez commented on Fri, 29 May 2015 21:31:04 +0000:

Thanks for opening a separate ticket, manan@indeed.com. As mentioned in SERVER-15583, if you have an index that makes these findAndModify queries perform well, the suggested workaround is to hint on that index. An alternative way of accomplishing this is to use index filters, but using hint() should be simpler.
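For completeness, a hedged sketch of the index-filter alternative mentioned above, using the collection and index from this ticket. Index filters match on query shape, so the literal value 58 here is a placeholder; only the predicate structure and sort have to match the findAndModify's query exactly. Note also that index filters do not survive a server restart:

// Restrict the planner to the { "r.w": 1 } index for this query shape.
db.runCommand({
    planCacheSetFilter: "expirationwork",
    query: { $and: [ { s: 0 },
                     { $or: [ { "r.w": { $exists: false } }, { "r.w": 58 } ] },
                     { "r.b": { $ne: 58 } } ] },
    sort: { p: 1 },
    indexes: [ { "r.w": 1 } ]
})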
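Finally, the rewrite manan floats in the 29 May comment could look roughly like the sketch below. This is an assumption-laden illustration, not a drop-in replacement: it trades findAndModify's single atomic operation for a read plus a guarded update, using the { s: 0 } flag from the logged update as the claim marker and retrying when another client wins the race:

// Claim one work item without findAndModify. Not equivalent in a
// single operation; the { s: 0 } predicate in the update guards
// against two clients claiming the same document.
function claimOne(workerId) {
    while (true) {
        // Find the first unclaimed candidate by priority; a plain
        // query can yield, unlike findAndModify on 2.6.
        var cur = db.expirationwork.find({
            $and: [ { s: 0 },
                    { $or: [ { "r.w": { $exists: false } }, { "r.w": workerId } ] },
                    { "r.b": { $ne: workerId } } ]
        }).sort({ p: 1 }).limit(1);
        if (!cur.hasNext()) return null;             // nothing left to claim
        var doc = cur.next();
        // Re-check s: 0 in the update predicate so the claim fails
        // cleanly if another client got there first.
        var res = db.expirationwork.update(
            { _id: doc._id, s: 0 },
            { $set: { s: 1, lut: new Date().getTime() } }
        );
        if (res.nModified === 1) return doc._id;     // claim succeeded
        // Lost the race; loop and try the next candidate.
    }
}

The cost is an extra round trip plus a retry loop, and whether this actually silences the staticYield warnings on 2.6 would need to be verified before adopting it.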