We're getting a duplicate key error exception in a situation where one should never occur. It happens infrequently, but consistently. Our collection has a unique index defined on a combination of fields. We then perform a `replaceOne()` with `upsert(true)`. Generally this works fine, but a tiny percentage of the writes result in an error being thrown by mongo stating: `com.mongodb.MongoWriteException: E11000 duplicate key error index`. Our mongo instance is a single installation with no clustering, using the latest driver / server - 3.2.1. Our client is using the default connection pool and is sending hundreds of requests per second to the database. That said, our requests are unique in that each one writes a different record to the database. It's not the case that we'd be sending a record with the same compound key via multiple threads at the same time. It's worth noting that when this error occurs and an attempt is made to re-upsert the exact same record, it has succeeded every time. We're using the Java driver, but research has shown other people have run into this same error on StackOverflow: http://stackoverflow.com/questions/29305405/mongodb-impossible-e11000-duplicate-key-error-dup-key-when-upserting
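The workaround described above (re-upserting the exact same record once the E11000 surfaces) can be wrapped in a small application-level retry. The following is a minimal sketch against the 3.x Java driver, not code from the original report; the collection, filter, and replacement document are whatever the caller would normally pass to replaceOne(), and error code 11000 is the duplicate key code carried by the exception.

import com.mongodb.MongoWriteException;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.UpdateOptions;
import com.mongodb.client.result.UpdateResult;
import org.bson.Document;

// Sketch only: retry the identical upsert once when the rare E11000 is thrown.
// By the time it is thrown the document exists, so the second attempt matches
// it and performs a plain replace.
public final class UpsertRetry {

    private static final int DUPLICATE_KEY_ERROR_CODE = 11000;

    public static UpdateResult replaceOneWithRetry(MongoCollection<Document> collection,
                                                   Document filter,
                                                   Document replacement) {
        try {
            return collection.replaceOne(filter, replacement, new UpdateOptions().upsert(true));
        } catch (MongoWriteException e) {
            if (e.getError().getCode() != DUPLICATE_KEY_ERROR_CODE) {
                throw e;
            }
            // Re-run the same upsert; the report above notes this has succeeded every time.
            return collection.replaceOne(filter, replacement, new UpdateOptions().upsert(true));
        }
    }
}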
ramon.fernandez commented on Wed, 18 May 2016 16:40:48 +0000: Apologies for the radio silence TheAndruu. This is to let you know that I've opened DOCS-7904 to enhance the documentation as per your suggestion. Note also that the DOCS project is open to the public, so please feel free to open tickets there when you see areas of the documentation that can be improved. Thanks, Ramón.

theandruu commented on Sat, 13 Feb 2016 17:45:45 +0000: Here's one example: the replaceOne upsert behavior documentation: https://docs.mongodb.org/v3.2/reference/method/db.collection.replaceOne/ It carries no warning about needing unique indices to ensure uniqueness. This error is not occurring due to a race condition where the same record is written by two threads at the same time; it occurs when a record has already been persisted and propagated fully. Specifically, this line from the documentation is misleading, since meeting this condition is no guarantee the operation will succeed, and there are no warnings anywhere on the page: "Setting upsert: true would insert the document if no match was found. See Replace with Upsert"

ramon.fernandez commented on Sat, 13 Feb 2016 17:10:14 +0000: Digging through the documentation I can see that this behavior has been documented since 2013 (see this commit): update() with upsert set to true may insert duplicate documents unless one uses unique indexes. The same behavior applies to findAndModify(). That being said, if you've found places in our documentation that you think may lead users to believe that certain operations are atomic when in fact they're not, could you please link them here? I definitely see value in reviewing and improving them for other users. Thanks, Ramón.

theandruu commented on Sat, 13 Feb 2016 16:25:04 +0000: Understandable if there's already a ticket open on this issue, though it's unfair to say this behavior is expected. According to the spec, this condition should never occur. Luckily we were overly cautious and used unique indices as an additional layer of protection; if not, we'd be running into serious issues as a result of mongo behaving contrary to expectations. There are at least 12 issues recorded in JIRA on this issue, dating to July 2013. Is there a chance this will either get fixed or at least documented better to explain it's not an atomic operation? It's a serious flaw to let the upsert operation appear atomic when it would in fact be inserting duplicate records if not for a unique index.

ramon.fernandez commented on Sat, 13 Feb 2016 13:44:34 +0000: Hi TheAndruu, the behavior you're seeing is expected, and your solution (retrying at the application level) is the correct one. SERVER-14322 is open to discuss whether the server should do the retry, so feel free to watch SERVER-14322 for updates and/or vote for it. Regards, Ramón.
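To illustrate the documented caveat quoted above ("update() with upsert set to true may insert duplicate documents unless one uses unique indexes"), the sketch below runs two concurrent upserts for the same key against a collection that deliberately has no unique index. It is not part of the original report: it assumes a local mongod on the default port and uses illustrative collection and field names. Under concurrency the final count can occasionally be 2 rather than 1, which is exactly the window the unique index closes (and which then surfaces as E11000).

import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.UpdateOptions;
import org.bson.Document;

// Sketch only: demonstrates why upsert is not atomic. Without a unique index,
// both threads can find no match and both take the insert path.
public final class UpsertRaceDemo {
    public static void main(String[] args) throws InterruptedException {
        MongoClient client = new MongoClient();  // assumes a local mongod on the default port
        MongoCollection<Document> coll = client.getDatabase("test").getCollection("upsertRace");
        coll.drop();  // fresh collection, deliberately no unique index

        Document filter = new Document("brickId", "569f6ffa8ddccfb6412d9dc2");
        Runnable upsert = () -> coll.replaceOne(
                filter,
                new Document("brickId", "569f6ffa8ddccfb6412d9dc2").append("amount", 100),
                new UpdateOptions().upsert(true));

        Thread t1 = new Thread(upsert);
        Thread t2 = new Thread(upsert);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // Occasionally prints 2: both upserts inserted, because nothing enforced uniqueness.
        System.out.println("documents for this key: " + coll.count(filter));
        client.close();
    }
}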
// Unique index on the collection
Document index = new Document("brickId", 1);
index.append("date", 1);
index.append("brickCategory", 1);
index.append("mortarCategory", 1);
collection.createIndex(index, new IndexOptions().unique(true));

// Document locating the record to replace
Document searchDoc = new Document("brickId", "569f6ffa8ddccfb6412d9dc2");
searchDoc.append("date", new Date(1296432000000L));
searchDoc.append("brickCategory", "Red");
searchDoc.append("mortarCategory", "Steel");
return searchDoc;

// Document being inserted (or replaced)
Document doc = new Document();
doc.append("brickId", "569f6ffa8ddccfb6412d9dc2");
doc.append("brickCategory", "Red");
doc.append("mortarCategory", "Steel");
doc.append("amount", 100);
doc.append("date", new Date());

// Code used to do the replace with upsert(true) call
UpdateResult result = getCollection().replaceOne(searchDoc, doc, new UpdateOptions().upsert(true));

// Error bubbling up from mongo
com.mongodb.MongoWriteException: E11000 duplicate key error index: mydb.Costs.$brickId_1_date_1_brickCategory_1_mortarCategory_1 dup key: { : "569f6ffa8ddccfb6412d9dc2", : new Date(1296432000000), : "Red", : "Steel" }
        at com.mongodb.MongoCollectionImpl.executeSingleWriteRequest(MongoCollectionImpl.java:523) ~[MongoCollectionImpl.class:na]
        at com.mongodb.MongoCollectionImpl.replaceOne(MongoCollectionImpl.java:344) ~[MongoCollectionImpl.class:na]

Other steps to reproduce: http://stackoverflow.com/questions/29305405/mongodb-impossible-e11000-duplicate-key-error-dup-key-when-upserting