
OPERATIONAL DEFECT DATABASE

Hi all, I just got a weird error sent through from our application: when I updated with two processes, it complained of a duplicate key error on a collection with a unique index, even though the operation in question was an upsert.

Case code (test_mongo_update.py):

import time
from bson import Binary
from pymongo import MongoClient, DESCENDING

bucket = MongoClient('127.0.0.1', 27017)['test']['foo']
bucket.drop()
bucket.update({'timestamp': 0}, {'$addToSet': {'_exists_caps': 'cap15'}},
              upsert=True, safe=True, w=1, wtimeout=10)
bucket.create_index([('timestamp', DESCENDING)], unique=True)

while True:
    timestamp = str(int(1000000 * time.time()))
    bucket.update({'timestamp': timestamp}, {'$addToSet': {'_exists_foos': 'fooxxxxx'}},
                  upsert=True, safe=True, w=1, wtimeout=10)

When I run the script with two processes, pymongo raises:

Traceback (most recent call last):
  File "test_mongo_update.py", line 11, in <module>
    bucket.update({'timestamp': timestamp}, {'$addToSet': {'_exists_foos': 'fooxxxxx'}}, upsert=True, safe=True, w=1, wtimeout=10)
  File "build/bdist.linux-x86_64/egg/pymongo/collection.py", line 552, in update
  File "build/bdist.linux-x86_64/egg/pymongo/helpers.py", line 202, in _check_write_command_response
pymongo.errors.DuplicateKeyError: E11000 duplicate key error collection: test.foo index: timestamp_-1 dup key: { : "1439374020348044" }

Env:
- MongoDB 3.0.5, WiredTiger, single mongod instance
- pymongo 2.8.1
- CentOS 6.6

mongo.conf:

systemLog:
  destination: file
  logAppend: true
  logRotate: reopen
  path: /opt/lib/log/mongod.log

# Where and how to store data.
storage:
  dbPath: /opt/lib/mongo
  journal:
    enabled: true
  engine: "wiredTiger"
  directoryPerDB: true

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /opt/lib/mongo/mongod.pid

# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0  # Listen to local interface only, comment to listen on all interfaces.

setParameter:
  enableLocalhostAuthBypass: false

Any thoughts on what could be going wrong here?

PS: I retried the same case with the MMAPv1 storage engine and it works fine; why?

I've asked this same question on Stack Overflow (http://stackoverflow.com/questions/31962539/duplicate-key-error-on-upsert-with-multi-processesmongo-3-0-4-wiredtiger) and on the mongodb-user mailing list. I found something related here: https://jira.mongodb.org/browse/SERVER-18213, but this error still occurs after that fix, so it looks like the bug is not fixed completely.

Cheers
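For anyone who wants to see the race in isolation rather than through the timestamp loop above, the sketch below (my own, not from the report) lines two threads up on a barrier and has them upsert the same brand-new key at the same moment; because an upsert's match-then-insert is not atomic with respect to other writers, one thread can lose the insert race and hit the same E11000 error against the unique index. It assumes Python 3, a local mongod on 127.0.0.1:27017, and the same pymongo 2.x update() API used in the report; the collection and field names are arbitrary.

# Hypothetical demonstration of the upsert race, not from the report.
import threading
from pymongo import MongoClient, DESCENDING
from pymongo.errors import DuplicateKeyError

coll = MongoClient('127.0.0.1', 27017)['test']['upsert_race']
coll.drop()
coll.create_index([('timestamp', DESCENDING)], unique=True)

barrier = threading.Barrier(2)

def worker(worker_id):
    for n in range(1000):
        barrier.wait()  # both threads target the same fresh key together
        try:
            coll.update({'timestamp': str(n)},
                        {'$addToSet': {'_exists_foos': 'worker%d' % worker_id}},
                        upsert=True)
        except DuplicateKeyError as exc:
            # The other thread's insert won; this upsert took the insert
            # path too and collided on the unique index.
            print('worker %d lost the insert race on key %s: %s'
                  % (worker_id, n, exc))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()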
scotthernandez commented on Thu, 13 Aug 2015 10:46:45 +0000: This is a duplicate of SERVER-19600 and its linked issues. Please read those for more information.
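A commonly used client-side workaround for this race is simply to catch DuplicateKeyError and retry the upsert: once the other process has inserted the key, the retry matches the existing document and takes the update path. The sketch below adapts the loop from test_mongo_update.py; upsert_with_retry is a hypothetical helper, not part of pymongo, and the retry count is arbitrary. Under MMAPv1 the coarser, collection-level locking presumably closes the window between the match and the insert, which would explain why the MMAPv1 run appeared fine.

# Sketch only, against the pymongo 2.8.x API used in the report;
# upsert_with_retry is a hypothetical helper, not part of pymongo.
import time
from pymongo import MongoClient
from pymongo.errors import DuplicateKeyError

bucket = MongoClient('127.0.0.1', 27017)['test']['foo']

def upsert_with_retry(query, update, retries=3):
    for attempt in range(retries):
        try:
            return bucket.update(query, update, upsert=True, w=1, wtimeout=10)
        except DuplicateKeyError:
            # Another writer inserted the same key first; on retry the
            # document now exists, so the upsert takes the update path.
            if attempt == retries - 1:
                raise

while True:
    timestamp = str(int(1000000 * time.time()))
    upsert_with_retry({'timestamp': timestamp},
                      {'$addToSet': {'_exists_foos': 'fooxxxxx'}})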
Steps to reproduce:
1. Run test_mongo_update.py.
2. Run test_mongo_update.py in a second process; it will raise a duplicate key error (a driver that automates these two steps is sketched below).
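The sketch below (hypothetical, not from the report) just starts two copies of test_mongo_update.py with the local Python interpreter and stops the surviving copy once one of them has exited with the E11000 traceback.

# Hypothetical launcher for the two-process reproduction.
import subprocess
import sys
import time

procs = [subprocess.Popen([sys.executable, 'test_mongo_update.py'])
         for _ in range(2)]

# Poll until one process exits (with the DuplicateKeyError traceback),
# then terminate the one still looping.
while all(p.poll() is None for p in procs):
    time.sleep(0.5)
for p in procs:
    if p.poll() is None:
        p.terminate()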