...
When restoring a database onto the same system from which the backup was taken, we get the following error:

```
Error creating index Database.Collection_15: 17280 err: "Btree::insert: key too large to index, failing Database.Collection_15.$Data.123.X_1 2536 { : "the following challenges and process pain points have bee..."
```

We've verified this on Windows and Linux. Versions 2.6.1 and 2.6.3 were both tested.
thomasr commented on Wed, 20 Aug 2014 17:44:23 +0000:

Hi Chad,

I understand that this is an issue, and that the change we made for 2.6 is backwards incompatible. This was a deliberate choice, however, as the previous implementation would silently fail to index the data and return incorrect and inconsistent results, depending on which index was used for a query. The change is documented on our page Compatibility Changes in MongoDB 2.6.

Here are some workarounds to get you past this issue:

- You can remove the index (or multiple indices) on the fields that contain data larger than 1024 bytes from the .metadata.json file in the dump. The file is plain JSON, so you can edit it, remove the index that's causing issues, and then run the restore.
- You could downgrade to 2.4, restore the data, then drop the index and upgrade back to 2.6.
- You can run mongod with the failIndexKeyTooLong parameter, which disables the 2.6 behavior of rejecting values that are too large. You can then restore the data and either drop the index or modify the data to fit within the limits. Note that this option is meant as a temporary fix for issues like the one you're facing; we do not recommend using this flag on an ongoing basis.

For any of the above choices, if you have removed the index and are looking for a replacement without modifying your data, you could consider a hashed index, provided you only need equality matches and don't require range queries or sorting on that field.

I hope these pointers give you some options to work around the issue and get your data back into a 2.6 instance. We also have an open feature request to allow inserts with arbitrary-length indexed values, at SERVER-3372. Your suggestion to allow a prefix of a field to be indexed has been raised before (SERVER-3260), and the overarching ticket to allow general expression indices is SERVER-14784, which would cover this use case as well. Please feel free to vote for these tickets to increase their visibility.
Regards, Thomas

sallgeud commented on Wed, 20 Aug 2014 16:42:25 +0000:

I think if this is "works as designed" then you need to redesign your indexing. This was not an issue in 2.4. An insert or update SHOULD NOT FAIL on an index error. If we index a field which may or may not ever be over 1k, we would still expect it to either 1) index the first 1k, or 2) fail on the index, not the insert. You also have the option of adding functionality to your indexing to limit it to a specific length, like most databases do. This would potentially save large amounts of disk and memory for those of us who can reliably say that anything past the first 50-100 characters is useless and who may want to index that way:

```
CREATE INDEX part_of_name ON customer (name(10)); -- MySQL: index on the first 10 characters of name
```

thomasr commented on Wed, 13 Aug 2014 17:05:42 +0000:

Hi Chad,

This is not an error, but expected behavior. This page on Compatibility Changes in 2.6 describes the new index key size enforcement: you can no longer insert values larger than 1024 bytes for indexed fields. As there seems to be no bug here, I'm resolving this ticket.

Regards, Thomas

sallgeud commented on Fri, 8 Aug 2014 17:42:53 +0000:

Our issue now, which may be one to take up with the C# team, is that when inserting a document that has a too-large value for an indexed field, we get an exception thrown and the document does not save. Does this mean we'd have to put an absolute limit of 1024 characters on our users' data entry for fields we index?

thomasr commented on Wed, 6 Aug 2014 17:29:37 +0000:

Thanks Chad. Please let us know if this is still an issue after recreating the indices (you may have to modify the documents that have too-large index values).

sallgeud commented on Tue, 5 Aug 2014 20:33:30 +0000:

We actually ran the upgrade checks and deleted a number of indexes prior to upgrading from 2.4 to 2.6.
upgradeCheck showed 2 indexes that were problems (which were also found by running a reIndex). However, those indexes passed the upgradeCheck within minutes of the upgrade, and the data in question doesn't appear to have been added in the intervening minutes. I'm unable to tell which records are specifically at issue. I'll attempt to recreate our indexes in this environment and ensure the backup/restore works.

thomasr commented on Tue, 5 Aug 2014 19:04:35 +0000:

Then I suspect you may have documents in that collection containing indexed fields that exceed the maximum size and are not stored in the appropriate index. These documents would have had to be inserted before upgrading to 2.6, since 2.6 now refuses to insert such documents. Version 2.4, on the other hand, inserted the documents but silently failed to index the large fields. To confirm this, can you please run the upgradeCheck shell helper on the database mentioned in the error:

```
use Database
db.upgradeCheck()
```

You can also use the db.upgradeCheckAllDBs script to check all available databases in a single run. Can you let me know what the upgradeCheck script(s) return? Please note that the upgradeCheck scripts scan all the collections and have some impact on performance and your working set. It's advisable to run them on secondary nodes (with a preceding rs.slaveOk()).

Thomas

sallgeud commented on Tue, 5 Aug 2014 18:35:10 +0000:

Nope. The backup was done with 2.6.3 and failed restoring into 2.6.3.

```
PS M:\mongo2.6\bin> .\mongodump.exe --version
version 2.6.3
PS M:\mongo2.6\bin> .\mongo.exe
MongoDB shell version: 2.6.3
connecting to: test
Server has startup warnings:
2014-06-25T15:10:43.231-0500 ** WARNING: --rest is specified without --httpinterface,
2014-06-25T15:10:43.231-0500 **          enabling http interface
> db.version()
2.6.3
```

thomasr commented on Tue, 5 Aug 2014 17:20:44 +0000:

Hi Chad,

What version were you running when you took the backup? Was it 2.4.x or before by any chance?
MongoDB 2.6 now enforces that index keys must be less than 1024 bytes; see MongoDB Limits. It looks like some of your indexed fields exceed that limit.

Thanks, Thomas
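Since the limit discussed in this thread applies to the byte length of the indexed value rather than its character count, a client-side pre-check can help applications avoid the insert exception sallgeud describes. A minimal Node.js sketch (the function name and the exact threshold handling are illustrative; a real index key also carries a few bytes of BSON overhead, so treat 1024 as an upper bound):

```javascript
// Sketch: pre-check a string against MongoDB 2.6's 1024-byte index
// key limit before inserting. Buffer.byteLength measures the UTF-8
// byte length, which can exceed the character count for non-ASCII
// text. The real key size includes some BSON overhead, so this is
// an optimistic approximation, not an exact reproduction of the
// server-side check.
function fitsIndexKeyLimit(value, limit) {
  limit = limit || 1024;
  return Buffer.byteLength(String(value), "utf8") < limit;
}

console.log(fitsIndexKeyLimit("short value"));    // true
console.log(fitsIndexKeyLimit("x".repeat(2000))); // false
```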
Add data that is too large into an already-indexed field. Back up. Attempt to restore.
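The steps above can be sketched on a single 2.6.x instance by using the failIndexKeyTooLong parameter (mentioned earlier in the thread) to let the oversized value in, mimicking data originally written under 2.4. Paths, database, collection, and field names below are placeholders, not from the original report:

```shell
# Start mongod with key-size enforcement relaxed so the oversized
# value can be inserted at all (this mimics data written under 2.4,
# which silently skipped indexing large values).
mongod --dbpath /data/db --setParameter failIndexKeyTooLong=false &

# Create the index, then insert a value well past the 1024-byte limit.
mongo Database --eval 'db.Collection_15.ensureIndex({ "Data.123.X": 1 })'
mongo Database --eval 'db.Collection_15.insert({ Data: { "123": { X: new Array(3000).join("a") } } })'

# Take the backup.
mongodump --db Database

# Restart mongod with default settings, then attempt the restore;
# rebuilding the index should now fail with "key too large to index".
mongorestore --drop dump/
```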