All Data Domains contain a pool of storage known as the "active tier": this is the area of disk where newly ingested data resides. On most Data Domains, files remain here until expired/deleted by a client backup application. On Data Domains configured with Long-Term Retention (LTR), the data movement process may periodically run to migrate old files from the active tier to the cloud tier. The only way to reclaim space in the active tier from deleted or migrated files is to run the garbage collection/clean process (GC).

Current utilization of the active tier can be displayed using the 'filesys show space' or 'df' commands:

# df
Active Tier:
Resource           Size GiB   Used GiB   Avail GiB   Use%   Cleanable GiB*
----------------   --------   --------   ---------   ----   --------------
/data: pre-comp           -    33098.9           -      -                -
/data: post-comp    65460.3      518.7     64941.6     1%              0.0
/ddvar                 29.5       19.7         8.3    70%                -
/ddvar/core            31.5        0.2        29.7     1%                -
----------------   --------   --------   ---------   ----   --------------

If configured, details of the cloud tier appear below the active tier.

Utilization of the active tier must be carefully managed. Otherwise, the following may occur:

- The active tier may start to run out of available space, causing alerts such as:
  EVT-SPACE-00004: Space usage in Data Collection has exceeded 95% threshold.
- If the active tier becomes 100% full, no new data can be written to the DD, which may cause backups/replication to fail. This may cause alerts such as:
  CRITICAL: MSG-CM-00002: /../vpart:/vol1/col1/cp1/cset: Container set [container set ID] out of space
- In some circumstances, the active tier becoming full may cause the Data Domain File System (DDFS) to become read-only, at which point existing files cannot be deleted.
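The 'Use%' column and the 95% space alert threshold amount to simple arithmetic. The sketch below is purely illustrative; the function names and the exact alert logic are assumptions for this article, not DDOS internals:

```python
# Hypothetical helpers illustrating the Use% calculation and the 95% space
# alert (EVT-SPACE-00004) described above. Not DDOS code.

def active_tier_use_pct(size_gib: float, avail_gib: float) -> float:
    """Return the percentage of post-comp active tier space in use."""
    used = size_gib - avail_gib
    return 100.0 * used / size_gib

def space_alert(size_gib: float, avail_gib: float, threshold: float = 95.0) -> bool:
    """True when usage exceeds the alert threshold (95% in the alert above)."""
    return active_tier_use_pct(size_gib, avail_gib) > threshold

# Values from the 'df' output above: 65460.3 GiB total, 64941.6 GiB available.
print(round(active_tier_use_pct(65460.3, 64941.6)))  # 1 (matches the 1% Use% column)
print(space_alert(65460.3, 64941.6))                 # False
```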
This article attempts to:

- Explain why the active tier may become full
- Describe a simple set of checks that can be performed to determine the cause of high utilization of the active tier, and the corresponding remedial steps

This article does not attempt to:

- Provide an exhaustive review of capacity issues (there are some situations where the active tier of a Data Domain becomes highly utilized or full for a reason not discussed in this document)
- Cover high utilization of the cloud tier
The active tier of a Data Domain can experience higher than expected utilization for several reasons:

- Client backup applications are not expiring and deleting backup files or save sets, due to an incorrect retention policy or backup application configuration.
- Replication lag is causing a large amount of old data to be kept on the active tier pending replication to targets.
- Data being written to the active tier has a lower than expected overall compression ratio.
- The system has not been sized correctly - that is, it is too small for the amount of data being stored on it.
- Backups consist of many small files. These files consume more space than expected when initially written; however, this space should be reclaimed during GC.
- Data movement is not being run regularly on systems configured with LTR, leaving old files on the active tier that should be migrated to the cloud tier.
- GC is not being run regularly.
- Excessive or old mtree snapshots existing on the DD may prevent clean from reclaiming space from deleted data.
Step 1 - Determine whether an active tier clean must be run

The Data Domain Operating System (DDOS) attempts to maintain a counter called "Cleanable GiB" for the active tier. This is an estimate of how much physical (post-comp) space could potentially be reclaimed in the active tier by running GC. View this counter using the 'filesys show space'/'df' commands:

Active Tier:
Resource           Size GiB   Used GiB    Avail GiB   Use%   Cleanable GiB*
----------------   --------   ---------   ---------   ----   --------------
/data: pre-comp           -   7259347.5           -      -                -
/data: post-comp   304690.8    251252.4     53438.5    82%         51616.1   <=== NOTE
/ddvar                 29.5        12.5        15.6    44%                -
----------------   --------   ---------   ---------   ----   --------------

If either:

- The value for "Cleanable GiB" is large, or
- DDFS has become 100% full (and is therefore read-only),

GC should be performed and allowed to run to completion before continuing with any further steps in this document. Use the 'filesys clean start' command to start GC:

# filesys clean start
Cleaning started.  Use 'filesys clean watch' to monitor progress.

To confirm that cleaning has started as expected, use the 'filesys status' command:

# filesys status
The filesystem is enabled and running.
Cleaning started at 2017/05/19 18:05:58: phase 1 of 12 (pre-merge)
  50.6% complete, 64942 GiB free; time: phase 0:01:05, total 0:01:05

If clean is not able to start, contact your contracted support provider for further assistance. This may indicate that the system has encountered a "missing segment error," causing clean to be disabled. If clean is already running, the following message appears when the 'filesys clean start' command is run:

**** Cleaning already in progress.  Use 'filesys clean watch' to monitor progress.

No space in the active tier is reclaimed until clean reaches its copy phase (phase 5).
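The decision described above can be expressed as a simple rule. The sketch below is illustrative only; the 10% "large cleanable" cutoff is an assumption for demonstration, not a published DDOS threshold:

```python
# Illustrative "should GC run now?" rule from Step 1: run clean when DDFS is
# full, or when a large fraction of capacity is reported as cleanable.
# The 10% cutoff is an assumption, not a DDOS rule.

def should_run_clean(size_gib: float, used_gib: float, cleanable_gib: float,
                     cleanable_frac: float = 0.10) -> bool:
    if used_gib >= size_gib:  # 100% full: DDFS may be read-only
        return True
    return cleanable_gib >= cleanable_frac * size_gib

# Values from the example output above: 304690.8 GiB total, 251252.4 GiB used,
# 51616.1 GiB cleanable (~17% of capacity), so clean is recommended.
print(should_run_clean(304690.8, 251252.4, 51616.1))  # True
```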
For more information, see: An overview of Data Domain File System clean/garbage collection phases.

Clean may not reclaim the amount of space indicated by "Cleanable GiB," as this value is essentially an estimate. For more information, see: Cleanable Size is an Estimate.

Clean may not reclaim all potential space in a single run. On Data Domains containing large datasets, clean works against the portion of the file system containing the most superfluous data, to give the best return in free space for the time taken for clean to run. In some scenarios, clean may have to be run multiple times before all potential space is reclaimed.

If the value for "Cleanable GiB" is large, this indicates either that clean has not been running at regular intervals or that expired data is being held by a snapshot (see Step 4 in this article). Check that a clean schedule has been set:

# filesys clean show schedule

If necessary, set an active tier clean schedule - for example, to run every Tuesday at 6 AM:

# filesys clean set schedule Tue 0600
Filesystem cleaning is scheduled to run "Tue" at "0600".

Use the 'filesys show space' or 'df' commands once clean has completed to determine whether utilization issues have been resolved. If usage is still high, go on to the remaining steps in this article.

Step 2 - Check for large amounts of replication lag against source replication contexts

Native Data Domain replication is designed around the concept of "replication contexts." For example, when data must be replicated between systems:

- Replication contexts are created on the source and destination Data Domains.
- The contexts are initialized.
- Once initialization is complete, replication periodically sends updates (deltas) from source to destination to keep data on the systems synchronized.

If a replication context lags, it can cause old data to be held on disk on the source system. However, lagging replication contexts cannot cause excessive utilization on the destination system.
Mtree replication contexts (used when replicating any mtree other than /data/col1/backup between systems):

- Mtree replication uses snapshots created on the source and destination systems to determine the differences between them. These snapshots allow the replication context to determine which files must be sent from source to destination.
- If an mtree replication context is lagging, the corresponding mtree may have old snapshots created against it on the source and destination systems.
- If files on the source Data Domain existed when a given mtree replication snapshot was created, clean cannot reclaim the space on disk used by these files.

Collection replication contexts (used when replicating the entire contents of one Data Domain to another system):

- Collection replication performs "block based" replication of all data on a source system to a destination system.
- If collection replication is lagging, clean on the source system cannot operate optimally. In this scenario, an alert is generated on the source indicating that a partial clean is being performed to avoid losing synchronization with the destination system. Clean is therefore unable to reclaim as much space as expected on the source Data Domain.

To determine whether replication contexts are lagging, perform the following steps:

1. Determine the hostname of the current system:

   sysadmin@dd-1# hostname
   The Hostname is: dd-1.datadomain

2. Determine the date and time on the current system:

   sysadmin@dd-1# date
   Fri May 19 19:04:06 IST 2017

3. List the replication contexts configured on the system along with their 'Sync'ed-as-of-time'.
Contexts of interest are those where the "destination" does NOT contain the hostname of the current system (which indicates that the current system is the source) and the 'Sync'ed-as-of-time' is old:

sysadmin@dd-1# replication status
CTX   Destination                        Enabled   Connection     Sync'ed-as-of-time   Tenant-Unit
---   --------------------------------   -------   ------------   ------------------   -----------
3     mtree://dd-1/data/col1/DFC         no        idle           Thu Jan  8 08:58     -             <=== NOT INTERESTING - CURRENT SYSTEM IS THE DESTINATION
9     mtree://dd-2/data/col1/BenMtree    no        idle           Mon Jan 25 14:48     -             <=== INTERESTING - LAGGING AND CURRENT SYSTEM IS THE SOURCE
13    mtree://dd-2/data/col1/dstfolder   no        disconnected   Thu Mar 30 17:55     -             <=== INTERESTING - LAGGING AND CURRENT SYSTEM IS THE SOURCE
17    mtree://dd-2/data/col1/oleary      yes       idle           Fri May 19 18:57     -             <=== NOT INTERESTING - CONTEXT IS UP TO DATE
18    mtree://dd-1/data/col1/testfast    yes       idle           Fri May 19 19:18     -             <=== NOT INTERESTING - CONTEXT IS UP TO DATE
---   --------------------------------   -------   ------------   ------------------   -----------

Break any contexts that have the current system as their source and are showing significant lag. Contexts that are no longer required should also be broken. This is performed by running the following command on both the source and destination systems:

# replication break <destination>

For example, to break the "interesting" contexts shown above, the following commands would be run on the source and destination:

sysadmin@dd-1# replication break mtree://dd-2/data/col1/BenMtree
sysadmin@dd-2# replication break mtree://dd-2/data/col1/BenMtree
sysadmin@dd-1# replication break mtree://dd-2/data/col1/dstfolder
sysadmin@dd-2# replication break mtree://dd-2/data/col1/dstfolder

Note: Active tier clean must be performed to reclaim potential space in the active tier once contexts are broken. Mtree replication snapshots may remain on disk after their contexts are broken.
Ensure that Step 4 in this article is followed to expire any unneeded snapshots before running clean.

If the source/destination mtree is configured to migrate data to the cloud tier, care should be taken when breaking the corresponding mtree replication contexts, as these contexts may not be able to be re-created or initialized again in the future. When an mtree replication context is initialized, an mtree snapshot is created on the source system containing details of all files in the mtree (regardless of tier). This snapshot is then replicated in full to the active tier of the destination. As a result, if the active tier of the destination does not have sufficient free space to ingest all of the mtree's data from the source, the initialization cannot complete. For more information about this issue, contact your contracted support provider.

If a collection replication context is broken, the context cannot be re-created or initialized without first destroying the instance of DDFS on the destination Data Domain and losing all data on that system. As a result, a subsequent initialization can take considerable time and network bandwidth, as all data from the source must be replicated to the destination again.

Step 3 - Check for mtrees that are no longer needed

The contents of DDFS are logically divided into mtrees. It is common for individual backup applications and clients to write to individual mtrees. If a backup application is decommissioned, it can no longer write data to or delete data from the Data Domain. This may leave old or superfluous mtrees on the system, and data in these mtrees continues to exist indefinitely, using space on disk on the Data Domain. Any such superfluous mtrees should be deleted.
For example, obtain a list of mtrees on the system:

# mtree list
Name                               Pre-Comp (GiB)   Status
--------------------------------   --------------   -------
/data/col1/Budu_test                        147.0   RW
/data/col1/Default                         8649.8   RW
/data/col1/File_DayForward_Noida             42.0   RW/RLCE
/data/col1/labtest                         1462.7   RW
/data/col1/oscar_data                         0.2   RW
/data/col1/test_oscar_2                     494.0   RO/RD
--------------------------------   --------------   -------

Any mtrees that are no longer required should be deleted with the 'mtree delete' command:

# mtree delete <mtree name>

For example:

# mtree delete /data/col1/Budu_test
...
MTree "/data/col1/Budu_test" deleted successfully.

Space consumed on disk by the deleted mtree is reclaimed the next time that active tier clean is run.

Additional considerations:

- Mtrees that are destinations for mtree replication (identified by a status of 'RO/RD' in the output of 'mtree list') should have their corresponding replication context broken before the mtree is deleted.
- Mtrees that are used as DD Boost logical storage units (LSUs) or as virtual tape library (VTL) pools may not be able to be deleted using the 'mtree delete' command. Refer to the Data Domain Administration Guide for further details on deleting such mtrees.
- Mtrees which are configured for retention lock (indicated by a status of 'RLCE' or 'RLGE' in the output of 'mtree list') cannot be deleted. Instead, individual files within the mtree must have any retention lock reverted and be deleted individually. Refer to the Data Domain Administration Guide for further details.

Step 4 - Check for unneeded mtree snapshots

A Data Domain snapshot represents a point-in-time copy of the corresponding mtree. As a result:

- Any files that exist within the mtree when the snapshot is created are referenced by the snapshot.
- As long as the snapshot exists, cleaning cannot reclaim physical space from the files it references, even if they are deleted.
This is because the data must stay on the system in case the copy of the file in the snapshot is later accessed.

Perform the following steps to determine whether any mtrees have unneeded snapshots:

1. Obtain a list of mtrees on the system using the 'mtree list' command, as shown in Step 3.
2. List the snapshots for each mtree using the 'snapshot list' command:

   # snapshot list mtree <mtree name>

When run against an mtree with no snapshots, the following is displayed:

# snapshot list mtree /data/col1/Default
Snapshot Information for MTree: /data/col1/Default
----------------------------------------------
No snapshots found.

When run against an mtree with snapshots, the following is displayed:

# snapshot list mtree /data/col1/labtest
Snapshot Information for MTree: /data/col1/labtest
----------------------------------------------
Name                                  Pre-Comp (GiB)   Create Date         Retain Until        Status
-----------------------------------   --------------   -----------------   -----------------   -------
testsnap-2016-03-31-12-00                     1274.5   Mar 31 2016 12:00   Mar 26 2017 12:00   expired
testsnap-2016-05-31-12-00                     1198.8   May 31 2016 12:00   May 26 2017 12:00
testsnap-2016-07-31-12-00                     1301.3   Jul 31 2016 12:00   Jul 26 2017 12:00
testsnap-2016-08-31-12-00                     1327.5   Aug 31 2016 12:00   Aug 26 2017 12:00
testsnap-2016-10-31-12-00                     1424.9   Oct 31 2016 12:00   Oct 26 2017 13:00
testsnap-2016-12-31-12-00                     1403.1   Dec 31 2016 12:00   Dec 26 2017 12:00
testsnap-2017-01-31-12-00                     1421.0   Jan 31 2017 12:00   Jan 26 2018 12:00
testsnap-2017-03-31-12-00                     1468.7   Mar 31 2017 12:00   Mar 26 2018 12:00
REPL-MTREE-AUTO-2017-05-11-15-18-32           1502.2   May 11 2017 15:18   May 11 2018 15:18
-----------------------------------   --------------   -----------------   -----------------   -------

Where snapshots exist, use the output from 'snapshot list mtree <mtree name>' to locate snapshots that:

- Are not expired (see the Status column)
- Were created a significant amount of time in the past (for example, the snapshots created in 2016 in the above list)

These snapshots should be expired so that they can be removed when clean runs and the space they are holding on disk is freed:

# snapshot expire <snapshot name> mtree <mtree name>

For example:

# snapshot expire testsnap-2016-05-31-12-00 mtree /data/col1/labtest
Snapshot "testsnap-2016-05-31-12-00" for mtree "/data/col1/labtest" will be retained until May 19 2017 19:31.

When the 'snapshot list' command is run again, these snapshots are now listed as expired:

# snapshot list mtree /data/col1/labtest
Snapshot Information for MTree: /data/col1/labtest
----------------------------------------------
Name                                  Pre-Comp (GiB)   Create Date         Retain Until        Status
-----------------------------------   --------------   -----------------   -----------------   -------
testsnap-2016-03-31-12-00                     1274.5   Mar 31 2016 12:00   Mar 26 2017 12:00   expired
testsnap-2016-05-31-12-00                     1198.8   May 31 2016 12:00   May 26 2017 12:00   expired
testsnap-2016-07-31-12-00                     1301.3   Jul 31 2016 12:00   Jul 26 2017 12:00
testsnap-2016-08-31-12-00                     1327.5   Aug 31 2016 12:00   Aug 26 2017 12:00
testsnap-2016-10-31-12-00                     1424.9   Oct 31 2016 12:00   Oct 26 2017 13:00
testsnap-2016-12-31-12-00                     1403.1   Dec 31 2016 12:00   Dec 26 2017 12:00
testsnap-2017-01-31-12-00                     1421.0   Jan 31 2017 12:00   Jan 26 2018 12:00
testsnap-2017-03-31-12-00                     1468.7   Mar 31 2017 12:00   Mar 26 2018 12:00
REPL-MTREE-AUTO-2017-05-11-15-18-32           1502.2   May 11 2017 15:18   May 11 2018 15:18
-----------------------------------   --------------   -----------------   -----------------   -------

Additional considerations:

It is not possible to determine how much physical data an individual snapshot or set of snapshots holds on disk. The only "space" value associated with a snapshot is an indication of the pre-compressed (logical) size of the mtree when the snapshot was created.

Snapshots named 'REPL-MTREE-AUTO-YYYY-MM-DD-HH-MM-SS' are managed by mtree replication. These should not need to be manually expired under normal circumstances.
Replication automatically expires these snapshots when they are no longer needed. If such snapshots are extremely old, the corresponding replication context is likely showing significant lag (as described in Step 2).

Snapshots named 'REPL-MTREE-RESYNC-RESERVE-YYYY-MM-DD-HH-MM-SS' are created by mtree replication when a context is broken. They can be used to avoid a full resynchronization of replication data if the broken context is later re-created. If replication will not be reestablished, these snapshots can be manually expired as described above.

Expired snapshots continue to exist on the system until the next time GC is run. At that point, they are physically deleted and no longer appear in the output of 'snapshot list mtree <mtree name>'. Clean can then reclaim any space these snapshots were using on disk.

Step 5 - Check for an unexpected number of old files on the system

Autosupports from the Data Domain contain histograms showing a breakdown of files on the Data Domain by age. For example:

File Distribution
-----------------
448,672 files in 5,276 directories

                  Count                          Space
Age         Files         %      cumul%    GiB          %      cumul%
---------   -----------   ----   ------    ----------   ----   ------
1 day       7,244          1.6      1.6    4537.9        0.1      0.1
1 week      40,388         9.0     10.6    63538.2       0.8      0.8
2 weeks     47,850        10.7     21.3    84409.1       1.0      1.9
1 month     125,800       28.0     49.3    404807.0      5.0      6.9
2 months    132,802       29.6     78.9    437558.8      5.4     12.3
3 months    8,084          1.8     80.7    633906.4      7.8     20.1
6 months    5,441          1.2     81.9    1244863.9    15.3     35.4
1 year      21,439         4.8     86.7    3973612.3    49.0     84.4
> 1 year    59,624        13.3    100.0    1265083.9    15.6    100.0
---------   -----------   ----   ------    ----------   ----   ------

This can be useful to determine whether there are files on the system that have not been expired/deleted as expected by the client backup application. For example, assume that a backup application writing to the above system has a maximum retention period of six months.
It is immediately obvious that the backup application is not expiring/deleting files as expected, as there are approximately 80,000 files older than six months on the Data Domain.

Note: It is the responsibility of the backup application to perform all file expiration and deletion. A Data Domain never deletes files automatically. Unless the backup application explicitly instructs the Data Domain to delete a file, the file continues to exist on the system, using space indefinitely. As a result, the backup application vendor's support team should investigate issues like this first.

If required, Data Domain support can provide additional reports to:

- Give the name and modification time of all files on a Data Domain, ordered by age, so the name and location of any old data can be determined.
- Split out histograms of file age into separate reports for the active and cloud tiers (where the LTR feature is enabled).

To perform this:

- Collect an sfs_dump from the Data Domain.
- Open a service request with your contracted support provider.

Run active tier GC to physically reclaim space once unneeded files are deleted.

Step 6 - Check for backups which include many small files

Due to the design of DDFS, small files (essentially any file smaller than approximately 10 MB) can consume excessive space when initially written to the Data Domain. This is due to the Stream Informed Segment Layout (SISL) architecture causing small files to consume multiple individual 4.5 MB blocks of space on disk. For example, a 4 KB file may consume up to 9 MB of physical disk space when initially written. This excess space is reclaimed when GC is run, as data from small files is then aggregated into a smaller number of 4.5 MB blocks. However, smaller Data Domain models may show excessive utilization and fill up when such backups are run.

Autosupports contain histograms of files broken down by size.
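The transient small-file overhead described above can be estimated with rough arithmetic. The sketch below is a simplification for illustration, using the approximate figures quoted in this article (4.5 MB blocks, up to two blocks per small file); it is not the actual DDFS allocation algorithm:

```python
# Rough, illustrative estimate of the pre-GC worst case for small files:
# each small file may temporarily occupy up to two 4.5 MB SISL blocks
# (the "up to 9 MB for a 4 KB file" figure quoted in this article).

BLOCK_MIB = 4.5  # approximate SISL block size quoted above

def worst_case_pre_gc_gib(num_small_files: int, blocks_per_file: int = 2) -> float:
    """Worst-case physical space (GiB) small files may hold before GC runs."""
    return num_small_files * blocks_per_file * BLOCK_MIB / 1024.0

# 100,000 files of 4 KB hold only ~0.4 GiB of logical data...
logical_gib = 100_000 * 4 / (1024 * 1024)
print(round(logical_gib, 1))                     # 0.4
# ...but may transiently occupy close to a terabyte before GC aggregates them:
print(round(worst_case_pre_gc_gib(100_000), 1))  # 878.9
```

The size histogram that follows can be used to judge whether a given system ingests enough small files for this transient overhead to matter.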
For example:

                  Count                          Space
Size        Files         %      cumul%    GiB          %      cumul%
---------   -----------   ----   ------    --------     ----   ------
1 KiB       2,957         35.8     35.8    0.0           0.0      0.0
10 KiB      1,114         13.5     49.3    0.0           0.0      0.0
100 KiB     249            3.0     52.4    0.1           0.0      0.0
500 KiB     1,069         13.0     65.3    0.3           0.0      0.0
1 MiB       113            1.4     66.7    0.1           0.0      0.0
5 MiB       446            5.4     72.1    1.3           0.0      0.0
10 MiB      220            2.7     74.8    1.9           0.0      0.0
50 MiB      1,326         16.1     90.8    33.6          0.2      0.2
100 MiB     12             0.1     91.0    0.9           0.0      0.2
500 MiB     490            5.9     96.9    162.9         0.8      1.0
1 GiB       58             0.7     97.6    15.6          0.1      1.1
5 GiB       29             0.4     98.0    87.0          0.5      1.6
10 GiB      17             0.2     98.2    322.9         1.7      3.3
50 GiB      21             0.3     98.4    1352.7        7.0     10.3
100 GiB     72             0.9     99.3    6743.0       35.1     45.5
500 GiB     58             0.7    100.0    10465.9      54.5    100.0
> 500 GiB   0              0.0    100.0    0.0           0.0    100.0
---------   -----------   ----   ------    --------     ----   ------

If there is evidence of backups writing large numbers of small files, the system may experience significant temporary increases in utilization between GC runs. In this scenario, it is preferable to change the backup methodology to combine all small files into a single larger archive before writing them to the Data Domain; a good example is an uncompressed tar file. Any such archive should not be compressed or encrypted, as this damages the compression and deduplication ratios of that data.

Step 7 - Check for lower than expected deduplication ratio

The main purpose of a Data Domain is to deduplicate and compress data that is sent to the device. The achievable deduplication and compression ratio is highly dependent on the use case of the system and the type of data it holds. However, there is often an "expected" overall compression ratio, based on results obtained through proof of concept testing or similar. To determine the current overall compression ratio of the system, and therefore whether it is meeting expectations, run the 'filesys show compression' command.
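The factors this command reports relate to one another in a simple way: the total compression factor is pre-comp divided by post-comp, and the reduction percentage follows directly from it. The sketch below illustrates this using the figures from the sample output that follows:

```python
# How the 'filesys show compression' figures relate: total compression is
# pre-comp / post-comp, and reduction % follows from the same ratio.

def total_comp_factor(pre_comp_gib: float, post_comp_gib: float) -> float:
    return pre_comp_gib / post_comp_gib

def reduction_pct(pre_comp_gib: float, post_comp_gib: float) -> float:
    return 100.0 * (1.0 - post_comp_gib / pre_comp_gib)

# "Currently Used" values from the sample output below:
print(round(total_comp_factor(20581.1, 315.4), 1))  # 65.3
print(round(reduction_pct(20581.1, 315.4), 1))      # 98.5
# For written data, the global (deduplication) and local (compression) factors
# multiply to give the total factor: 80.5 x 1.8 is roughly the 145.6x shown.
```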
For example:

# filesys show compression
From: 2017-05-03 13:00 To: 2017-05-10 13:00

Active Tier:
                   Pre-Comp   Post-Comp   Global-Comp   Local-Comp      Total-Comp
                      (GiB)       (GiB)        Factor       Factor          Factor
                                                                     (Reduction %)
----------------   --------   ---------   -----------   ----------   -------------
Currently Used:*    20581.1       315.4             -            -    65.3x (98.5)
Written:
  Last 7 days         744.0         5.1         80.5x         1.8x   145.6x (99.3)
  Last 24 hrs
----------------   --------   ---------   -----------   ----------   -------------
* Does not include the effects of pre-comp file deletes/truncates

In the above example, the system is achieving an overall compression ratio of 65.3x for the active tier, which is extremely good. If, however, this value shows that the overall compression ratio is not meeting expectations, further investigation is likely required. Investigating a lower than expected compression ratio is a complex subject which can have many root causes. For more information, see the following article: Troubleshooting poor deduplication and compression ratio of files on DDs.

Step 8 - Check whether the system is a source for collection replication

When using collection replication with a source system that is larger than the destination, the size of the source system is artificially limited to match that of the destination. That is, there is an area of disk on the source that is marked as unusable. The reason for this is that collection replication requires the destination to be a block level copy of the source. If the full capacity of a physically larger source could be used, data could be written to the source that could not be replicated to the already full destination. This scenario is avoided by limiting the size of the source to match the destination.

Check whether the system is a source for collection replication using the commands from Step 2.
Check for contexts in the output of 'replication status' that start with 'col://' and do NOT contain the hostname of the local system in the destination path. If the system is a source for collection replication, check the size of each system's active tier by logging into both and running the 'filesys show space' command. Compare the active tier 'post-comp' size on each. If the source is larger than the destination, its active tier size is artificially limited. To allow all space on the source to be usable for data, perform one of the following:

- Add more storage to the destination active tier such that its size is greater than or equal to the size of the source active tier.
- Break the collection replication context using the commands from Step 2. Note that this prevents data from being replicated from the source to the destination Data Domain.

Space is made available in the active tier of the source system as soon as either of these has been performed. There is no need to run active tier GC before using this space.

Step 9 - Check whether data movement is being regularly run

If the Data Domain is configured with Long-Term Retention (LTR), it has a second tier of storage attached (the cloud tier). In this scenario, data movement policies are likely configured against mtrees to migrate older or unmodified data requiring long-term retention from the active tier to the cloud tier. This allows GC to physically reclaim the space used by these files in the active tier. If data movement policies are incorrectly configured, or if the data movement process is not regularly run, old data remains in the active tier longer than expected. Until the data is moved to the cloud tier, it continues to use physical space on disk.

Initially, confirm whether the system is configured for LTR by running 'filesys show space' and checking for the existence of a cloud tier. To be usable, this alternative tier of storage must have a post-comp size of >0 GiB:

# filesys show space
...
Archive Tier:
Resource           Size GiB   Used GiB   Avail GiB   Use%   Cleanable GiB
----------------   --------   --------   ---------   ----   -------------
/data: pre-comp           -     4163.8           -      -               -
/data: post-comp    31938.2     1411.9     30526.3     4%               -
----------------   --------   --------   ---------   ----   -------------

# filesys show space
...
Cloud Tier:
Resource           Size GiB   Used GiB   Avail GiB   Use%   Cleanable GiB
----------------   --------   --------   ---------   ----   -------------
/data: pre-comp           -        0.0           -      -               -
/data: post-comp   338905.8        0.0    338905.8     0%             0.0
----------------   --------   --------   ---------   ----   -------------

If the system is configured with LTR:

Check the data movement policies against mtrees to ensure that these are as expected and set such that old data is pushed out to the cloud tier:

# data-movement policy show

Correct any data movement policies that are missing or incorrectly configured. Refer to the Data Domain Administration Guide for assistance.

Confirm that data movement is scheduled to run at regular intervals to physically migrate data from the active tier to the cloud tier:

# data-movement schedule show

Data Domain generally recommends running data movement on an automated schedule. However, some customers choose to run this process in an ad-hoc manner when required. In this scenario, data movement should be started regularly by running:

# data-movement start

For more information about modifying the data movement schedule, refer to the Data Domain Administration Guide.

Check the last time that data movement was run:

# data-movement status

If data movement has not been run for some time, attempt to manually start the process, then monitor it as follows:

# data-movement watch

If data movement fails to start for any reason, contact your contracted support provider for further assistance.

Run active tier GC once data movement is complete to ensure that the space used by migrated files in the active tier is physically freed:

# filesys clean start

Step 10 - Add more storage to the active tier
If all previous steps have been performed, active tier clean has run to completion, and there is still insufficient space available on the active tier, it is likely that the system has not been correctly sized for the workload it is receiving. In this case, perform one of the following:

Reduce the workload sent to the system. For example:

- Redirect a subset of backups to alternate storage.
- Reduce the retention period of backups such that they are expired or deleted more quickly.
- Reduce the number and expiration period of scheduled snapshots against mtrees on the system.
- Break unneeded replication contexts for which the local system is a destination, then delete the corresponding mtrees.

Add additional storage to the active tier of the system and expand its size:

# storage add tier active enclosure <enclosure number> | disk <device number>
# filesys expand

Contact your sales account team to discuss the addition of storage.
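The sizing judgment in this step can be approximated with back-of-the-envelope arithmetic. All figures below (ingest rate, retention period, compression factor, 20% headroom) are illustrative assumptions, not sizing guidance; proper sizing should be done with your account team:

```python
# Back-of-the-envelope sizing sketch: the post-comp capacity the active tier
# needs is roughly the logical data retained divided by the achieved total
# compression factor, plus headroom for pre-GC overhead and growth.
# All inputs here are illustrative assumptions.

def required_post_comp_gib(daily_ingest_gib: float, retention_days: int,
                           comp_factor: float, headroom: float = 1.2) -> float:
    logical = daily_ingest_gib * retention_days
    return headroom * logical / comp_factor

# Example: 2 TiB/day ingest, 90-day retention, 20x total compression.
print(round(required_post_comp_gib(2048, 90, 20.0)))  # 11059
```

If the result materially exceeds the active tier's post-comp size shown by 'filesys show space', the workload must shrink or the tier must grow, as described above.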