...
The VMware vSphere Data Protection (VDP) appliance reports a high percentage of utilization.
The VDP appliance does not re-balance the datastores after adding disk space using the Expand Storage wizard.
In the /data01/cur/err.log file on the VDP appliance, you see entries similar to:

2015/09/01-22:43:37.27528 {0.0} [balancebeat:135] WARN: <0465> balance aborted moving stripe 0.0-1996 - failed to move stripe from old node server_exception(MSG_ERR_ALREADYSTARTED)
2015/09/01-22:43:37.30141 {0.0} [balancebeat:135] WARN: <0482> balancebeat::movestripe server_exception server_exception(MSG_ERR_ALREADYSTARTED)
2015/09/01-22:46:37.32530 {0.0} [balancebeat:135] WARN: <0465> balance aborted moving stripe 0.0-1996 - failed to move stripe from old node server_exception(MSG_ERR_ALREADYSTARTED)
2015/09/01-22:46:37.32536 {0.0} [balancebeat:135] WARN: <0482> balancebeat::movestripe server_exception server_exception(MSG_ERR_ALREADYSTARTED)

Running the status.dpn command displays the balance task as currently running while the datastores are online and unbalanced. For example:

# status.dpn
Fri Sep 4 12:39:29 EDT 2015  [VDPInfrastructure01.fhmi.org] Fri Sep 4 16:39:29 2015 UTC (Initialized Wed Jun 24 21:10:56 2015 UTC)
Node   IP Address     Version     State   Runlevel    Srvr+Root+User Dis Suspend Load  UsedMB Errlen   %Full  Percent Full and Stripe Status by Disk
0.0    10.104.12.95   7.1.81-107  ONLINE  fullaccess  00pu+00pu+00pu   1 false   28.41  11844 11380496 28.3%  54%(onl:2252) 54%(onl:2253) 54%(onl:2250) 1%(onl:100) 1%(onl:101) 1%(onl:100)
Srvr+Root+User Modes = migrate + hfswriteable + persistwriteable + useraccntwriteable
System ID: 1435180256@00:50:56:85:77:BE
All reported states=(ONLINE), runlevels=(fullaccess), modes=(00pu+00pu+00pu)
System-Status: ok
Access-Status: admin
Checkpoint in progress: cp.20150904163824 started Fri Sep 4 12:38:28 2015
  >> completed 5831 of 7056 stripes (so far)
Last GC: finished Fri Sep 4 09:42:17 2015 after 01h 41m >> recovered 4.43 GB (OK)
Last hfscheck: finished Fri Sep 4 12:38:20 2015 after 01h 53m >> checked 2702 of 2702 stripes (OK)
Maintenance windows scheduler capacity profile is active.
  The maintenance window is currently running.
  Currently running task(s): cp, balance
Next backup window start time: Fri Sep 4 20:00:00 2015 EDT
Next maintenance window start time: Sat Sep 5 08:00:00 2015 EDT

Running the df -h command displays unbalanced datastore usage. For example:

# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       7.9G  4.9G  2.7G  65% /
udev            5.9G  156K  5.9G   1% /dev
tmpfs           5.9G     0  5.9G   0% /dev/shm
/dev/sda1       130M   38M   86M  31% /boot
/dev/sda7       1.5G  197M  1.2G  15% /var
/dev/sda9        77G  9.2G   64G  13% /space
/dev/sdb1       1.0T  587G  437G  58% /data01
/dev/sdc1       1.0T  583G  441G  57% /data02
/dev/sdd1       1.0T  584G  440G  58% /data03
/dev/sde1       1.0T   21G 1004G   2% /data04
/dev/sdf1       1.0T   21G 1003G   3% /data05
/dev/sdg1       1.0T   21G 1004G   2% /data06

Note: The preceding log excerpts are only examples. Date, time, and environmental variables may vary depending on your environment.
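To confirm these symptoms from the appliance command line, a quick check similar to the following can be used. This is only a sketch: the log path and the /dataNN mount points are taken from the excerpts above and may differ in your environment.

# Count, then show, the most recent balance-abort warnings in the server log
grep -c 'balance aborted moving stripe' /data01/cur/err.log
grep 'balance aborted moving stripe' /data01/cur/err.log | tail -4

# Compare usage across the data partitions; a wide spread between the original
# and the newly added partitions indicates an unbalanced node
df -h | grep '/data'

If the warnings keep recurring and the usage spread does not shrink over time, the appliance is hitting the condition described in this article.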
This issue is resolved in VMware vSphere Data Protection 6.1, available at VMware Downloads. For more information, see the vSphere Data Protection (VDP) 6.1 Release Notes. For more information on migrating your existing VDP appliance to VDP 6.1, see the VDP Appliance Migration section in the vSphere Data Protection Administration Guide.

Notes:
The re-balance operation restarts automatically after the integrity check completes, and it may take a few days to finish.
Prior to VDP 6.x, the Disk Expansion feature was available only in VDP Advanced versions.
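Once the appliance is running VDP 6.1 and the integrity check has completed, the progress of the automatic re-balance can be followed with the same tools shown in the symptoms above. A minimal sketch, assuming the status.dpn and df -h output formats shown earlier:

# Check whether the balance task is still listed as a running maintenance task
status.dpn | grep 'Currently running task'

# Re-check datastore usage periodically; the /dataNN partitions should
# converge toward similar Use% values as stripes are moved
df -h | grep '/data'

Because the re-balance can take several days, repeating these checks once or twice a day is usually enough to confirm that progress is being made.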