
OPERATIONAL DEFECT DATABASE
In the ACM UI, under the ESXi Compute node upgrade details section, the following errors are seen. The esxAppLevelUpgrade.log shows the same failure:

RUNNING,31,Upgrading ESXi x.x.x.x. This may take around 15 minutes
RUNNING,32,Installing upgrade packages on ESXi x.x.x.x
RUNNING,33,Failed to install upgrade packages on ESXi x.x.x.x
RUNNING,33,Failed to upgrade ESXi Server

In esxi_logs.log, under the /data01/tmp/patch/logs directory, verify that the following error is seen:

04/06/19 13:40:04 main() Package name: Dell-EMC-13G-ESXi-6.5.0-update-04
04/06/19 13:40:04 main() Executing install packages command: esxcli software profile update --depot=https://x.x.x.x:9443/dataprotection-upgrade/esxi_upgrade --profile=Dell-EMC-13G-ESXi-6.5.0-update-04
04/06/19 13:40:50 run() Parsing returnCode, output: ['', '[InstallationError]\r\n [Errno 32] Broken pipe\r\n vibs = VMware_locker_tools-light_6.5.0-1.47.8285314\r\n Please refer to the log file for more details.', 'Status: 1']
04/06/19 13:40:50 run_cmd_esxi() Command: esxcli software profile update --depot=https://192.168.100.100:9443/dataprotection-upgrade/esxi_upgrade --profile=Dell-EMC-13G-ESXi-6.5.0-update-04
04/06/19 13:40:50 run_cmd_esxi() [InstallationError]
04/06/19 13:40:50 run_cmd_esxi() [Errno 32] Broken pipe
04/06/19 13:40:50 run_cmd_esxi() vibs = VMware_locker_tools-light_6.5.0-1.47.8285314
04/06/19 13:40:50 run_cmd_esxi() Please refer to the log file for more details.
04/06/19 13:40:50 run_cmd_esxi() Status: 1
04/06/19 13:40:50 main() Install packages command result: 1
04/06/19 13:40:50 main() Failed to install packages.
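The esxAppLevelUpgrade.log status lines above follow a simple "STATE,percent,message" layout, which the Upgrade utility reads to report the latest progress percent. The following is a minimal, hypothetical Python sketch of such a parser; the field layout is inferred from the log excerpt above, and parse_status_lines is an illustrative helper name, not part of the product.

```python
# Hypothetical sketch: parse "STATE,percent,message" status lines from
# esxAppLevelUpgrade.log and report the latest progress percent plus
# whether a "Failed ..." message was seen. Layout inferred from the
# excerpt above; this is not the product's actual parser.

def parse_status_lines(lines):
    """Return (latest_percent, failed, last_message) from status lines."""
    percent, failed, message = 0, False, ""
    for line in lines:
        parts = line.strip().split(",", 2)
        if len(parts) != 3 or not parts[1].isdigit():
            continue  # skip anything that is not STATE,percent,message
        percent, message = int(parts[1]), parts[2]
        if message.startswith("Failed"):
            failed = True
    return percent, failed, message

sample = [
    "RUNNING,31,Upgrading ESXi x.x.x.x. This may take around 15 minutes",
    "RUNNING,32,Installing upgrade packages on ESXi x.x.x.x",
    "RUNNING,33,Failed to install upgrade packages on ESXi x.x.x.x",
    "RUNNING,33,Failed to upgrade ESXi Server",
]
print(parse_status_lines(sample))  # (33, True, 'Failed to upgrade ESXi Server')
```

Against the sample above this reports a latest percent of 33 with a failure, matching the prevPercent/currentPercent values the Upgrade utility logs.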
The Upgrade utility log shows the following exception:

2019-06-04 14:41:17,777 INFO  [upgrade-workflow-4]-upgradeutil.UpgradeUtil: Getting latest progress percent from: /data01/tmp/patch/logs/status/esxAppLevelUpgrade.log
2019-06-04 14:41:17,778 INFO  [upgrade-workflow-4]-upgradeutil.UpgradeUtil: prevPercent: 20 currentPercent: 33
2019-06-04 14:41:17,778 ERROR [upgrade-workflow-4]-esx.EsxUpgradeOperations: Failed to upgrade ESXi. responseCode: 1
2019-06-04 14:41:17,778 ERROR [upgrade-workflow-4]-esx.EsxUpgradeTask: Failed to upgrade ESXi.
com.emc.vcedpa.upgradeutil.common.exception.UpgradeException: Unable to upgrade ESXi.
    at com.emc.vcedpa.upgradeutil.configure.esx.EsxUpgradeOperations.upgradeEsxHosts(EsxUpgradeOperations.java:298)
    at com.emc.vcedpa.upgradeutil.configure.esx.EsxUpgradeOperations.upgrade(EsxUpgradeOperations.java:170)
    at com.emc.vcedpa.upgradeutil.configure.esx.EsxUpgradeTask.run(EsxUpgradeTask.java:46)
    at com.emc.vcedpa.upgradeutil.configure.impl.UpgradeWorkflowManager.executeAcmAndEsxTask(UpgradeWorkflowManager.java:381)
    at com.emc.vcedpa.upgradeutil.configure.impl.UpgradeWorkflowManager.executeAcmEsxWhenAllCriticalPpUpgradeSuccessful(UpgradeWorkflowManager.java:334)
    at com.emc.vcedpa.upgradeutil.configure.impl.UpgradeWorkflowManager.notifyUpgradeTaskStatus(UpgradeWorkflowManager.java:343)
    at com.emc.vcedpa.upgradeutil.configure.dd.DataDomainUpgradeTask.run(DataDomainUpgradeTask.java:47)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
This issue may be caused by a missing vsantrace partition on the affected ESXi host, together with a missing mtree on the Data Domain for that host (where the vSAN logs are stored). On the ESXi host, run "df -h" to confirm that the vsantrace partition is created and listed. On an affected host, the output looks similar to the following (note that no vsantrace NFS volume is mounted):

[root@dpappliance-esx2:~] df -h
Filesystem   Size    Used    Available Use% Mounted on
vfat       285.8M  285.7M     168.0K  100% /vmfs/volumes/5c82e746-bb00f246-9667-f8f21e473e10
vfat       249.7M  191.9M      57.8M   77% /vmfs/volumes/6186f3e0-df715171-4993-8b8091a1977e
vfat       249.7M  186.8M      62.9M   75% /vmfs/volumes/718348e6-2c128245-ca05-e9d78fbff849
vsan        22.7T   14.9T       7.8T   66% /vmfs/volumes/vsanDatastore

A healthy system looks similar to the following:

[root@dpappliance-esx2:~] df -h
Filesystem   Size    Used    Available Use% Mounted on
NFS        169.5T    2.0G     169.5T    0% /vmfs/volumes/vsantrace2
vfat       285.8M  285.7M     168.0K  100% /vmfs/volumes/5c82e746-bb00f246-9667-f8f21e473e10
vfat       249.7M  191.9M      57.8M   77% /vmfs/volumes/6186f3e0-df715171-4993-8b8091a1977e
vfat       249.7M  186.8M      62.9M   75% /vmfs/volumes/718348e6-2c128245-ca05-e9d78fbff849
vsan        22.7T   14.9T       7.8T   66% /vmfs/volumes/vsanDatastore

Open an SSH session to the Data Domain and run the following command to confirm the issue:

sysadmin@example.com# mtree list
Name                         Pre-Comp (GiB)   Status
---------------------------- --------------   ------
/data/col1/avamar-1xxxxx                0.5   RW
/data/col1/backup                       0.0   RW
/data/col1/esx1-logs                    0.1   RW/Q
---------------------------- --------------   ------
 D    : Deleted
 Q    : Quota Defined
 RO   : Read Only
 RW   : Read Write
 RD   : Replication Destination
 RLGE : Retention-Lock Governance Enabled
 RLGD : Retention-Lock Governance Disabled
 RLCE : Retention-Lock Compliance Enabled

In the output above, the mtrees for ESXi hosts 2 and 3 are missing. Recreate them using the steps in the resolution section. This is a known issue on Integrated Data Protection Appliance (IDPA) 2.1.
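The check above amounts to comparing the mtree names in the "mtree list" output against the /data/col1/esxN-logs naming convention shown in this article. The following hypothetical Python sketch automates that comparison; missing_log_mtrees is an illustrative helper name, and the three-host layout is assumed from the example above.

```python
# Hypothetical sketch: given the text of "mtree list" captured from the
# Data Domain, report which expected per-host log mtrees
# (/data/col1/esxN-logs) are missing. Naming convention taken from the
# example output in this article; not an official tool.

def missing_log_mtrees(mtree_output, hosts=(1, 2, 3)):
    """Return the host numbers whose esxN-logs mtree is absent."""
    present = {line.split()[0] for line in mtree_output.splitlines()
               if line.strip().startswith("/data/col1/")}
    return [n for n in hosts if f"/data/col1/esx{n}-logs" not in present]

sample = """\
/data/col1/avamar-1xxxxx 0.5 RW
/data/col1/backup 0.0 RW
/data/col1/esx1-logs 0.1 RW/Q
"""
print(missing_log_mtrees(sample))  # [2, 3]
```

Run against the sample output above, this flags hosts 2 and 3, matching the manual diagnosis.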
Follow the appropriate section for the host having the issue.

Note: Skip the steps for any host that already has the vsantrace partition created and has the mtree for vSAN logs on the Data Domain side.

First, check whether any Data Domain NFS exports (vsantrace*) are mounted on each ESXi host:

esxcli storage nfs list

If none are listed, perform the following steps.

Steps for ESXi Host 1:
Run the following command on the Data Domain:
quota capacity enable
Run the following commands on the Data Domain for ESXi Host 1:
mtree create /data/col1/esx1-logs quota-hard-limit 2 GiB
nfs add /data/col1/esx1-logs 192.168.100.101
Run the following commands on ESXi Host 1:
esxcli storage nfs remove -v vsantrace1
esxcfg-nas -a vsantrace1 -o 192.168.100.109 -s /data/col1/esx1-logs
esxcli vsan trace set -p /vmfs/volumes/vsantrace1
esxcli vsan trace set --logtosyslog=false

Steps for ESXi Host 2:
Run the following commands on the Data Domain for ESXi Host 2:
mtree create /data/col1/esx2-logs quota-hard-limit 2 GiB
nfs add /data/col1/esx2-logs 192.168.100.102
Run the following commands on ESXi Host 2:
esxcli storage nfs remove -v vsantrace2
esxcfg-nas -a vsantrace2 -o 192.168.100.109 -s /data/col1/esx2-logs
esxcli vsan trace set -p /vmfs/volumes/vsantrace2
esxcli vsan trace set --logtosyslog=false

Steps for ESXi Host 3:
Run the following commands on the Data Domain for ESXi Host 3:
mtree create /data/col1/esx3-logs quota-hard-limit 2 GiB
nfs add /data/col1/esx3-logs 192.168.100.103
Run the following commands on ESXi Host 3:
esxcli storage nfs remove -v vsantrace3
esxcfg-nas -a vsantrace3 -o 192.168.100.109 -s /data/col1/esx3-logs
esxcli vsan trace set -p /vmfs/volumes/vsantrace3
esxcli vsan trace set --logtosyslog=false
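The per-host remediation steps above differ only in the host index and the host's management IP. As a sketch of that pattern, the hypothetical Python helper below generates the command pairs for review before they are run by hand; remediation_commands is an illustrative name, and the Data Domain NFS server IP (192.168.100.109) and host IPs follow the example addressing used in this article.

```python
# Hypothetical helper: generate the per-host remediation commands shown
# above, parameterized by host number and ESXi management IP. The Data
# Domain IP below follows this article's example addressing; adjust to
# your environment. Commands are printed for review, not executed.

DD_IP = "192.168.100.109"  # example Data Domain address from this article

def remediation_commands(host_num, esxi_ip):
    """Return (data_domain_commands, esxi_commands) for one host."""
    mtree = f"/data/col1/esx{host_num}-logs"
    vol = f"vsantrace{host_num}"
    dd_cmds = [
        f"mtree create {mtree} quota-hard-limit 2 GiB",
        f"nfs add {mtree} {esxi_ip}",
    ]
    esxi_cmds = [
        f"esxcli storage nfs remove -v {vol}",
        f"esxcfg-nas -a {vol} -o {DD_IP} -s {mtree}",
        f"esxcli vsan trace set -p /vmfs/volumes/{vol}",
        "esxcli vsan trace set --logtosyslog=false",
    ]
    return dd_cmds, esxi_cmds

dd, esxi = remediation_commands(2, "192.168.100.102")
print("\n".join(dd + esxi))
```

For host 2 this reproduces exactly the commands listed in the resolution steps above; run the first list on the Data Domain and the second on the affected ESXi host.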