...
This article provides steps to manually collect upgrade logs if the log bundle collection fails when a VMware Aria Automation upgrade fails.

When an upgrade fails, the following logs need to be analyzed:

- /var/log/vmware/prelude/upgrade-*.log: Upgrade reports. Review based on timestamp.
- /opt/vmware/var/log/vami/*.log (single-node environments) or /opt/log/vmware/var/log/vami/*.log (cluster environments): Package installation details.
- /var/log/bootstrap/postupdate.log: Initialization script details.
- /var/log/bootstrap/everyboot.log: Initialization script details.
- /var/log/vmware/prelude/deploy-*.log: Service startup details.

Note: Some log files have a timestamp as part of the file name; for others, new information is appended to the same file. It is important to validate that the information you review is from the latest upgrade attempt (a quick check is sketched after the process breakdown below).

The blog post "Upgrade runbook vRA 8.8.1 Deep-Dive" provides an example of the expected output of these logs during an upgrade.

Overview

Upgrade prerequisites:

- Read the product release notes.
- Check the hardware requirements.
- Check the services status.
- Ensure backups are available.
- Ensure a pre-upgrade snapshot is taken.
- Trigger the upgrade in Aria Suite Lifecycle.
- Validate that the prechecks are successful.

Upgrade process breakdown:

1. The manifest is downloaded (upgrade-noop.log).
2. The upgrade searches for the bootstrap package and checks whether the SSH port is open and reachable (update-datetime.log).
3. The product version is retrieved on all nodes.
4. Infrastructure health checks are performed.
5. Infrastructure and application services are shut down.
6. A local restore point for Kubernetes process node data is saved and the upgrade monitor is activated on all nodes.
7. Once the upgrade monitor is activated and the cluster nodes are removed successfully, installation proceeds. This takes about 30 minutes.
8. The VAMI upgrade starts.
9. All packages are downloaded (vami.log).
10. Once downloaded, package installation starts (updatecli.log and postupdate.log).
11. Once completed, the appliance is rebooted and the VAMI upgrade is marked successful (update-datetime.log).
12. Cluster nodes are added back and restore points are restored.
13. Infrastructure and application services are started.
14. Upgrade cleanup is performed.
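Because several of these files accumulate output across multiple upgrade attempts, it helps to confirm which log was written most recently before reviewing it. The following is a minimal sketch, not part of the original procedure, assuming a standard appliance shell and that the glob below matches at least one file:

    # List the prelude upgrade logs newest-first to find the latest attempt
    ls -lt /var/log/vmware/prelude/upgrade-*.log | head -5

    # Scan the newest upgrade log for errors or failures
    grep -iE "error|fail" "$(ls -t /var/log/vmware/prelude/upgrade-*.log | head -1)"

The same pattern can be applied to the other log locations listed above.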
Manually collecting upgrade failure diagnostic information

1. SSH to the VMware Aria Automation node indicated in the Aria Suite Lifecycle error.

2. Validate that there is available disk space in the root partition (/dev/sda4) by running the command vracli disk-mgr:

    root@vranode1 [ /tmp ]# vracli disk-mgr
    /dev/sda4(/):
        Total size: 47.80GiB
        Free: 33.58GiB(70.2%)
        Available(for non-superusers): 31.13GiB(65.1%)
        SCSI ID: (0:0)
    /dev/sdb(/data):
        Total size: 140.68GiB
        Free: 109.54GiB(77.9%)
        Available(for non-superusers): 102.32GiB(72.7%)
        SCSI ID: (0:1)
    /dev/sdc(/var/log):
        Total size: 21.48GiB
        Free: 9.09GiB(42.3%)
        Available(for non-superusers): 7.97GiB(37.1%)
        SCSI ID: (0:2)
    /dev/sdd(/home):
        Total size: 29.36GiB
        Free: 27.41GiB(93.4%)
        Available(for non-superusers): 25.90GiB(88.2%)
        SCSI ID: (0:3)

3. Run the following command to collect the directories and logs related to the upgrade:

    mkdir /tmp/upgradelogs && cp -R /var/log/vmware/prelude /tmp/upgradelogs && cp -R /opt/vmware/var/log/vami /tmp/upgradelogs && cp -R /var/log/bootstrap /tmp/upgradelogs && tar -zcvf /tmp/upgradelogs.tar.gz /tmp/upgradelogs

4. Extract the collected file /tmp/upgradelogs.tar.gz and continue your review for the failure code, or submit this data to Global Services for additional assistance in troubleshooting the upgrade (a copy example follows these steps).

5. After copying the file off the appliance, remove the file and directory to save disk space:

    cd /tmp
    rm upgradelogs.tar.gz
    rm -r upgradelogs
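If you need to move the archive to another machine for review or to attach it to a support request, the following is a minimal sketch using scp; the hostname vra-node1.example.com and the local destination are placeholders, not values from this article:

    # Run from the workstation that will receive the bundle
    scp root@vra-node1.example.com:/tmp/upgradelogs.tar.gz .

    # Extract locally for review
    tar -zxvf upgradelogs.tar.gz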
Related information:

- Build numbers and versions for VMware Aria Automation (formerly VMware vRealize Automation)
- Troubleshooting VMware Aria Automation cloud proxies and On-Premises appliance deployments
- Upgrade of Cluster VRA 8.x fails with Split brain scenario
- Upgrade from vRA or vRO to newer may fail if there are certain records in the known_hosts file of the virtual appliance
- vRealize Automation 8.x upgrade failed when iptables.service did not start
- VMware Aria Suite Lifecycle 8.14 Patch 1 Day 2 operations fail for VMware Aria Automation with error code LCMVRAVACONFIG590024