...
The health check script (proactive_check.pl) reports the following when run on an Avamar grid with Gen4T nodes:

   WARNING: Node (0.n) SSD Drive expected but not identified
   RESOLUTION: Check the Media Type output from 'avsysreport pdisk'. Expected to see 'SSD' or 'Solid State'. Check hardware health if not seen
   NOTE: This may cause the Micron SSD results to be wrong
   # --> SSD Found FAILED

(Where 0.n is the node reporting the issue)

Note: Node 0.6 is used in the examples below. Substitute the correct node number or IP address as required.

The SSD drive on the problematic node itself reports no issues, which can be confirmed as follows:

1. Log in to the Avamar Utility Node as admin.
2. Elevate to root.
3. Load the ssh keys per Avamar: How to Log in to an Avamar Server and Load Various Keys.
4. Connect to the problematic node using the ssn command:

      ssn --user=root 0.6

   (In the above example, the connection is to node 0.6)
5. Once connected to the node, run the following command to verify that the SSD is not reporting any issues:

      arcconf getconfig 1 pd

   The output should be similar to the following:

      Device #12
         Device is a Hard drive
         State                        : Raw (Pass Through)
         Block Size                   : 512 Bytes
         Supported                    : Yes
         Programmed Max Speed         : SAS 12.0 Gb/s
         Transfer Speed               : SAS 12.0 Gb/s
         Reported Channel,Device(T:L) : 0,20(20:0)
         Reported Location            : Enclosure 0, Slot 12(Connector 0, Connector 1)
         Reported ESD(T:L)            : 2,0(0:0)
         Vendor                       : HITACHI
         Model                        : HUSMM112 CLAR200
         Firmware                     : C29C
         Serial number                : 0PY4R85A
         World-wide name              : 5000CCA04DB1AE4B
         Reserved Size                : 0 KB
         Total Size                   : 190782 MB
         Write Cache                  : Disabled (write-through)
         FRU                          : None
         S.M.A.R.T.                   : No
         S.M.A.R.T. warnings          : 0
         Power State                  : Full rpm
         Supported Power States       : Full power,Powered off
         SSD                          : Yes
         Temperature                  : 30 C/ 86 F
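On nodes with many drives, the full arcconf listing can be lengthy. The command below is an optional convenience and not part of the documented procedure; it simply filters the arcconf output down to the fields relevant to this check. The field names are taken from the sample output above and may vary slightly between arcconf versions:

   arcconf getconfig 1 pd | grep -E 'Device #|Reported Location|Model|S.M.A.R.T. warnings|SSD'

A healthy solid-state drive should show 'SSD : Yes' and 'S.M.A.R.T. warnings : 0' for the corresponding device.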
The node reporting the warning has an outdated version of dpnavsys that does not support Gen4T nodes. This can be confirmed by running the following command (with keys loaded per Avamar: How to Log in to an Avamar Server and Load Various Keys):

   mapall --all+ --noerror 'rpm -qa | grep dpnavsys'

Example output:

   (0.s) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.1 'rpm -qa'
   dpnavsys-1.1.0-24
   (0.0) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.2 'rpm -qa'
   dpnavsys-1.1.0-24
   (0.1) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.3 'rpm -qa'
   dpnavsys-1.1.0-24
   (0.2) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.4 'rpm -qa'
   dpnavsys-1.1.0-24
   (0.3) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.5 'rpm -qa'
   dpnavsys-1.1.0-24
   (0.4) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.6 'rpm -qa'
   dpnavsys-1.1.0-24
   (0.5) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.7 'rpm -qa'
   dpnavsys-1.1.0-24
   (0.6) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.8 'rpm -qa'
   dpnavsys-1.1.0-17
   (0.7) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.9 'rpm -qa'
   dpnavsys-1.1.0-24

All nodes should be running the same version of dpnavsys. In the output above, node 0.6 is running an older version (dpnavsys-1.1.0-17) than the rest of the grid (dpnavsys-1.1.0-24).
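A quicker way to spot a mismatch on a large grid is to summarize the installed versions. The one-liner below is a minimal sketch, not part of the documented procedure; it reuses the mapall command shown above and assumes each node's rpm output appears on its own line, as in the example:

   mapall --all+ --noerror 'rpm -qa | grep dpnavsys' | grep '^dpnavsys' | sort | uniq -c

Any version reported by fewer nodes than the rest identifies the outlier (here, a single node still on dpnavsys-1.1.0-17).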
To resolve the issue, update dpnavsys on the affected node or nodes:

1. Log in to the Avamar Utility Node as admin.
2. Elevate to root.
3. Load the keys per Avamar: How to Log in to an Avamar Server and Load Various Keys.
4. If unknown, verify which node, or nodes, have the outdated copy of dpnavsys:

      mapall --all+ --noerror 'rpm -qa | grep dpnavsys'

   Example output:

      (0.s) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.1 'rpm -qa'
      dpnavsys-1.1.0-24
      (0.0) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.2 'rpm -qa'
      dpnavsys-1.1.0-24
      (0.1) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.3 'rpm -qa'
      dpnavsys-1.1.0-24
      (0.2) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.4 'rpm -qa'
      dpnavsys-1.1.0-24
      (0.3) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.5 'rpm -qa'
      dpnavsys-1.1.0-24
      (0.4) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.6 'rpm -qa'
      dpnavsys-1.1.0-24
      (0.5) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.7 'rpm -qa'
      dpnavsys-1.1.0-24
      (0.6) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.8 'rpm -qa'
      dpnavsys-1.1.0-17
      (0.7) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.9 'rpm -qa'
      dpnavsys-1.1.0-24

5. Copy the correct version of the package to the problematic node or nodes:
   a. Run the following command to determine which node (or nodes) has the updated copy of dpnavsys:

         mapall --all+ --noerror 'ls -al /usr/local/avamar/src/dpnavsys*'

      The output is similar to the following:

         Using /usr/local/avamar/var/probe.xml
         (0.s) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.1 'ls -al /usr/local/avamar/src/dpnavsys*'
         ls: cannot access '/usr/local/avamar/src/dpnavsys*': No such file or directory
         (0.0) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.2 'ls -al /usr/local/avamar/src/dpnavsys*'
         -rw-r----- 1 root root 576658 Dec 8 2018 /usr/local/avamar/src/dpnavsys-1.1.0-17.x86_64.rpm
         -rw-r----- 1 root root 579210 Oct 4 2019 /usr/local/avamar/src/dpnavsys-1.1.0-24.x86_64.rpm
         (0.1) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.3 'ls -al /usr/local/avamar/src/dpnavsys*'
         -rw-r----- 1 root root 576658 Dec 8 2018 /usr/local/avamar/src/dpnavsys-1.1.0-17.x86_64.rpm
         -rw-r----- 1 root root 579210 Oct 4 2019 /usr/local/avamar/src/dpnavsys-1.1.0-24.x86_64.rpm
         (0.2) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.4 'ls -al /usr/local/avamar/src/dpnavsys*'
         -rw-r----- 1 root root 576658 Dec 8 2018 /usr/local/avamar/src/dpnavsys-1.1.0-17.x86_64.rpm
         -rw-r----- 1 root root 579210 Oct 4 2019 /usr/local/avamar/src/dpnavsys-1.1.0-24.x86_64.rpm
         (0.3) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.5 'ls -al /usr/local/avamar/src/dpnavsys*'
         -rw-r----- 1 root root 576658 Dec 8 2018 /usr/local/avamar/src/dpnavsys-1.1.0-17.x86_64.rpm
         -rw-r----- 1 root root 579210 Oct 4 2019 /usr/local/avamar/src/dpnavsys-1.1.0-24.x86_64.rpm
         (0.4) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.6 'ls -al /usr/local/avamar/src/dpnavsys*'
         -rw-r----- 1 root root 576658 Dec 8 2018 /usr/local/avamar/src/dpnavsys-1.1.0-17.x86_64.rpm
         -rw-r----- 1 root root 579210 Oct 4 2019 /usr/local/avamar/src/dpnavsys-1.1.0-24.x86_64.rpm
         (0.5) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.7 'ls -al /usr/local/avamar/src/dpnavsys*'
         -rw-r----- 1 root root 576658 Dec 8 2018 /usr/local/avamar/src/dpnavsys-1.1.0-17.x86_64.rpm
         -rw-r----- 1 root root 579210 Oct 4 2019 /usr/local/avamar/src/dpnavsys-1.1.0-24.x86_64.rpm
         (0.6) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.8 'ls -al /usr/local/avamar/src/dpnavsys*'
         -rw-r----- 1 root root 576658 Dec 8 2018 /usr/local/avamar/src/dpnavsys-1.1.0-17.x86_64.rpm
         (0.7) ssh -q -x -o GSSAPIAuthentication=no admin@192.168.255.9 'ls -al /usr/local/avamar/src/dpnavsys*'
         -rw-r----- 1 root root 576658 Dec 8 2018 /usr/local/avamar/src/dpnavsys-1.1.0-17.x86_64.rpm
         -rw-r----- 1 root root 579210 Oct 4 2019 /usr/local/avamar/src/dpnavsys-1.1.0-24.x86_64.rpm

   b. Record the IP address of one of the nodes with the required version (192.168.255.2, 192.168.255.3, 192.168.255.4, 192.168.255.5, 192.168.255.6, 192.168.255.7, or 192.168.255.9 based on the output above).
      Note: If the required dpnavsys version is not present on any node, download it from the Dell Support website to the Avamar Utility Node's /tmp directory and continue from step d below.
   c. Copy dpnavsys from the IP address noted in step b to the /tmp directory on the Utility Node:

         cd /tmp
         scp root@<node-IP>:/usr/local/avamar/src/dpnavsys-<version>.rpm .

      Example:
         scp root@192.168.255.5:/usr/local/avamar/src/dpnavsys-1.1.0-24.x86_64.rpm .

   d. Copy the updated dpnavsys package from the Utility Node to the problematic node with the outdated dpnavsys (an optional checksum verification of the copied package is sketched after this procedure):

         scp dpnavsys-<version>.rpm root@<node-IP>:/usr/local/avamar/src/

      Example:
         scp dpnavsys-1.1.0-24.x86_64.rpm root@192.168.255.8:/usr/local/avamar/src/

6. Connect to the problematic node with the outdated dpnavsys as root:

      ssn --user=root 0.6

7. Update the software:

      rpm -U /usr/local/avamar/src/dpnavsys-<version>.rpm

   Example:
      rpm -U /usr/local/avamar/src/dpnavsys-1.1.0-24.x86_64.rpm

8. Perform validation checks:
   a. Verify that the dpnavsys version is now correct:

         rpm -qa | grep dpnavsys

   b. Verify that the avsysreport command runs:

         avsysreport pdisk

9. Type "exit" to return to the Utility Node.
10. Perform steps 5 through 9 on any other nodes that must be updated.
11. Once all nodes have been updated, rerun the health check script. It should now complete without reporting this warning.
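Optional: to confirm that the package copied in steps 5c and 5d was not corrupted in transit, compare checksums between the Utility Node and the target node before updating. This is a minimal sketch and not part of the documented procedure; it assumes root ssh access to the node IPs with the keys already loaded (the same access used by the scp commands above) and uses the example file name and node 0.6 IP address from above:

   # On the Utility Node, checksum the local copy of the package
   md5sum /tmp/dpnavsys-1.1.0-24.x86_64.rpm

   # Checksum the copy pushed to the problematic node (0.6 in this example)
   ssh root@192.168.255.8 'md5sum /usr/local/avamar/src/dpnavsys-1.1.0-24.x86_64.rpm'

The two checksum values should match before running the rpm -U update in step 7.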