...
HPE Performance Cluster Manager (HPCM) 1.3 and later ship the kibana-oss, elasticsearch-oss, and logstash-oss RPMs instead of the RPMs of the same name without the "-oss" suffix. Because the new packages create different service users, files left over from a previous HPCM release can end up with stale ownership and permissions, and some Elasticsearch, Logstash, and Kibana (ELK) services will not start after an upgrade.
This advisory applies to any system that is upgraded (rather than freshly installed) from HPCM 1.2 to any later HPCM release on which ELK is used.
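To determine whether a system is affected, check for the original (non-oss) RPMs and for ELK services that fail to start. A minimal check on the admin node might look like the following (rpm and systemctl are standard commands; "cm monitoring elk status" is described later in this advisory):
# rpm -q elasticsearch kibana logstash
# rpm -q elasticsearch-oss kibana-oss logstash-oss
# systemctl --no-pager status elasticsearch logstash kibana
# cm monitoring elk status
If the non-oss packages are (or were) installed and the ELK services fail to start after the upgrade, the steps below apply.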
The switch to *-oss RPMs with HPCM 1.3.x (or later) changed the service users and created file ownership/permission issues when upgrading from a previous HPCM release.

Before upgrading HPCM, perform the following:
1. Remove the elasticsearch, kibana, and logstash RPMs:
   # rpm -e --noscripts kibana elasticsearch logstash
2. Remove the /usr/share/logstash directory (some of its files contain X-Pack entries):
   # rm -rf /usr/share/logstash
3. Upgrade to HPCM 1.3.1.

If already upgraded to HPCM 1.3.1, remove the contents of the /usr/share/logstash directory, reinstall the elasticsearch, kibana, and logstash RPMs on the admin node and leader nodes, and restart the services. More details are below, as there are file permission and other issues that need to be addressed.

If the elasticsearch, kibana, or logstash RPMs (no "oss" in the name) are still installed on the admin node, remove them with the following command:
# rpm -e --noscripts kibana elasticsearch logstash

If using rack leaders, use the --noscripts option there as well to remove logstash and elasticsearch:
r1lead# rpm -e --noscripts elasticsearch logstash

Note: Make all changes in the leader node image, or pull a new image from the running leader at the end of this procedure. HPE recommends pulling a new image from the rack leader at the end of this procedure with "cm image capture -i <image> -n <leader name>".

Otherwise, perform the steps described above, but with "-oss" in the RPM names.

With the RPMs now removed, delete the /usr/share/logstash directory on the admin and any leader nodes before re-installing:
# rm -rf /usr/share/logstash

The HPCM 1.3.1 repository must either be selected by default or specified with --repos or --repo-group. To make sure the Cluster-Manager-1.3.1 and correct OS repositories are selected, use "crepo --show". If required, change the selections as follows:

# crepo --unselect Red-Hat-Enterprise-Linux-7.6-x86_64 Cluster-Manager-1.2.0-rhel76-x86_64
Unselecting: Red-Hat-Enterprise-Linux-7.6-x86_64
Unselecting: Cluster-Manager-1.2.0-rhel76-x86_64
Removing: /opt/clmgr/image/rpmlists/generated/generated-admin-rhel7.6.rpmlist
Removing: /opt/clmgr/image/rpmlists/generated/generated-ice-rhel7.6.rpmlist
Removing: /opt/clmgr/image/rpmlists/generated/generated-lead-rhel7.6.rpmlist
Removing: /opt/clmgr/image/rpmlists/generated/generated-rhel7.6.rpmlist

# crepo --select Cluster-Manager-1.3.1-rhel77-x86_64 Red-Hat-Enterprise-Linux-7.7-x86_64
Selecting: Cluster-Manager-1.3.1-rhel77-x86_64
Selecting: Red-Hat-Enterprise-Linux-7.7-x86_64
Updating: /opt/clmgr/image/rpmlists/generated/generated-rhel7.7.rpmlist
Updating: /opt/clmgr/image/rpmlists/generated/generated-ice-rhel7.7.rpmlist
Updating: /opt/clmgr/image/rpmlists/generated/generated-lead-rhel7.7.rpmlist
Updating: /opt/clmgr/image/rpmlists/generated/generated-admin-rhel7.7.rpmlist

Install the "-oss" RPMs using zypper/dnf/yum with the cm node command as follows:
# cm node dnf -n admin install logstash-oss kibana-oss elasticsearch-oss
# cm node dnf -n <comma-separated list of leaders> install logstash-oss elasticsearch-oss

After the new RPMs are installed, the new users should exist on the admin and leaders (the kibana user does not exist on the leaders).
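To confirm the installation before continuing, the packages can be queried directly on the admin node and on the leaders. This is a minimal sketch that assumes a pdsh "leader" group is configured, as used later in this procedure:
# rpm -q elasticsearch-oss kibana-oss logstash-oss
# pdsh -g leader 'rpm -q elasticsearch-oss logstash-oss'
Each query should report an installed package version; kibana-oss is intentionally omitted on the leaders.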
Check the users/UIDs that have been created on the system:
# id elasticsearch
uid=969(elasticsearch) gid=967(elasticsearch) groups=967(elasticsearch)
# id kibana
uid=965(kibana) gid=966(kibana) groups=966(kibana)
# id logstash
uid=964(logstash) gid=965(logstash) groups=965(logstash)

Check the ownership of the following files and directories; anything still owned by the old UIDs must be corrected to the new users.

/var/lib/elasticsearch should be owned by elasticsearch:elasticsearch:
# ll -d /var/lib/elasticsearch/*
drwxr-xr-x 3 976 kibana 4096 Apr 25 2019 /var/lib/elasticsearch/nodes

The files under /etc/elasticsearch should be root:elasticsearch:
# ll -d /etc/elasticsearch/*
-rw-rw---- 1 root mosquitto      199 Apr 14 13:09 /etc/elasticsearch/elasticsearch.keystore
-rw-rw---- 1 root elasticsearch  216 Jul  6 11:52 /etc/elasticsearch/elasticsearch.yml
-rw-rw---- 1 root elasticsearch 2869 May  5 11:42 /etc/elasticsearch/elasticsearch.yml.orig
-rw-rw---- 1 root elasticsearch 3685 Jun 18  2019 /etc/elasticsearch/jvm.options
-rw-r----- 1 root kibana        2920 Jun 13  2019 /etc/elasticsearch/jvm.options.orig
-rw-rw---- 1 root mosquitto     3685 Jun 18  2019 /etc/elasticsearch/jvm.options.rpmnew
-rw-rw---- 1 root kibana        2922 Jun 13  2019 /etc/elasticsearch/jvm.options.rpmsave
-rw-rw---- 1 root elasticsearch 5156 Jun 18  2019 /etc/elasticsearch/log4j2.properties

/var/log/elasticsearch should be owned by elasticsearch:elasticsearch:
# ll -d /var/log/elasticsearch/
drwxr-s--- 2 elasticsearch elasticsearch 4096 Apr 29 15:25 /var/log/elasticsearch/

/usr/share/logstash and /var/log/logstash should be owned by logstash:logstash:
# ll /usr/share/logstash/
total 876
drwxr-xr-x 2 logstash logstash   4096 Apr 29 14:44 bin
drwxr-xr-x 2 root     root       4096 Apr 29 15:20 config
-rw-r--r-- 1 logstash logstash   2276 Jun 18  2019 CONTRIBUTORS
drwxrwxr-x 2 logstash logstash   4096 Jun 18  2019 data
-rw-r--r-- 1 logstash logstash   4194 Jun 18  2019 Gemfile
-rw-r--r-- 1 logstash logstash  22528 Jun 18  2019 Gemfile.lock
drwxr-xr-x 6 logstash logstash   4096 Apr 29 14:44 lib
-rw-r--r-- 1 logstash logstash  11357 Jun 18  2019 LICENSE.txt
drwxr-xr-x 4 logstash logstash   4096 Apr 29 14:44 logstash-core
drwxr-xr-x 3 logstash logstash   4096 Apr 29 14:44 logstash-core-plugin-api
drwxr-xr-x 4 logstash logstash   4096 Apr 29 14:44 modules
-rw-r--r-- 1 logstash logstash 808305 Jun 18  2019 NOTICE.TXT
drwxr-xr-x 3 logstash logstash   4096 Apr 29 14:44 tools
drwxr-xr-x 4 logstash logstash   4096 Apr 29 14:44 vendor

# ll /var/log/logstash/
total 128324
-rw-r--r-- 1 logstash elasticsearch  576592 Mar  9 16:06 logstash-plain-2020-01-30-7.log.gz
-rw-r--r-- 1 logstash elasticsearch    4572 Mar 25 10:47 logstash-plain-2020-03-09-1.log.gz
-rw-r--r-- 1 logstash elasticsearch    8044 Apr  3 14:30 logstash-plain-2020-03-25-1.log.gz
-rw-r--r-- 1 logstash logstash         5920 Apr 29 14:47 logstash-plain-2020-04-03-1.log.gz
-rw-r--r-- 1 logstash logstash        47875 Apr 30 00:00 logstash-plain-2020-04-29-1.log.gz
-rw-r--r-- 1 logstash logstash       122511 May  1 00:00 logstash-plain-2020-04-30-1.log.gz
-rw-r--r-- 1 logstash logstash       121777 May  2 00:00 logstash-plain-2020-05-01-1.log.gz
-rw-r--r-- 1 logstash logstash       121299 May  3 00:00 logstash-plain-2020-05-02-1.log.gz
-rw-r--r-- 1 logstash logstash       121464 May  4 00:00 logstash-plain-2020-05-03-1.log.gz
-rw-r--r-- 1 logstash logstash       119999 May  5 00:00 logstash-plain-2020-05-04-1.log.gz
-rw-r--r-- 1 logstash logstash        58201 Jul  3 01:09 logstash-plain-2020-05-05-1.log.gz
-rw-r--r-- 1 logstash logstash       112629 Jul  4 00:00 logstash-plain-2020-07-03-1.log.gz
-rw-r--r-- 1 logstash logstash       117981 Jul  5 00:00 logstash-plain-2020-07-04-1.log.gz
-rw-r--r-- 1 logstash logstash       118250 Jul  6 00:00 logstash-plain-2020-07-05-1.log.gz
-rw-r--r-- 1 logstash logstash        58242 Jul 20 11:18 logstash-plain-2020-07-06-1.log.gz
-rw-r--r-- 1 logstash logstash        64785 Jul 21 00:00 logstash-plain-2020-07-20-1.log.gz
-rw-r--r-- 1 logstash logstash      4985978 Jul 21 14:40 logstash-plain.log
-rw-r--r-- 1 logstash elasticsearch       0 Apr 25  2019 logstash-slowlog-plain.log

Ensure that the key is readable only by kibana:kibana:
# ll /opt/sgi/secrets/CA/private/kibana.key
-rw------- 1 kibana kibana 3272 Apr 25 2019 /opt/sgi/secrets/CA/private/kibana.key
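Before correcting anything, a find sweep can be used to list files that still carry unexpected ownership. This is a minimal sketch; the path list is illustrative and may need to be adjusted for your installation (files under /etc/elasticsearch are expected to be owned by root, so only the group is checked there):
# find /var/lib/elasticsearch /var/log/elasticsearch ! -user elasticsearch -o ! -group elasticsearch
# find /etc/elasticsearch ! -group elasticsearch
# find /usr/share/logstash /var/log/logstash ! -user logstash -o ! -group logstash
# find /opt/sgi/secrets/CA/private/kibana.key ! -user kibana
Anything reported by these commands is a candidate for the corrections shown below.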
Below are some examples of the corrections.

For Elasticsearch:
# chown -R elasticsearch:elasticsearch /var/lib/elasticsearch/
# chown -R root:elasticsearch /etc/elasticsearch/
# chmod g+s /var/lib/elasticsearch/nodes

For Logstash:
# chown -R logstash:logstash /var/log/logstash/

For Kibana:
# chown kibana:kibana /opt/sgi/secrets/CA/private/kibana.key

Reconcile any .rpmsave files with elasticsearch.yml under /etc/elasticsearch.

Reboot the admin and any leader nodes.

The old Kibana indices (.kibana*) are not compatible with the new version of Elasticsearch; the upgraded Kibana in HPCM 1.3.1 cannot use them, and they must be removed:

# curl http://admin:/_cat/indices?v
health status index                   uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana_1               pEQMD6Y2TkeTUbM9vuCbrw   1   0          8            0     21.1kb         21.1kb
yellow open   environmental_power_cmc sXeMOU5QT8ykVjtACAgfmQ   3   1          0            0       783b           783b
yellow open   event_cooldev           K_1fbAxIR4C2avl6CN4gMA   5   1          0            0      1.2kb          1.2kb

# curl -X DELETE http://admin:/.kibana_1
{"acknowledged": true}

Restart the monitoring services using the following commands:
# cm monitoring kafka restart
# cm monitoring elk restart

Review the output of the two commands above to ensure the services are running successfully on the admin node (errors may be reported for other nodes that are not currently available). The following commands can also be used to review the state:
# cm monitoring kafka status
# cm monitoring elk status
The word "active" should be displayed next to the admin node hostname.

The elk-dist-setup and kfka-dist-setup commands can then be used to distribute the Elasticsearch and Kafka services among the leader nodes. After the monitoring services are started and working on the admin node, run the following commands to distribute these services:

# /opt/sgi/sbin/elk-dist-setup
Below nodes are going to be part of elasticsearch cluster :: - ['r1lead']
If you want to remove some nodes, Remove nodes from /opt/clmgr/etc/elk_node.lst file and press 'y' to continue the setup
Want to continue [y|n] (default y):
Successfully configured r1lead node to be part of the distributed elasticsearch
Note: Forwarding request to 'systemctl enable elasticsearch.service'.
Waiting for other data nodes to add in ES cluster
Restart logstash service
These nodes are added to the elasticsearch cluster : ['r1lead']

If an index KeyError occurs, edit /opt/sgi/sbin/elk-dist-setup as follows. Remove the following line:
v['settings']['index']['number_of_replicas'] = str(nrep)
and replace it with the following:
if v.has_key('settings') and v['settings'].has_key('index'):
    v['settings']['index']['number_of_replicas'] = str(nrep)
After the above change, stop the elasticsearch service on one of the leader nodes, execute "cm monitoring elk restart" on the admin node, and then re-run the elk-dist-setup script on the admin node.
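To confirm that the leader nodes have actually joined the distributed Elasticsearch cluster, the standard _cat endpoints can be queried from the admin node. This is a minimal sketch; <port> is whatever port Elasticsearch listens on in your configuration (9200 is the Elasticsearch default):
# curl http://admin:<port>/_cat/health?v
# curl http://admin:<port>/_cat/nodes?v
The health output should report a green or yellow status, and the nodes output should list the admin node along with each leader configured by elk-dist-setup.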
Then, run kfka-dist-setup as follows:
# /opt/sgi/sbin/kfka-dist-setup
Below nodes are going to be part of kafka cluster :: - ['r1lead']
If you want to remove some nodes, Remove nodes from /opt/clmgr/etc/kafka_node.lst file and press 'y' to continue the setup
Want to continue [y|n] (default y): y
Successfully configured r1lead to be part of the distributed kafka
Created symlink /etc/systemd/system/multi-user.target.wants/confluent-kafka.service -> /usr/lib/systemd/system/confluent-kafka.service.
rebalancing the partitions among leaders

The logging services on the admin and leaders can then be reconfigured and restarted as follows:
# pdsh -g leader /etc/opt/sgi/conf.d/80-logstash-configure
# pdsh -g leader systemctl restart logstash
# /etc/opt/sgi/conf.d/80-elk-configure -n <node>
# systemctl restart kibana

Capture the updated image from a running leader node by typing the following command:
# cm image capture -i <image> -n <node>
This image can be assigned to leader nodes using the following command:
# cm node set --image <image> --kernel <kernel>
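As a final check, confirm that the distributed monitoring services report as running on the admin node and on the leaders. This sketch reuses commands already shown in this procedure and assumes the pdsh "leader" group is configured:
# cm monitoring kafka status
# cm monitoring elk status
# pdsh -g leader 'systemctl is-active elasticsearch logstash confluent-kafka'
Each service should report "active"; if not, review the corresponding service journal on the node that reports a failure.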