The consolidated Internet Small Computer System Interface (iSCSI) protocol specification (RFC 7143) specifies that an iSCSI Initiator portal can have one or more sessions with an iSCSI Target portal. Specifically, it says the following regarding the Initiator Session ID (ISID) used to uniquely identify Initiator sessions. The ISID RULE states that there must not be more than one session matching the following 4-tuple:

<InitiatorName, ISID, TargetName, TargetPortalGroupTag>

While this provision of the specification allows multiple sessions from an iSCSI Initiator portal to an iSCSI Target portal, its intended purpose is redundancy (that is, multipathing). Having multiple sessions to the same Target portal does not provide the intended redundancy or other benefits. In contrast, having multiple sessions from an iSCSI Initiator portal to multiple Target portals does provide redundancy when combined with multipathing software.

The diagrams below illustrate what is and is not supported in the HPE GreenLake for Block Storage MP OS 10.4.2 release. As a reminder, an iSCSI portal is an iSCSI network entity that has a TCP/IP address and can be used by an iSCSI node to connect to another iSCSI node. An iSCSI portal is identified by its IP address.

[Diagram: Supported configuration]
[Diagram: Not Supported configuration]

In HPE GreenLake for Block Storage MP OS 10.4.2, multiple iSCSI Initiator sessions to the same iSCSI Target portal are disallowed: before a second session can be established, the first session is closed. If a user configures more than one session from an iSCSI Initiator portal to an iSCSI Target portal, the unintended side effect is continual session establishment and teardown. Because each session remains established for only a short time, access to LUNs is lost from the Initiator portal configured with multiple sessions.
Other Initiator portals configured with one session between the iSCSI Initiator portal and Target portal are unaffected.

To work around this issue, configure iSCSI Initiator portals with one session per iSCSI Target portal. Multiple sessions from an iSCSI Initiator portal to the same iSCSI Target portal will be supported in an upcoming HPE GreenLake for Block Storage MP OS maintenance release (see the Resolution section for details).

Determining whether you are impacted by this issue

To determine whether you are impacted by this issue, run the following CLI command:

# showiscsisession
N:S:P --IPAddr-- TPGT TSIH Conns ------------iSCSI_Name------------- -------StartTime------- VLAN State  QID
0:4:1 10.43.69.9 41   2    1     iqn.1994-05.com.redhat:49d367495bb3 2024-12-03 21:10:38 PST -    ONLINE 1
0:4:1 10.43.69.9 41   3    1     iqn.1994-05.com.redhat:49d367495bb3 2024-12-03 21:10:38 PST -    ONLINE 1
------------------------------------------------------------------------------------------------------
2 total

If the same initiator portal is logged into the same target portal multiple times, you will see multiple entries between them. In the above example, the initiator portal (10.43.69.9) is logged into the target portal identified by Target Portal Group Tag (TPGT) 41 two (2) times.

To display the iSCSI Session ID (ISID) associated with each session, run the following CLI command:

showiscsisession -d

In the above example, the first session between the iSCSI initiator portal and target portal is identified by ISID 0x1300003d0200. The second session between the same initiator portal and target portal is identified by ISID 0x1400003d0200.

Examples of an Unsupported Configuration

The examples in this section illustrate how an unsupported configuration can be created and what you must do to avoid the problem.

Windows

In the Windows Sessions tab below, two iSCSI sessions from the same Initiator portal have been established to two iSCSI Target portals.
The highlighted session is connected to the iSCSI Target portal on Node 0, Slot 4, Port 1 (0:4:1). The other session is connected to the iSCSI Target portal on Node 1, Slot 4, Port 2 (1:4:2). At this point, we are in a supported configuration.

To create a second (unsupported) session from the same iSCSI Initiator portal to the iSCSI Target portal on 0:4:1, you must select Add session in the above window. This results in the following pop-up window.

To create multiple sessions, you must first enable multi-path. Again, the intended use is to create another session to a different iSCSI Target portal for redundancy, but here we are instead creating another session on the same Target portal. After selecting OK, the Sessions window shows the unsupported second session to the iSCSI Target portal on 0:4:1. As a result, you will encounter continual session establishment and teardown on iSCSI Target portal 0:4:1 and lose access to LUNs through that port. Access through the session established with port 1:4:2 is unaffected.

Linux

In the Linux example below, we start with four sessions (313, 314, 315, and 316) from the same iSCSI Initiator portal to four iSCSI Target portals. These sessions are established on the following Array Node:Slot:Port locations: 0:4:1, 0:4:2, 1:4:1, and 1:4:2.
As noted in the previous example, these multiple sessions are typically established for redundancy and used by the operating system's multi-pathing software:

iscsiadm -m session
tcp: [313] 10.43.69.41:3260,41 iqn.2023-24.com.hpe:4uw0004369 (non-flash)
tcp: [314] 10.43.69.42:3260,42 iqn.2023-24.com.hpe:4uw0004369 (non-flash)
tcp: [315] 10.43.69.141:3260,141 iqn.2023-24.com.hpe:4uw0004369 (non-flash)
tcp: [316] 10.43.69.142:3260,142 iqn.2023-24.com.hpe:4uw0004369 (non-flash)

To create a second (unsupported) session from the iSCSI Initiator portal to the iSCSI Target portal on port 0:4:1, the following command can be used:

iscsiadm -m session -r 313 --op=new
Logging in to [iface: eno5np0, target: iqn.2023-24.com.hpe:4uw0004369, portal: 10.43.69.41,3260] (multiple)
Login to [iface: eno5np0, target: iqn.2023-24.com.hpe:4uw0004369, portal: 10.43.69.41,3260] successful.

After executing the above command, a second session (317) exists from the same iSCSI Initiator portal to the iSCSI Target portal on port 0:4:1:

iscsiadm -m session
tcp: [313] 10.43.69.41:3260,41 iqn.2023-24.com.hpe:4uw0004369 (non-flash)
tcp: [314] 10.43.69.42:3260,42 iqn.2023-24.com.hpe:4uw0004369 (non-flash)
tcp: [315] 10.43.69.141:3260,141 iqn.2023-24.com.hpe:4uw0004369 (non-flash)
tcp: [316] 10.43.69.142:3260,142 iqn.2023-24.com.hpe:4uw0004369 (non-flash)
tcp: [317] 10.43.69.41:3260,41 iqn.2023-24.com.hpe:4uw0004369 (non-flash)

As a result, you will encounter continual session establishment and teardown on iSCSI Target portal 0:4:1 and lose access to LUNs through that port. Access through the sessions established with ports 0:4:2, 1:4:1, and 1:4:2 is unaffected.

NOTE: While Linux allows multiple sessions to be created from an iSCSI Initiator portal to an iSCSI Target portal, doing so does not provide additional redundancy or other benefits.

VMware ESXi

VMware ESXi is also susceptible to this issue if more than one session is created between an iSCSI Initiator portal and an iSCSI Target portal.
An example of how an unsupported session configuration could be created is provided below. As shown in the list command output below, a single session is currently established on the target portal associated with Array port 0:3:1.

esxcli iscsi session list
vmhba66,iqn.2023-24.com.hpe:4uw0004469,00023d000001
   Adapter: vmhba66
   Target: iqn.2023-24.com.hpe:4uw0004469
   ISID: 00023d000001
   TargetPortalGroupTag: 31
   AuthenticationMethod: none
   DataPduInOrder: true
   DataSequenceInOrder: true
   DefaultTime2Retain: 0
   DefaultTime2Wait: 2
   ErrorRecoveryLevel: 0
   FirstBurstLength: Irrelevant
   ImmediateData: false
   InitialR2T: true
   MaxBurstLength: 262144
   MaxConnections: 1
   MaxOutstandingR2T: 1
   TSIH: 65

Using the below esxcli command, a second session is created between the same iSCSI Initiator portal and iSCSI Target portal:

esxcli iscsi session add -A vmhba66 -s 00023d000001 -n iqn.2023-24.com.hpe:4uw0004469

Running the list command again shows the two sessions created between the same iSCSI Initiator portal and the iSCSI Target portal:

esxcli iscsi session list
vmhba66,iqn.2023-24.com.hpe:4uw0004469,00023d000001
   Adapter: vmhba66
   Target: iqn.2023-24.com.hpe:4uw0004469
   ISID: 00023d000001
   TargetPortalGroupTag: 31
   AuthenticationMethod: none
   DataPduInOrder: true
   DataSequenceInOrder: true
   DefaultTime2Retain: 0
   DefaultTime2Wait: 0
   ErrorRecoveryLevel: 0
   FirstBurstLength: Irrelevant
   ImmediateData: false
   InitialR2T: true
   MaxBurstLength: 262144
   MaxConnections: 1
   MaxOutstandingR2T: 1
   TSIH: 67
vmhba66,iqn.2023-24.com.hpe:4uw0004469,00023d010001
   Adapter: vmhba66
   Target: iqn.2023-24.com.hpe:4uw0004469
   ISID: 00023d010001
   TargetPortalGroupTag: 31
   AuthenticationMethod: none
   DataPduInOrder: true
   DataSequenceInOrder: true
   DefaultTime2Retain: 0
   DefaultTime2Wait: 2
   ErrorRecoveryLevel: 0
   FirstBurstLength: 262144
   ImmediateData: true
   InitialR2T: false
   MaxBurstLength: 262144
   MaxConnections: 1
   MaxOutstandingR2T: 1
   TSIH: 66

As a result, you will encounter continual session establishment and teardown on iSCSI Target portal 0:3:1 and lose access to LUNs through that port.

Events that trigger the issue:
- Upgrading to HPE GreenLake for Block Storage MP OS version 10.4.2 with more than one session between an iSCSI Initiator portal and an iSCSI Target portal
- Upgrading to HPE GreenLake for Block Storage MP OS version 10.4.2, then adding a second session between an iSCSI Initiator portal and an iSCSI Target portal

Additional information

To help identify this issue, the HPE GreenLake for Block Storage MP OS IOSTACK log will show a repeated pattern of continuous logins and connection exits. An example of this repeated pattern is provided below:

2024-11-04 10:46:25.38 GMT {8398149} {iscsi_poll_group_0 } iscsi.c:2343 - iscsi_pdu_payload_op_login: info: executing NVF_SHIM_ISCSI_CONN_IS_LOGIN_ALLOWED for conn_id=142
2024-11-04 10:46:25.38 GMT {8398149} {iscsi_poll_group_0 } iscsi.c:4993 - iscsi_pdu_hdr_handle: info: executing NVF_SHIM_ISCSI_CONN_LOGIN conn_id=143
2024-11-04 10:46:25.38 GMT {8398149} {iscsi_poll_group_0 } iscsi.c:2329 - iscsi_pdu_payload_op_login: info: executing NVF_SHIM_ISCSI_CONN_WAIT_FOR_DUP_EXIT for conn_id=143
2024-11-04 10:46:25.38 GMT {8398150} {iscsi_poll_group_1 } conn.c:1907 - iscsi_conn_full_feature_migrate: info: executing NVF_SHIM_ISCSI_CONN_LOGIN_DONE for conn_id=142
2024-11-04 10:46:25.38 GMT {8398150} {iscsi_poll_group_1 } conn.c:1910 - iscsi_conn_full_feature_migrate: conn_id=142 NVF_SHIM_ISCSI_CONN_LOGIN_DONE was successful, rc=0
2024-11-04 10:46:25.38 GMT {8398150} {iscsi_poll_group_1 } conn.c:1914 - iscsi_conn_full_feature_migrate: Adding Conn I/O Stall Poller conn 0x7fd0ceb48400 conn_id 142 conn_state 1 pdu_recv_state 1 thread=iscsi_poll_group_1 channel thread=iscsi_poll_group_1
2024-11-04 10:46:25.38 GMT {8398150} {iscsi_poll_group_1 } iscsi_subsystem.c:981 - iscsi_poll_group_poll: info: executing NVF_SHIM_ISCSI_CONN_EXITING for host=192.168.21.30/3260 iqn=iqn.1991-05.com.microsoft:Ini001.abc.group conn_id=141
2024-11-04 10:46:25.38 GMT {8398150} {iscsi_poll_group_1 } iscsi_subsystem.c:985 - iscsi_poll_group_poll: info: executing NVF_SHIM_ISCSI_CONN_EXITED for host=192.168.21.30/3260 iqn=iqn.1991-05.com.microsoft:Ini001.abc.group conn_id=141
2024-11-04 10:46:25.38 GMT {8398150} {iscsi_poll_group_1 } conn.c:826 - iscsi_conn_destruct: conn_id=141 host=192.168.21.30/3260 state=2 pending_task_cnt=0 conn->sess->connections=1 pending=0
2024-11-04 10:46:25.38 GMT {8398150} {iscsi_poll_group_1 } conn.c:881 - iscsi_conn_destruct: conn_id=141 calling _iscsi_conn_destruct()
2024-11-04 10:46:25.38 GMT {8398150} {iscsi_poll_group_1 } conn.c:755 - _iscsi_conn_destruct: conn_id=141 entering
2024-11-04 10:46:25.38 GMT {8398150} {iscsi_poll_group_1 } conn.c:769 - _iscsi_conn_destruct: Deleting Conn I/O Stall Poller conn 0x7fd0ceb471b8 conn_id 141 conn_state 3 pdu_recv_state 1 thread=iscsi_poll_group_1 channel thread=iscsi_poll_group_1
2024-11-04 10:46:25.38 GMT {8398150} {iscsi_poll_group_1 } conn.c:793 - _iscsi_conn_destruct: conn_id=141 iscsi_conn_free_tasks() rc=0, stopping conn now
2024-11-04 10:46:25.38 GMT {8398150} {iscsi_poll_group_1 } conn.c:695 - iscsi_conn_stop: conn_id=141 sess type=1 full feature=1
2024-11-04 10:46:25.38 GMT {8398150} {iscsi_poll_group_1 } conn.c:707 - iscsi_conn_stop: conn_id=141 current num_active_conns=2
2024-11-04 10:46:25.38 GMT {8398150} {iscsi_poll_group_1 } conn.c:521 - iscsi_conn_close_luns: conn_id=141 closing all LUNs
2024-11-04 10:46:25.38 GMT {8398150} {iscsi_poll_group_1 } conn.c:500 - iscsi_conn_close_lun: conn_id=141 closing LUN bdev=14156-2 lun_id=0
2024-11-04 10:46:25.38 GMT {8398150} {iscsi_poll_group_1 } lun.c:762 - scsi_lun_free_io_channel: LUN bdev=14156-2 id=0 ref=2 thread=iscsi_poll_group_1 channel thread=iscsi_poll_group_1

For more information, refer to the Managing iSCSI Session on ESXi Host web page.

Disclaimer: One or more of the links above will take you outside the HPE website. HPE is not responsible for content outside of its domain.
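The repeated pattern above can also be spotted mechanically by counting the login and connection-exit events in the log. The sketch below is illustrative only (not an HPE-provided tool): it embeds two lines from the example log above and counts occurrences of the two event markers; against a real system you would run the same grep commands on the collected IOSTACK log file.

```shell
# Illustrative sample: two IOSTACK log lines taken from the example above.
log='2024-11-04 10:46:25.38 GMT {8398149} {iscsi_poll_group_0 } iscsi.c:4993 - iscsi_pdu_hdr_handle: info: executing NVF_SHIM_ISCSI_CONN_LOGIN conn_id=143
2024-11-04 10:46:25.38 GMT {8398150} {iscsi_poll_group_1 } iscsi_subsystem.c:981 - iscsi_poll_group_poll: info: executing NVF_SHIM_ISCSI_CONN_EXITING for host=192.168.21.30/3260 iqn=iqn.1991-05.com.microsoft:Ini001.abc.group conn_id=141'

# The trailing space after CONN_LOGIN avoids also matching
# NVF_SHIM_ISCSI_CONN_LOGIN_DONE and NVF_SHIM_ISCSI_CONN_IS_LOGIN_ALLOWED.
logins=$(printf '%s\n' "$log" | grep -c 'NVF_SHIM_ISCSI_CONN_LOGIN ')
exits=$(printf '%s\n' "$log" | grep -c 'NVF_SHIM_ISCSI_CONN_EXITING')
echo "logins=$logins exits=$exits"
```

Steadily growing, comparable counts of both events for the same host address are the signature of the continual session establishment and teardown described in this advisory.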
HPE GreenLake products affected:
- HPE GreenLake for Private Cloud Business Edition
- HPE GreenLake for Block Storage MP
- HPE Alletra Storage MP

Operating systems affected:
- HPE GreenLake for Block Storage MP OS 10.4.2
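On a Linux host, the single-session requirement can be checked before upgrading by scanning the iscsiadm session listing for duplicate (portal, target) pairs. The following sketch is illustrative, not an HPE-provided tool: it embeds the five-session listing from the Linux example above; in practice you would pipe the live output of `iscsiadm -m session` into the same awk filter.

```shell
# Illustrative sample: the session listing from the Linux example above.
sessions='tcp: [313] 10.43.69.41:3260,41 iqn.2023-24.com.hpe:4uw0004369 (non-flash)
tcp: [314] 10.43.69.42:3260,42 iqn.2023-24.com.hpe:4uw0004369 (non-flash)
tcp: [315] 10.43.69.141:3260,141 iqn.2023-24.com.hpe:4uw0004369 (non-flash)
tcp: [316] 10.43.69.142:3260,142 iqn.2023-24.com.hpe:4uw0004369 (non-flash)
tcp: [317] 10.43.69.41:3260,41 iqn.2023-24.com.hpe:4uw0004369 (non-flash)'

# Count sessions per (portal, target) pair; any count above 1 is the
# unsupported configuration described in this advisory.
printf '%s\n' "$sessions" |
  awk '{count[$3 " " $4]++}
       END {for (k in count) if (count[k] > 1)
              print "duplicate sessions to " k ": " count[k]}'
```

A duplicate session found this way (317 in the example) can then be logged out with `iscsiadm -m session -r 317 -u`, leaving one session per Initiator/Target portal pair.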
Before upgrading to HPE GreenLake for Block Storage MP OS version 10.4.2, ensure that there is only one session between each iSCSI Initiator portal and iSCSI Target portal. Remove sessions as needed. After upgrading, do not create more than one session between an iSCSI Initiator portal and an iSCSI Target portal.

The HPE GreenLake for Block Storage MP OS 10.4.5 release will handle the exception gracefully, and the HPE GreenLake for Block Storage MP OS 10.4.8 release will support multiple sessions from an iSCSI Initiator portal to the same iSCSI Target portal.

Currently, online upgrades to HPE GreenLake for Block Storage MP OS 10.4.2 are blocked on iSCSI systems. If you need to upgrade your storage system to this release and understand the limitations explained in this advisory, open a case with HPE Support.

Revision History

Version 2 - January 17, 2025 - Added the "Determining whether you are impacted by this issue" section in the Description
Version 1 - November 19, 2024 - Original Document Release
Operating Systems Affected: Debian GNU/Linux 8.0, Debian GNU/Linux 9.0, Microsoft Windows Server 2016, Microsoft Windows Server 2019, Microsoft Windows Server 2022, OS Independent, Red Hat Enterprise Linux 7 Server, Red Hat Enterprise Linux 8 Server, Red Hat Enterprise Linux 9, SUSE Linux Enterprise Server 11 (x86), SUSE Linux Enterprise Server 12, SUSE Linux Enterprise Server 15, Ubuntu 18.04, Ubuntu 22.04 LTS, Ubuntu 24.04 LTS, VMware ESXi 6.7, VMware ESXi 7.0, VMware ESXi 8.0
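For VMware ESXi hosts, the equivalent pre-upgrade check parses `esxcli iscsi session list` output and counts sessions per (Target, TargetPortalGroupTag) pair. Again, this is a hedged, illustrative sketch rather than an HPE or VMware tool; it embeds a trimmed version of the two-session example from this advisory, and in practice you would pipe the live esxcli output into the same awk filter.

```shell
# Illustrative sample: trimmed two-session output from the ESXi example
# in this advisory (only the fields the filter reads are kept).
esxi_list='vmhba66,iqn.2023-24.com.hpe:4uw0004469,00023d000001
   Target: iqn.2023-24.com.hpe:4uw0004469
   TargetPortalGroupTag: 31
vmhba66,iqn.2023-24.com.hpe:4uw0004469,00023d010001
   Target: iqn.2023-24.com.hpe:4uw0004469
   TargetPortalGroupTag: 31'

# Remember the most recent Target line, then count sessions per
# (Target, TPGT) pair; any count above 1 is unsupported.
printf '%s\n' "$esxi_list" |
  awk '/^ *Target:/ {target=$2}
       /^ *TargetPortalGroupTag:/ {count[target " tpgt=" $2]++}
       END {for (k in count) if (count[k] > 1)
              print "duplicate sessions to " k ": " count[k]}'
```

A duplicate session flagged this way can then be removed with the esxcli session removal command (for example, `esxcli iscsi session remove -A vmhba66 -s 00023d010001 -n iqn.2023-24.com.hpe:4uw0004469`), leaving one session per Initiator/Target portal pair.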