...
This issue was introduced in 7.0 U2. It can be seen on both vSAN and non-vSAN setups. The following backtrace is typically seen on a vSAN setup:

xxxx-xx-xxxxxxxx:16.407Z cpu27:2341534)@BlueScreen: PANIC bora/vmkernel/main/dlmalloc.c:4933 - Usage error in dlmalloc
xxxx-xx-xxxxxxxx:16.419Z cpu27:2341534)Code start: 0x420009000000 VMK uptime: 74:08:57:04.897
xxxx-xx-xxxxxxxx:16.439Z cpu27:2341534)0x45396631bbe0:[0x4200090fa5b0]PanicvPanicInt@vmkernel#nover+0x2cc stack: 0x4200090fa5b0
xxxx-xx-xxxxxxxx:16.459Z cpu27:2341534)0x45396631bc90:[0x4200090faa9c]Panic_NoSave@vmkernel#nover+0x4d stack: 0x45396631bcf0
xxxx-xx-xxxxxxxx:16.477Z cpu27:2341534)0x45396631bcf0:[0x4200091414ac]DLM_free@vmkernel#nover+0x22d stack: 0x43163f7b1c60
xxxx-xx-xxxxxxxx:16.493Z cpu27:2341534)0x45396631bd10:[0x42000913e801]Heap_Free@vmkernel#nover+0xba stack: 0x1
xxxx-xx-xxxxxxxx:16.514Z cpu27:2341534)0x45396631bd60:[0x42000903c6f1]FS_IOAccessDone@vmkernel#nover+0x132 stack: 0x45d981558fc0
xxxx-xx-xxxxxxxx:16.535Z cpu27:2341534)0x45396631bda0:[0x4200090bc450]AsyncPopCallbackFrameInt@vmkernel#nover+0x55 stack: 0x80000000
xxxx-xx-xxxxxxxx:16.557Z cpu27:2341534)0x45396631bdd0:[0x4200099b80da]vsanapi_AsyncTokenCallback@com.vmware.vsanapi#0.0.0.2+0x1f stack: 0x0
xxxx-xx-xxxxxxxx:16.581Z cpu27:2341534)0x45396631bdf0:[0x42000ac03d57]DOMClientSendResponse@com.vmware.vsan#0.0.0.1+0x9c4 stack: 0x45d9ae6866f0
xxxx-xx-xxxxxxxx:16.607Z cpu27:2341534)0x45396631be80:[0x42000ac6d8d0]DOMOperationSendResponseToBackend@com.vmware.vsan#0.0.0.1+0x19 stack: 0x45396631bf18
xxxx-xx-xxxxxxxx:16.628Z cpu27:2341534)0x45396631be90:[0x42000ac6f41d]DOMOperationDispatch@com.vmware.vsan#0.0.0.1+0x4a stack: 0x0
xxxx-xx-xxxxxxxx:16.652Z cpu27:2341534)0x45396631bee0:[0x42000a799bfb]VSANServerMainLoop@com.vmware.vsanutil#0.0.0.1+0x570 stack: 0x432151e93ad8
xxxx-xx-xxxxxxxx:16.671Z cpu27:2341534)0x45396631bf90:[0x420009119158]vmkWorldFunc@vmkernel#nover+0x49 stack: 0x420009119154
xxxx-xx-xxxxxxxx:16.690Z cpu27:2341534)0x45396631bfe0:[0x420009381e69]CpuSched_StartWorld@vmkernel#nover+0x86 stack: 0x0
xxxx-xx-xxxxxxxx:16.708Z cpu27:2341534)0x45396631c000:[0x4200090c2c23]Debug_IsInitialized@vmkernel#nover+0xc stack: 0x0
This article provides information on how to resolve this PSOD issue.
In rare cases, an asynchronous read I/O containing a SCATTER_GATHER_ELEMENT array of more than 16 members, where at least one member falls in the last partial block of a file, might corrupt the VMFS memory heap. This in turn causes the ESXi host to fail with a purple diagnostic screen.
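The corruption pattern can be illustrated with a small, self-contained C sketch. This is not VMware source code: all names (sg_elem, fill_sg_read, BLOCK_SIZE) are hypothetical, and the sketch only models the bug class described above, namely a scatter-gather element whose described length is a full block while its destination buffer covers only the partial tail of the file. The clamp marked in the comments is what prevents the overflow that a checking allocator such as dlmalloc would later flag on free, as in the DLM_free panic in the backtrace.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK_SIZE 4096
#define NUM_ELEMS  17                 /* more than 16 elements, as in the bug */

struct sg_elem {
    void  *addr;                      /* destination buffer for this element */
    size_t len;                       /* described length of the element */
};

/* Copy file data into each SG element's buffer. */
static void fill_sg_read(struct sg_elem *sg, int n,
                         const char *file_data, size_t file_len)
{
    size_t off = 0;
    for (int i = 0; i < n && off < file_len; i++) {
        size_t remaining = file_len - off;
        /* The clamp below is the safe behavior. Copying sg[i].len
         * unconditionally would write a full block into the smaller
         * buffer allocated for the partial last block of the file,
         * overflowing the heap chunk. */
        size_t copy = sg[i].len < remaining ? sg[i].len : remaining;
        memcpy(sg[i].addr, file_data + off, copy);
        off += copy;
    }
}

int main(void)
{
    /* A file that ends in a partial block: 16 full blocks + 100 bytes. */
    size_t file_len = 16 * BLOCK_SIZE + 100;
    char *file_data = malloc(file_len);
    memset(file_data, 'A', file_len);

    /* 17 SG elements, each described as a full block, but the last
     * buffer is only large enough for the 100-byte file tail. */
    struct sg_elem sg[NUM_ELEMS];
    for (int i = 0; i < NUM_ELEMS; i++) {
        size_t off  = (size_t)i * BLOCK_SIZE;
        size_t need = file_len - off < BLOCK_SIZE ? file_len - off : BLOCK_SIZE;
        sg[i].len  = BLOCK_SIZE;
        sg[i].addr = malloc(need);
    }

    fill_sg_read(sg, NUM_ELEMS, file_data, file_len);

    /* An overflowed chunk would be detected here by a checking allocator,
     * analogous to the DLM_free/Heap_Free panic in the backtrace. */
    for (int i = 0; i < NUM_ELEMS; i++)
        free(sg[i].addr);
    free(file_data);
    puts("read completed without heap corruption");
    return 0;
}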
Fixed in ESXi 7.0 Update 3c and later. See the release notes: https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html
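To confirm whether a given host already carries the fix, the host's build number can be compared against the 7.0 U3c build. Below is a minimal C sketch, assuming that 7.0 U3c corresponds to build 19193900 (verify this against the release notes linked above) and that `vmware -vl` on the host prints a banner like the one embedded in the example.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define FIXED_BUILD 19193900UL   /* assumed build number of 7.0 U3c; verify */

int main(void)
{
    /* Example banner as captured from `vmware -vl` on the host. */
    const char *banner = "VMware ESXi 7.0.3 build-19193900";

    const char *p = strstr(banner, "build-");
    if (p == NULL) {
        fprintf(stderr, "could not find build number\n");
        return 1;
    }
    unsigned long build = strtoul(p + strlen("build-"), NULL, 10);
    printf("host build %lu: %s\n", build,
           build >= FIXED_BUILD ? "contains the fix (7.0 U3c or later)"
                                : "upgrade to 7.0 U3c or later");
    return 0;
}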
No workaround is available.