Failure to deploy and register devices to FMC, with the health alert "Failed to deploy due to communication error" on the FMC GUI
The following will be seen in /var/log/messages on the FMC and in the netstat command output, respectively.

SFTunnel failing to fully establish:

May 11 19:20:29 firepower SF-IMS[20825]: [25061] sftunneld:stream_file [INFO] Creating task on SRC for incoming task:: File copy 0 % completed, 0 bytes of file copied out of 0
May 11 19:20:29 firepower SF-IMS[20825]: [25061] sftunneld:stream_file [INFO] Adding SRC Task on Request, key: 0:6
May 11 19:20:29 firepower SF-IMS[20825]: [25061] sftunneld:stream_file [INFO] ADDED INIT confirmation to be SRC: curr_read=0, curr_write=0, total_bytes=0, stream_id_src=0, stream_id_dest=6, seq_id_src=0, seq_id_dest=0, state =Started, started:2023 05 11 19:20:29 UTC, expires:2023 05 11 19:30:29 UTC
May 11 19:20:29 firepower SF-IMS[20825]: [25061] sftunneld:stream_file [INFO] ADDED INIT confirmation to be SRC:: File copy 0 % completed, 0 bytes of file copied out of 0
May 11 19:20:29 firepower SF-IMS[20825]: [25061] sftunneld:stream_file [INFO] FILE /var/sf/clamupd_download/hifistatic.cvd
May 11 19:20:29 firepower SF-IMS[20825]: [25061] sftunneld:MessageSocket [WARN] SSL INTERNAL Read Error - check CA certificates: Resource temporarily unavailable
May 11 19:20:29 firepower SF-IMS[20825]: [25061] sftunneld:sf_connections [INFO] Unable to receive message from peer FTDv-Repro:General read error
May 11 19:20:29 firepower SF-IMS[20825]: [25061] sftunneld:sf_channel [INFO] >> ChannelState dropChannel peer 10.122.149.216 / channelA / CONTROL [ msgSock & ssl_context ] <<
May 11 19:20:29 firepower SF-IMS[20825]: [25061] sftunneld:sf_connections [INFO] Exiting channel (recv). Peer FTDv-Repro closed connection on interface eth0.
May 11 19:20:29 firepower SF-IMS[20825]: [25061] sftunneld:sf_connections [INFO] Failed to send in control channel for peer FTDv-Repro (eth0)
May 11 19:20:29 firepower SF-IMS[20825]: [25061] sftunneld:sf_channel [INFO] >> ChannelState dropChannel peer 10.122.149.216 / channelA / DROPPED [ msgSock & ssl_context ] <<
May 11 19:20:29 firepower SF-IMS[20825]: [25061] sftunneld:sf_channel [INFO] >> ChannelState freeChannel peer 10.122.149.216 / channelA / DROPPED [ msgSock & ssl_context ] <<
May 11 19:20:29 firepower SF-IMS[20825]: [25061] sftunneld:sf_connections [INFO] ChannelState Peer FTDv-Repro TOP OF THE LOOP CHANNEL COUNT 0
May 11 19:20:29 firepower SF-IMS[20825]: [25061] sftunneld:sf_connections [INFO] >>>>>>>>>>>>>>>>>>>>>>>
May 11 19:20:29 firepower SF-IMS[20825]: [25061] sftunneld:stream_file [INFO] Stream CTX destroyed for 10.122.149.216
May 11 19:20:29 firepower SF-IMS[20825]: [25061] sftunneld:sf_channel [INFO] >> ChannelState ShutDownPeer peer 10.122.149.216 / channelA / NONE [ msgSock & ssl_context ] <<
May 11 19:20:29 firepower SF-IMS[20825]: [25061] sftunneld:sf_channel [INFO] >> ChannelState ShutDownPeer peer 10.122.149.216 / channelB / DROPPED [ msgSock & ssl_context ] <<
May 11 19:20:29 firepower SF-IMS[20825]: [25061] sftunneld:sf_connections [INFO] Peer FTDv-Repro needs re-connect

root@firepower:/var/log# netstat -nap | grep 8305
tcp 0 0 10.122.149.220:8305 0.0.0.0:*             LISTEN      20825/sftunnel
tcp 0 0 172.19.0.1:8305     0.0.0.0:*             LISTEN      20825/sftunnel
tcp 0 0 10.122.149.220:8305 10.122.149.216:34165  ESTABLISHED 20825/sftunnel
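To confirm the condition from the FMC Linux shell (expert mode), the same errors and the TCP/8305 management connections can be checked with standard tools; the grep filter below is only an illustrative pattern matching the messages shown above, not an official command:

grep -i sftunnel /var/log/messages | grep -iE "SSL INTERNAL Read Error|needs re-connect"
netstat -nap | grep 8305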
Correct the misconfiguration in the running_conf.conf file on the FMC, located in the /etc/sf directory: make sure only connected interfaces are present and that the remaining interfaces have both the "control" and "events" flags set, i.e.:

services control events
A specific example: interfaces eth0 and eth1 are up, have IPv4 addresses, and are connected to networks. The customer configures an events/data split from the UI, setting eth0 to "control" and eth1 to "events". If eth1 is then disconnected, this creates an invalid configuration that needs to be handled (ignore the customer-configured split); a verification sketch follows.
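As a minimal verification sketch, assuming the eth0/eth1 split from the example above (the exact stanza layout of the configuration file can vary by version, so the grep only locates the interface and flag lines for manual review; restarting sftunnel with pmtool is a commonly used step that briefly interrupts FMC-to-device management communication):

# Confirm which interfaces are actually up and have IPv4 addresses
ip -4 addr show eth0
ip -4 addr show eth1

# Locate the interface/flag entries under /etc/sf for manual correction
grep -rnE "eth[01]|control|events" /etc/sf/

# After correcting the flags, restart sftunnel so the change takes effect
pmtool restartbyid sftunnel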