Checkpoint: Gaia on ESX – “No Hard Drives Have Been Found” Error

 

This article describes how to fix the error “No Hard Drives Have Been Found” when installing Checkpoint Gaia on ESX.

This is a simple one. I used to work around it by using an IDE drive instead of SCSI, but actually it's just down to the SCSI controller type: it happens when you choose a Red Hat guest OS, which in turn sets the SCSI controller type to VMware Paravirtual.

 

All you need to do is go into the VM Settings, select the SCSI controller, and change the controller type from “VMware Paravirtual” to “LSI Logic Parallel” or “LSI Logic SAS”.
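If you’d rather make the change outside the vSphere client, the controller type is also just a line in the VM’s .vmx file (edit it with the VM powered off). A minimal sketch from memory, so check the key name against your own .vmx:

scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"

Here “lsilogic” is LSI Logic Parallel and “lsisas1068” is LSI Logic SAS; “pvscsi” is the Paravirtual setting that causes the error in the first place.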

Job done! Screenshots available in SecureKnowledge here.

 

Checkpoint: Useful Safe@ / Edge UTM-1 Links & Information

Some handy bits and pieces for troubleshooting Edge devices. I’ll update this as and when other stuff crops up.

Checkpoint Safe@ / Edge UTM-1 Interface Demo

This is quite handy when you haven’t got one to hand. Now if someone would mock one of these up for IPSO / Gaia, we’d be rocking.

Checkpoint Edge Interface Demo

LibSW File Links

If you push policies to your Edge device centrally, you need the correct libsw files installed on your SmartCenter in order to compile the policy. The libsw version correlates directly with the firmware version running on the Edge, so a firmware upgrade means a libsw upgrade. All the links to the Checkpoint libsw files are collected in one handy list below.

Checkpoint Libsw Files – Firmware Versions

Checkpoint: Troubleshooting VRRP on Nokia Firewalls


The purpose of this article is to help with troubleshooting VRRP-related issues on Nokia Checkpoint firewalls. One of the most common problems in Nokia VRRP implementations is that interfaces on the active and standby firewalls end up in a master/master state. The main reason for this is that the individual VRIDs on the master and backup firewall cannot see each other’s VRRP multicast advertisements.

 

The first step is to check the VRRP state of the interfaces. This is how you do it:

 

PrimaryFW-A[admin]# iclid
PrimaryFW-A> sh vrrp

 

VRRP State
Flags: On
6 interface enabled
6 virtual routers configured
0 in Init state
0 in Backup state
6 in Master state
PrimaryFW-A>
PrimaryFW-A> exit

 

Bye.
PrimaryFW-A[admin]#

 

SecondaryFW-B[admin]# iclid
SecondaryFW-B> sh vrrp

 

VRRP State
Flags: On
6 interface enabled
6 virtual routers configured
0 in Init state
4 in Backup state
2 in Master state
SecondaryFW-B>
SecondaryFW-B> exit

 

Bye.
SecondaryFW-B[admin]#

 

In the example above, all six interfaces on the primary firewall are in the Master state as expected, but two interfaces on the secondary firewall are also in the Master state, so two VRIDs are in a master/master condition.

 

The next step is to run tcpdump to see whether the VRRP multicasts are reaching the interface in question.

 

As a first troubleshooting measure, run tcpdump on the problematic interface of both the master and the backup firewall. If you’re not sure which interface is the problematic one, “echo sh vrrp int | iclid” will give you the answer: it is the interface on the backup firewall that is sitting in the Master state.
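For example, on the secondary firewall you can narrow that down to just the interface names and their states with standard shell tools. This is only a sketch: the abbreviated command and grep are assumed to be available on your IPSO build, and the exact output layout varies.

SecondaryFW-B[admin]# echo "sh vrrp int" | iclid | grep -iE "interface|state"

Whichever interface the backup firewall reports as Master is the one to capture on.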

 

PrimaryFW-A[admin]# tcpdump -i eth-s4p2c0 proto vrrp
tcpdump: listening on eth-s4p2c0
00:46:11.379961 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
00:46:12.399982 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
00:46:13.479985 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
00:46:14.560007 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]

 

In the tcpdump on the primary firewall you can see the VRRP multicast advertisements leaving the interface (the O flag marks outbound packets).

 

Next, run the same tcpdump on the secondary firewall.

 

SecondaryFW-B[admin]# tcpdump -i eth-s4p2c0 proto vrrp
tcpdump: listening on eth-s4p2c0
00:19:38.507294 O 192.168.1.2 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 95 [tos 0xc0]
00:19:39.527316 O 192.168.1.2 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 95 [tos 0xc0]
00:19:40.607328 O 192.168.1.2 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 95 [tos 0xc0]
00:19:41.687351 O 192.168.1.2 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 95 [tos 0xc0]
00:19:42.707364 O 192.168.1.2 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 95 [tos 0xc0]

 

Now you can see that both the primary and the secondary firewall are sending VRRP multicasts out of the interface, but neither capture shows any inbound (I) packets, so the advertisements are not reaching the other firewall’s interface. There is a communication breakdown, which is most likely a network issue (for example, a switch not passing the multicast traffic).
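A quick way to confirm the breakdown is to filter each capture for inbound advertisements only. IPSO’s tcpdump flags every packet as I (inbound) or O (outbound), so a sketch like the one below should print nothing at all on the broken segment (the -l flag line-buffers tcpdump’s output so grep can work on it live):

SecondaryFW-B[admin]# tcpdump -l -i eth-s4p2c0 proto vrrp | grep " I "

On a healthy segment the same command shows the peer’s advertisements arriving.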

 

Once the network issue is resolved, the two firewalls will see each other’s advertisements again and the interface with the lower priority will drop back into the Backup state.

 

Now let’s look at another scenario where the firewall interfaces end up in a master/master state.

 

Again, run tcpdump on both of the interfaces in question:

 

PrimaryFW-A[admin]# tcpdump -i eth-s4p2c0 proto vrrp
tcpdump: listening on eth-s4p2c0
00:46:11.206994 I 10.10.10.1 > 224.0.0.18: VRRPv2-adver 20: vrid 103 pri 95 [tos 0xc0]
00:46:11.379961 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
00:46:12.286990 I 10.10.10.1 > 224.0.0.18: VRRPv2-adver 20: vrid 103 pri 95 [tos 0xc0]
00:46:12.399982 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
00:46:13.307014 I 10.10.10.1 > 224.0.0.18: VRRPv2-adver 20: vrid 103 pri 95 [tos 0xc0]
00:46:13.479985 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
00:46:14.387098 I 10.10.10.1 > 224.0.0.18: VRRPv2-adver 20: vrid 103 pri 95 [tos 0xc0]
00:46:14.560007 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
00:46:15.467064 I 10.10.10.1 > 224.0.0.18: VRRPv2-adver 20: vrid 103 pri 95 [tos 0xc0]
00:46:15.580010 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]

 

SecondaryFW-B[admin]# tcpdump -i eth-s4p2c0 proto vrrp
tcpdump: listening on eth-s4p2c0
00:19:38.507294 O 192.168.1.2 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 95 [tos 0xc0]
00:19:38.630075 I 10.10.10.2 > 224.0.0.18: VRRPv2-adver 20: vrid 103 pri 100 [tos 0xc0]
00:19:39.527316 O 192.168.1.2 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 95 [tos 0xc0]
00:19:39.710131 I 10.10.10.2 > 224.0.0.18: VRRPv2-adver 20: vrid 103 pri 100 [tos 0xc0]
00:19:40.607328 O 192.168.1.2 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 95 [tos 0xc0]
00:19:40.790142 I 10.10.10.2 > 224.0.0.18: VRRPv2-adver 20: vrid 103 pri 100 [tos 0xc0]
00:19:41.687351 O 192.168.1.2 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 95 [tos 0xc0]
00:19:41.810150 I 10.10.10.2 > 224.0.0.18: VRRPv2-adver 20: vrid 103 pri 100 [tos 0xc0]

 

In the example above, look at the VRID numbers of the incoming and outgoing packets: on each firewall the inbound and outbound VRIDs do not match (the interface sends VRID 102 but hears VRID 103). This is an indication that the cabling is wrong. The cables for VRID 102 and VRID 103 are connected to the wrong ports and need to be swapped to fix the issue.
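On a busy interface it can be easier to summarise what is being sent against what is being heard. Given the output format above (direction in field 2, the VRID in fields 8 and 9), a rough one-liner along these lines, assuming awk is available in the IPSO shell, does the job:

PrimaryFW-A[admin]# tcpdump -l -c 20 -i eth-s4p2c0 proto vrrp | awk '{print $2, $8, $9}' | sort | uniq -c
     10 I vrid 103
     10 O vrid 102

On a correctly cabled interface every line reports the same VRID; one VRID outbound and a different one inbound points straight at a cabling mix-up.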

 

Swap the cables and the issue is resolved: each interface now hears the advertisements for its own VRID, and only the firewall with the higher priority stays in the Master state.

 

A properly functioning firewall pair looks like this:

 

PrimaryFW-A[admin]# iclid
PrimaryFW-A> sh vrrp

 

VRRP State
Flags: On
6 interface enabled
6 virtual routers configured
0 in Init state
0 in Backup state
6 in Master state
PrimaryFW-A> exit

 

Bye.
PrimaryFW-A[admin]#

 

SecondaryFW-B[admin]# iclid
SecondaryFW-B> sh vrrp

 

VRRP State
Flags: On
6 interface enabled
6 virtual routers configured
0 in Init state
6 in Backup state
0 in Master state
SecondaryFW-B> exit

 

Bye.
SecondaryFW-B[admin]#

 

If you were to tcpdump the healthy interface, this is how it would look:

 

PrimaryFW-A[admin]# tcpdump -i eth-s4p2c0 proto vrrp
tcpdump: listening on eth-s4p2c0
18:25:44.015711 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
18:25:45.095726 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
18:25:46.175751 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
18:25:47.195770 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
18:25:48.275819 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
18:25:49.355812 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
^C
97 packets received by filter
0 packets dropped by kernel
PrimaryFW-A[admin]#

 

SecondaryFW-B[admin]# tcpdump -i eth-s4p2c0 proto vrrp
tcpdump: listening on eth-s4p2c0
18:26:07.415446 I 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
18:26:08.495451 I 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
18:26:09.515480 I 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
18:26:10.595486 I 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
18:26:11.675485 I 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
18:26:12.695522 I 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
18:26:13.775590 I 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
^C
14 packets received by filter
0 packets dropped by kernel
SecondaryFW-B[admin]#

Credit to Secmanager

 
