The conntrack-tools user manual

Pablo Neira Ayuso


      
     

This document details how to install and configure the conntrack-tools >= 1.4.0. It will evolve in the future to cover new features and changes.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License".


Table of Contents

1. Introduction
2. What are the conntrack-tools?
3. Requirements
4. Installation
5. Using conntrack: the command line interface
6. Setting up conntrackd: the daemon
State table synchronization
Requirements
Configuring the daemon
Active-Backup setup
Active-Active setup
Launching conntrackd
Other configuration options
User-space helpers
Troubleshooting
7. System integration

Chapter 1. Introduction

This document should be a kick-off point to install and configure the conntrack-tools. If you find any error or imprecision in this document, please send an email to the author; it will be appreciated.

In this document, the author assumes that the reader is familiar with firewalling concepts and iptables in general. If that is not your case, I suggest you read the iptables documentation before going ahead. Moreover, the reader must also understand the difference between stateful and stateless firewalls. If you do not, I strongly suggest you read the article Netfilter's Connection Tracking System, published in :login;, the USENIX magazine. That document contains a general description that should help to clarify the concepts.

If you do not fulfill the previous requirements, this documentation is likely to be a source of frustration. You may wonder why I insist on these prerequisites so much: the fact is that if your iptables rule-set is stateless, it is very likely that the conntrack-tools will not be of any help to you. You have been warned!

Chapter 2. What are the conntrack-tools?

The conntrack-tools are a set of free software tools for GNU/Linux that allow system administrators to interact, from user-space, with the in-kernel Connection Tracking System, which is the module that enables stateful packet inspection for iptables. You may not have heard of this module so far. However, if any of the rules in your rule-set use the state or ctstate iptables matches, you are indeed using it.

The conntrack-tools package contains two programs:

  • conntrack is the command line interface. It provides a more flexible interface to the connection tracking system than /proc/net/ip_conntrack. With conntrack, you can show, delete and update the existing state entries, and you can also listen to flow events.

  • conntrackd is the user-space connection tracking daemon. This daemon can be used to deploy fault-tolerant GNU/Linux firewalls but you can also use it to collect flow-based statistics of the firewall use.

Although the name of both tools is very similar - and you can blame me for that, I'm not a marketing guy - they are used for very different tasks.

Chapter 3. Requirements

You have to install the following software in order to get the conntrack-tools working. Make sure that you have installed them correctly before going ahead:

  • Linux kernel version >= 2.6.18 that, at least, has support for:

    • Connection Tracking System.

      • CONFIG_NF_CONNTRACK=m

      • CONFIG_NF_CONNTRACK_IPV4=m

      • CONFIG_NF_CONNTRACK_IPV6=m (if your setup supports IPv6)

    • nfnetlink: the generic messaging interface for Netfilter.

      • CONFIG_NETFILTER_NETLINK=m

    • nf_conntrack_netlink: the messaging interface for the Connection Tracking System.

      • CONFIG_NF_CT_NETLINK=m

    • connection tracking event notification API: the flow-based event notification interface.

      • CONFIG_NF_CONNTRACK_EVENTS=y

    [Note]Verifying kernel support

    Make sure you have loaded nf_conntrack, nf_conntrack_ipv4 (if your setup also supports IPv6, nf_conntrack_ipv6) and nf_conntrack_netlink.

  • libnfnetlink: the netfilter netlink library; use the official release available at netfilter.org.

  • libnetfilter_conntrack: the netfilter connection tracking library; use the official release available at netfilter.org.
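The module check suggested in the note above can be scripted. This is a hedged convenience helper, not part of the conntrack-tools package; note that on kernels with built-in (=y) conntrack support the modules will not show up in /proc/modules even though the feature works:

```shell
# Report whether the conntrack-related modules are currently loaded.
# Runs as an unprivileged user; it only reads /proc/modules.
have_module() {
    # -w avoids matching nf_conntrack inside nf_conntrack_ipv4
    if grep -qw "^$1" /proc/modules 2>/dev/null; then
        echo "$1: loaded"
    else
        echo "$1: not loaded"
    fi
}

for m in nf_conntrack nf_conntrack_ipv4 nf_conntrack_netlink; do
    have_module "$m"
done
```

If a module is reported as not loaded, load it with modprobe as root before going ahead.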

Chapter 4. Installation

To compile and install the conntrack-tools run the following commands:

	(non-root)$ tar xvjf conntrack-tools-x.x.x.tar.bz2
	(non-root)$ cd conntrack-tools-x.x.x
	(non-root)$ ./configure --prefix=/usr
	(non-root)$ make
	(root)    # make install
[Note]Fedora Users

If you are installing the libraries in /usr/local/, do not forget to do the following things:

  • PKG_CONFIG_PATH=/usr/local/lib/pkgconfig; export PKG_CONFIG_PATH

  • Add `/usr/local/lib' to your /etc/ld.so.conf file and run `ldconfig'

Check with `ldd' when troubleshooting shared library problems.

[Note]Verifying kernel support

To check that the modules are enabled in the kernel, run `conntrack -E' and generate traffic, you should see flow events reporting new connections and updates.

Chapter 5. Using conntrack: the command line interface

The /proc/net/ip_conntrack interface is very limited as it only allows you to display the existing flows, their state and other information:

 # cat /proc/net/ip_conntrack
 tcp      6 431982 ESTABLISHED src=192.168.2.100 dst=123.59.27.117 sport=34846 dport=993 packets=169 bytes=14322 src=123.59.27.117 dst=192.168.2.100 sport=993 dport=34846 packets=113 bytes=34787 [ASSURED] mark=0 secmark=0 use=1
 tcp      6 431698 ESTABLISHED src=192.168.2.100 dst=123.59.27.117 sport=34849 dport=993 packets=244 bytes=18723 src=123.59.27.117 dst=192.168.2.100 sport=993 dport=34849 packets=203 bytes=144731 [ASSURED] mark=0 secmark=0 use=1
 

The command line tool conntrack can be used to display the same information:

 # conntrack -L
 tcp      6 431982 ESTABLISHED src=192.168.2.100 dst=123.59.27.117 sport=34846 dport=993 packets=169 bytes=14322 src=123.59.27.117 dst=192.168.2.100 sport=993 dport=34846 packets=113 bytes=34787 [ASSURED] mark=0 secmark=0 use=1
 tcp      6 431698 ESTABLISHED src=192.168.2.100 dst=123.59.27.117 sport=34849 dport=993 packets=244 bytes=18723 src=123.59.27.117 dst=192.168.2.100 sport=993 dport=34849 packets=203 bytes=144731 [ASSURED] mark=0 secmark=0 use=1
conntrack v0.9.7 (conntrack-tools): 2 flow entries have been shown.
 

You can natively filter the output without using grep:

 # conntrack -L -p tcp --sport 34846
 tcp      6 431982 ESTABLISHED src=192.168.2.100 dst=123.59.27.117 sport=34846 dport=993 packets=169 bytes=14322 src=123.59.27.117 dst=192.168.2.100 sport=993 dport=34846 packets=113 bytes=34787 [ASSURED] mark=0 secmark=0 use=1
conntrack v0.9.7 (conntrack-tools): 1 flow entries have been shown.
 

Update the mark based on a selection. This allows you to change the mark of an entry without using the CONNMARK target:

 # conntrack -U -p tcp --sport 34846 --mark 1
 tcp      6 431982 ESTABLISHED src=192.168.2.100 dst=123.59.27.117 sport=34846 dport=993 packets=169 bytes=14322 src=123.59.27.117 dst=192.168.2.100 sport=993 dport=34846 packets=113 bytes=34787 [ASSURED] mark=1 secmark=0 use=1
conntrack v0.9.7 (conntrack-tools): 1 flow entries has been updated.
 

Delete one entry. This can be used to block traffic if:

  • You have a stateful rule-set that blocks traffic in INVALID state.

  • You have set /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_loose or /proc/sys/net/netfilter/nf_conntrack_tcp_loose, depending on your kernel version, to zero.

 # conntrack -D -p tcp --sport 34846
 tcp      6 431982 ESTABLISHED src=192.168.2.100 dst=123.59.27.117 sport=34846 dport=993 packets=169 bytes=14322 src=123.59.27.117 dst=192.168.2.100 sport=993 dport=34846 packets=113 bytes=34787 [ASSURED] mark=1 secmark=0 use=1
conntrack v0.9.7 (conntrack-tools): 1 flow entries has been deleted.
 

Display the connection tracking events:

 # conntrack -E
     [NEW] udp      17 30 src=192.168.2.100 dst=192.168.2.1 sport=57767 dport=53 [UNREPLIED] src=192.168.2.1 dst=192.168.2.100 sport=53 dport=57767
  [UPDATE] udp      17 29 src=192.168.2.100 dst=192.168.2.1 sport=57767 dport=53 src=192.168.2.1 dst=192.168.2.100 sport=53 dport=57767
     [NEW] tcp      6 120 SYN_SENT src=192.168.2.100 dst=66.102.9.104 sport=33379 dport=80 [UNREPLIED] src=66.102.9.104 dst=192.168.2.100 sport=80 dport=33379
  [UPDATE] tcp      6 60 SYN_RECV src=192.168.2.100 dst=66.102.9.104 sport=33379 dport=80 src=66.102.9.104 dst=192.168.2.100 sport=80 dport=33379
  [UPDATE] tcp      6 432000 ESTABLISHED src=192.168.2.100 dst=66.102.9.104 sport=33379 dport=80 src=66.102.9.104 dst=192.168.2.100 sport=80 dport=33379 [ASSURED]

You can also display the existing flows in XML format, filter the output based on the NAT handling applied, etc.

Chapter 6. Setting up conntrackd: the daemon

The daemon conntrackd supports two working modes:

  • State table synchronization: the daemon can be used to synchronize the connection tracking state table between several firewall replicas. This can be used to deploy fault-tolerant stateful firewalls. This is the main feature of the daemon.

  • Flow-based statistics collection: the daemon can be used to collect flow-based statistics. This feature is similar to what ulogd-2.x provides.

State table synchronization

Requirements

In order to get conntrackd working in synchronization mode, you have to fulfill the following requirements:

  1. A high availability manager like keepalived that manages the virtual IPs of the firewall cluster, detects errors, and decides when to migrate the virtual IPs from one firewall replica to another. Without it, conntrackd will not work appropriately.

    The state synchronization setup requires a working installation of keepalived, preferably a recent version. Check if your distribution comes with a recent packaged version. Otherwise, you may compile it from the sources.

    There is a very simple example file in the conntrackd sources to setup a simple HA cluster with keepalived (see the file keepalived.conf under the doc/sync/ directory). This file can be used to set up a simple VRRP cluster composed of two machines that hold the virtual IPs 192.168.0.100 on eth0 and 192.168.1.100 on eth1.

    If you are not familiar with keepalived, please read the official documentation available at the keepalived website (http://www.keepalived.org).

    If you use a different high availability manager, make sure it works correctly before going ahead.

  2. A dedicated link. The dedicated link between the firewalls is used to transmit and receive the state information. The use of a dedicated link is mandatory for security reasons, as someone could otherwise sniff the state information that is transferred between the firewalls.

  3. A well-formed stateful rule-set. Otherwise you are likely to experience problems during the fail-over. An example of a well-formed stateful iptables rule-set is available in the conntrack-tools website.

  4. If your Linux kernel is < 2.6.22, you have to disable TCP window tracking:

        # echo 1 > /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_be_liberal
       
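The keepalived setup from requirement 1 boils down to one VRRP instance per virtual IP, grouped so that both IPs fail over together, plus hooks that call the transition script described later. The following is a minimal sketch for the two-virtual-IP cluster mentioned above; router IDs, priorities and intervals are illustrative, see doc/sync/keepalived.conf for the complete file:

```
vrrp_sync_group G1 {                    # fail both virtual IPs over together
    group {
        VI_1
        VI_2
    }
    notify_master "/etc/conntrackd/primary-backup.sh primary"
    notify_backup "/etc/conntrackd/primary-backup.sh backup"
    notify_fault  "/etc/conntrackd/primary-backup.sh fault"
}

vrrp_instance VI_1 {
    interface eth0
    state BACKUP                        # let the priorities decide the master
    virtual_router_id 61
    priority 80                         # use a different priority on each replica
    advert_int 3
    virtual_ipaddress {
        192.168.0.100                   # virtual IP on eth0
    }
}

vrrp_instance VI_2 {
    interface eth1
    state BACKUP
    virtual_router_id 62
    priority 80
    advert_int 3
    virtual_ipaddress {
        192.168.1.100                   # virtual IP on eth1
    }
}
```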

Configuring the daemon

The daemon conntrackd in synchronization mode supports up to three replication approaches:

  • notrack: this approach is the simplest, as it is based on a best-effort replication protocol, i.e. an unreliable protocol. This protocol sends and receives the state information without performing any specific checking.

  • ft-fw: this approach is based on a reliable protocol that performs message tracking. Thus, the protocol can recover from message loss, re-ordering and corruption.

  • alarm: this approach is bandwidth-hungry. It is based on an alarm-based protocol that periodically re-sends the flow state to the backup firewall replicas. This protocol consumes a lot of bandwidth, but it resolves synchronization problems fast.

The three existing approaches are soft real-time asynchronous replication protocols that are designed to have negligible impact, in terms of latency and bandwidth throughput, on the stateful firewall filtering.

To configure conntrackd in any of the existing synchronization modes, you have to copy the example configuration file to the directory /etc/conntrackd/ on every firewall replica. Note that _type_ is the synchronization type selected.

 (conntrack-tools-x.x.x)# cp doc/_type_/conntrackd.conf /etc/conntrackd/conntrackd.conf

Do not forget to edit the files before going ahead. There are several parameters that you have to tune to adapt the example configuration file to your setup.

[Note]Configuration file location

If you don't want to put the config file under /etc/conntrackd/, just tell conntrackd where to find it by passing the option -C.

Active-Backup setup

[Note]Stateful firewall architectures

A good reading to extend the information about firewall architectures is Demystifying cluster-based fault-tolerant firewalls published in IEEE Internet Computing magazine.

In the Active-Backup setup, one of the stateful firewall replicas filters traffic and the other acts as backup. If you use this approach, you have to copy the script primary-backup.sh to:

 (conntrack-tools-x.x.x)# cp doc/sync/primary-backup.sh /etc/conntrackd/

The HA manager invokes this script when a transition happens, i.e. when a stateful firewall replica:

  • becomes active to recover the filtering.

  • becomes backup.

  • hits a failure (this is available if the HA manager has a failure state, which is true for keepalived).

The script is simple, and it contains the different actions that conntrackd performs to recover the filtering or purge obsolete entries from the state table, among others. The script is commented, you can have a look at it if you need further information.
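The actions above can be sketched as follows. This is a simplified illustration of the script's structure, not the shipped file; the echo stub stands in for the real binary so the sketch can be run harmlessly, and the exact set of conntrackd invocations in doc/sync/primary-backup.sh is authoritative:

```shell
#!/bin/sh
# Hedged sketch of the HA transition handling in primary-backup.sh.
CONNTRACKD="${CONNTRACKD:-echo conntrackd}"   # stub for illustration; use the real binary
CONFIG=/etc/conntrackd/conntrackd.conf

transition() {
    case "$1" in
    primary)
        $CONNTRACKD -C "$CONFIG" -c     # commit the external cache into the kernel table
        $CONNTRACKD -C "$CONFIG" -f     # flush the internal and external caches
        $CONNTRACKD -C "$CONFIG" -R     # resync the internal cache with the kernel table
        ;;
    backup)
        $CONNTRACKD -C "$CONFIG" -t     # shorten kernel timers to purge obsolete entries
        $CONNTRACKD -C "$CONFIG" -n     # request a complete resync from the new primary
        ;;
    fault)
        $CONNTRACKD -C "$CONFIG" -t
        ;;
    esac
}

transition "${1:-fault}"
```

The HA manager passes the new state (primary, backup or fault) as the first argument, as in the keepalived notify hooks.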

Active-Active setup

The Active-Active setup consists of having more than one stateful firewall replica actively filtering traffic. Thus, we reduce the resource waste implied by having a backup firewall that does nothing.

We can classify the type of Active-Active setups in several families:

  • Symmetric path routing: The stateful firewall replicas share the workload in terms of flows, ie. the packets that are part of a flow are always filtered by the same firewall.

  • Asymmetric multi-path routing: The packets that are part of a flow can be filtered by any stateful firewall in the cluster. Thus, every flow state has to be propagated to all the firewalls in the cluster, as we do not know which one will be the next to filter a packet. This setup goes against the design of stateful firewalls, as we define the filtering policy based on flows, not on packets anymore.

As of 0.9.8, the design of conntrackd allows you to deploy a symmetric Active-Active setup based on a static approach. For example, assume that you have two virtual IPs, vIP1 and vIP2, and two firewall replicas, FW1 and FW2. You can give the virtual IP vIP1 to the firewall FW1 and vIP2 to FW2.
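With keepalived, this static split can be expressed as two VRRP instances with inverted priorities, so that FW1 is master for vIP1 and FW2 is master for vIP2. A sketch follows; the addresses, router IDs and priorities are illustrative:

```
# On FW1 (invert the two priorities on FW2):
vrrp_instance VI_1 {
    interface eth0
    virtual_router_id 10
    priority 150                # FW1 is master for vIP1
    virtual_ipaddress {
        192.168.0.100           # vIP1
    }
}

vrrp_instance VI_2 {
    interface eth0
    virtual_router_id 20
    priority 50                 # FW1 is backup for vIP2
    virtual_ipaddress {
        192.168.0.101           # vIP2
    }
}
```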

Unfortunately, you will have to wait for support for the Active-Active setup based on a dynamic approach, i.e. a workload-sharing setup without directors that allows the stateful firewalls to share the filtering.

On the other hand, the asymmetric scenario may work if your setup fulfills several strong assumptions. However, in the opinion of the author of this work, the asymmetric setup goes against the design of stateful firewalls and conntrackd. Therefore, you have two choices here: you can deploy an Active-Backup setup or go back to your old stateless rule-set (in that case, the conntrack-tools will not be of any help anymore, of course).

Launching conntrackd

Once you have configured conntrackd, you can run it in console mode, which is an interactive mode; in that case, type 'conntrackd' as root:

(root)# conntrackd

If you want to run conntrackd in daemon mode, then type:

(root)# conntrackd -d

You can verify that conntrackd is running by checking the log messages or with ps. Moreover, if conntrackd is running fine, you can dump the current status of the daemon:

 # conntrackd -s
 cache internal:
 current active connections:                4
 connections created:                       4    failed:            0
 connections updated:                       0    failed:            0
 connections destroyed:                     0    failed:            0

 cache external:
 current active connections:                0
 connections created:                       0    failed:            0
 connections updated:                       0    failed:            0
 connections destroyed:                     0    failed:            0

 traffic processed:
                    0 Bytes                         0 Pckts

 multicast traffic:
                  352 Bytes sent                    0 Bytes recv
                   22 Pckts sent                    0 Pckts recv
                    0 Error send                    0 Error recv

 multicast sequence tracking:
                    0 Pckts mfrm                    0 Pckts lost
 

This command displays the number of entries in the internal and external cache:

  • The internal cache contains the states that this firewall replica is filtering, ie. this is a cache of the kernel state table.

  • The external cache contains the states that the other firewall replica is filtering.

You can dump the internal cache with the following command:

 # conntrackd -i
 tcp      6 ESTABLISHED src=192.168.2.100 dst=139.174.175.20 sport=58491 dport=993 src=139.174.175.20 dst=192.168.2.100 sport=993 dport=58491 [ASSURED] mark=0 secmark=0 [active since 536s]
 tcp      6 ESTABLISHED src=192.168.2.100 dst=123.59.27.117 sport=38211 dport=993 src=123.59.27.117 dst=192.168.2.100 sport=993 dport=38211 [ASSURED] mark=0 secmark=0 [active since 536s]
 tcp      6 ESTABLISHED src=192.168.2.100 dst=123.59.27.117 sport=38209 dport=993 src=123.59.27.117 dst=192.168.2.100 sport=993 dport=38209 [ASSURED] mark=0 secmark=0 [active since 536s]
 tcp      6 TIME_WAIT src=192.168.2.100 dst=74.125.45.166 sport=42593 dport=80 src=74.125.45.166 dst=192.168.2.100 sport=80 dport=42593 [ASSURED] [active since 165s]
 tcp      6 ESTABLISHED src=192.168.2.100 dst=139.174.175.20 sport=37962 dport=993 src=139.174.175.20 dst=192.168.2.100 sport=993 dport=37962 [ASSURED] mark=0 secmark=0 [active since 536s]
 

You can dump the external cache with the following command:

# conntrackd -e

If the replication works fine, the number of entries in the active firewall's internal cache, as shown by conntrackd -s, should match the number in the backup's external cache, and vice versa.

To verify that the recovery works fine, trigger a fail-over; the log files should display the following information:

 [Thu Sep 18 18:03:02 2008] (pid=9759) [notice] committing external cache
 [Thu Sep 18 18:03:02 2008] (pid=9759) [notice] Committed 1545 new entries

This means that the state entries have been injected into the kernel correctly.

Other configuration options

The daemon allows several configuration options that you may want to enable. This section contains some information about them.

Disabling external cache

It is possible to disable the external cache so that conntrackd directly injects the flow-states into the in-kernel Connection Tracking System of the backup firewall. You can do it by enabling the DisableExternalCache option in the conntrackd.conf configuration file:

Sync {
	Mode FTFW {
		 [...]
		 DisableExternalCache On
	}
}
 

You can also use this option with the NOTRACK and ALARM modes. This increases CPU consumption in the backup firewall, but you no longer need to commit the flow-states when the master fails, since they are already in the in-kernel Connection Tracking table. Moreover, you save memory in the backup firewall, since you do not need to store the foreign flow-states anymore.

Disabling internal cache

You can also disable the internal cache by means of the DisableInternalCache option in the conntrackd.conf configuration file:

Sync {
	Mode NOTRACK {
		 [...]
		 DisableInternalCache On
	}
}
 

However, this option is only available for the NOTRACK mode. This mode provides unreliable flow-state synchronization between firewalls. Thus, if flow-states are lost during the synchronization, the protocol provides no way to recover them.

Using UDP, TCP or multicast for flow-state synchronization

You can use up to three different transport protocols to synchronize flow-state changes between the firewalls: UDP, TCP and multicast. UDP and multicast are unreliable, but together with the FT-FW mode they provide reliable flow-state synchronization.

The preferred choice is FT-FW over UDP or, alternatively, multicast. TCP introduces latency in the flow-state synchronization due to its congestion control. Moreover, when flow-state messages are lost, the FIFO delivery becomes a problem, since the backup firewall quickly gets out of sync. For these reasons, its use is discouraged. Note that using TCP only makes sense with the NOTRACK mode.
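For reference, a multicast dedicated link is declared much like the UDP one shown in the next section. A sketch based on the shipped doc/sync example files follows; the addresses and buffer sizes are illustrative:

```
Sync {
	Mode FTFW {
		[...]
	}
	Multicast Default {
		IPv4_address 225.0.0.50
		IPv4_interface 192.168.100.100
		Group 3780
		Interface eth2
		SndSocketBuffer 1249280
		RcvSocketBuffer 1249280
		Checksum on
	}
}
```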

Redundant dedicated links

You can set up redundant dedicated links without using bonding: configure as many redundant links as you want in the configuration file. If the master dedicated link fails, conntrackd fails over to one of the backups. An example of this configuration is the following:

Sync {
	Mode FTFW {
		 [...]
	}
	# default master dedicated link
        UDP Default {
                IPv4_address 192.168.2.1
                IPv4_Destination_Address 192.168.2.2
                Port 3780
                Interface eth3
                SndSocketBuffer 24985600
                RcvSocketBuffer 24985600
                Checksum on
        }
	# backup dedicated link
        UDP {
               IPv4_address 192.168.1.3
               IPv4_Destination_Address 192.168.1.4
               Port 3780
               Interface eth2
               SndSocketBuffer 24985600
               RcvSocketBuffer 24985600
               Checksum on
        }
	[...]
}
 

Filtering Connection tracking events with iptables

Since Linux kernel 2.6.34, iptables provides the CT target, which allows you to reduce the amount of Connection Tracking events that are delivered to user-space. However, you will have to use a Linux kernel >= 2.6.38 to benefit from this feature, since several aspects of the event filtering were broken before that.

The following example shows how to only generate the assured and destroy events:

 # iptables -I PREROUTING -t raw -j CT --ctevents assured,destroy
 
[Note]Assured flows

A flow is assured if the firewall has seen traffic for it in both directions.

Reducing the amount of events generated helps to reduce CPU consumption in the active firewall.

Synchronization of expectations

[Note]Check your Linux kernel version first

The synchronization of expectations requires a Linux kernel >= 3.5 to work appropriately.

The connection tracking system provides helpers that allow you to filter multi-flow application protocols like FTP, H.323 and SIP, among many others. These protocols usually split the control and data traffic into different flows. Moreover, the control flow usually announces layer 3 and 4 information to let the other peer know where the data flows will be opened. This sort of protocol requires that the firewall inspect the content of the packets; otherwise, filtering by layer 3 and 4 selectors like addresses and ports becomes a real nightmare. Netfilter already provides the so-called helpers that track these protocol aspects to allow deploying appropriate filtering. These helpers create expectation entries that represent the traffic that is expected to arrive at the firewall according to the inspected packets.

If you have enabled tracking of these protocols, you may want to enable the state synchronization of expectations as well. Thus, established flows for these specific protocols will not suffer any disruption during the fail-over.

To enable the expectation support in the configuration file, you have to use the following option:

Sync {
       ...
       Options {
               ExpectationSync {
                       ftp
                       sip
                       ras    # for H.323
                       q.931  # for H.323
                       h.245  # for H.323
               }
       }
}

The example above enables the synchronization of the expectations for the FTP, SIP and H.323 helpers.

In my testbed, there are two firewalls in a primary-backup configuration running keepalived. They use a couple of floating cluster IP addresses (192.168.0.100 and 192.168.1.100) that are used by the client. These firewalls protect one FTP server (192.168.1.2) that will be accessed by one client.

In ASCII art, it looks like this:

         192.168.0.100      192.168.1.100
                  eth1      eth2
                       fw-1
                     /      \       FTP
        client ------       ------ server
      192.168.0.2    \      /   192.168.1.2
                       fw-2
 

This is the rule-set for the firewalls:

    -A FORWARD -m state --state RELATED -j ACCEPT
    -A FORWARD -i eth2 -m state --state ESTABLISHED -j ACCEPT
    -A FORWARD -i eth1 -p tcp -m tcp --dport 21 --tcp-flags FIN,SYN,RST,ACK SYN -m state --state NEW -j ACCEPT
    -A FORWARD -i eth1 -p tcp -m state --state ESTABLISHED -j ACCEPT
    -A FORWARD -m state --state INVALID -j LOG --log-prefix "invalid: "

Before going ahead, make sure nf_conntrack_ftp is loaded.

The following steps detail how to check that the expectation support works fine with FTP traffic:

  1. Switch to the client. Start an FTP control connection to the server that is protected by the firewalls, and enter passive mode:

      (term-1) user@client$ nc 192.168.1.2 21
       220 dummy FTP server
       USER anonymous
       331 Please specify the password.
       PASS nothing
       230 Login successful.
       PASV
       227 Entering Passive Mode (192,168,1,2,163,11).

    This means that port 163*256+11=41739 will be used for the data traffic. I suggest you read djb's FTP protocol description in case you don't understand how this calculation is done.

  2. Switch to fw-1 (primary) to check that the expectation is in the internal cache.

     root@fw1# conntrackd -i exp
     proto=6 src=192.168.0.2 dst=192.168.1.2 sport=0 dport=41739 mask-src=255.255.255.255 mask-dst=255.255.255.255 sport=0 dport=65535 master-src=192.168.0.2 master-dst=192.168.1.2 sport=36390 dport=21 helper=ftp [active since 5s]
     
  3. Switch to fw-2 (backup) to check that the expectation has been successfully replicated.

     root@fw2# conntrackd -e exp
     proto=6 src=192.168.0.2 dst=192.168.1.2 sport=0 dport=41739 mask-src=255.255.255.255 mask-dst=255.255.255.255 sport=0 dport=65535 master-src=192.168.0.2 master-dst=192.168.1.2 sport=36390 dport=21 [active since 8s]
     
  4. Make the primary firewall fw-1 fail. Now fw-2 becomes primary.

  5. Switch to fw-2 (primary) to commit the external cache into the kernel. The logs should display that the commit was successful:

     root@fw2# tail -100f /var/log/conntrackd.log
     [Wed Dec  7 22:16:31 2011] (pid=19195) [notice] committing external cache: expectations
     [Wed Dec  7 22:16:31 2011] (pid=19195) [notice] Committed 1 new entries
     [Wed Dec  7 22:16:31 2011] (pid=19195) [notice] commit has taken 0.000366 seconds
  6. Switch to the client. Open a new terminal and connect to the port that has been announced by the server:

     (term-2) user@client$ nc -vvv 192.168.1.2 41739
     (UNKNOWN) [192.168.1.2] 41739 (?) open
  7. Switch to term-1 and ask for the file listing:

     [...]
     227 Entering Passive Mode (192,168,1,2,163,11).
     LIST
  8. Switch to term-2, it should display the listing. That means everything has worked fine.

You may want to try disabling the expectation support and repeating the steps to check that it does not work without the state-synchronization.
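The data-port arithmetic from step 1 can be double-checked with shell arithmetic:

```shell
# PASV reply: 227 Entering Passive Mode (192,168,1,2,163,11)
# The last two numbers encode the data port as p1 * 256 + p2.
p1=163
p2=11
echo $((p1 * 256 + p2))    # prints 41739
```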

User-space helpers

[Note]Check your Linux kernel version first

The user-space helper infrastructure requires a Linux kernel >= 3.6 to work appropriately.

Connection tracking helpers allow you to filter multi-flow protocols that usually separate control and data traffic into different flows. These protocols usually violate network layering by including layer 3/4 details, e.g. IP addresses and TCP/UDP ports, in their application protocol (which resides in layer 7). This is problematic for gateways, since they operate at the packet level, i.e. layers 3/4, and therefore miss the information they need to filter these protocols appropriately.

Helpers inspect packet content (at layer 7) and create the so-called expectations. These expectations are added to an internal table that resides in the gateway. For each new packet arriving at the gateway, the gateway first looks for a matching expectation. If there is one, the flow is accepted, since it has been expected. Note that this lookup only occurs for the first packet of a newly established flow, not for all packets.

Since 1.4.0, conntrackd provides the infrastructure to develop helpers in user-space. The main features of the user-space infrastructure for helpers are:

  • Rapid connection tracking helper development, as developing code in user-space is usually faster.

  • Reliability: a buggy helper does not crash the kernel; if the helper fails, only conntrackd crashes. Moreover, we can monitor the helper process and restart it in case of problems.

  • Security: Avoid complex string matching and mangling in kernel-space running in privileged mode. Going further, we can even think about running user-space helper as a non-root process.

  • It allows the development of very specific helpers for proprietary protocols that are not standard. This is the case of the SQL*net helper. Implementing this in kernel-space may be problematic, since it may not be accepted for mainline inclusion in the Linux kernel. As an alternative, we could still distribute this support as separate patches. However, my personal experience is that, given that the in-kernel API is not stable, changes in the interface lead to breakage of the patches. This greatly increases the maintenance overhead.

Currently, the infrastructure supports the following user-space helpers:

  • Oracle*TNS, to support its special Redirect message.

  • NFSv3, mind that version 4 does not require this helper.

  • FTP (this helper is also available in kernel-space).

  • SSDP.

The following steps describe how to enable the RPC portmapper helper for NFSv3 (this is similar for other helpers):

  1. Register user-space helper:

    nfct add helper rpc inet udp
    nfct add helper rpc inet tcp
    

    This registers the portmapper helper for both UDP and TCP (NFSv3 traffic goes over both TCP and UDP).

  2. Add iptables rule using the CT target:

    # iptables -I OUTPUT -t raw -p udp --dport 111 -j CT --helper rpc
    # iptables -I OUTPUT -t raw -p tcp --dport 111 -j CT --helper rpc
    

    With this, packets matching port TCP/UDP/111 are passed to user-space for inspection. If there is no instance of conntrackd configured to support user-space helpers, no inspection happens and packets are not sent to user-space.

  3. Add configuration to conntrackd.conf:

    Helper {
            Type rpc inet udp {
                    QueueNum 1
                    QueueLen 10240
                    Policy rpc {
                            ExpectMax 1
                            ExpectTimeout 300
                    }
            }
            Type rpc inet tcp {
                    QueueNum 2
                    QueueLen 10240
                    Policy rpc {
                            ExpectMax 1
                            ExpectTimeout 300
                    }
            }
    }
    

    This configures conntrackd to use NFQUEUE queue numbers 1 and 2 to send traffic for inspection to user-space.

    [Note] If you have a custom libnetfilter_queue application

    Make sure its queue numbers do not collide with those used in your conntrackd.conf file.
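One way to spot a collision is to list the queue numbers already bound on the system: the kernel exposes them in the first column of /proc/net/netfilter/nfnetlink_queue. A small sketch (the file only exists while the nfnetlink_queue subsystem is in use):

```shell
# Print the NFQUEUE queue numbers currently in use on this host; compare
# them against the QueueNum values in your conntrackd.conf.
awk '{ print $1 }' /proc/net/netfilter/nfnetlink_queue 2>/dev/null | sort -n
```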

Now you can test this (assuming you have some working NFSv3 setup) with:

mount -t nfs -onfsvers=3 mynfs.server.info:/srv/cvs /mnt/

You should see new expectations being added via:

# conntrack -E expect
    [NEW] 300 proto=17 src=1.2.3.4 dst=1.2.3.4 sport=0 dport=54834 mask-src=255.255.255.255 mask-dst=255.255.255.255 sport=0 dport=65535 master-src=1.2.3.4 master-dst=1.2.3.4 sport=58190 dport=111 PERMANENT class=0 helper=rpc
    [NEW] 300 proto=6 src=1.2.3.4 dst=1.2.3.4 sport=0 dport=2049 mask-src=255.255.255.255 mask-dst=255.255.255.255 sport=0 dport=65535 master-src=1.2.3.4 master-dst=1.2.3.4 sport=55450 dport=111 PERMANENT class=0 helper=rpc
    [NEW] 300 proto=17 src=1.2.3.4 dst=1.2.3.4 sport=0 dport=58031 mask-src=255.255.255.255 mask-dst=255.255.255.255 sport=0 dport=65535 master-src=1.2.3.4 master-dst=1.2.3.4 sport=56309 dport=111 PERMANENT class=0 helper=rpc

Troubleshooting

Problems with conntrackd? The following list of questions should help with troubleshooting:

1. I see packets lost in conntrackd -s
2. The log messages report that the maximum netlink socket buffer has been reached.
3. I see can't open multicast server in the log messages
4. Can I use wackamole, heartbeat or any other HA manager?
5. Does conntrackd support TCP flow-recovery with window tracking enabled?
6. Does conntrackd support the H.323 and SIP connection tracking helpers?
7. Is there any way to set up a more verbose mode in the log message for debugging?

1.

I see packets lost in conntrackd -s

You can raise the values of McastSndSocketBuffer and McastRcvSocketBuffer. If the problem is due to buffer overruns in the multicast sender or the receiver, it should disappear.
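For reference, these clauses live in the Multicast section of conntrackd.conf. A minimal sketch; the addresses, interface and buffer sizes below are assumptions taken from typical example configurations, so tune them to your setup and traffic:

```
Sync {
        Multicast {
                IPv4_address 225.0.0.50
                Group 3780
                IPv4_interface 192.168.100.100
                Interface eth2
                # Raise these if conntrackd -s reports packets lost
                McastSndSocketBuffer 1249280
                McastRcvSocketBuffer 1249280
        }
}
```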

2.

The log messages report that the maximum netlink socket buffer has been reached.

You can increase the values of SocketBufferSize and SocketBufferSizeMaxGrown.

3.

I see can't open multicast server in the log messages

Make sure that the IPv4_interface clause is set to the IP address of the dedicated link.

4.

Can I use wackamole, heartbeat or any other HA manager?

Absolutely, you can. But before reporting issues, make sure that your HA manager is not the source of the problems.

5.

Does conntrackd support TCP flow-recovery with window tracking enabled?

Yes, but you require a Linux kernel >= 2.6.36 and the conntrack-tools >= 0.9.15. To enable it, check the TCPWindowTracking clause in the example configuration files.
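As found in the example configuration files, the clause sits in the Options section of Sync. A minimal sketch:

```
Sync {
        Options {
                # Enable TCP flow-recovery with window tracking
                # (requires Linux >= 2.6.36)
                TCPWindowTracking On
        }
}
```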

6.

Does conntrackd support the H.323 and SIP connection tracking helpers?

Yes, conntrackd includes expectation support since version 1.2.0.

7.

Is there any way to set up a more verbose mode in the log message for debugging?

No, but conntrackd provides lots of information that you can look up at runtime via the -s option.

You can check network statistics to find anomalies:

# conntrackd -s network
    network statistics:
        recv:
                Malformed messages:                        0
                Wrong protocol version:                    0
                Malformed header:                          0
                Malformed payload:                         0
                Bad message type:                          0
                Truncated message:                         0
                Bad message size:                          0
        send:
                Malformed messages:                        0

sequence tracking statistics:
        recv:
                Packets lost:                          42726
                Packets before:                            0

UDP traffic (active device=eth3):
              564232 Bytes sent              1979844 Bytes recv
                2844 Pckts sent                 8029 Pckts recv
                   0 Error send                    0 Error recv
    

You can check cache statistics:

# conntrackd -s cache
cache:internal  active objects:                    0
        active/total entries:                      0/           0
        creation OK/failed:                    11068/           0
                no memory available:               0
                no space left in cache:            0
        update OK/failed:                       4128/           0
                entry not found:                   0
        deletion created/failed:               11068/           0
                entry not found:                   0

cache:external  active objects:                    0
        active/total entries:                      0/           0
        creation OK/failed:                    10521/           0
                no memory available:               0
                no space left in cache:            0
        update OK/failed:                       8832/           0
                entry not found:                   0
        deletion created/failed:               10521/           0
                entry not found:                   0
    

You can check runtime miscellaneous statistics:

# conntrackd -s runtime
daemon uptime: 14 min

netlink stats:
        events received:                       24736
        events filtered:                           0
        events unknown type:                       0
        catch event failed:                        0
        dump unknown type:                         0
        netlink overrun:                           0
        flush kernel table:                        1
        resync with kernel table:                  0
        current buffer size (in bytes):      8000000

runtime stats:
        child process failed:                      0
                child process segfault:            0
                child process termsig:             0
        select failed:                             0
        wait failed:                               0
        local read failed:                         0
        local unknown request:                     0
    

You can check dedicated link statistics:

# conntrackd -s link
UDP traffic device=eth3 status=RUNNING role=ACTIVE:
              566848 Bytes sent              1982612 Bytes recv
                3018 Pckts sent                 8203 Pckts recv
                   0 Error send                    0 Error recv
    

You can check network queue statistics:

# conntrackd -s queue
allocated queue nodes:                     1

queue txqueue:
current elements:                          0
maximum elements:                 2147483647
not enough space errors:                   0

queue errorq:
current elements:                          0
maximum elements:                        128
not enough space errors:                   0

queue rsqueue:
current elements:                          1
maximum elements:                     131072
not enough space errors:                   0
    

Chapter 7. System integration

You may want to integrate conntrackd into your system in order to build a robust firewall cluster. You should take a look at how the Linux distribution of your choice does this, as there are some interesting things to take into account.

Depending on the architecture of the firewall cluster, you may want to resynchronize each node after a fallback operation, so that the new node immediately knows the connections handled by the other. This is especially interesting in Active-Active mode.

This can be done using conntrackd -n just after the new node has joined the conntrackd cluster, for example at boot time. These operations require the main conntrackd daemon to have opened its UNIX socket so that it can receive the order from the conntrackd -n call.

Care must be taken that no race conditions happen (i.e., the UNIX socket is actually opened before conntrackd -n is launched). Otherwise, you may end up with a new node (after fallback) that does not know any connection states from the other node.
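One way to avoid the race is to poll for the UNIX socket before issuing the resync request. This is a sketch, assuming the default control socket path from the UNIX { Path ... } clause; adjust it to your configuration:

```shell
#!/bin/sh
# Sketch: wait until a path satisfies a test(1) flag, retrying up to N
# times with a short pause. Used with -S to wait for the conntrackd
# control socket to appear before requesting a resync.
wait_for_path() {
        path=$1; flag=$2; tries=$3
        while [ "$tries" -gt 0 ]; do
                test "$flag" "$path" && return 0
                tries=$((tries - 1))
                sleep 0.1
        done
        return 1
}

# On a real node (socket path is the usual default; adjust as needed):
#   wait_for_path /var/run/conntrackd.ctl -S 50 && conntrackd -n
```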

Since conntrack-tools 1.4.4, the conntrackd daemon includes integration with libsystemd. If conntrackd is configured at build time with this support (using --enable-systemd), then you can use the Systemd on clause in the conntrackd.conf main configuration file. To benefit from this integration, you should use a systemd service file with Type=notify, which also includes support for the systemd watchdog.

Using systemd and conntrackd with libsystemd support and a service file of Type=notify means that conntrackd notifies systemd of its readiness, so you can launch conntrackd -n safely, avoiding such race conditions.
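With that in place, a service file along these lines ties it together. This is a sketch, not a distribution-provided unit; the binary path and the watchdog interval are assumptions:

```
[Unit]
Description=Connection tracking table synchronization daemon
Documentation=man:conntrackd(8)
After=network.target

[Service]
# Type=notify relies on conntrackd being built with --enable-systemd
# and "Systemd on" being set in conntrackd.conf
Type=notify
ExecStart=/usr/sbin/conntrackd
# Restart conntrackd if it stops pinging the systemd watchdog
WatchdogSec=60
Restart=on-failure

[Install]
WantedBy=multi-user.target
```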