    tcp: Introduce optional per-netns ehash. · d1e5e640
    Kuniyuki Iwashima authored
    The more sockets we have in the hash table, the longer we spend looking
    up a socket.  When a number of small workloads run on the same host,
    they penalise each other and cause performance degradation.
    
    The root cause might be a single workload that consumes far more
    resources than the others.  This often happens on a cloud service where
    different workloads share the same computing resources.
    
    On an EC2 c5.24xlarge instance (196 GiB memory and 524288 (1Mi / 2) ehash
    entries), with iperf3 running in a different netns, creating 24Mi sockets
    without data transfer in the root netns causes about a 10% performance
    regression for the iperf3 connections.
    
     thash_entries		sockets		length		Gbps
    	524288		      1		     1		50.7
    			   24Mi		    48		45.1
    
    The regression is basically related to the length of each hash bucket's
    list.  To see how performance drops as the list grows longer, I set
    thash_entries to 131072 (1Mi / 8) for testing purposes, and here's the result.
    
     thash_entries		sockets		length		Gbps
            131072		      1		     1		50.7
    			    1Mi		     8		49.9
    			    2Mi		    16		48.9
    			    4Mi		    32		47.3
    			    8Mi		    64		44.6
    			   16Mi		   128		40.6
    			   24Mi		   192		36.3
    			   32Mi		   256		32.5
    			   40Mi		   320		27.0
    			   48Mi		   384		25.0
    
    To resolve the socket lookup degradation, we introduce an optional
    per-netns hash table for TCP.  It covers only ehash; we still share
    the global bhash, bhash2 and lhash2.
    
    With a smaller ehash, we can look up non-listener sockets faster and
    isolate such noisy neighbours.  In addition, we can reduce lock contention.
    
    We can control the ehash size with a new sysctl knob.  However, depending
    on workloads, it will require very sensitive tuning, so we disable the
    feature by default (net.ipv4.tcp_child_ehash_entries == 0).  Moreover,
    we fall back to using the global ehash if we fail to allocate enough
    memory for a new ehash.  The maximum size is 16Mi, which is large
    enough that even with 48Mi sockets the average list length is 3, and
    the regression would be less than 1%.
    
    We can check the current ehash size with another read-only sysctl knob,
    net.ipv4.tcp_ehash_entries.  A negative value means the netns shares
    the global ehash (the per-netns ehash is disabled or its allocation
    failed).
    
      # dmesg | cut -d ' ' -f 5- | grep "established hash"
      TCP established hash table entries: 524288 (order: 10, 4194304 bytes, vmalloc hugepage)
    
      # sysctl net.ipv4.tcp_ehash_entries
      net.ipv4.tcp_ehash_entries = 524288  # can be changed by thash_entries
    
      # sysctl net.ipv4.tcp_child_ehash_entries
      net.ipv4.tcp_child_ehash_entries = 0  # disabled by default
    
      # ip netns add test1
      # ip netns exec test1 sysctl net.ipv4.tcp_ehash_entries
      net.ipv4.tcp_ehash_entries = -524288  # share the global ehash
    
      # sysctl -w net.ipv4.tcp_child_ehash_entries=100
      net.ipv4.tcp_child_ehash_entries = 100
    
      # ip netns add test2
      # ip netns exec test2 sysctl net.ipv4.tcp_ehash_entries
      net.ipv4.tcp_ehash_entries = 128  # own a per-netns ehash with 2^n buckets
    
    When two or more processes in the same netns concurrently create
    per-netns ehashes with different sizes, we need to guarantee the size
    in one of the following ways:
    
      1) Share the global ehash, then create a per-netns ehash
    
      First, unshare() with tcp_child_ehash_entries==0 so that the new netns
      shares the global ehash but owns dedicated netns sysctl knobs.  There,
      we can safely change tcp_child_ehash_entries and clone()/unshare() again
      to create a per-netns ehash (see the userspace C sketch after this list).
    
      2) Control writes to the sysctl knob with BPF
    
      We can use BPF_PROG_TYPE_CGROUP_SYSCTL to allow/deny reads/writes on
      sysctl knobs (see the BPF sketch after this list).
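    
    To illustrate 1), a rough userspace sketch could look as follows.  It is
    not part of this patch; the size 16384 and the minimal error handling
    are arbitrary, and it assumes the parent netns still has
    tcp_child_ehash_entries == 0.
    
      /* Rough userspace sketch of 1); not part of this patch.  Needs
       * CAP_NET_ADMIN, and the size 16384 is an arbitrary example.
       */
      #define _GNU_SOURCE
      #include <fcntl.h>
      #include <sched.h>
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>
    
      static int write_knob(const char *path, const char *val)
      {
              int fd = open(path, O_WRONLY);
    
              if (fd < 0)
                      return -1;
              if (write(fd, val, strlen(val)) < 0) {
                      close(fd);
                      return -1;
              }
              return close(fd);
      }
    
      int main(void)
      {
              /* 1st netns: created while the parent's tcp_child_ehash_entries
               * is 0, so it shares the global ehash but owns its sysctl knobs.
               */
              if (unshare(CLONE_NEWNET))
                      goto err;
    
              /* No other process can race on this netns-private knob. */
              if (write_knob("/proc/sys/net/ipv4/tcp_child_ehash_entries",
                             "16384"))
                      goto err;
    
              /* 2nd netns: gets its own ehash sized by the knob set above. */
              if (unshare(CLONE_NEWNET))
                      goto err;
    
              /* net.ipv4.tcp_ehash_entries should now read 16384 here. */
              return 0;
      err:
              perror("per-netns ehash setup");
              return 1;
      }
    
    And for 2), a rough BPF_PROG_TYPE_CGROUP_SYSCTL sketch (again not part of
    this patch; the program name is made up, while the section name, helper
    and context struct are the standard BPF/libbpf ones) that rejects writes
    to tcp_child_ehash_entries from tasks in the attached cgroup:
    
      /* Rough BPF sketch of 2); not part of this patch. */
      #include <linux/bpf.h>
      #include <bpf/bpf_helpers.h>
    
      SEC("cgroup/sysctl")
      int deny_child_ehash_write(struct bpf_sysctl *ctx)
      {
              const char knob[] = "tcp_child_ehash_entries";
              char name[sizeof(knob)];
              unsigned int i;
    
              /* Filter writes only; reads stay allowed. */
              if (!ctx->write)
                      return 1;
    
              /* Base name only, e.g. "tcp_child_ehash_entries".  A longer
               * name does not fit and returns an error: allow it.
               */
              if (bpf_sysctl_get_name(ctx, name, sizeof(name),
                                      BPF_F_SYSCTL_BASE_NAME) < 0)
                      return 1;
    
              /* Any other knob mismatches before the trailing NUL: allow. */
              for (i = 0; i < sizeof(knob); i++)
                      if (name[i] != knob[i])
                              return 1;
    
              return 0;       /* reject the write; the writer sees EPERM */
      }
    
      char _license[] SEC("license") = "GPL";
    
    Attached to a cgroup with the BPF_CGROUP_SYSCTL attach type (e.g. via
    bpftool cgroup attach), returning 0 should make the write fail with
    EPERM while all other sysctl accesses pass through.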
    
    Note that the global ehash allocated at boot time is spread over
    available NUMA nodes, but inet_pernet_hashinfo_alloc() will allocate
    pages for each per-netns ehash depending on the current process's NUMA
    policy.  By default, the allocation is done on the local node only, so
    the per-netns hash table could reside entirely on whichever node happened
    to be local at creation time.  Thus, depending on the NUMA policy the
    netns is created with and the CPU the current thread is running on, we
    could see some performance differences for highly optimised networking
    applications.
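    
    If spreading the per-netns ehash is preferred, one possible (untested)
    way, relying only on the behaviour described above, is to set an
    interleave mempolicy in the creating task before unshare().  The node
    mask below is an arbitrary example, and set_mempolicy() comes from
    libnuma's <numaif.h> wrapper:
    
      /* Rough sketch, not part of this patch: interleave the pages of the
       * per-netns ehash by setting the creating task's mempolicy first.
       * The node mask is an arbitrary example; link with -lnuma.
       */
      #define _GNU_SOURCE
      #include <numaif.h>
      #include <sched.h>
      #include <stdio.h>
    
      int main(void)
      {
              /* Example mask: interleave allocations over nodes 0 and 1. */
              unsigned long nodemask = (1UL << 0) | (1UL << 1);
    
              if (set_mempolicy(MPOL_INTERLEAVE, &nodemask,
                                8 * sizeof(nodemask))) {
                      perror("set_mempolicy");
                      return 1;
              }
    
              /* The per-netns ehash is allocated during this unshare() and
               * should follow the interleave policy instead of landing on
               * the local node only.
               */
              if (unshare(CLONE_NEWNET)) {
                      perror("unshare");
                      return 1;
              }
    
              /* Restore the default policy for later allocations. */
              set_mempolicy(MPOL_DEFAULT, NULL, 0);
              return 0;
      }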
    
    Note also that the default values of two sysctl knobs depend on the ehash
    size and should be tuned carefully:
    
      tcp_max_tw_buckets  : tcp_child_ehash_entries / 2
      tcp_max_syn_backlog : max(128, tcp_child_ehash_entries / 128)
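    
    For instance, plugging tcp_child_ehash_entries = 128 into those formulas
    gives tcp_max_tw_buckets = 64 and tcp_max_syn_backlog = 128 (illustrative
    arithmetic only, not kernel code):
    
      /* Illustrative arithmetic only, not kernel code: the formulas above
       * with tcp_child_ehash_entries = 128.
       */
      #include <stdio.h>
    
      #define max(a, b) ((a) > (b) ? (a) : (b))
    
      int main(void)
      {
              int ehash_entries = 128;
    
              /* 128 / 2 = 64 */
              printf("tcp_max_tw_buckets  = %d\n", ehash_entries / 2);
              /* max(128, 128 / 128) = max(128, 1) = 128 */
              printf("tcp_max_syn_backlog = %d\n",
                     max(128, ehash_entries / 128));
              return 0;
      }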
    
    As a bonus, we can dismantle a netns faster.  Currently, while destroying
    a netns, we call inet_twsk_purge(), which walks through the global ehash.
    That walk can be expensive because the global ehash can hold many sockets
    other than TIME_WAIT, across all netns.  With a split ehash,
    inet_twsk_purge() only has to clean up the TIME_WAIT sockets in the ehash
    of the netns being destroyed.
    
    With regard to this, we do not free the per-netns ehash in inet_twsk_kill(),
    to avoid a use-after-free while inet_twsk_purge() is iterating the per-netns
    ehash.  Instead, we free it in tcp_sk_exit_batch() after calling
    tcp_twsk_purge(), which keeps the code protocol-family-independent.
    
    In the future, we could optimise ehash lookup/iteration further by removing
    netns comparison for the per-netns ehash.
    Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
    Reviewed-by: Eric Dumazet <edumazet@google.com>
    Signed-off-by: Jakub Kicinski <kuba@kernel.org>