02 Apr, 2017 5 commits
    • Merge branch 'mpls-more-labels' · a6fc09df
      David S. Miller authored
      David Ahern says:
      
      ====================
      net: mpls: Allow users to configure more labels per route
      
      Increase the maximum number of new labels for MPLS routes from 2 to 30.
      
      To keep memory consumption in check, the labels array is moved to the
      end of the mpls_nh and mpls_iptunnel_encap structs as a 0-sized array.
      Allocations use the maximum number of labels across all nexthops in a
      route for LSR, and the number of labels configured for the LWT encap.
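
      A rough standalone sketch of that technique (the struct and helper
      names here are simplified stand-ins, not the kernel's exact
      definitions):

         #include <stddef.h>
         #include <stdint.h>

         /* Labels move from a fixed label[MAX_NEW_LABELS] to a 0-sized
          * (flexible) array at the end of the struct, so the allocation
          * can be sized at runtime. */
         struct nh_sketch {
             uint8_t  nh_labels;       /* labels used by this nexthop */
             uint32_t label[];         /* flexible array member */
         };

         /* Size every nexthop slot by the max label count across the
          * route, so all slots are equal-sized. */
         static size_t nh_slot_size(unsigned int max_labels)
         {
             return sizeof(struct nh_sketch) +
                    max_labels * sizeof(uint32_t);
         }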
      
      The mpls_route layout is changed to:
      
         +----------------------+
         | mpls_route           |
         +----------------------+
         | mpls_nh 0            |
         +----------------------+
         | alignment padding    |   4 bytes for odd number of labels; 0 for even
         +----------------------+
         | via[rt_max_alen] 0   |
         +----------------------+
         | alignment padding    |   via's aligned on sizeof(unsigned long)
         +----------------------+
         | ...                  |
      
      This means each via follows its mpls_nh, giving better locality as the
      number of labels increases. UDP_RR tests with namespaces show anywhere
      from no impact to a modest performance increase with this layout for
      1 or 2 labels and 1 or 2 nexthops.
      
      mpls_route allocation size is limited to 4096 bytes, allowing on the
      order of 30 nexthops with 30 labels each (or more nexthops with fewer
      labels). LWT encap shares the same maximum number of labels as MPLS
      routing.
      
      v3
      - initialize n_labels to 0 in case RTA_NEWDST is not defined; detected
        by the kbuild test robot
      
      v2
      - updates per Eric's comments
        + added patch to ensure all reads of rt_nhn_alive and nh_flags in
          the packet path use READ_ONCE and all writes via event handlers
          use WRITE_ONCE (see the sketch below)
      
        + limit mpls_route size to 4096 (PAGE_SIZE on most arches)
      
        + mostly killed use of MAX_NEW_LABELS; it remains only as a common
          limit between the lwt and routing paths
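
      For reference, the READ_ONCE/WRITE_ONCE point is the usual kernel
      pattern for a field written by event handlers and read locklessly in
      the packet path; a sketch with a simplified stand-in struct:

         #include <linux/compiler.h>   /* READ_ONCE / WRITE_ONCE */

         struct rt_sketch {            /* stand-in for mpls_route */
             unsigned int rt_nhn_alive;
         };

         /* Packet path: READ_ONCE keeps the compiler from tearing or
          * re-reading the lockless load. */
         static unsigned int nhn_alive(const struct rt_sketch *rt)
         {
             return READ_ONCE(rt->rt_nhn_alive);
         }

         /* Event handler (e.g. a netdev going down): WRITE_ONCE
          * publishes the value so concurrent readers see it whole. */
         static void nhn_set_alive(struct rt_sketch *rt, unsigned int n)
         {
             WRITE_ONCE(rt->rt_nhn_alive, n);
         }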
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: mpls: Increase max number of labels for lwt encap · 1511009c
      David Ahern authored
      Allow users to push down more labels per MPLS encap. Similar to the LSR
      case, move the label array to the end of mpls_iptunnel_encap and
      allocate based on the number of labels for the route.
      
      For consistency with the LSR case, re-use the same maximum number of
      labels.
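
      A minimal userspace sketch of the same idea for the encap struct
      (names are illustrative; the kernel allocates this as part of its
      lwtunnel state rather than with malloc):

         #include <stdint.h>
         #include <stdlib.h>
         #include <string.h>

         struct encap_sketch {         /* stand-in for mpls_iptunnel_encap */
             uint8_t  labels;
             uint32_t label[];         /* moved to the end of the struct */
         };

         /* Allocation is driven by the route's label count rather than
          * a compile-time maximum. */
         static struct encap_sketch *encap_alloc(const uint32_t *lbl,
                                                 uint8_t n)
         {
             struct encap_sketch *e =
                 malloc(sizeof(*e) + n * sizeof(uint32_t));

             if (!e)
                 return NULL;
             e->labels = n;
             memcpy(e->label, lbl, n * sizeof(uint32_t));
             return e;
         }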
      Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: mpls: bump maximum number of labels · a4ac8c98
      David Ahern authored
      Allow users to push down more labels per MPLS route. With the previous
      patches, no memory allocations are based on MAX_NEW_LABELS; the limit
      is only used to keep userspace in check.
      
      At this point MAX_NEW_LABELS is only used for mpls_route_config
      (copying route data from userspace) and for scanning nexthops to find
      the maximum number of labels across the route spec.
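
      A sketch of the remaining role of the constant (the helper is
      hypothetical; only the 30-label limit comes from this series):

         #include <errno.h>

         #define MAX_NEW_LABELS 30     /* limit from this series */

         /* Hypothetical check: the constant no longer sizes any
          * allocation, it only bounds what userspace may request. */
         static int check_label_count(unsigned int n_labels)
         {
             return n_labels > MAX_NEW_LABELS ? -EINVAL : 0;
         }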
      Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: mpls: Limit memory allocation for mpls_route · df1c6316
      David Ahern authored
      Limit the memory allocation size for an mpls_route to 4096 bytes.
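
      A sketch of the check this implies (the constant and helper names are
      illustrative; the 4096 figure, PAGE_SIZE on most arches, comes from
      the cover letter):

         #include <errno.h>
         #include <stddef.h>

         #define MPLS_RT_ALLOC_MAX 4096   /* PAGE_SIZE on most arches */

         /* Hypothetical sizing check: a route header plus equally sized
          * nexthop+via slots must fit in one 4096-byte allocation. */
         static int rt_size_ok(size_t hdr_size, size_t nh_slot,
                               unsigned int num_nh)
         {
             size_t total = hdr_size + (size_t)num_nh * nh_slot;

             return total <= MPLS_RT_ALLOC_MAX ? 0 : -EINVAL;
         }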
      Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: mpls: change mpls_route layout · 59b20966
      David Ahern authored
      Move labels to the end of mpls_nh as a 0-sized array, and within
      mpls_route move the via for each nexthop to just after its mpls_nh.
      The new layout becomes:
      
         +----------------------+
         | mpls_route           |
         +----------------------+
         | mpls_nh 0            |
         +----------------------+
         | alignment padding    |   4 bytes for odd number of labels; 0 for even
         +----------------------+
         | via[rt_max_alen] 0   |
         +----------------------+
         | alignment padding    |   via's aligned on sizeof(unsigned long)
         +----------------------+
         | ...                  |
         +----------------------+
         | mpls_nh n-1          |
         +----------------------+
         | via[rt_max_alen] n-1 |
         +----------------------+
      
      Memory allocated for each nexthop + via pair is constant across all
      nexthops in the route. It is based on the maximum number of labels
      across all nexthops and the maximum via length. The size is saved in
      the mpls_route as rt_nh_size. Accessing a nexthop becomes
      rt->rt_nh + index * rt->rt_nh_size.
      
      The offset of the via address from a nexthop is saved as rt_via_offset
      so that given an mpls_nh pointer the via for that hop is simply
      nh + rt->rt_via_offset.
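
      In standalone form the arithmetic looks roughly like this (the struct
      is a cut-down stand-in for mpls_route; the helpers are invented for
      illustration):

         #include <stddef.h>
         #include <stdint.h>

         struct rt_sketch {            /* simplified mpls_route stand-in */
             uint8_t rt_nh_size;       /* bytes per nexthop+via slot */
             uint8_t rt_via_offset;    /* via's offset within a slot */
             uint8_t rt_nh[];          /* the equally sized slots */
         };

         /* nexthop i: constant-stride indexing into the trailing array */
         static void *nh_ptr(struct rt_sketch *rt, unsigned int i)
         {
             return rt->rt_nh + (size_t)i * rt->rt_nh_size;
         }

         /* via for a nexthop: a fixed offset past that nexthop's labels */
         static uint8_t *nh_via(struct rt_sketch *rt, void *nh)
         {
             return (uint8_t *)nh + rt->rt_via_offset;
         }

         /* via offset: the end of the label array rounded up to an
          * unsigned-long boundary, hence the 4 bytes of padding for an
          * odd number of 4-byte labels on 64-bit. */
         static size_t via_offset(size_t nh_hdr_size,
                                  unsigned int max_labels)
         {
             size_t end = nh_hdr_size + max_labels * sizeof(uint32_t);

             return (end + sizeof(unsigned long) - 1) &
                    ~(sizeof(unsigned long) - 1);
         }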
      
      With the prior code, memory allocated per mpls_route with 1 nexthop:
           via is an Ethernet address - 64 bytes
           via is an IPv4 address     - 64 bytes
           via is an IPv6 address     - 72 bytes

      With this patch set, memory allocated per mpls_route with 1 nexthop
      and 1 or 2 labels:
           via is an Ethernet address - 56 bytes
           via is an IPv4 address     - 56 bytes
           via is an IPv6 address     - 64 bytes
      
      The 8-byte reduction is due to the previous patch; the change introduced
      by this patch has no impact on the size of allocations for 1 or 2 labels.
      
      The performance impact of this change was examined using network
      namespaces connected with veth pairs. ns0 inserts the packet into the
      label-switched path using an lwt route with encap mpls. ns1 adds 1 or
      2 labels depending on the test; ns2 (and ns3 for the 2-label test)
      pops a label and forwards. ns3 (or ns4 for the 2-label test) is the
      destination. A similar series of namespaces is used for the 2-nexthop
      test.

      The intent is to measure changes in latency (the overhead of
      manipulating the packet) in the forwarding path. Tests used netperf
      with UDP_RR.
      
      IPv4 (UDP_RR transactions/sec):  current   patches
         1 label, 1 nexthop             29908     30115
         2 labels, 1 nexthop            29071     29612
         1 label, 2 nexthops            29582     29776
         2 labels, 2 nexthops           29086     29149

      IPv6 (UDP_RR transactions/sec):  current   patches
         1 label, 1 nexthop             24502     24960
         2 labels, 1 nexthop            24041     24407
         1 label, 2 nexthops            23795     23899
         2 labels, 2 nexthops           23074     22959
      
      In short, the change ranges from no effect to a modest increase in
      performance. This is expected, since this patch has little impact on
      routes with 1 or 2 labels (the current limit) and 1 or 2 nexthops.
      Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>