1. 11 Aug, 2018 4 commits
  2. 10 Aug, 2018 1 commit
    • Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf · e91e2189
      David S. Miller authored
      Daniel Borkmann says:
      
      ====================
      pull-request: bpf 2018-08-10
      
      The following pull-request contains BPF updates for your *net* tree.
      
      The main changes are:
      
      1) Fix cpumap and devmap on teardown: the flush there runs from RCU
         context and cannot rely on the same assumptions as code running
         under NAPI protection, from Jesper.

      2) Fix various sockmap bugs in the bpf_tcp_sendmsg() code, e.g. a bug
         where the socket error was not propagated correctly, from Daniel.

      3) Fix the incompatible libbpf header license for the BTF code and
         match it with the rest of libbpf, which is LGPL-2.1, before it gets
         officially released, from Martin.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e91e2189
  3. 09 Aug, 2018 16 commits
    • Merge branch 'bpf-fix-cpu-and-devmap-teardown' · 9c954201
      Daniel Borkmann authored
      Jesper Dangaard Brouer says:
      
      ====================
      Removing entries from cpumap and devmap goes through a number of
      synchronization steps to make sure no new xdp_frames can be enqueued.
      But there is a small chance that xdp_frames remain which have not
      been flushed/processed yet.  Flushing these during teardown happens
      from RCU context and not, as usual, under RX NAPI context.

      The optimization introduced in commit 389ab7f0 ("xdp: introduce
      xdp_return_frame_rx_napi") missed that the flush operation can also
      be called from RCU context.  Thus, we cannot always use the
      xdp_return_frame_rx_napi call, which takes advantage of the protection
      provided by XDP RX running under NAPI.

      The samples/bpf xdp_redirect_cpu program has a --stress-mode that is
      adjusted to make this race easier to reproduce (verified by Red Hat QA).
      ====================
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      9c954201
    • xdp: fix bug in devmap teardown code path · 1bf9116d
      Jesper Dangaard Brouer authored
      Like the cpumap teardown, the devmap teardown code also flushes
      remaining xdp_frames, via bq_xmit_all(), in case a map entry is
      removed.  The code can call xdp_return_frame_rx_napi from the wrong
      context in case ndo_xdp_xmit() fails.
      
      Fixes: 389ab7f0 ("xdp: introduce xdp_return_frame_rx_napi")
      Fixes: 735fc405 ("xdp: change ndo_xdp_xmit API to support bulking")
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      1bf9116d
    • samples/bpf: xdp_redirect_cpu adjustment to reproduce teardown race easier · 37d7ff25
      Jesper Dangaard Brouer authored
      The teardown race in cpumap is really hard to reproduce.  These changes
      make it easier to reproduce, for QA.

      The --stress-mode now has a case with a very small queue size of 8, which
      helps the teardown flush hit a full queue, resulting in calls to the
      xdp_return_frame API in a non-NAPI-protected context.

      Also increase MAX_CPUS, as my QA department has larger machines than I do.
      Tested-by: Jean-Tsung Hsiao <jhsiao@redhat.com>
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      37d7ff25
    • xdp: fix bug in cpumap teardown code path · ad0ab027
      Jesper Dangaard Brouer authored
      When removing a cpumap entry, a number of synchronization steps happen.
      Eventually the teardown code __cpu_map_entry_free is invoked from/via
      call_rcu.

      The teardown code __cpu_map_entry_free() flushes remaining xdp_frames,
      by invoking bq_flush_to_queue, which calls xdp_return_frame_rx_napi().
      The issue is that the teardown code is not running in the RX NAPI
      code path.  Thus, it is not allowed to invoke the NAPI variant of
      xdp_return_frame.

      This bug was found and triggered by using the --stress-mode option of
      the samples/bpf program xdp_redirect_cpu.  It is hard to trigger
      because the ptr_ring has to be full, the cpumap bulk queue holds at
      most 8 packets, and a remote CPU has to be racing to empty the
      ptr_ring queue.
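
      A minimal sketch of the kind of fix described here; the function shape
      and the in_napi_ctx flag are illustrative assumptions, not the exact
      upstream patch:

        #include <net/xdp.h>

        /* Flush queued frames, picking the return helper by context:
         * xdp_return_frame_rx_napi() is only safe under RX NAPI
         * protection, while plain xdp_return_frame() must be used
         * from the RCU teardown path.
         */
        static void bq_flush(struct xdp_frame **frames, unsigned int count,
                             bool in_napi_ctx)
        {
            unsigned int i;

            for (i = 0; i < count; i++) {
                if (in_napi_ctx)
                    xdp_return_frame_rx_napi(frames[i]);
                else
                    xdp_return_frame(frames[i]);
            }
        }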
      
      Fixes: 389ab7f0 ("xdp: introduce xdp_return_frame_rx_napi")
      Tested-by: Jean-Tsung Hsiao <jhsiao@redhat.com>
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      ad0ab027
    • Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 · 112cbae2
      Linus Torvalds authored
      Pull crypto fix from Herbert Xu:
       "This fixes a performance regression in arm64 NEON crypto as well as a
        crash in x86 aegis/morus on unsupported CPUs"
      
      * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
        crypto: x86/aegis,morus - Fix and simplify CPUID checks
        crypto: arm64 - revert NEON yield for fast AEAD implementations
      112cbae2
    • Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net · 6395ad85
      Linus Torvalds authored
      Pull networking fixes from David Miller:
      
       1) The real fix for the ipv6 route metric leak Sabrina was seeing, from
          Cong Wang.
      
       2) Fix AF_PACKET v3 ring buffer insufficient-room conditions
          triggered by syzbot, from Willem de Bruijn.
      
       3) vsock can reinitialize active work struct, fix from Cong Wang.
      
       4) RXRPC keepalive generator can wedge a cpu, fix from David Howells.
      
       5) Fix locking in AF_SMC ioctl, from Ursula Braun.
      
      * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net:
        dsa: slave: eee: Allow ports to use phylink
        net/smc: move sock lock in smc_ioctl()
        net/smc: allow sysctl rmem and wmem defaults for servers
        net/smc: no shutdown in state SMC_LISTEN
        net: aquantia: Fix IFF_ALLMULTI flag functionality
        rxrpc: Fix the keepalive generator [ver #2]
        net/mlx5e: Cleanup of dcbnl related fields
        net/mlx5e: Properly check if hairpin is possible between two functions
        vhost: reset metadata cache when initializing new IOTLB
        llc: use refcount_inc_not_zero() for llc_sap_find()
        dccp: fix undefined behavior with 'cwnd' shift in ccid2_cwnd_restart()
        tipc: fix an interrupt unsafe locking scenario
        vsock: split dwork to avoid reinitializations
        net: thunderx: check for failed allocation lmac->dmacs
        cxgb4: mk_act_open_req() buggers ->{local, peer}_ip on big-endian hosts
        packet: refine ring v3 block size test to hold one frame
        ip6_tunnel: use the right value for ipv4 min mtu check in ip6_tnl_xmit
        ipv6: fix double refcount of fib6_metrics
      6395ad85
    • dsa: slave: eee: Allow ports to use phylink · 1be52e97
      Andrew Lunn authored
      For a port to be able to use EEE, both the MAC and the PHY must
      support EEE. A PHY can be provided by either a phydev or phylink.
      Verify that at least one of these exists, not just phydev.
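
      A short sketch of the described check; the helper name is hypothetical
      and the dsa_port field names (dp, dp->pl) are assumed rather than
      quoted from the patch:

        /* EEE needs a PHY; it may be attached either as a plain phydev
         * or managed through phylink, so accept either one.
         */
        static int dsa_port_can_do_eee(struct net_device *dev,
                                       struct dsa_port *dp)
        {
            if (!dev->phydev && !dp->pl)
                return -ENODEV;

            return 0;
        }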
      
      Fixes: aab9c406 ("net: dsa: Plug in PHYLINK support")
      Signed-off-by: Andrew Lunn <andrew@lunn.ch>
      Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1be52e97
    • Merge branch 'smc-fixes' · ef91b6f9
      David S. Miller authored
      Ursula Braun says:
      
      ====================
      net/smc: fixes 2018-08-08
      
      here are small fixes for SMC: The first patch makes sure shutdown code
      is not executed for sockets in state SMC_LISTEN. The second patch resets
      send and receive buffer values for accepted sockets, since TCP buffer
      size optimizations for the internal CLC socket should not be forwarded
      to the outer SMC socket. The third patch resolves a race between connect
      and ioctl reported by syzbot.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ef91b6f9
    • net/smc: move sock lock in smc_ioctl() · 7311d665
      Ursula Braun authored
      While an SMC socket is connecting, it is decided whether a fallback to
      TCP is needed. To avoid races between connect and ioctl, move the
      sock lock before the use_fallback check.
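
      A simplified sketch of the resulting ordering; the two ioctl helpers
      are hypothetical placeholders, only the lock placement mirrors the
      description:

        /* Take the sock lock first, so a concurrent connect() deciding
         * on TCP fallback cannot race with the use_fallback test.
         */
        static int smc_ioctl_locked(struct smc_sock *smc, unsigned int cmd,
                                    unsigned long arg)
        {
            int rc;

            lock_sock(&smc->sk);
            if (smc->use_fallback)
                rc = fallback_ioctl(smc, cmd, arg);   /* hypothetical */
            else
                rc = native_smc_ioctl(smc, cmd, arg); /* hypothetical */
            release_sock(&smc->sk);
            return rc;
        }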
      
      Reported-by: syzbot+5b2cece1a8ecb2ca77d8@syzkaller.appspotmail.com
      Reported-by: syzbot+19557374321ca3710990@syzkaller.appspotmail.com
      Fixes: 1992d998 ("net/smc: take sock lock in smc_ioctl()")
      Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7311d665
    • net/smc: allow sysctl rmem and wmem defaults for servers · bd58c7e0
      Ursula Braun authored
      Without SO_SNDBUF and SO_RCVBUF setsockopt settings, the sysctl
      defaults net.ipv4.tcp_wmem and net.ipv4.tcp_rmem should be the base
      for the sizes of the SMC sndbuf and rcvbuf. Any TCP buffer size
      optimizations for servers should be ignored.
      Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      bd58c7e0
    • net/smc: no shutdown in state SMC_LISTEN · caa21e19
      Ursula Braun authored
      Invoking shutdown for a socket in state SMC_LISTEN does not make
      sense. Nevertheless programs like syzbot fuzzing the kernel may
      try to do this. For SMC this means a socket refcounting problem.
      This patch makes sure a shutdown call for an SMC socket in state
      SMC_LISTEN simply returns with -ENOTCONN.
      Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      caa21e19
    • net: aquantia: Fix IFF_ALLMULTI flag functionality · 11ba961c
      Dmitry Bogdanov authored
      It was noticed that the NIC always passes all multicast traffic to the
      host regardless of the IFF_ALLMULTI flag on the interface.
      The rule in the NIC's MC Filter Table that is configured to accept any
      multicast packet is turned on whenever the IFF_MULTICAST flag is set on
      the interface, which leads to all multicast traffic being passed to the
      host. This fix changes the condition so that the rule is turned on by
      checking the IFF_ALLMULTI flag, as it should be.
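
      A hedged sketch of the corrected condition; the helper names and the
      surrounding structure are assumptions, only the flag choice follows
      the description:

        /* The hardware "accept all multicast" rule should track
         * IFF_ALLMULTI; IFF_MULTICAST is set on practically every
         * interface and must not enable it.
         */
        static void hw_update_mc_policy(struct net_device *ndev)
        {
            bool accept_all_mc = !!(ndev->flags & IFF_ALLMULTI);

            hw_set_mc_accept_all(ndev, accept_all_mc); /* hypothetical */
        }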
      
      Fixes: b21f502f ("net:ethernet:aquantia: Fix for multicast filter handling.")
      Signed-off-by: Dmitry Bogdanov <dmitry.bogdanov@aquantia.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      11ba961c
    • rxrpc: Fix the keepalive generator [ver #2] · 330bdcfa
      David Howells authored
      AF_RXRPC has a keepalive message generator that generates a message for a
      peer ~20s after the last transmission to that peer to keep firewall ports
      open.  The implementation is incorrect in the following ways:
      
       (1) It mixes up ktime_t and time64_t types.
      
       (2) It uses ktime_get_real(), the output of which may jump forward or
           backward due to adjustments to the time of day.
      
       (3) If the current time jumps forward too much or jumps backwards, the
           generator function will crank the base of the time ring round one slot
           at a time (ie. a 1s period) until it catches up, spewing out VERSION
           packets as it goes.
      
      Fix the problem by:
      
       (1) Only using time64_t.  There's no need for sub-second resolution.
      
       (2) Use ktime_get_seconds() rather than ktime_get_real() so that time
           isn't perceived to go backwards.
      
       (3) Simplifying rxrpc_peer_keepalive_worker() by splitting it into two
           parts:
      
           (a) The "worker" function that manages the buckets and the timer.
      
           (b) The "dispatch" function that takes the pending peers and
           	 potentially transmits a keepalive packet before putting them back
           	 in the ring into the slot appropriate to the revised last-Tx time.
      
       (4) Taking everything that's pending out of the ring and splicing it into
           a temporary collector list for processing.
      
           In the case that there's been a significant jump forward, the ring
           gets entirely emptied and then the time base can be warped forward
           before the peers are processed.
      
           The warping can't happen if the ring isn't empty because the slot a
           peer is in is keepalive-time dependent, relative to the base time.
      
       (5) Limit the number of iterations of the bucket array when scanning it.
      
       (6) Set the timer to skip any empty slots as there's no point waking up if
           there's nothing to do yet.
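
      A small sketch of the time64_t-only bucketing described in points (1)
      and (2) above; the constant and the helper name are assumptions, not
      the rxrpc code itself:

        #include <linux/time64.h>

        #define KEEPALIVE_INTERVAL 20 /* seconds between keepalives */

        /* Map a peer's last-Tx time onto a slot in a ring of
         * KEEPALIVE_INTERVAL + 1 buckets, relative to the current base.
         * Both times are time64_t values from ktime_get_seconds(), so
         * wall-clock adjustments cannot make them jump backwards.
         */
        static unsigned int keepalive_slot(time64_t last_tx, time64_t base)
        {
            time64_t due = last_tx + KEEPALIVE_INTERVAL;

            if (due <= base)
                return 0;                   /* already overdue */
            if (due - base > KEEPALIVE_INTERVAL)
                return KEEPALIVE_INTERVAL;  /* clamp to the last slot */
            return due - base;
        }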
      
      This can be triggered by an incoming call from a server after a reboot with
      AF_RXRPC and AFS built into the kernel causing a peer record to be set up
      before userspace is started.  The system clock is then adjusted by
      userspace, thereby potentially causing the keepalive generator to have a
      meltdown - which leads to a message like:
      
      	watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [kworker/0:1:23]
      	...
      	Workqueue: krxrpcd rxrpc_peer_keepalive_worker
      	EIP: lock_acquire+0x69/0x80
      	...
      	Call Trace:
      	 ? rxrpc_peer_keepalive_worker+0x5e/0x350
      	 ? _raw_spin_lock_bh+0x29/0x60
      	 ? rxrpc_peer_keepalive_worker+0x5e/0x350
      	 ? rxrpc_peer_keepalive_worker+0x5e/0x350
      	 ? __lock_acquire+0x3d3/0x870
      	 ? process_one_work+0x110/0x340
      	 ? process_one_work+0x166/0x340
      	 ? process_one_work+0x110/0x340
      	 ? worker_thread+0x39/0x3c0
      	 ? kthread+0xdb/0x110
      	 ? cancel_delayed_work+0x90/0x90
      	 ? kthread_stop+0x70/0x70
      	 ? ret_from_fork+0x19/0x24
      
      Fixes: ace45bec ("rxrpc: Fix firewall route keepalive")
      Reported-by: kernel test robot <lkp@intel.com>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      330bdcfa
    • Merge branch 'mlx5-fixes' · f39cc1c7
      David S. Miller authored
      Saeed Mahameed says:
      
      ====================
      Mellanox, mlx5e fixes 2018-08-07
      
      I know it is late into the 4.18 release, and this is why I am submitting
      only two mlx5e ethernet fixes.

      The first one, from Or, is needed for -stable and fixes the hairpin
      "same device" check.

      The second fix is a low-risk fix from Huy which cleans up and improves
      error return value reporting for dcbnl_ieee_setapp.

      For -stable v4.16:
      - net/mlx5e: Properly check if hairpin is possible between two functions
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f39cc1c7
    • net/mlx5e: Cleanup of dcbnl related fields · f280c6a1
      Huy Nguyen authored
      Remove the unused netdev_registered_init/remove in en.h.
      Return ENOSUPPORT if the MLX5_DSCP_SUPPORTED check fails.
      Remove extra white space.
      
      Fixes: 2a5e7a13 ("net/mlx5e: Add dcbnl dscp to priority support")
      Signed-off-by: Huy Nguyen <huyn@mellanox.com>
      Cc: Yuval Shaia <yuval.shaia@oracle.com>
      Reviewed-by: Parav Pandit <parav@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f280c6a1
    • net/mlx5e: Properly check if hairpin is possible between two functions · 816f6706
      Or Gerlitz authored
      The current check relies on function BDF addresses and can get it
      wrong, e.g. when two VFs are assigned to a VM and the PCI
      v-address is set by the hypervisor.
      
      Fixes: 5c65c564 ('net/mlx5e: Support offloading TC NIC hairpin flows')
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Reported-by: Alaa Hleihel <alaa@mellanox.com>
      Tested-by: Alaa Hleihel <alaa@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      816f6706
  4. 08 Aug, 2018 7 commits
  5. 07 Aug, 2018 9 commits
    • llc: use refcount_inc_not_zero() for llc_sap_find() · 0dcb8225
      Cong Wang authored
      llc_sap_put() decreases the refcnt before deleting the sap
      from the global list. Therefore, there is a chance that
      llc_sap_find() could find a sap with a zero refcnt
      in this global list.

      Close this race condition by checking in llc_sap_find() whether
      the refcnt is zero; if it is, the sap is being removed, so we can
      just treat it as gone.
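
      A sketch of the lookup pattern described; it assumes the existing
      __llc_sap_find() helper and a refcount_t refcnt field, which is close
      to but not quoted from the actual patch:

        struct llc_sap *llc_sap_find(unsigned char sap_value)
        {
            struct llc_sap *sap;

            rcu_read_lock_bh();
            sap = __llc_sap_find(sap_value);
            /* A zero refcnt means the sap is already on its way out:
             * do not resurrect it, report it as not found instead.
             */
            if (sap && !refcount_inc_not_zero(&sap->refcnt))
                sap = NULL;
            rcu_read_unlock_bh();
            return sap;
        }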
      
      Reported-by: <syzbot+278893f3f7803871f7ce@syzkaller.appspotmail.com>
      Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0dcb8225
    • dccp: fix undefined behavior with 'cwnd' shift in ccid2_cwnd_restart() · 61ef4b07
      Alexey Kodanev authored
      The shift of 'cwnd' with '(now - hc->tx_lsndtime) / hc->tx_rto' value
      can lead to undefined behavior [1].
      
      In order to fix this use a gradual shift of the window with a 'while'
      loop, similar to what tcp_cwnd_restart() is doing.
      
      When comparing delta and RTO there is a minor difference between TCP
      and DCCP: the latter also invokes dccp_cwnd_restart() and reduces
      'cwnd' if delta equals RTO. That case is preserved in this change.
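
      A compact sketch of the gradual-shift idea; the argument names are
      illustrative, the real code works on the ccid2 socket state:

        /* Halve cwnd once per elapsed RTO instead of shifting by
         * (delta / rto) in one go, which can exceed the type width.
         * The ">= 0" keeps the DCCP behaviour of also reducing cwnd
         * when delta equals the RTO.
         */
        static u32 cwnd_restart(u32 cwnd, u32 restart_cwnd, s32 delta, s32 rto)
        {
            while ((delta -= rto) >= 0 && cwnd > restart_cwnd)
                cwnd >>= 1;

            /* never drop below the restart window */
            return cwnd > restart_cwnd ? cwnd : restart_cwnd;
        }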
      
      [1]:
      [40850.963623] UBSAN: Undefined behaviour in net/dccp/ccids/ccid2.c:237:7
      [40851.043858] shift exponent 67 is too large for 32-bit type 'unsigned int'
      [40851.127163] CPU: 3 PID: 15940 Comm: netstress Tainted: G        W   E     4.18.0-rc7.x86_64 #1
      ...
      [40851.377176] Call Trace:
      [40851.408503]  dump_stack+0xf1/0x17b
      [40851.451331]  ? show_regs_print_info+0x5/0x5
      [40851.503555]  ubsan_epilogue+0x9/0x7c
      [40851.548363]  __ubsan_handle_shift_out_of_bounds+0x25b/0x2b4
      [40851.617109]  ? __ubsan_handle_load_invalid_value+0x18f/0x18f
      [40851.686796]  ? xfrm4_output_finish+0x80/0x80
      [40851.739827]  ? lock_downgrade+0x6d0/0x6d0
      [40851.789744]  ? xfrm4_prepare_output+0x160/0x160
      [40851.845912]  ? ip_queue_xmit+0x810/0x1db0
      [40851.895845]  ? ccid2_hc_tx_packet_sent+0xd36/0x10a0 [dccp]
      [40851.963530]  ccid2_hc_tx_packet_sent+0xd36/0x10a0 [dccp]
      [40852.029063]  dccp_xmit_packet+0x1d3/0x720 [dccp]
      [40852.086254]  dccp_write_xmit+0x116/0x1d0 [dccp]
      [40852.142412]  dccp_sendmsg+0x428/0xb20 [dccp]
      [40852.195454]  ? inet_dccp_listen+0x200/0x200 [dccp]
      [40852.254833]  ? sched_clock+0x5/0x10
      [40852.298508]  ? sched_clock+0x5/0x10
      [40852.342194]  ? inet_create+0xdf0/0xdf0
      [40852.388988]  sock_sendmsg+0xd9/0x160
      ...
      
      Fixes: 113ced1f ("dccp ccid-2: Perform congestion-window validation")
      Signed-off-by: Alexey Kodanev <alexey.kodanev@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      61ef4b07
    • tipc: fix an interrupt unsafe locking scenario · 37436d9c
      Ying Xue authored
      Commit 9faa89d4 ("tipc: make function tipc_net_finalize() thread
      safe") tries to make it thread safe to set node address, so it uses
      node_list_lock lock to serialize the whole process of setting node
      address in tipc_net_finalize(). But it causes the following interrupt
      unsafe locking scenario:
      
             CPU0                    CPU1
             ----                    ----
        rht_deferred_worker()
        rhashtable_rehash_table()
        lock(&(&ht->lock)->rlock)
      			       tipc_nl_compat_doit()
                                     tipc_net_finalize()
                                     local_irq_disable();
                                     lock(&(&tn->node_list_lock)->rlock);
                                     tipc_sk_reinit()
                                     rhashtable_walk_enter()
                                     lock(&(&ht->lock)->rlock);
        <Interrupt>
        tipc_disc_rcv()
        tipc_node_check_dest()
        tipc_node_create()
        lock(&(&tn->node_list_lock)->rlock);
      
       *** DEADLOCK ***
      
      When rhashtable_rehash_table() holds ht->lock on CPU0, it doesn't
      disable BH. So if an interrupt happens after the lock, it can create
      an inverse lock ordering between ht->lock and tn->node_list_lock. As
      a consequence, deadlock might happen.
      
      The root cause of the inverse lock ordering above is that
      node_list_lock was never designed to serialize the setting of the
      node address.

      As cmpxchg() guarantees that the compare-and-swap (CAS) is atomic,
      we use it instead of node_list_lock to ensure the node address is
      set atomically. It turns out the potential deadlock can be avoided
      as well.
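
      A minimal sketch of the cmpxchg() approach; the field name node_addr
      and the function shape are assumptions used only to illustrate the
      idea:

        /* Only the caller that swaps the address in from 0 performs the
         * one-time finalization; everyone else sees a non-zero value and
         * returns.  No lock is held, so the ht->lock vs node_list_lock
         * ordering problem cannot arise here.
         */
        static void net_finalize(struct tipc_net *tn, u32 addr)
        {
            if (cmpxchg(&tn->node_addr, 0, addr))
                return;

            /* ... address-dependent one-time setup ... */
        }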
      
      Fixes: 9faa89d4 ("tipc: make function tipc_net_finalize() thread safe")
      Signed-off-by: Ying Xue <ying.xue@windriver.com>
      Acked-by: Jon Maloy <maloy@donjonn.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      37436d9c
    • vsock: split dwork to avoid reinitializations · 455f05ec
      Cong Wang authored
      syzbot reported that we reinitialize an active delayed
      work in vsock_stream_connect():
      
      	ODEBUG: init active (active state 0) object type: timer_list hint:
      	delayed_work_timer_fn+0x0/0x90 kernel/workqueue.c:1414
      	WARNING: CPU: 1 PID: 11518 at lib/debugobjects.c:329
      	debug_print_object+0x16a/0x210 lib/debugobjects.c:326
      
      The pattern is wrong: we should initialize the delayed
      work only once and can then schedule it repeatedly. So we
      have to move the initialization to the allocation side.
      And to avoid confusion, we can split the shared dwork
      into two, instead of re-using the same one.
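
      A sketch of the described split; the field and handler names
      (connect_work, pending_work, and their callbacks) are assumptions of
      how such a split could look:

        /* At socket allocation time: each delayed work is initialized
         * exactly once.
         */
        static void vsock_init_works(struct vsock_sock *vsk)
        {
            INIT_DELAYED_WORK(&vsk->connect_work, vsock_connect_timeout);
            INIT_DELAYED_WORK(&vsk->pending_work, vsock_pending_work);
        }

        /* In the connect path: only (re)schedule, never re-initialize. */
        static void vsock_arm_connect_timeout(struct vsock_sock *vsk,
                                              long timeout)
        {
            schedule_delayed_work(&vsk->connect_work, timeout);
        }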
      
      Fixes: d021c344 ("VSOCK: Introduce VM Sockets")
      Reported-by: <syzbot+8a9b1bd330476a4f3db6@syzkaller.appspotmail.com>
      Cc: Andy king <acking@vmware.com>
      Cc: Stefan Hajnoczi <stefanha@redhat.com>
      Cc: Jorgen Hansen <jhansen@vmware.com>
      Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      455f05ec
    • net: thunderx: check for failed allocation lmac->dmacs · a94cead7
      Colin Ian King authored
      The allocation of lmac->dmacs is not checked for failure. Add the
      check.
      
      Fixes: 3a34ecfd ("net: thunderx: add MAC address filter tracking for LMAC")
      Signed-off-by: Colin Ian King <colin.king@canonical.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a94cead7
    • cxgb4: mk_act_open_req() buggers ->{local, peer}_ip on big-endian hosts · adfb442d
      Al Viro authored
      Unlike fs.val.lport and fs.val.fport, cxgb4_process_flow_match()
      sets fs.val.{l,f}ip to net-endian values without conversion - they come
      straight from flow_dissector_key_ipv4_addrs ->dst and ->src resp.  So
      the assignment in mk_act_open_req() ought to be a straight copy.
      
      	As far as I know, T4 PCIe cards do exist, so it's not as if that
      thing could only be found on little-endian systems...
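
      Illustration only (not the driver code): copying a value that is
      already in network byte order into another network-order field is a
      plain byte copy, with no extra htonl()/cpu_to_be32() pass:

        #include <linux/string.h>
        #include <linux/types.h>

        /* src already holds the address in network byte order, exactly as
         * taken from the flow dissector, so copy it byte-for-byte.
         */
        static inline void copy_net_order_addr(__be32 *dst, const u8 *src)
        {
            memcpy(dst, src, sizeof(*dst));
        }
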
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Acked-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      adfb442d
    • crypto: x86/aegis,morus - Fix and simplify CPUID checks · 877ccce7
      Ondrej Mosnacek authored
      It turns out I had misunderstood how the x86_match_cpu() function works.
      It evaluates a logical OR of the matching conditions, not logical AND.
      This caused the CPU feature checks for AEGIS to pass even if only SSE2
      (but not AES-NI) was supported (or vice versa), leading to potential
      crashes if something tried to use the registered algs.
      
      This patch switches the checks to a simpler method that is used e.g. in
      the Camellia x86 code.
      
      The patch also removes the MODULE_DEVICE_TABLE declarations which
      actually seem to cause the modules to be auto-loaded at boot, which is
      not desired. The crypto API on-demand module loading is sufficient.
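
      A sketch of the simpler init-time check (register_algs() is a
      placeholder); the point is that every required feature is tested
      explicitly, since x86_match_cpu() ORs its table entries rather than
      ANDing them:

        static int __init aegis_aesni_mod_init(void)
        {
            /* Require both AES-NI and SSE2; failing either one means
             * the module must not register its algorithms at all.
             */
            if (!boot_cpu_has(X86_FEATURE_AES) ||
                !boot_cpu_has(X86_FEATURE_XMM2))
                return -ENODEV;

            return register_algs(); /* placeholder for the real registration */
        }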
      
      Fixes: 1d373d4e ("crypto: x86 - Add optimized AEGIS implementations")
      Fixes: 6ecc9d9f ("crypto: x86 - Add optimized MORUS implementations")
      Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
      Tested-by: Milan Broz <gmazyland@gmail.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      877ccce7
    • crypto: arm64 - revert NEON yield for fast AEAD implementations · f10dc56c
      Ard Biesheuvel authored
      As it turns out, checking the TIF_NEED_RESCHED flag after each
      iteration results in a significant performance regression (~10%)
      when running fast algorithms (i.e., ones that use special instructions
      and operate in the < 4 cycles per byte range) on in-order cores with
      comparatively slow memory accesses such as the Cortex-A53.
      
      Given the speed of these ciphers, and the fact that the page based
      nature of the AEAD scatterwalk API guarantees that the core NEON
      transform is never invoked with more than a single page's worth of
      input, we can estimate the worst case duration of any resulting
      scheduling blackout: on a 1 GHz Cortex-A53 running with 64k pages,
      processing a page's worth of input at 4 cycles per byte results in
      a delay of ~250 us, which is a reasonable upper bound.
      
      So let's remove the yield checks from the fused AES-CCM and AES-GCM
      routines entirely.
      
      This reverts commit 7b67ae4d and
      partially reverts commit 7c50136a.
      
      Fixes: 7c50136a ("crypto: arm64/aes-ghash - yield NEON after every ...")
      Fixes: 7b67ae4d ("crypto: arm64/aes-ccm - yield NEON after every ...")
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      f10dc56c
    • Merge tag 'gpio-v4.18-3' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio · 1236568e
      Linus Torvalds authored
      Pull GPIO fix from Linus Walleij:
       "This is a single fix affecting X86 ACPI, and as such pretty important.
      
        It is going to stable as well and have all the high-notch x86 platform
        developers agreeing on it"
      
      * tag 'gpio-v4.18-3' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio:
        gpiolib-acpi: make sure we trigger edge events at least once on boot
      1236568e
  6. 06 Aug, 2018 3 commits