  4. 20 Nov, 2020 2 commits
    • Merge branch 'mptcp-more-miscellaneous-mptcp-fixes' · 9e8ac63f
      Jakub Kicinski authored
      Mat Martineau says:
      
      ====================
      mptcp: More miscellaneous MPTCP fixes
      
      Here's another batch of fixup and enhancement patches that we have
      collected in the MPTCP tree.
      
      Patch 1 removes an unnecessary flag and related code.
      
      Patch 2 fixes a bug encountered when closing fallback sockets.
      
      Patches 3 and 4 choose a better transmit subflow, with a self test.
      
      Patch 5 adjusts tracking of unaccepted subflows.
      
      Patches 6-8 improve handling of long ADD_ADDR options, with a test.
      
      Patch 9 more reliably tracks the MPTCP-level window shared with peers.
      
      Patch 10 sends MPTCP-level acknowledgements more aggressively, so the
      peer can send more data without extra delay.
      ====================
      
      Link: https://lore.kernel.org/r/20201119194603.103158-1-mathew.j.martineau@linux.intel.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • mptcp: refine MPTCP-level ack scheduling · ea4ca586
      Paolo Abeni authored
      Sending a timely MPTCP-level ack is somewhat difficult when
      the insertion into the msk receive queue is performed
      by the worker.
      
      It needs a TCP-level dup-ack to notify the peer of the
      MPTCP-level ack_seq increase, as both the TCP-level ack seq
      and the rcv window are unchanged.
      
      We can actually avoid processing incoming data with the
      worker, and let the subflow or recvmsg() send acks as needed.
      
      When recvmsg() moves the skbs inside the msk receive queue,
      the msk space is still unchanged, so tcp_cleanup_rbuf() could
      end up skipping TCP-level ack generation. However, when
      __mptcp_move_skbs() is invoked, a known amount of bytes is
      going to be consumed soon: we update the rcv wnd computation
      taking them into account.

      Additionally, we need to explicitly trigger tcp_cleanup_rbuf()
      when recvmsg() consumes a significant amount of the receive buffer.
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>