Commit 929b845f authored by Eric Dumazet, committed by Luis Henriques

tcp: avoid looping in tcp_send_fin()

commit 845704a5 upstream.

Presence of an unbound loop in tcp_send_fin() had always been hard
to explain when analyzing crash dumps involving gigantic dying processes
with millions of sockets.

Let's try a different strategy:

In case of memory pressure, try to add the FIN flag to the last packet
in the write queue, even if that packet was already sent. The TCP stack
will then be able to deliver this FIN after a timeout event. Note that
since this FIN is delivered by a retransmit, it also carries a Push flag
in our current implementation.

By checking sk_under_memory_pressure(), we anticipate that cooking up
many FIN packets might deplete TCP memory.

In the case we could not allocate a packet, even with a __GFP_WAIT
allocation, then not sending a FIN seems quite reasonable if it lets us
get rid of this socket, free memory, and avoid blocking the process from
eventually doing other useful work.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
[ luis: backported to 3.16: adjusted context ]
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
parent 049ba123
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -2677,7 +2677,8 @@ void tcp_xmit_retransmit_queue(struct sock *sk)
 
 /* We allow to exceed memory limits for FIN packets to expedite
  * connection tear down and (memory) recovery.
- * Otherwise tcp_send_fin() could loop forever.
+ * Otherwise tcp_send_fin() could be tempted to either delay FIN
+ * or even be forced to close flow without any FIN.
  */
 static void sk_forced_wmem_schedule(struct sock *sk, int size)
 {
@@ -2690,33 +2691,40 @@ static void sk_forced_wmem_schedule(struct sock *sk, int size)
 	sk_memory_allocated_add(sk, amt, &status);
 }
 
-/* Send a fin.  The caller locks the socket for us.  This cannot be
- * allowed to fail queueing a FIN frame under any circumstances.
+/* Send a FIN. The caller locks the socket for us.
+ * We should try to send a FIN packet really hard, but eventually give up.
  */
 void tcp_send_fin(struct sock *sk)
 {
+	struct sk_buff *skb, *tskb = tcp_write_queue_tail(sk);
 	struct tcp_sock *tp = tcp_sk(sk);
-	struct sk_buff *skb = tcp_write_queue_tail(sk);
-	int mss_now;
 
-	/* Optimization, tack on the FIN if we have a queue of
-	 * unsent frames.  But be careful about outgoing SACKS
-	 * and IP options.
+	/* Optimization, tack on the FIN if we have one skb in write queue and
+	 * this skb was not yet sent, or we are under memory pressure.
+	 * Note: in the latter case, FIN packet will be sent after a timeout,
+	 * as TCP stack thinks it has already been transmitted.
 	 */
-	mss_now = tcp_current_mss(sk);
-
-	if (tcp_send_head(sk) != NULL) {
-		TCP_SKB_CB(skb)->tcp_flags |= TCPHDR_FIN;
-		TCP_SKB_CB(skb)->end_seq++;
+	if (tskb && (tcp_send_head(sk) || sk_under_memory_pressure(sk))) {
+coalesce:
+		TCP_SKB_CB(tskb)->tcp_flags |= TCPHDR_FIN;
+		TCP_SKB_CB(tskb)->end_seq++;
 		tp->write_seq++;
+		if (!tcp_send_head(sk)) {
+			/* This means tskb was already sent.
+			 * Pretend we included the FIN on previous transmit.
+			 * We need to set tp->snd_nxt to the value it would have
+			 * if FIN had been sent. This is because retransmit path
+			 * does not change tp->snd_nxt.
+			 */
+			tp->snd_nxt++;
+			return;
+		}
 	} else {
-		/* Socket is locked, keep trying until memory is available. */
-		for (;;) {
-			skb = alloc_skb_fclone(MAX_TCP_HEADER,
-					       sk->sk_allocation);
-			if (skb)
-				break;
-			yield();
-		}
+		skb = alloc_skb_fclone(MAX_TCP_HEADER, sk->sk_allocation);
+		if (unlikely(!skb)) {
+			if (tskb)
+				goto coalesce;
+			return;
+		}
 		skb_reserve(skb, MAX_TCP_HEADER);
 		sk_forced_wmem_schedule(sk, skb->truesize);
@@ -2725,7 +2733,7 @@ void tcp_send_fin(struct sock *sk)
 			     TCPHDR_ACK | TCPHDR_FIN);
 		tcp_queue_skb(sk, skb);
 	}
-	__tcp_push_pending_frames(sk, mss_now, TCP_NAGLE_OFF);
+	__tcp_push_pending_frames(sk, tcp_current_mss(sk), TCP_NAGLE_OFF);
 }
 
 /* We get here when a process closes a file descriptor (either due to
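For illustration, the decision flow this patch introduces can be modeled as a small user-space C sketch. The toy_sock/toy_skb structures, field names, and the toy_send_fin() helper below are invented stand-ins for the kernel's sk_buff/tcp_sock machinery, not real kernel APIs; the sketch only mirrors the branching logic (coalesce the FIN into the tail skb when it is unsent or under memory pressure, fall back to coalescing on allocation failure, give up only when the queue is empty and allocation fails).

```c
/* Toy model of the tcp_send_fin() decision flow in this commit.
 * All structures and helpers are simplified stand-ins, not kernel code.
 */
#include <assert.h>
#include <stdbool.h>

#define TCPHDR_FIN 0x01

struct toy_skb {
	unsigned char tcp_flags;
	unsigned int  end_seq;
};

struct toy_sock {
	struct toy_skb *tail;     /* stands in for tcp_write_queue_tail(sk) */
	bool has_unsent;          /* stands in for tcp_send_head(sk) != NULL */
	bool memory_pressure;     /* stands in for sk_under_memory_pressure(sk) */
	bool alloc_fails;         /* simulates alloc_skb_fclone() failing */
	unsigned int write_seq;
	unsigned int snd_nxt;
	struct toy_skb fresh;     /* storage for a newly "allocated" skb */
};

/* Returns true if a FIN was queued or piggybacked, false if we give up. */
static bool toy_send_fin(struct toy_sock *sk)
{
	struct toy_skb *tskb = sk->tail;

	if (tskb && (sk->has_unsent || sk->memory_pressure)) {
coalesce:
		tskb->tcp_flags |= TCPHDR_FIN;
		tskb->end_seq++;
		sk->write_seq++;
		if (!sk->has_unsent) {
			/* Tail was already sent: pretend the FIN went out with
			 * it, since the retransmit path does not move snd_nxt.
			 */
			sk->snd_nxt++;
		}
		return true;
	}
	if (sk->alloc_fails) {
		if (tskb)
			goto coalesce;   /* fall back to piggybacking */
		return false;            /* no FIN at all; free the socket */
	}
	/* Normal path: "allocate" a fresh skb carrying only the FIN. */
	sk->fresh.tcp_flags = TCPHDR_FIN;
	sk->fresh.end_seq = ++sk->write_seq;
	sk->tail = &sk->fresh;
	return true;
}
```

The key point the sketch captures is that giving up (returning false) happens only when the write queue is empty and allocation fails, which is exactly the case the commit message argues is acceptable: freeing the socket matters more than guaranteeing a FIN.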