Commit a0264a9f authored by Kuniyuki Iwashima, committed by Jakub Kicinski

af_unix: Move spin_lock() in manage_oob().

When the OOB skb has already been consumed, manage_oob() returns the next
skb if it exists.  In such a case, we need to fall back to the else branch
below.

Then, we want to keep sk->sk_receive_queue.lock held.

Let's move it out of the if-else branch and add a lightweight check before
spin_lock() for the common case without an OOB skb.
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Link: https://patch.msgid.link/20240905193240.17565-4-kuniyu@amazon.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
parent beb2c5f1
@@ -2657,9 +2657,12 @@ static struct sk_buff *manage_oob(struct sk_buff *skb, struct sock *sk,
 	struct sk_buff *read_skb = NULL, *unread_skb = NULL;
 	struct unix_sock *u = unix_sk(sk);
 
-	if (!unix_skb_len(skb)) {
-		spin_lock(&sk->sk_receive_queue.lock);
+	if (likely(unix_skb_len(skb) && skb != READ_ONCE(u->oob_skb)))
+		return skb;
 
+	spin_lock(&sk->sk_receive_queue.lock);
+
+	if (!unix_skb_len(skb)) {
 		if (copied && (!u->oob_skb || skb == u->oob_skb)) {
 			skb = NULL;
 		} else if (flags & MSG_PEEK) {
@@ -2670,14 +2673,9 @@ static struct sk_buff *manage_oob(struct sk_buff *skb, struct sock *sk,
 				__skb_unlink(read_skb, &sk->sk_receive_queue);
 		}
 
-		spin_unlock(&sk->sk_receive_queue.lock);
-
-		consume_skb(read_skb);
-		return skb;
+		goto unlock;
 	}
 
-	spin_lock(&sk->sk_receive_queue.lock);
-
 	if (skb != u->oob_skb)
 		goto unlock;
 
@@ -2698,6 +2696,7 @@ static struct sk_buff *manage_oob(struct sk_buff *skb, struct sock *sk,
 unlock:
 	spin_unlock(&sk->sk_receive_queue.lock);
 
+	consume_skb(read_skb);
 	kfree_skb(unread_skb);
 
 	return skb;
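
The change follows a common lockless fast-path pattern: peek at the queue state with READ_ONCE() and return immediately in the common case, taking spin_lock() and re-checking only when an OOB skb might be involved. Below is a minimal, self-contained userspace sketch of that pattern, not kernel code; queue_lock, oob_pending, and consume_data() are illustrative stand-ins for sk_receive_queue.lock, u->oob_skb, and manage_oob().

/* Sketch: cheap lockless check first, lock and re-check on the slow path. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_int oob_pending;		/* stands in for u->oob_skb */

static void consume_data(void)
{
	/* Fast path: like READ_ONCE(u->oob_skb), a racy but safe peek. */
	if (!atomic_load_explicit(&oob_pending, memory_order_relaxed)) {
		printf("fast path: no OOB pending, lock not taken\n");
		return;
	}

	/* Slow path: take the lock once and re-check, because the peek
	 * above may have raced with a writer.
	 */
	pthread_mutex_lock(&queue_lock);
	if (atomic_load_explicit(&oob_pending, memory_order_relaxed))
		printf("slow path: handling OOB under the lock\n");
	pthread_mutex_unlock(&queue_lock);
}

int main(void)
{
	consume_data();			/* fast path */
	atomic_store(&oob_pending, 1);
	consume_data();			/* slow path */
	return 0;
}

The re-check under the lock is what makes the lockless peek safe: the fast path may race with a writer, so the slow path re-reads the state after locking before acting, just as the patched code re-tests unix_skb_len(skb) and u->oob_skb once the lock is held.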