Commit a6e944f2 authored by Ciara Loftus, committed by Daniel Borkmann

xsk: Fix generic transmit when completion queue reservation fails

Two points of potential failure in the generic transmit function are:

  1. completion queue (cq) reservation failure
  2. skb allocation failure

Originally the cq reservation was performed first, followed by the skb
allocation. Commit 67571640 ("xdp: fix possible cq entry leak")
reversed the order because at the time there was no mechanism available
to undo the cq reservation, which could have led to cq entry leaks in
the event of skb allocation failure. However, if the skb allocation is
performed first and the cq reservation then fails, the xsk skb destructor
is called, which blindly adds the skb address to the already full cq,
leading to undefined behavior.

This commit restores the original order (cq reservation followed by skb
allocation) and uses the xskq_prod_cancel helper to undo the cq
reservation in the event of skb allocation failure.
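
To make the ordering concrete, below is a minimal, self-contained
userspace model of the pattern. This is not kernel code: the cq struct
and the cq_reserve, cq_cancel and xmit_one names are illustrative
stand-ins for the kernel's xsk_queue, xskq_prod_reserve and
xskq_prod_cancel, and CQ_SIZE is an arbitrary toy capacity. The point
it demonstrates is that the completion slot is claimed before the
allocation, so the only recovery needed on allocation failure is
rewinding the cached producer index, which is essentially what
xskq_prod_cancel does.

  #include <errno.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* Toy completion queue: a fixed-size ring tracked by a cached
   * producer index and a consumer index, loosely modelled on the
   * kernel's struct xsk_queue. */
  #define CQ_SIZE 4u

  struct cq {
          unsigned int cached_prod;  /* counts reserved, not-yet-submitted slots */
          unsigned int cons;
  };

  /* Stand-in for xskq_prod_reserve(): claim a slot only if there is room. */
  static int cq_reserve(struct cq *q)
  {
          if (q->cached_prod - q->cons == CQ_SIZE)
                  return -ENOBUFS;
          q->cached_prod++;
          return 0;
  }

  /* Stand-in for xskq_prod_cancel(): undo the latest reservation by
   * rewinding the cached producer index. */
  static void cq_cancel(struct cq *q)
  {
          q->cached_prod--;
  }

  /* The fixed ordering from this patch: reserve first, then allocate;
   * cancel the reservation if the allocation fails, so no cq entry
   * leaks and nothing is ever completed into a full ring. */
  static int xmit_one(struct cq *q, int fail_alloc)
  {
          void *skb;

          if (cq_reserve(q))
                  return -EAGAIN;               /* backpressure: cq is full */

          skb = fail_alloc ? NULL : malloc(64); /* stand-in for xsk_build_skb() */
          if (!skb) {
                  cq_cancel(q);                 /* roll back the reservation */
                  return -ENOMEM;
          }

          /* ... transmit; the skb destructor later submits the reserved
           * slot, which is now guaranteed to exist. */
          free(skb);
          return 0;
  }

  int main(void)
  {
          struct cq q = { 0, 0 };

          printf("ok path:    %d\n", xmit_one(&q, 0));  /* 0 */
          printf("alloc fail: %d\n", xmit_one(&q, 1));  /* -ENOMEM */
          printf("outstanding reservations: %u\n", q.cached_prod - q.cons);  /* 1 */
          return 0;
  }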

Fixes: 67571640 ("xdp: fix possible cq entry leak")
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/bpf/20220614070746.8871-1-ciara.loftus@intel.com
parent 4b7a632a
@@ -538,12 +538,6 @@ static int xsk_generic_xmit(struct sock *sk)
 			goto out;
 		}
 
-		skb = xsk_build_skb(xs, &desc);
-		if (IS_ERR(skb)) {
-			err = PTR_ERR(skb);
-			goto out;
-		}
-
 		/* This is the backpressure mechanism for the Tx path.
 		 * Reserve space in the completion queue and only proceed
 		 * if there is space in it. This avoids having to implement
@@ -552,11 +546,19 @@ static int xsk_generic_xmit(struct sock *sk)
 		spin_lock_irqsave(&xs->pool->cq_lock, flags);
 		if (xskq_prod_reserve(xs->pool->cq)) {
 			spin_unlock_irqrestore(&xs->pool->cq_lock, flags);
-			kfree_skb(skb);
 			goto out;
 		}
 		spin_unlock_irqrestore(&xs->pool->cq_lock, flags);
 
+		skb = xsk_build_skb(xs, &desc);
+		if (IS_ERR(skb)) {
+			err = PTR_ERR(skb);
+			spin_lock_irqsave(&xs->pool->cq_lock, flags);
+			xskq_prod_cancel(xs->pool->cq);
+			spin_unlock_irqrestore(&xs->pool->cq_lock, flags);
+			goto out;
+		}
+
 		err = __dev_direct_xmit(skb, xs->queue_id);
 		if (err == NETDEV_TX_BUSY) {
 			/* Tell user-space to retry the send */