Commit 65567e41 authored by Bart Van Assche, committed by Doug Ledford

RDMA/rxe: Fix a race condition in rxe_requester()

The rxe driver works as follows:
* The send queue, receive queue and completion queues are implemented as
  circular buffers.
* ib_post_send() and ib_post_recv() calls are serialized through a spinlock.
* Removing elements from various queues happens from tasklet
  context. Tasklets are guaranteed to run on at most one CPU. This serializes
  access to these queues. See also rxe_completer(), rxe_requester() and
  rxe_responder().
* rxe_completer() processes the skbs queued onto qp->resp_pkts.
* rxe_requester() handles the send queue (qp->sq.queue).
* rxe_responder() processes the skbs queued onto qp->req_pkts.

Since rxe_drain_req_pkts() consumes qp->req_pkts, the queue processed by
the rxe_responder() tasklet, calling rxe_drain_req_pkts() from
rxe_requester() lets two contexts consume the same queue concurrently,
which is racy. Hence this patch, which removes the rxe_drain_req_pkts()
calls from rxe_requester() and makes the function static to rxe_resp.c.
Reported-by: Moni Shoua <monis@mellanox.com>
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: stable@vger.kernel.org
Signed-off-by: Doug Ledford <dledford@redhat.com>
parent 37cb11ac
@@ -237,7 +237,6 @@ int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 void rxe_release(struct kref *kref);
-void rxe_drain_req_pkts(struct rxe_qp *qp, bool notify);
 int rxe_completer(void *arg);
 int rxe_requester(void *arg);
 int rxe_responder(void *arg);
...
@@ -594,15 +594,8 @@ int rxe_requester(void *arg)
 	rxe_add_ref(qp);
 
 next_wqe:
-	if (unlikely(!qp->valid)) {
-		rxe_drain_req_pkts(qp, true);
+	if (unlikely(!qp->valid || qp->req.state == QP_STATE_ERROR))
 		goto exit;
-	}
-
-	if (unlikely(qp->req.state == QP_STATE_ERROR)) {
-		rxe_drain_req_pkts(qp, true);
-		goto exit;
-	}
 
 	if (unlikely(qp->req.state == QP_STATE_RESET)) {
 		qp->req.wqe_index = consumer_index(qp->sq.queue);
...
@@ -1209,7 +1209,7 @@ static enum resp_states do_class_d1e_error(struct rxe_qp *qp)
 	}
 }
 
-void rxe_drain_req_pkts(struct rxe_qp *qp, bool notify)
+static void rxe_drain_req_pkts(struct rxe_qp *qp, bool notify)
 {
 	struct sk_buff *skb;
...