Commit 5f0b2a60 authored by Mikhail Malygin, committed by Jason Gunthorpe

RDMA/rxe: Prevent access to wr->next ptr after wr is posted to send queue

rxe_post_send_kernel() iterates over the linked list of wrs until
wr->next is NULL.  However, if an interrupt arrives after the last wr is
posted, control may return to this code only after the send completion
callback has run and the wr memory has been freed.

As a result, wr->next may then hold a stale value, leading to a panic.
Store wr->next on the stack before posting the wr.

Fixes: 8700e3e7 ("Soft RoCE driver")
Link: https://lore.kernel.org/r/20200716190340.23453-1-m.malygin@yadro.com
Signed-off-by: Mikhail Malygin <m.malygin@yadro.com>
Signed-off-by: Sergey Kojushev <s.kojushev@yadro.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
parent eb7f84e3
@@ -682,6 +682,7 @@ static int rxe_post_send_kernel(struct rxe_qp *qp, const struct ib_send_wr *wr,
 	unsigned int mask;
 	unsigned int length = 0;
 	int i;
+	struct ib_send_wr *next;
 
 	while (wr) {
 		mask = wr_opcode_mask(wr->opcode, qp);
@@ -698,6 +699,8 @@ static int rxe_post_send_kernel(struct rxe_qp *qp, const struct ib_send_wr *wr,
 			break;
 		}
 
+		next = wr->next;
+
 		length = 0;
 		for (i = 0; i < wr->num_sge; i++)
 			length += wr->sg_list[i].length;
@@ -708,7 +711,7 @@ static int rxe_post_send_kernel(struct rxe_qp *qp, const struct ib_send_wr *wr,
 			*bad_wr = wr;
 			break;
 		}
-		wr = wr->next;
+		wr = next;
 	}
 
 	rxe_run_task(&qp->req.task, 1);