Commit 03fa18a9 authored by Honggang LI, committed by Leon Romanovsky

RDMA/rxe: Fix data copy for IB_SEND_INLINE

For RDMA Send and Write with IB_SEND_INLINE, the memory buffers
specified in the sge list are placed inline in the Send Request.

The data must be copied by the CPU from the virtual addresses that
the sge list's DMA addresses refer to, not from struct page pointers.

Cc: stable@kernel.org
Fixes: 8d7c7c0e ("RDMA: Add ib_virt_dma_to_page()")
Signed-off-by: Honggang LI <honggangli@163.com>
Link: https://lore.kernel.org/r/20240516095052.542767-1-honggangli@163.com
Reviewed-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
parent 056620da
@@ -812,7 +812,7 @@ static void copy_inline_data_to_wqe(struct rxe_send_wqe *wqe,
 	int i;
 
 	for (i = 0; i < ibwr->num_sge; i++, sge++) {
-		memcpy(p, ib_virt_dma_to_page(sge->addr), sge->length);
+		memcpy(p, ib_virt_dma_to_ptr(sge->addr), sge->length);
 		p += sge->length;
 	}
 }