Commit f30bceab authored by Christian König

RDMA: use dma_resv_wait() instead of extracting the fence

Use dma_resv_wait() instead of extracting the exclusive fence and
waiting on it manually.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Jason Gunthorpe <jgg@nvidia.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Leon Romanovsky <leon@kernel.org>
Cc: Maor Gottlieb <maorg@nvidia.com>
Cc: Gal Pressman <galpress@amazon.com>
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
Link: https://patchwork.freedesktop.org/patch/msgid/20220321135856.1331-4-christian.koenig@amd.com
parent 0941a4e3
@@ -16,7 +16,6 @@ int ib_umem_dmabuf_map_pages(struct ib_umem_dmabuf *umem_dmabuf)
 {
 	struct sg_table *sgt;
 	struct scatterlist *sg;
-	struct dma_fence *fence;
 	unsigned long start, end, cur = 0;
 	unsigned int nmap = 0;
 	int i;
@@ -68,11 +67,8 @@ int ib_umem_dmabuf_map_pages(struct ib_umem_dmabuf *umem_dmabuf)
 	 * may be not up-to-date. Wait for the exporter to finish
 	 * the migration.
 	 */
-	fence = dma_resv_excl_fence(umem_dmabuf->attach->dmabuf->resv);
-	if (fence)
-		return dma_fence_wait(fence, false);
-	return 0;
+	return dma_resv_wait_timeout(umem_dmabuf->attach->dmabuf->resv, false,
+				     false, MAX_SCHEDULE_TIMEOUT);
 }
 EXPORT_SYMBOL(ib_umem_dmabuf_map_pages);
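For context, a minimal sketch of the importer-side pattern the hunk above switches to, assuming the pre-5.19 dma_resv_wait_timeout() signature (obj, wait_all, intr, timeout) used by this series; the wrapper name and the 0/-errno normalization are illustrative and not part of the patch.

#include <linux/dma-resv.h>
#include <linux/sched.h>	/* MAX_SCHEDULE_TIMEOUT */

/* Illustrative helper (hypothetical name): wait for the exporter to finish
 * any pending migration before touching the mapped pages. wait_all=false
 * waits on the exclusive fence only, intr=false makes the wait
 * uninterruptible, and MAX_SCHEDULE_TIMEOUT means no time bound.
 */
static int example_wait_for_exporter(struct dma_resv *resv)
{
	long ret;

	ret = dma_resv_wait_timeout(resv, false, false, MAX_SCHEDULE_TIMEOUT);

	/* dma_resv_wait_timeout() returns remaining jiffies (> 0) on success
	 * or a negative error code; normalize to 0/-errno for an int return.
	 */
	return ret < 0 ? ret : 0;
}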