Commit f00b4dad authored by Daniel Vetter, committed by Sumit Semwal

dma-buf: implement vmap refcounting in the interface logic

All drivers which implement this need to have some sort of refcount to
allow concurrent vmap usage. Hence implement this in the dma-buf core.

To protect against concurrent calls we need a lock, which potentially
causes new funny locking inversions. But this shouldn't be a problem
for exporters with statically allocated backing storage, and more
dynamic drivers have decent issues already anyway.

Inspired by some refactoring patches from Aaron Plattner, who
implemented the same idea, but only for drm/prime drivers.

v2: Check in dma_buf_release that no dangling vmaps are left.
Suggested by Aaron Plattner. We might want to do similar checks for
attachments, but that's for another patch. Also fix up ERR_PTR return
for vmap.

v3: Check whether the passed-in vmap address matches with the cached
one for vunmap. Eventually we might want to remove that parameter -
compared to the kmap functions there's no need for the vaddr for
unmapping.  Suggested by Chris Wilson.

v4: Fix a brown-paper-bag bug spotted by Aaron Plattner.

Cc: Aaron Plattner <aplattner@nvidia.com>
Reviewed-by: Aaron Plattner <aplattner@nvidia.com>
Tested-by: Aaron Plattner <aplattner@nvidia.com>
Reviewed-by: Rob Clark <rob@ti.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
parent d895cb1a
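
For illustration only, before the diff itself: a minimal sketch of how an importer might use the refcounted interface introduced by this patch. Everything named example_* below is hypothetical; only dma_buf_vmap()/dma_buf_vunmap() and their error conventions come from the code in this commit. Nested calls just bump the counter and reuse the cached mapping, so the exporter's vmap callback runs at most once per mapping lifetime.

#include <linux/dma-buf.h>
#include <linux/err.h>
#include <linux/string.h>

/*
 * Hypothetical importer helper: copy a range out of a dma-buf through a
 * kernel virtual mapping. With the refcounting done in the dma-buf core,
 * calling this from several paths concurrently is fine - only the first
 * caller reaches the exporter's ->vmap(), later callers get the cached
 * pointer. (begin/end_cpu_access bracketing is elided for brevity.)
 */
static int example_copy_from_buf(struct dma_buf *dmabuf, void *dst,
				 size_t offset, size_t len)
{
	void *vaddr;

	vaddr = dma_buf_vmap(dmabuf);
	if (IS_ERR_OR_NULL(vaddr))	/* no vmap support or vmalloc exhausted */
		return vaddr ? PTR_ERR(vaddr) : -ENOMEM;

	memcpy(dst, vaddr + offset, len);

	/* Drops the refcount; the exporter's ->vunmap() only runs at zero. */
	dma_buf_vunmap(dmabuf, vaddr);
	return 0;
}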
Documentation/dma-buf-sharing.txt
@@ -302,7 +302,11 @@ Access to a dma_buf from the kernel context involves three steps:
      void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
 
    The vmap call can fail if there is no vmap support in the exporter, or if it
-   runs out of vmalloc space. Fallback to kmap should be implemented.
+   runs out of vmalloc space. Fallback to kmap should be implemented. Note that
+   the dma-buf layer keeps a reference count for all vmap access and calls down
+   into the exporter's vmap function only when no vmapping exists, and only
+   unmaps it once. Protection against concurrent vmap/vunmap calls is provided
+   by taking the dma_buf->lock mutex.
 
 3. Finish access
drivers/base/dma-buf.c
@@ -39,6 +39,8 @@ static int dma_buf_release(struct inode *inode, struct file *file)
 
 	dmabuf = file->private_data;
 
+	BUG_ON(dmabuf->vmapping_counter);
+
 	dmabuf->ops->release(dmabuf);
 	kfree(dmabuf);
 	return 0;
@@ -481,12 +483,34 @@ EXPORT_SYMBOL_GPL(dma_buf_mmap);
  */
 void *dma_buf_vmap(struct dma_buf *dmabuf)
 {
+	void *ptr;
+
 	if (WARN_ON(!dmabuf))
 		return NULL;
 
-	if (dmabuf->ops->vmap)
-		return dmabuf->ops->vmap(dmabuf);
-	return NULL;
+	if (!dmabuf->ops->vmap)
+		return NULL;
+
+	mutex_lock(&dmabuf->lock);
+	if (dmabuf->vmapping_counter) {
+		dmabuf->vmapping_counter++;
+		BUG_ON(!dmabuf->vmap_ptr);
+		ptr = dmabuf->vmap_ptr;
+		goto out_unlock;
+	}
+
+	BUG_ON(dmabuf->vmap_ptr);
+
+	ptr = dmabuf->ops->vmap(dmabuf);
+	if (IS_ERR_OR_NULL(ptr))
+		goto out_unlock;
+
+	dmabuf->vmap_ptr = ptr;
+	dmabuf->vmapping_counter = 1;
+
+out_unlock:
+	mutex_unlock(&dmabuf->lock);
+	return ptr;
 }
 EXPORT_SYMBOL_GPL(dma_buf_vmap);
@@ -500,7 +524,16 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
 	if (WARN_ON(!dmabuf))
 		return;
 
-	if (dmabuf->ops->vunmap)
-		dmabuf->ops->vunmap(dmabuf, vaddr);
+	BUG_ON(!dmabuf->vmap_ptr);
+	BUG_ON(dmabuf->vmapping_counter == 0);
+	BUG_ON(dmabuf->vmap_ptr != vaddr);
+
+	mutex_lock(&dmabuf->lock);
+	if (--dmabuf->vmapping_counter == 0) {
+		if (dmabuf->ops->vunmap)
+			dmabuf->ops->vunmap(dmabuf, vaddr);
+		dmabuf->vmap_ptr = NULL;
+	}
+	mutex_unlock(&dmabuf->lock);
 }
 EXPORT_SYMBOL_GPL(dma_buf_vunmap);
include/linux/dma-buf.h
@@ -119,8 +119,10 @@ struct dma_buf {
 	struct file *file;
 	struct list_head attachments;
 	const struct dma_buf_ops *ops;
-	/* mutex to serialize list manipulation and attach/detach */
+	/* mutex to serialize list manipulation, attach/detach and vmap/unmap */
 	struct mutex lock;
+	unsigned vmapping_counter;
+	void *vmap_ptr;
 	void *priv;
 };
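
On the exporter side, the gain is that drivers no longer need private vmap refcounting of their own. Below is a hedged sketch of what a simple exporter with statically allocated, already-mapped backing storage might provide as its vmap/vunmap ops; the example_buffer structure and all example_* names are made up for the example, only struct dma_buf_ops and its .vmap/.vunmap hooks are real.

#include <linux/dma-buf.h>

/* Hypothetical exporter state: backing storage mapped once at creation. */
struct example_buffer {
	void *kvaddr;	/* e.g. from vmalloc() when the buffer was allocated */
};

/* The core calls this only when vmapping_counter goes 0 -> 1, under
 * dmabuf->lock, so no driver-private count or lock is needed. */
static void *example_vmap(struct dma_buf *dmabuf)
{
	struct example_buffer *buf = dmabuf->priv;

	return buf->kvaddr;
}

/* Called only when the last dma_buf_vunmap() drops the count to zero. */
static void example_vunmap(struct dma_buf *dmabuf, void *vaddr)
{
	/* Nothing to tear down: the mapping lives as long as the buffer. */
}

static const struct dma_buf_ops example_dmabuf_ops = {
	/* .map_dma_buf, .unmap_dma_buf, .release, ... elided */
	.vmap	= example_vmap,
	.vunmap	= example_vunmap,
};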