- 13 Dec, 2012 14 commits
-
-
YoungJun Cho authored
This patch removes the vaddr member from the exynos_drm_overlay structure, along with the code that used it, as a cleanup. Signed-off-by: YoungJun Cho <yj44.cho@samsung.com> Signed-off-by: Inki Dae <inki.dae@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
-
Inki Dae authored
Changelog v3: just code cleanup. Changelog v2: fix the arguments to the dma_mmap_attrs function - use pages instead of kvaddr, because kvaddr is NULL with DMA_ATTR_NO_KERNEL_MAPPING. Changelog v1: When a gem allocation is requested, a kernel space mapping isn't needed. But when one is needed, such as for a console framebuffer, the physical pages can be mapped into kernel space through the vmap function. Signed-off-by: Inki Dae <inki.dae@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
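A minimal sketch of the pattern this describes, using the 3.8-era dma-attrs API; the helper names are illustrative, and the page array is assumed to be recovered from the allocation by driver-specific code:

```c
#include <linux/dma-mapping.h>
#include <linux/dma-attrs.h>
#include <linux/vmalloc.h>

/* Allocate GEM backing memory without creating a kernel mapping. */
static void *gem_alloc_nomap(struct device *dev, size_t size,
			     dma_addr_t *dma_addr)
{
	struct dma_attrs attrs;

	init_dma_attrs(&attrs);
	/* Skip the kernel virtual mapping; kvaddr would be NULL anyway. */
	dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &attrs);

	/* Returns an opaque cookie rather than a usable kernel address. */
	return dma_alloc_attrs(dev, size, dma_addr, GFP_KERNEL, &attrs);
}

/* Map the physical pages into kernel space only for users that really
 * need a kernel virtual address, such as a console framebuffer. */
static void *gem_kmap_on_demand(struct page **pages, unsigned int npages)
{
	return vmap(pages, npages, VM_MAP, PAGE_KERNEL);
}
```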
-
Inki Dae authored
This patch releases allocated resources correctly. Signed-off-by: Inki Dae <inki.dae@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
-
Prathyush K authored
Changelog v2: Added details of the original patches in the chromium kernel. Changelog v1: When fimd is turned off, we disable the clocks, which stops the dma. Now if we remove the current framebuffer, the overlay cannot be disabled, but the current framebuffer will still be freed. When fimd resumes, the dma will continue from where it left off and will throw a PAGE FAULT since the memory was freed. This patch fixes the above problem by disabling the fimd windows before disabling the fimd clocks. It also keeps track of which windows were active by setting their 'resume' flag. When fimd resumes, any window with the resume flag set is enabled again. Now if a current fb is removed while fimd is off, fimd_win_disable will set the 'resume' flag of that window to zero and return, so when fimd resumes, that window will not be resumed. This patch is based on the following two patches: http://git.chromium.org/gitweb/?p=chromiumos/third_party/kernel.git;a=commitdiff;h=341e973c967304976a762211b6465b0074de62ef http://git.chromium.org/gitweb/?p=chromiumos/third_party/kernel.git;a=commitdiff;h=cfa22e49b7408547c73532c4bb03de47cc034a05 These two patches are rebased onto the current kernel, with additional changes such as removing the 'fimd_win_commit' call from the resume function, since this is taken care of by the encoder dpms, and modifying the resume flag in win_disable. Signed-off-by: Prathyush K <prathyush.k@samsung.com> Signed-off-by: Sean Paul <seanpaul@chromium.org> Signed-off-by: Stephane Marchesin <marcheu@chromium.org> Signed-off-by: Inki Dae <inki.dae@samsung.com>
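A sketch of the resume-flag scheme this describes (the mixer patch below follows the same pattern); the window bookkeeping and register helpers are hypothetical placeholders, not the actual fimd code:

```c
#define WINDOWS_NR 5

static void hw_win_enable(int win);	/* hypothetical register writes */
static void hw_win_disable(int win);

struct win_data {
	bool enabled;	/* window is currently scanning out */
	bool resume;	/* window should be re-enabled on resume */
};

static struct win_data windows[WINDOWS_NR];
static bool suspended;

static void fimd_suspend_windows(void)
{
	int i;

	/* Stop per-window DMA before the clocks are gated. */
	for (i = 0; i < WINDOWS_NR; i++) {
		windows[i].resume = windows[i].enabled;
		if (windows[i].enabled)
			hw_win_disable(i);
	}
	suspended = true;
}

static void fimd_resume_windows(void)
{
	int i;

	suspended = false;
	/* Re-enable only the windows that were active at suspend time. */
	for (i = 0; i < WINDOWS_NR; i++) {
		if (windows[i].resume) {
			windows[i].resume = false;
			hw_win_enable(i);
		}
	}
}

static void fimd_win_disable(int win)
{
	if (suspended) {
		/* Device is off: just make sure this window is not brought
		 * back on resume, since its framebuffer may be freed. */
		windows[win].resume = false;
		return;
	}
	hw_win_disable(win);
	windows[win].enabled = false;
}
```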
-
Prathyush K authored
When the mixer is turned off, we disable the clocks, which stops the dma. Now if we remove the current framebuffer, the overlay cannot be disabled, but the current framebuffer will still be freed. When the mixer resumes, the dma will continue from where it left off and will throw a PAGE FAULT since the memory was freed. This patch fixes the above problem by disabling the mixer windows before disabling the mixer clocks. It also keeps track of which windows were active by setting their 'resume' flag. When the mixer resumes, any window with the resume flag set is enabled again. Now if a current fb is removed while the mixer is off, mixer_win_disable will set the 'resume' flag of that window to zero and return, so when the mixer resumes, that window will not be resumed. Signed-off-by: Prathyush K <prathyush.k@samsung.com> Signed-off-by: Inki Dae <inki.dae@samsung.com>
-
Prathyush K authored
It is more optimal to use wait queues while waiting for vsync, so that the current task is put to sleep. This way, the task won't hog the CPU while waiting. We use wait_event_timeout and not an interruptible variant, since we don't want the function to exit when a signal is pending (e.g. drm release). This patch modifies the wait for vblank function of fimd. Signed-off-by: Prathyush K <prathyush.k@samsung.com> Signed-off-by: Inki Dae <inki.dae@samsung.com>
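A rough sketch of the mechanism (the field names follow the pattern, not necessarily the driver's exact ones):

```c
#include <linux/wait.h>
#include <linux/jiffies.h>
#include <linux/atomic.h>
#include <linux/printk.h>

static DECLARE_WAIT_QUEUE_HEAD(wait_vsync_queue);
static atomic_t wait_vsync_event;

static void wait_for_vblank(void)
{
	atomic_set(&wait_vsync_event, 1);

	/*
	 * Sleep until the vblank interrupt clears the flag. Deliberately
	 * not interruptible, so a pending signal (e.g. drm release) can't
	 * abort the wait; give up after ~50ms if no vblank arrives.
	 */
	if (!wait_event_timeout(wait_vsync_queue,
				!atomic_read(&wait_vsync_event),
				msecs_to_jiffies(50)))
		pr_warn("vblank wait timed out\n");
}

static void vblank_irq_handler(void)
{
	/* Called from the vblank interrupt: release any waiter. */
	if (atomic_read(&wait_vsync_event)) {
		atomic_set(&wait_vsync_event, 0);
		wake_up(&wait_vsync_queue);
	}
}
```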
-
Prathyush K authored
It is more optimal to use wait queues while waiting for vsync, so that the current task is put to sleep. This way, the task won't hog the CPU while waiting. We use wait_event_timeout and not an interruptible variant, since we don't want the function to exit when a signal is pending (e.g. drm release). This patch modifies the wait for vblank function of the mixer. Signed-off-by: Prathyush K <prathyush.k@samsung.com> Signed-off-by: Inki Dae <inki.dae@samsung.com>
-
Prathyush K authored
The wait for vblank callback is moved from overlay_ops to manager_ops for fimd. Signed-off-by: Prathyush K <prathyush.k@samsung.com> Signed-off-by: Inki Dae <inki.dae@samsung.com>
-
Prathyush K authored
The wait_for_vblank callback of hdmi and mixer is now moved from overlay_ops to manager_ops. Signed-off-by: Prathyush K <prathyush.k@samsung.com> Signed-off-by: Inki Dae <inki.dae@samsung.com>
-
Prathyush K authored
Changelog v2: remove an unnecessary wait_for_vblank call - with this patch, the wait_for_vblank callback is moved from overlay ops to manager ops, so the call should be removed, and there is no need to wait for the vblank signal at plane disable. Changelog v1: The wait_for_vblank callback is moved from overlay ops to manager ops of the exynos drm driver. Also, the check for DPMS OFF of the encoder is removed before calling wait_for_vblank. Signed-off-by: Prathyush K <prathyush.k@samsung.com> Signed-off-by: Inki Dae <inki.dae@samsung.com>
-
Inki Dae authored
With this patch, when dma_buf_unmap_attachment is called, the pages of the sgt aren't unmapped from the iommu table; instead, that is done when dma_buf_detach is called. It also removes the exynos_get_sgt function, which was used to get a cloned sgt, and uses the attachment's sgt instead. This patch resolves a performance deterioration issue seen when a v4l2-based driver uses a buffer imported from gem. This change is derived from videobuf2-dma-contig.c Signed-off-by: Inki Dae <inki.dae@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
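A sketch of the caching scheme, following the videobuf2-dma-contig shape the commit cites; the attachment-private struct is illustrative, and its sg table is assumed to be filled in at attach time:

```c
#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/err.h>

struct exporter_attachment {
	struct sg_table sgt;		/* per-attachment table, built at attach */
	enum dma_data_direction dir;	/* DMA_NONE until first map */
};

static struct sg_table *exporter_map_dma_buf(struct dma_buf_attachment *attach,
					     enum dma_data_direction dir)
{
	struct exporter_attachment *a = attach->priv;

	/* Reuse the mapping created on the first call. */
	if (a->dir == dir)
		return &a->sgt;

	if (a->dir != DMA_NONE)
		dma_unmap_sg(attach->dev, a->sgt.sgl, a->sgt.orig_nents, a->dir);

	if (!dma_map_sg(attach->dev, a->sgt.sgl, a->sgt.orig_nents, dir))
		return ERR_PTR(-EIO);

	a->dir = dir;
	return &a->sgt;
}

static void exporter_unmap_dma_buf(struct dma_buf_attachment *attach,
				   struct sg_table *sgt,
				   enum dma_data_direction dir)
{
	/* Intentionally empty: the pages stay mapped in the iommu table
	 * until dma_buf_detach() tears the attachment down. */
}

static void exporter_detach(struct dma_buf *dmabuf,
			    struct dma_buf_attachment *attach)
{
	struct exporter_attachment *a = attach->priv;

	if (a->dir != DMA_NONE)
		dma_unmap_sg(attach->dev, a->sgt.sgl, a->sgt.orig_nents, a->dir);
	sg_free_table(&a->sgt);
	kfree(a);
}
```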
-
Rahul Sharma authored
This patch moves the exynos-drm-hdmi platform device registration into the drm driver. When DT is enabled, platform devices need to be registered within the driver code. This patch fits the requirements of both DT and non-DT based drm drivers. Signed-off-by: Rahul Sharma <rahul.sharma@samsung.com> Signed-off-by: Inki Dae <inki.dae@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
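A minimal sketch of driver-side registration, so both DT and non-DT boots end up with the device present; the surrounding module code is illustrative, and the same shape applies to the exynos-drm device in the following patch:

```c
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/err.h>

static struct platform_device *exynos_drm_hdmi_pdev;

static int __init exynos_drm_init(void)
{
	/* Create the device from the driver instead of from board files,
	 * so it also exists when booting with DT. */
	exynos_drm_hdmi_pdev =
		platform_device_register_simple("exynos-drm-hdmi", -1, NULL, 0);
	if (IS_ERR(exynos_drm_hdmi_pdev))
		return PTR_ERR(exynos_drm_hdmi_pdev);

	/* ...then register the platform driver(s) as before... */
	return 0;
}

static void __exit exynos_drm_exit(void)
{
	platform_device_unregister(exynos_drm_hdmi_pdev);
}

module_init(exynos_drm_init);
module_exit(exynos_drm_exit);
```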
-
Rahul Sharma authored
This patch moves the exynos-drm platform device registration into the drm driver. When DT is enabled, platform devices need to be registered within the driver code. This patch fits the requirements of both DT and non-DT based drm drivers. Signed-off-by: Rahul Sharma <rahul.sharma@samsung.com> Signed-off-by: Inki Dae <inki.dae@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
-
git://people.freedesktop.org/~agd5f/linux
Dave Airlie authored
* 'drm-next-3.8' of git://people.freedesktop.org/~agd5f/linux:
  drm/radeon: fix fence driver for dma ring when wb is disabled
  drm/radeon/si: add VM CS checker support for CP DMA
  drm/radeon/cayman: add VM CS checker support for CP DMA
  drm/radeon: add support for CP DMA packet to evergreen CS checker
  drm/radeon: add support for CP DMA packet to r6xx/r7xx CS checker
  drm/radeon: add register headers for CP DMA on r6xx-SI
  drm/radeon: improve mc_stop/mc_resume on r5xx-r7xx
  drm/radeon: fix amd afusion gpu setup aka sumo v2
  drm/radeon: do not move bo to different placement at each cs
-
- 12 Dec, 2012 9 commits
-
-
Jerome Glisse authored
The dma ring can't write to registers, so it has to write its fence value to memory. This ensures that the fence driver doesn't try to use a scratch register for the dma ring. Should fix: https://bugs.freedesktop.org/show_bug.cgi?id=58166 Signed-off-by: Jerome Glisse <jglisse@redhat.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Deucher authored
We need to verify copies involving registers. Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Deucher authored
We need to verify copies involving registers. Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Deucher authored
Currently only memory and GDS transfers are allowed. Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Deucher authored
Currently only memory to memory transfers are allowed. Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Deucher authored
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Deucher authored
Along the same lines as what was done for evergreen+ in the last kernel. Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Jerome Glisse authored
Set the proper number of tile pipes, which should be a multiple of the pipe count, depending on the number of se engines. Fix: https://bugs.freedesktop.org/show_bug.cgi?id=56405 https://bugs.freedesktop.org/show_bug.cgi?id=56720 v2: Don't change sumo2 Signed-off-by: Jerome Glisse <jglisse@redhat.com> Cc: stable@vger.kernel.org Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
-
Jerome Glisse authored
The bo creation placement is where the bo will initially be. Instead of trying to move the bo at each command stream, leave this work to another worker thread that will use more advanced heuristics. agd5f: remove leftover unused variable Signed-off-by: Jerome Glisse <jglisse@redhat.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
-
- 10 Dec, 2012 17 commits
-
-
git://people.freedesktop.org/~agd5f/linux
Dave Airlie authored
Alex writes: "adds support for the asynchronous DMA engines on r6xx-SI. These engines are used for ttm bo moves and VM page table updates currently. They could also be exposed via the CS ioctl for userspace use, but I haven't had a chance to add proper CS checker patches for them yet. These patches have been tested extensively internally for months, so they should be pretty solid."

* 'drm-next-3.8' of git://people.freedesktop.org/~agd5f/linux:
  drm/radeon: use DMA engine for VM page table updates on SI
  drm/radeon: add dma engine support for vm pt updates on si (v2)
  drm/radeon: use DMA engine for VM page table updates on cayman/TN
  drm/radeon: add dma engine support for vm pt updates on ni (v5)
  drm/radeon: use async dma for ttm buffer moves on 6xx-SI
  drm/radeon/kms: add support for dma rings to radeon_test_moves()
  drm/radeon/kms: Add initial support for async DMA on SI
  drm/radeon/kms: Add initial support for async DMA on cayman/TN
  drm/radeon/kms: Add initial support for async DMA on evergreen
  drm/radeon/kms: Add initial support for async DMA on r6xx/r7xx
-
Alex Deucher authored
The DMA engine has special packets to facilitate this, and it also keeps the 3D engine free for other things. Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Deucher authored
Async DMA has a special packet for contiguous pt updates, which saves overhead. v2: rebase Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Deucher authored
The DMA engine has special packets to facilitate this, and it also keeps the 3D engine free for other things. Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Deucher authored
Async DMA has a special packet for contiguous pt updates, which saves overhead. v2: leave the CP method enabled for now, as doing the updates in the DMA rings is not working properly yet. v3: update for 2 level pts v4: rebase v5: drop the pte/pde packet; it doesn't seem to work on NI. Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Deucher authored
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Deucher authored
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Deucher authored
Pretty much the same as cayman. Some changes to the copy packets. Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Deucher authored
There are 2 async DMA engines on cayman, one at 0xd000 and one at 0xd800. The programming interface is the same as on evergreen; however, there are some changes to the commands for using vmids. Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Deucher authored
Pretty similar to 6xx/7xx, except that the count field in the packet header and the max IB size have increased. Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Deucher authored
Uses the new multi-ring infrastructure. 6xx/7xx has a single async DMA ring. Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Maarten Lankhorst authored
All items on the lru list are always reservable, so this is a stupid thing to keep. Not only that, it is used in a way which would guarantee deadlocks if it were ever to be set to block on reserve. This is a lot of churn, but mostly because of the removal of the argument, which can be nested arbitrarily deeply in many places. No code changes in this patch except the removal of the no_wait_reserve argument; the previous patch removed the use of no_wait_reserve. v2: - Warn if -EBUSY is returned on reservation; all objects on the list should be reservable. Adjusted patch slightly due to conflicts. v3: - Focus on no_wait_reserve removal only. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Maarten Lankhorst authored
Replace the goto loop with a simple for each loop, and only run the delayed destroy cleanup if we can reserve the buffer first. No race occurs, since the lru lock is never dropped any more. An empty list and a list full of unreservable buffers both cause -EBUSY to be returned, which is identical to the previous situation, because previously buffers on the lru list were always guaranteed to be reservable. This should work since ttm currently guarantees that items on the lru are always reservable, and reserving items blockingly while holding some bo is enough to run into a deadlock. Currently this is not a concern, since removal from the lru list and reservations are always done atomically, but when this guarantee no longer holds, we have to handle this situation or end up with possible deadlocks. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
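The shape of the loop change described here and in the next patch, with bo_reserve_nowait() and bo_evict() as hypothetical stand-ins for the real ttm helpers:

```c
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/errno.h>

struct buf_obj {			/* stand-in for ttm_buffer_object */
	struct list_head lru;
};

static int bo_reserve_nowait(struct buf_obj *bo);	/* hypothetical */
static int bo_evict(struct buf_obj *bo);		/* hypothetical */

static int evict_first(struct list_head *lru_list, spinlock_t *lru_lock)
{
	struct buf_obj *bo;
	/* An empty list and a list of unreservable buffers look alike. */
	int ret = -EBUSY;

	spin_lock(lru_lock);
	list_for_each_entry(bo, lru_list, lru) {
		/* Non-blocking reserve: skip buffers someone else holds. */
		ret = bo_reserve_nowait(bo);
		if (!ret)
			break;
	}
	spin_unlock(lru_lock);

	if (ret)
		return ret;

	return bo_evict(bo);
}
```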
-
Maarten Lankhorst authored
Replace the while loop with a simple for each loop, and only run the delayed destroy cleanup if we can reserve the buffer first. No race occurs, since the lru lock is never dropped any more. An empty list and a list full of unreservable buffers both cause -EBUSY to be returned, which is identical to the previous situation. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Maarten Lankhorst authored
By removing the unlocking of the lru and retaking it immediately, a race is removed where the bo is taken off the swap list or the lru list between the unlock and relock. As such, the cleanup_refs code can be simplified: it will attempt to call ttm_bo_wait non-blockingly, and if that fails it will drop the locks and perform a blocking wait, or return an error if no_wait_gpu was set. The need for looping is also eliminated, since swapout and evict_mem_first will always follow the destruction path, and no new fence is allowed to be attached. As far as I can see this may already have been the case, but the unlocking/relocking required a complicated loop to deal with re-reservation. Changes since v1: - Simplify the no_wait_gpu case by folding it in with empty ddestroy. - Hold a reservation while calling ttm_bo_cleanup_memtype_use again. Changes since v2: - Do not remove the bo from the lru list while waiting Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Maarten Lankhorst authored
The few places that care should have those checks instead. This allows destruction of bo backed memory without a reservation. It's required to be able to rework the delayed destroy path, as it is no longer guaranteed to hold a reservation before unlocking. However, any previous wait is still guaranteed to complete, and it's one of the last things to be done before the buffer object is freed. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Maarten Lankhorst authored
This requires changing the order in ttm_bo_cleanup_refs_or_queue to take the reservation first, as there is otherwise no race-free way to take the lru lock before the fence_lock. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-