- 25 Jul, 2017 7 commits
-
Zhang, Jerry authored
v2: fix the SOS loading failure for PSP v3.1 Signed-off-by: Junwei Zhang <Jerry.Zhang@amd.com> Cc: stable@vger.kernel.org Acked-by: Alex Deucher <alexander.deucher@amd.com> (v1) Acked-by: Huang Rui <ray.huang@amd.com> (v1) Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Nicolai Hähnle authored
Copy the approach taken by gfx8, which simplifies the code, and set the instance index properly. The latter is required for debugging, e.g. for reading wave status by UMR. Signed-off-by: Nicolai Hähnle <nicolai.haehnle@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Junwei Zhang authored
Signed-off-by: Junwei Zhang <Jerry.Zhang@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Junwei Zhang authored
Signed-off-by: Junwei Zhang <Jerry.Zhang@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Junwei Zhang authored
Signed-off-by: Junwei Zhang <Jerry.Zhang@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Junwei Zhang authored
Signed-off-by: Junwei Zhang <Jerry.Zhang@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Xie authored
In RCU read-side critical sections, blocking or sleeping is prohibited.

v2: Unlock RCU for the code path where result==NULL. (David Zhou)
    Update subject

Tested-by and Reported-by: Dave Airlie <airlied@redhat.com>

[ 141.965723] =============================
[ 141.965724] WARNING: suspicious RCU usage
[ 141.965726] 4.12.0-rc7 #221 Not tainted
[ 141.965727] -----------------------------
[ 141.965728] /home/airlied/devel/kernel/linux-2.6/include/linux/rcupdate.h:531 Illegal context switch in RCU read-side critical section!
[ 141.965730] other info that might help us debug this:
[ 141.965731] rcu_scheduler_active = 2, debug_locks = 0
[ 141.965732] 1 lock held by amdgpu_cs:0/1332:
[ 141.965733] #0: (rcu_read_lock){......}, at: [<ffffffffa01a0d07>] amdgpu_bo_list_get+0x0/0x109 [amdgpu]
[ 141.965774] stack backtrace:
[ 141.965776] CPU: 6 PID: 1332 Comm: amdgpu_cs:0 Not tainted 4.12.0-rc7 #221
[ 141.965777] Hardware name: To be filled by O.E.M. To be filled by O.E.M./M5A97 R2.0, BIOS 2603 06/26/2015
[ 141.965778] Call Trace:
[ 141.965782]  dump_stack+0x68/0x92
[ 141.965785]  lockdep_rcu_suspicious+0xf7/0x100
[ 141.965788]  ___might_sleep+0x56/0x1fc
[ 141.965790]  __might_sleep+0x68/0x6f
[ 141.965793]  __mutex_lock+0x4e/0x7b5
[ 141.965817]  ? amdgpu_bo_list_get+0xa4/0x109 [amdgpu]
[ 141.965820]  ? lock_acquire+0x125/0x1b9
[ 141.965844]  ? amdgpu_bo_list_set+0x464/0x464 [amdgpu]
[ 141.965846]  mutex_lock_nested+0x16/0x18
[ 141.965848]  ? mutex_lock_nested+0x16/0x18
[ 141.965872]  amdgpu_bo_list_get+0xa4/0x109 [amdgpu]
[ 141.965895]  amdgpu_cs_ioctl+0x4a0/0x17dd [amdgpu]
[ 141.965898]  ? radix_tree_node_alloc.constprop.11+0x77/0xab
[ 141.965916]  drm_ioctl+0x264/0x393 [drm]
[ 141.965939]  ? amdgpu_cs_find_mapping+0x83/0x83 [amdgpu]
[ 141.965942]  ? trace_hardirqs_on_caller+0x16a/0x186

Signed-off-by: Alex Xie <AlexBin.Xie@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
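A minimal, self-contained sketch of the pattern behind this kind of fix (hypothetical my_obj/my_obj_lookup_and_lock names, not the actual amdgpu_bo_list code): take a reference while still inside the RCU read-side critical section, drop rcu_read_lock() on every path, including the lookup-failure one, and only then take the sleeping mutex.

  #include <linux/idr.h>
  #include <linux/kref.h>
  #include <linux/mutex.h>
  #include <linux/rcupdate.h>

  struct my_obj {
      struct kref refcount;
      struct mutex lock;
  };

  static struct my_obj *my_obj_lookup_and_lock(struct idr *table, int id)
  {
      struct my_obj *obj;

      rcu_read_lock();
      obj = idr_find(table, id);
      if (obj && !kref_get_unless_zero(&obj->refcount))
          obj = NULL;         /* object is being destroyed, treat as missing */
      rcu_read_unlock();      /* unlock on both paths, including obj == NULL */

      if (obj)
          mutex_lock(&obj->lock); /* sleeping is legal again here */
      return obj;
  }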
-
- 14 Jul, 2017 33 commits
-
Rex Zhu authored
Signed-off-by: Rex Zhu <Rex.Zhu@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Christian König authored
The test was relaxed a bit too much. Signed-off-by: Christian König <christian.koenig@amd.com> Acked-by: Tom St Denis <tom.stdenis@amd.com> Reviewed-and-Tested-by: Roger He <Hongbo.He@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Christian König authored
No need to try to map them every time. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Christian König authored
When a BO is moved or destroyed it shouldn't be kmapped any more. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
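As a rough sketch of what "not kmapped any more" means in TTM terms (hypothetical helper name; the real change lives in the amdgpu BO move/destroy paths): drop any cached kernel mapping, since after a move or free it no longer points at the BO's current backing store.

  #include <drm/ttm/ttm_bo_api.h>

  /* Illustrative only: tear down a cached kmap of a BO's backing store. */
  static void my_bo_kunmap_cached(struct ttm_bo_kmap_obj *kmap)
  {
      if (kmap->virtual) {
          ttm_bo_kunmap(kmap);
          kmap->virtual = NULL;
      }
  }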
-
Christian König authored
No need to do this after every single update. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Christian König authored
Handy for debugging. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Deucher authored
Printing a warning into the logs that we will certainly run into a BUG() is complete nonsense; the BUG() is more than noisy enough. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Christian König authored
We need to wait with the correct owner on unmap operations, otherwise we can run into VM faults. Also always wait for the page directory, since this is where the reservation object comes from. So rename the function to amdgpu_vm_wait_pd as well. Signed-off-by: Christian König <christian.koenig@amd.com> Acked-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Christian König authored
We don't have any update fence in that case, so the need for flushing isn't detected automatically. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Deucher authored
Spreading them causes performance regressions when using compute queues on Polaris 11. Cc: Jim Qu <jim.qu@amd.com> Acked-by: Jim Qu <Jim.Qu@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Jim Qu authored
Signed-off-by: Jim Qu <Jim.Qu@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Dan Carpenter authored
This is just future-proofing code, not something that can be triggered in real life. We're testing to make sure we don't shift-wrap when we do "1ull << i", so "i" has to be in the 0-63 range. If it's 64 then we have gone too far. Acked-by: Christian König <christian.koenig@amd.com> Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
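A hedged, self-contained sketch of the kind of guard being described (names are illustrative, not the actual amdgpu code): "1ull << i" is only defined for i in 0..63, so reject 64 and above before shifting.

  #include <linux/errno.h>
  #include <linux/types.h>

  static int set_bit_in_mask(u64 *mask, unsigned int i)
  {
      if (i >= 64)            /* shifting by 64 or more would be undefined */
          return -EINVAL;

      *mask |= 1ULL << i;
      return 0;
  }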
-
Alex Deucher authored
This allows us to read the vbios image directly from ROM. This is already implemented for other asics, but was not yet available for SI. Acked-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Deucher authored
Maybe a leftover from bringup? Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Deucher authored
Rather than the legacy atombios version. Acked-by: Chunming Zhou <david1.zhou@amd.com> Acked-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Deucher authored
The information has moved to different tables, notably smu_info for core refclk and umc_info for mem refclk. Acked-by: Chunming Zhou <david1.zhou@amd.com> Acked-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Jay Cornwall authored
A subset of VM fault types currently sends retry XNACK to the client. This causes a storm of interrupts from the VM to the host. Until the storm is throttled by other means, send no-retry XNACK for all fault types instead. There is no change in behavior for the client, which will stall indefinitely with the current configuration in any case. This improves system stability under GC or MMHUB faults. Signed-off-by: Jay Cornwall <Jay.Cornwall@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Reviewed-by: John Bridgman <John.Bridgman@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Acked-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Yong Zhao authored
Signed-off-by: Yong Zhao <yong.zhao@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Reviewed-by: Oded Gabbay <oded.gabbay@gmail.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Felix Kuehling authored
Set a configurable SDMA phase quantum when enabling SDMA context switching. The default value significantly reduces SDMA latency in page table updates when user-mode SDMA queues have concurrent activity, compared to the initial HW setting. Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Reviewed-by: Andres Rodriguez <andres.rodriguez@amd.com> Reviewed-by: Shaoyun Liu <shaoyun.liu@amd.com> Acked-by: Chunming Zhou <david1.zhou@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Felix Kuehling authored
Enable SDMA context switching on CIK (copied from sdma_v3_0.c). Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
shaoyunl authored
For the GFX context, the ATC bit in SDMA*_GFX_VIRTUAL_ADDRESS can be cleared to operate in VM mode. For the RLC context, to support ATC mode, the ATC bit in SDMA*_RLC*_VIRTUAL_ADDRESS should be set. The SDMA_CNTL.ATC_L1_ENABLE bit is a global setting that enables the L1-L2 translation for ATC addresses. Signed-off-by: shaoyun liu <shaoyun.liu@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Michel Dänzer authored
This gives BOs which haven't been accessed by the CPU since they were moved to visible VRAM another chance to stay in VRAM when another BO needs to go to visible VRAM. This should allow BOs to stay in VRAM longer in some cases.

v2:
* Only do this for BOs which don't have the AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED flag set.

Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
John Brooks authored
There is no need for page faults to force BOs into visible VRAM if it's full, and the time it takes to do so is great enough to cause noticeable stuttering. Add GTT as a possible placement so that if visible VRAM is full, page faults move BOs to GTT instead of evicting other BOs from VRAM. Suggested-by: Michel Dänzer <michel.daenzer@amd.com> Signed-off-by: John Brooks <john@fastquake.com> Reviewed-by: Michel Dänzer <michel.daenzer@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Michel Dänzer <michel.daenzer@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
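A sketch of the placement idea in 2017-era TTM terms (hypothetical helper; not the actual amdgpu_bo_fault_reserve_notify code): list the CPU-visible part of VRAM first and GTT second, so TTM can fall back to GTT rather than evict other BOs when the visible segment is full.

  #include <drm/ttm/ttm_placement.h>

  static void fault_build_places(struct ttm_place places[2],
                                 unsigned int visible_vram_pages)
  {
      /* first choice: the CPU-visible part of VRAM */
      places[0].fpfn = 0;
      places[0].lpfn = visible_vram_pages;
      places[0].flags = TTM_PL_FLAG_VRAM | TTM_PL_FLAG_WC;

      /* fallback: GTT, instead of evicting other BOs from VRAM */
      places[1].fpfn = 0;
      places[1].lpfn = 0;
      places[1].flags = TTM_PL_FLAG_TT | TTM_PL_FLAG_CACHED;
  }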
-
John Brooks authored
When a BO is moved to VRAM, clear AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED. This allows it to potentially later move to invisible VRAM if the CPU does not access it again.

Setting the CPU_ACCESS flag in amdgpu_bo_fault_reserve_notify() also means that we can remove the loop to restrict lpfn to the end of visible VRAM, because amdgpu_ttm_placement_init() will do it for us.

v3 [Michel Dänzer]:
* Use AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED instead of a new flag (Christian König)
* Clear flag in amdgpu_bo_move instead of amdgpu_move_ram_vram (Christian)
* Explicitly mention amdgpu_bo_fault_reserve_notify in amdgpu_bo_move
* Also clear flag in amdgpu_bo_create_restricted

Suggested-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: John Brooks <john@fastquake.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
John Brooks authored
The BO move throttling code is designed to allow VRAM to fill quickly if it is relatively empty. However, this does not take into account situations where the visible VRAM is smaller than total VRAM, and total VRAM may not be close to full but the visible VRAM segment is under pressure. In such situations, visible VRAM would experience unrestricted swapping and performance would drop.

Add a separate counter specifically for moves involving visible VRAM, and check it before moving BOs there.

v2: Only perform calculations for separate counter if visible VRAM is smaller than total VRAM. (Michel Dänzer)

v3 [Michel Dänzer]:
* Use BO's location rather than the AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED flag to determine whether to account a move for visible VRAM in most cases.
* Use a single if (adev->mc.visible_vram_size < adev->mc.real_vram_size) { block in amdgpu_cs_get_threshold_for_moves.

Fixes: 95844d20 ("drm/amdgpu: throttle buffer migrations at CS using a fixed MBps limit (v2)")
Signed-off-by: John Brooks <john@fastquake.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
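A self-contained sketch of the dual-counter idea (names and structure are illustrative, not the actual amdgpu_cs_get_threshold_for_moves logic): account bytes moved into visible VRAM separately, and only apply the extra limit when visible VRAM is actually smaller than total VRAM.

  #include <linux/types.h>

  struct move_budget {
      u64 total_bytes;        /* budget for all BO moves in this submission */
      u64 visible_bytes;      /* extra budget for moves into visible VRAM */
  };

  static bool may_move_bo(const struct move_budget *budget,
                          u64 bytes_moved, u64 bytes_moved_vis,
                          u64 bo_size, bool dst_is_visible_vram,
                          u64 visible_vram_size, u64 real_vram_size)
  {
      if (bytes_moved + bo_size > budget->total_bytes)
          return false;

      /* only throttle the visible segment separately when it is
       * a true subset of total VRAM */
      if (dst_is_visible_vram &&
          visible_vram_size < real_vram_size &&
          bytes_moved_vis + bo_size > budget->visible_bytes)
          return false;

      return true;
  }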
-
John Brooks authored
Allow specifying a limit on visible VRAM via a module parameter. This is helpful for testing performance under visible VRAM pressure. v2: Add cast to 64-bit (Christian König) Signed-off-by: John Brooks <john@fastquake.com> Reviewed-by: Michel Dänzer <michel.daenzer@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Michel Dänzer <michel.daenzer@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
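A minimal sketch of such a parameter, including the v2 point about the 64-bit cast (the parameter and variable names are assumptions for illustration, not necessarily the ones the patch adds):

  #include <linux/module.h>
  #include <linux/moduleparam.h>

  static unsigned int vis_vramlimit;  /* hypothetical name, in megabytes; 0 = no limit */
  module_param(vis_vramlimit, uint, 0444);
  MODULE_PARM_DESC(vis_vramlimit, "Restrict visible VRAM for testing, in MB (0 = no limit)");

  static u64 visible_vram_limit_bytes(void)
  {
      /* cast to 64-bit before converting MB to bytes, so large values
       * don't overflow a 32-bit intermediate */
      return (u64)vis_vramlimit << 20;
  }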
-
Christian König authored
Limit the default GART size and save a lot of VRAM. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
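To see where the savings come from, a back-of-the-envelope sketch (assuming the usual 8-byte PTE per 4 KiB GART page; the concrete sizes are illustrative, not taken from the patch): a 64 GiB GART needs a 128 MiB page table in VRAM, while a 256 MiB GART needs only 512 KiB.

  #include <linux/types.h>

  /* VRAM consumed by the GART page table: one 8-byte PTE per 4 KiB page (assumed). */
  static u64 gart_table_vram_bytes(u64 gart_size)
  {
      return (gart_size / 4096) * 8;
  }
  /* gart_table_vram_bytes(64ULL << 30)  == 128 MiB
   * gart_table_vram_bytes(256ULL << 20) == 512 KiB */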
-
Christian König authored
This allows setting the gtt size independently of the gart size. v2: fix copy-and-paste typo Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Christian König authored
We should only cover the GART size with the GTT manager. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Christian König authored
Rename symbols from gtt_ to gart_ as appropriate. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Christian König authored
Not used any more. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Christian König authored
No functional change, just cleanup. v2: rebased, keep gart name. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Deucher authored
Rather than checking the CONFIG_MEMSIZE register, as that may not be reliable on some APUs. v2: The scratch register is only used on CIK+ Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-