- 24 May, 2019 40 commits
-
Oak Zeng authored
The global function mqd_manager_init just calls asic-specific functions and is not necessary. Delete it and introduce a mqd_manager_init interface in dqm for asic-specific mqd manager init. Call the mqd_manager_init interface directly to initialize the mqd manager. Signed-off-by: Oak Zeng <ozeng@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
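As a rough illustration of the new shape (hypothetical struct and callback names, not the actual KFD interfaces), the per-ASIC init becomes a function pointer on the dqm rather than a global dispatcher:

struct mqd_manager;
struct device_queue_manager;

/* Hypothetical sketch only: the dqm carries a per-ASIC callback that hands
 * back the right mqd manager for a given queue type. */
struct dqm_asic_ops_sketch {
        struct mqd_manager *(*mqd_manager_init)(struct device_queue_manager *dqm,
                                                 unsigned int queue_type);
};

/* Callers would then invoke the callback directly, e.g.
 * dqm->asic_ops.mqd_manager_init(dqm, type), instead of going through a
 * global mqd_manager_init() dispatcher. */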
-
Felix Kuehling authored
Use unsigned long for number of pages. Check that pfns are valid after hmm_vma_fault. If they are not, return an error instead of continuing with invalid page pointers and PTEs. Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Reviewed-by: Philip Yang <Philip.Yang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Philip Yang authored
If an old kernel config file is used, CONFIG_ZONE_DEVICE is not selected, so CONFIG_HMM and CONFIG_HMM_MIRROR are not enabled, and the current driver error message "Failed to register MMU notifier" is not clear. Inform the user with a more descriptive message on how to fix the missing kernel config options. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109808 Signed-off-by: Philip Yang <Philip.Yang@amd.com> Reviewed-by: Michel Dänzer <michel.daenzer@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
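A minimal sketch of the kind of hint such a message could give, assuming the check sits where MMU notifier / HMM mirror registration fails (the helper name and exact wording are illustrative):

#include <linux/kernel.h>

/* Illustrative only: tell the user which Kconfig options to enable instead
 * of the generic "Failed to register MMU notifier" message. */
static void report_missing_hmm_config(int err)
{
        if (!IS_ENABLED(CONFIG_HMM_MIRROR))
                pr_err("Registering MMU notifier failed (%d); enable CONFIG_ZONE_DEVICE, CONFIG_HMM and CONFIG_HMM_MIRROR in the kernel config\n",
                       err);
}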
-
Philip Yang authored
A userptr may cross two VMAs if the forked child process (which does not call exec after fork) mallocs a buffer, frees it, and then mallocs a larger buffer; the kernel then creates a new VMA adjacent to the old VMA that was cloned from the parent process, so some pages of the userptr are in the first VMA and the rest are in the second VMA. HMM expects a range to cover only one VMA, so loop over all VMAs in the address range and create multiple ranges to handle this case. See is_mergeable_anon_vma in mm/mmap.c for details. Signed-off-by: Philip Yang <Philip.Yang@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
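The per-VMA split could look roughly like this sketch (the caller is assumed to hold mmap_sem; the helper name and the range-setup callback are hypothetical):

#include <linux/mm.h>

/* Illustrative sketch: walk every VMA covering [start, end) and hand each
 * per-VMA chunk to a callback that sets up one HMM range for it. */
static int walk_userptr_vmas(struct mm_struct *mm, unsigned long start,
                             unsigned long end,
                             int (*setup_range)(struct vm_area_struct *vma,
                                                unsigned long start,
                                                unsigned long end))
{
        unsigned long addr = start;

        while (addr < end) {
                struct vm_area_struct *vma = find_vma(mm, addr);
                unsigned long chunk_end;
                int r;

                if (!vma || addr < vma->vm_start)
                        return -EFAULT;         /* hole in the address range */

                chunk_end = min(vma->vm_end, end);
                r = setup_range(vma, addr, chunk_end);
                if (r)
                        return r;

                addr = chunk_end;               /* continue with the next VMA */
        }
        return 0;
}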
-
Philip Yang authored
Userptr restore may see a concurrent userptr invalidation after hmm_vma_fault adds the range to the hmm->ranges list; in that case we need to call hmm_vma_range_done to remove the range from the hmm->ranges list first, and then reschedule the restore worker. Otherwise hmm_vma_fault will add the same range to the list again, causing a loop in the list because range->next points to the range itself. Add function untrack_invalid_user_pages to reduce code duplication. Signed-off-by: Philip Yang <Philip.Yang@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Philip Yang authored
Selecting only HMM_MIRROR produces kernel config dependency warnings if CONFIG_HMM is missing from the config; adding "depends on HMM" solves the issue. Add conditional compilation to fix compilation errors when HMM_MIRROR is not enabled because the HMM config is not enabled. Remove the unused function amdgpu_ttm_tt_mark_user_pages. Signed-off-by: Philip Yang <Philip.Yang@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
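The conditional-compilation part could be stubbed roughly as below (hypothetical helper name, not the actual amdgpu_ttm function), so callers still compile when HMM_MIRROR is off:

#include <linux/errno.h>

#if IS_ENABLED(CONFIG_HMM_MIRROR)
int userptr_get_pages_hmm(void *gtt);   /* real implementation built only with HMM_MIRROR */
#else
static inline int userptr_get_pages_hmm(void *gtt)
{
        return -EOPNOTSUPP;             /* userptr is unsupported without HMM mirror */
}
#endif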
-
Philip Yang authored
Use the HMM helper function hmm_vma_fault() to get the physical pages backing a userptr and start CPU page table update tracking of those pages. Then use hmm_vma_range_done() to check whether those pages were updated before amdgpu_cs_submit for gfx, or before user queues are resumed for kfd. If the userptr pages were updated, amdgpu_cs_ioctl restarts from scratch for gfx, and the restore worker is rescheduled to retry for kfd. HMM simplifies the concurrent CPU page table update check, so remove the guptasklock, mmu_invalidations, and last_set_pages fields from the amdgpu_ttm_tt struct. HMM does not pin pages (increase the page ref count), so remove related operations like release_pages(), put_page(), and mark_page_dirty(). Signed-off-by: Philip Yang <Philip.Yang@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
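The overall shape of that flow, as a hedged sketch against the HMM API of that era (hmm_vma_fault/hmm_vma_range_done were later replaced upstream); the range setup and the commit step are placeholders, not the real amdgpu/kfd paths:

#include <linux/errno.h>
#include <linux/hmm.h>

/* Sketch only: fault in the pages, do the work that depends on them, then
 * check that the CPU page tables did not change underneath us. */
static int userptr_populate_and_check(struct hmm_range *range,
                                      int (*commit)(void *data), void *data)
{
        int r;

        r = hmm_vma_fault(range, true);         /* populate pfns, start tracking */
        if (r)
                return r;

        r = commit(data);                       /* e.g. prepare the CS or user queues */

        if (!hmm_vma_range_done(range))         /* concurrent invalidation happened */
                return -EAGAIN;                 /* gfx restarts the ioctl, kfd reschedules */

        return r;
}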
-
Philip Yang authored
There is a circular lock dependency between the gfx and kfd paths with the HMM change: lock(dqm) -> bo::reserve -> amdgpu_mn_lock. To avoid this, move init/uninit_mqd() out of lock(dqm) to remove the nested locking between mmap_sem and bo::reserve. The locking order is: bo::reserve -> amdgpu_mn_lock(p->mn). Signed-off-by: Philip Yang <Philip.Yang@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Acked-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
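A toy sketch of the resulting ordering (all names are placeholders); the point is only that anything which can take bo::reserve and amdgpu_mn_lock runs before the dqm lock, never inside it:

#include <linux/mutex.h>

static int create_queue_sketch(struct mutex *dqm_mutex,
                               int (*init_mqd_outside)(void),
                               void (*bookkeeping)(void))
{
        int r;

        r = init_mqd_outside();         /* may take bo::reserve -> amdgpu_mn_lock */
        if (r)
                return r;

        mutex_lock(dqm_mutex);
        bookkeeping();                  /* queue/list updates only, no BO reservation */
        mutex_unlock(dqm_mutex);
        return 0;
}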
-
Philip Yang authored
Replace our MMU notifier with hmm_mirror_ops.sync_cpu_device_pagetables callback. Enable CONFIG_HMM and CONFIG_HMM_MIRROR as a dependency in DRM_AMDGPU_USERPTR Kconfig. It supports both KFD userptr and gfx userptr paths. Signed-off-by: Philip Yang <Philip.Yang@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
shaoyunl authored
There is a bug in the vml2 xgmi logic: the mtype is always sent as NC on the VMC-to-TC interface for a page walk, regardless of whether the request is being sent to the local or a remote GPU. NC means non-coherent and will cause the VMC return data to be cached in the TCC (versus UC, uncached, which will not cache the data). Since the page table updates are done by SDMA/HDP, the TCC will never be updated, so the GC VML2 will keep hitting in the TCC, never see the updated page tables, and fault. Heavy-weight TLB invalidation does a WB/INVAL of the L1/L2 GL data caches, so the TCC will not be hit on the next request. Signed-off-by: shaoyunl <Shaoyun.Liu@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Jay Cornwall authored
ttmp[4:5] is initialized by the SPI with SPI_GDBG_TRAP_DATA* values. These values are more useful to the debugger than ttmp[14:15], which carries dispatch_scratch_base*. There are too few registers to preserve both. Signed-off-by: Jay Cornwall <Jay.Cornwall@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Jay Cornwall authored
SQ_WAVE_IB_STS.RCNT grew from 4 bits to 5 in gfx9. Do not truncate when saving in the high bits of TTMP1. Signed-off-by: Jay Cornwall <Jay.Cornwall@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Jay Cornwall authored
If instruction fetch fails the wave cannot be halted and returned to the shader without raising MEM_VIOL again. Currently the wave is terminated if this occurs, but this loses information about the cause of the fault. The debugger would prefer the faulting wave state to be context-saved. Poll inside the trap handler until TRAPSTS.SAVECTX indicates context save is ready. Exit the poll loop and complete the remainder of the exception handler, then return to the shader. The next instruction fetch will be from the trap handler and not the faulting PC. Context save will then deschedule the wave and save its state. Signed-off-by: Jay Cornwall <Jay.Cornwall@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Jay Cornwall authored
When MEM_VIOL is asserted the context save handler rewinds the program counter. This is incorrect for any source of the exception. MEM_VIOL may be raised in normal operation by out-of-bounds access to LDS or GDS and does not require special handling. Remove PC adjustment when MEM_VIOL has been raised. Signed-off-by: Jay Cornwall <Jay.Cornwall@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Harish Kasiviswanathan authored
Fix compute profile switching on process termination. Add a dedicated reference counter to keep track of entry/exit to/from compute profile. This enables switching compute profiles for other reasons than process creation or termination. Signed-off-by: Harish Kasiviswanathan <Harish.Kasiviswanathan@amd.com> Signed-off-by: Eric Huang <JinhuiEric.Huang@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
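The reference-count idea, as a minimal sketch with placeholder names (the real KFD code ties this to its device structure and worker):

#include <linux/atomic.h>

static atomic_t compute_profile_refcount = ATOMIC_INIT(0);

static void compute_profile_get(void (*enable)(void))
{
        if (atomic_inc_return(&compute_profile_refcount) == 1)
                enable();       /* first user switches the compute profile on */
}

static void compute_profile_put(void (*disable)(void))
{
        if (atomic_dec_return(&compute_profile_refcount) == 0)
                disable();      /* last user switches it back off */
}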
-
Oak Zeng authored
The FW of some new ASICs requires the sdma mqd size to be no more than 128 dwords. Repurpose the last 2 reserved fields of the sdma mqd for driver internal use, so the total mqd size is no bigger than 128 dwords. Signed-off-by: Oak Zeng <ozeng@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Oak Zeng authored
sdma_queue_id is sdma queue index inside one sdma engine. sdma_id is sdma queue index among all sdma engines. Use those two names properly. Signed-off-by: Oak Zeng <ozeng@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Oak Zeng authored
Add debug messages during SDMA queue allocation. Signed-off-by: Oak Zeng <Oak.Zeng@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Oak Zeng authored
Support a maximum of 64 sdma queues. Signed-off-by: Oak Zeng <Oak.Zeng@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
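With at most 64 queues, a single 64-bit bitmap is enough to track which SDMA queue ids are free; a sketch with placeholder semantics (set bit = free), not the actual dqm bookkeeping:

#include <linux/bitops.h>
#include <linux/errno.h>

static int alloc_sdma_queue_id(unsigned long *sdma_bitmap)      /* 64-bit bitmap */
{
        unsigned int bit = find_first_bit(sdma_bitmap, 64);

        if (bit >= 64)
                return -ENOMEM;         /* all 64 SDMA queues are in use */

        clear_bit(bit, sdma_bitmap);    /* mark the queue id as allocated */
        return bit;
}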
-
Evan Quan authored
Support DPM/DS/ULV related bitmasks of ppfeaturemask module parameter. Signed-off-by: Evan Quan <evan.quan@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Deucher authored
It does the same thing we were doing already. I thought it needed work for gen3/4 speeds, but that seems to be covered already. Reviewed-by: Evan Quan <evan.quan@amd.com> Acked-by: Michel Dänzer <michel.daenzer@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Weitao Hou authored
Fix spelling of 'eror' to 'error'. Signed-off-by: Weitao Hou <houweitaoo@gmail.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Evan Quan authored
The UVD/VCE bits are set wrongly. This causes the UVD/VCE clocks not to be brought back correctly when needed. Signed-off-by: Evan Quan <evan.quan@amd.com> Reviewed-by: Feifei Xu <Feifei.Xu@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Slava Abramov authored
v1: replace casting to unsigned long with div64_ul Acked-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Slava Abramov <slava.abramov@amd.com> Tested-by: Slava Abramov <slava.abramov@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
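For context, div64_ul keeps the full 64-bit dividend instead of truncating it with a cast to unsigned long (which loses the upper bits on 32-bit kernels); an illustrative use with placeholder names:

#include <linux/math64.h>

static u64 bytes_to_blocks(u64 size_in_bytes, unsigned long block_size)
{
        return div64_ul(size_in_bytes, block_size);     /* 64-bit dividend, unsigned long divisor */
}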
-
Chengming Gui authored
Add pm_enabled to control DPM off/on. v2: Directly return 0 instead of returning ret, and merge some check code. Signed-off-by: Chengming Gui <Jack.Gui@amd.com> Reviewed-by: Hawking Zhang <hawking.zhang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Bhawanpreet Lakha authored
This fixes the warning below: error: ‘feature_mask’ may be used uninitialized in this function [-Werror=maybe-uninitialized] *features_enabled = ((((uint64_t)feature_mask[0] << SMU_FEATURES_LOW_SHIFT) & SMU_FEATURES_LOW_MASK) | (((uint64_t)feature_mask[1] << SMU_FEATURES_HIGH_SHIFT) & SMU_FEATURES_HIGH_MASK)); Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Bhawanpreet Lakha <Bhawanpreet.Lakha@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
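One way to make the failure path safe and silence the warning is to zero-initialize the mask before the query; a hedged sketch where the SMU_FEATURES_* values and the surrounding function are illustrative, not the actual powerplay code:

#include <linux/types.h>

#define SMU_FEATURES_LOW_SHIFT   0
#define SMU_FEATURES_LOW_MASK    0x00000000ffffffffULL
#define SMU_FEATURES_HIGH_SHIFT  32
#define SMU_FEATURES_HIGH_MASK   0xffffffff00000000ULL

static int get_enabled_features(uint64_t *features_enabled,
                                int (*query)(uint32_t *mask, int count))
{
        uint32_t feature_mask[2] = { 0, 0 };    /* never read uninitialized */
        int ret = query(feature_mask, 2);

        *features_enabled =
                ((((uint64_t)feature_mask[0] << SMU_FEATURES_LOW_SHIFT) & SMU_FEATURES_LOW_MASK) |
                 (((uint64_t)feature_mask[1] << SMU_FEATURES_HIGH_SHIFT) & SMU_FEATURES_HIGH_MASK));
        return ret;
}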
-
Colin Ian King authored
There is a spelling mistake in a DRM_ERROR error message. Fix this. Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Deucher authored
If RAS or XGMI are enabled, you have to use mode1 reset rather than BACO. Reviewed-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
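The selection amounts to a simple precedence rule; a sketch with hypothetical names (the real driver derives this from its RAS/XGMI state):

#include <linux/types.h>

enum reset_method_sketch { RESET_BACO, RESET_MODE1 };

static enum reset_method_sketch pick_reset_method(bool baco_supported,
                                                  bool ras_enabled,
                                                  bool xgmi_enabled)
{
        if (ras_enabled || xgmi_enabled)
                return RESET_MODE1;     /* BACO cannot be used in these cases */
        return baco_supported ? RESET_BACO : RESET_MODE1;
}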
-
Aric Cyr authored
Signed-off-by: Aric Cyr <aric.cyr@amd.com> Reviewed-by: Aric Cyr <Aric.Cyr@amd.com> Acked-by: Bhawanpreet Lakha <Bhawanpreet.Lakha@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Paul Hsieh authored
[Why] When the driver is disabled, the OS sets backlight optimization and then does a stop device. But this flag will cause the driver to enable ABM while the driver is disabled. [How] Send the ABM disable command before destroying the ABM construct. Signed-off-by: Paul Hsieh <paul.hsieh@amd.com> Reviewed-by: Anthony Koo <Anthony.Koo@amd.com> Acked-by: Bhawanpreet Lakha <Bhawanpreet.Lakha@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Jun Lei authored
Move the update of the otg instance outside of the hw programming logic; since this is sw state, it should always be updated and should never be optimized away. Signed-off-by: Jun Lei <Jun.Lei@amd.com> Reviewed-by: Eric Yang <eric.yang2@amd.com> Acked-by: Bhawanpreet Lakha <Bhawanpreet.Lakha@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Nicholas Kazlauskas authored
[Why] The bit for flip addr is being set, causing the FAST vs MEDIUM determination to always return MEDIUM when plane info is provided as a surface update. This causes extreme stuttering for the typical atomic update path on Linux. [How] Don't use update_flags->raw for determining FAST vs MEDIUM; it's too fragile to changes like this. Explicitly specify the update type per update flag instead. It's not as clever as checking the bits itself, but at least it's correct. Signed-off-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com> Reviewed-by: Harry Wentland <Harry.Wentland@amd.com> Acked-by: Bhawanpreet Lakha <Bhawanpreet.Lakha@amd.com> Acked-by: Eryk Brol <Eryk.Brol@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
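The "explicit per-flag" idea, sketched with placeholder flag names (the real DC update_flags structure has many more bits):

enum surface_update_type_sketch { UPDATE_FAST, UPDATE_MEDIUM, UPDATE_FULL };

struct update_flags_sketch {
        unsigned int scaling_changed : 1;
        unsigned int plane_info_changed : 1;
        unsigned int addr_changed : 1;          /* flip address only */
};

/* Each named flag maps to an update type explicitly, so a newly added bit
 * (like the flip address) cannot silently promote FAST updates to MEDIUM
 * the way a check on the raw bit union could. */
static enum surface_update_type_sketch
classify_update(const struct update_flags_sketch *f)
{
        if (f->scaling_changed)
                return UPDATE_FULL;
        if (f->plane_info_changed)
                return UPDATE_MEDIUM;
        return UPDATE_FAST;                     /* addr_changed alone stays FAST */
}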
-
Joshua Aberback authored
Signed-off-by: Joshua Aberback <joshua.aberback@amd.com> Reviewed-by: Abdoulaye Berthe <Abdoulaye.Berthe@amd.com> Acked-by: Bhawanpreet Lakha <Bhawanpreet.Lakha@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Aric Cyr authored
DPRX should send the VCP extended colorimetry packet if the sink supports DPCD rev1.4 and reports the extended colorimetry bit. Signed-off-by: Aric Cyr <aric.cyr@amd.com> Reviewed-by: Anthony Koo <Anthony.Koo@amd.com> Acked-by: Bhawanpreet Lakha <Bhawanpreet.Lakha@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Wesley Chalmers authored
[WHY] DCN code should make as few references to DCE as possible [HOW] Copy DCE110 implementation of find_first_free_match_stream_enc_for_link into DCN10 Signed-off-by: Wesley Chalmers <Wesley.Chalmers@amd.com> Reviewed-by: Tony Cheng <Tony.Cheng@amd.com> Acked-by: Bhawanpreet Lakha <Bhawanpreet.Lakha@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Wesley Chalmers authored
[WHY] From DCE110 onward, we have the ability to assign DIG BE and FE separately for any display connector type; before, we could only do this for DP. Signed-off-by: Wesley Chalmers <Wesley.Chalmers@amd.com> Reviewed-by: Tony Cheng <Tony.Cheng@amd.com> Acked-by: Bhawanpreet Lakha <Bhawanpreet.Lakha@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Charlene Liu authored
[Why] 1. There is no real HPD plug in/out, but an HPD interrupt happens and the driver notifies the OS of a connection change. 2. There is no display in the target. When HPD goes from low to high, the driver should regard it as HPD and enter the set-mode flow. [How] In this case, retrain even if the stream didn't change. Signed-off-by: Chiawen Huang <chiawen.huang@amd.com> Signed-off-by: Charlene Liu <charlene.liu@amd.com> Reviewed-by: Tony Cheng <Tony.Cheng@amd.com> Acked-by: Anthony Koo <Anthony.Koo@amd.com> Acked-by: Bhawanpreet Lakha <Bhawanpreet.Lakha@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Joshua Aberback authored
[Why] VTG has a parameter FP2, defined as: if VSTARTUP is before VSYNC, FP2 = number of lines between VSTARTUP and VSYNC; otherwise FP2 = 0. Currently, FP2 is only programmed during "program_timing". However, the position of VSTARTUP is affected by the prefetching requirements on all pipes, so it might change when we do memory request control on another pipe; we therefore need to make sure that FP2 stays up-to-date whenever we adjust VSTARTUP. [How] - refactor VTG_CONTROL programming into a new function "set_vtg_params" - call it after calling "program_global_sync" - make sure it's called after, because it relies on the cached dlg params. Signed-off-by: Joshua Aberback <joshua.aberback@amd.com> Reviewed-by: Tony Cheng <Tony.Cheng@amd.com> Acked-by: Bhawanpreet Lakha <Bhawanpreet.Lakha@amd.com> Acked-by: Jun Lei <Jun.Lei@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
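The FP2 rule above reduces to a small calculation; an illustrative helper (the real DC code programs this through the VTG registers in set_vtg_params):

/* FP2 = lines between VSTARTUP and VSYNC when VSTARTUP comes first, else 0. */
static unsigned int calc_vtg_fp2(unsigned int vstartup_line, unsigned int vsync_line)
{
        return (vstartup_line < vsync_line) ? (vsync_line - vstartup_line) : 0;
}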
-
Dmytro Laktyushkin authored
* add plane state null checks * add and set update surface flags Signed-off-by: Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com> Reviewed-by: Eric Bernstein <Eric.Bernstein@amd.com> Acked-by: Bhawanpreet Lakha <Bhawanpreet.Lakha@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Aric Cyr authored
Signed-off-by: Aric Cyr <aric.cyr@amd.com> Reviewed-by: Aric Cyr <Aric.Cyr@amd.com> Acked-by: Bhawanpreet Lakha <Bhawanpreet.Lakha@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-