Commit 3e234e9f authored by Dave Airlie


Merge tag 'drm-intel-fixes-2021-08-12' of git://anongit.freedesktop.org/drm/drm-intel into drm-fixes

- GVT fix for Windows VM hang.
- Display fix of the 12 BPC bits for display version 13 and newer.
- Don't try to access media registers for fused-off domains.
- Fix kerneldoc build warnings.
Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/YRU/hnQ1sNr+j37x@intel.com
parents bf71bde4 ffd5caa2
@@ -18,114 +18,5 @@ real, with all the uAPI bits is:
* Route shmem backend over to TTM SYSTEM for discrete
* TTM purgeable object support
* Move i915 buddy allocator over to TTM
* MMAP ioctl mode (see `I915 MMAP`_)
* SET/GET ioctl caching (see `I915 SET/GET CACHING`_)
* Send RFC (with mesa-dev on cc) for final sign off on the uAPI
* Add pciid for DG1 and turn on uAPI for real
New object placement and region query uAPI
==========================================
Starting from DG1 we need to give userspace the ability to allocate buffers from
device local-memory. Currently the driver supports gem_create, which can place
buffers in system memory via shmem, and the usual assortment of other
interfaces, like dumb buffers and userptr.
To support this new capability, while also providing a uAPI which will work
beyond just DG1, we propose to offer three new bits of uAPI:
DRM_I915_QUERY_MEMORY_REGIONS
-----------------------------
New query ID which allows userspace to discover the list of supported memory
regions (like system-memory and local-memory) for a given device. We identify
each region with a class and instance pair, which should be unique. The class
here would be DEVICE or SYSTEM, and the instance would be zero, on platforms
like DG1.
Side note: The class/instance design is borrowed from our existing engine uAPI,
where we describe every physical engine in terms of its class, and the
particular instance, since we can have more than one per class.
In the future we also want to expose more information which can further
describe the capabilities of a region.
.. kernel-doc:: include/uapi/drm/i915_drm.h
:functions: drm_i915_gem_memory_class drm_i915_gem_memory_class_instance drm_i915_memory_region_info drm_i915_query_memory_regions
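For illustration, a minimal userspace sketch of the query flow might look like
the following, using the existing query ioctl's two-pass convention of asking
for the required size first (hypothetical example: it assumes libdrm's
drmIoctl() plus the structures named above, with all error handling omitted)::

  #include <stdint.h>
  #include <stdlib.h>
  #include <xf86drm.h>
  #include <drm/i915_drm.h>

  static struct drm_i915_query_memory_regions *query_regions(int fd)
  {
          struct drm_i915_query_item item = {
                  .query_id = DRM_I915_QUERY_MEMORY_REGIONS,
          };
          struct drm_i915_query query = {
                  .num_items = 1,
                  .items_ptr = (uintptr_t)&item,
          };
          struct drm_i915_query_memory_regions *info;

          /* First pass: the kernel fills item.length with the required size. */
          drmIoctl(fd, DRM_IOCTL_I915_QUERY, &query);

          info = calloc(1, item.length);
          item.data_ptr = (uintptr_t)info;

          /* Second pass: the kernel copies out the region list. */
          drmIoctl(fd, DRM_IOCTL_I915_QUERY, &query);

          /* Each info->regions[i].region is a {class, instance} pair. */
          return info;
  }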
GEM_CREATE_EXT
--------------
New ioctl which is basically just gem_create but now allows userspace to provide
a chain of possible extensions. Note that if we don't provide any extensions and
set flags=0 then we get the exact same behaviour as gem_create.
Side note: We also need to support PXP[1] in the near future, which is also
applicable to integrated platforms, and adds its own gem_create_ext extension,
which basically lets userspace mark a buffer as "protected".
.. kernel-doc:: include/uapi/drm/i915_drm.h
:functions: drm_i915_gem_create_ext
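As a quick illustration of the "no extensions" case matching plain gem_create,
a hedged sketch (assuming the ioctl is exposed as DRM_IOCTL_I915_GEM_CREATE_EXT
once this lands; error handling omitted)::

  struct drm_i915_gem_create_ext create = {
          .size = 4096,
          .flags = 0,        /* no flags */
          .extensions = 0,   /* empty extension chain: behaves like gem_create */
  };

  drmIoctl(fd, DRM_IOCTL_I915_GEM_CREATE_EXT, &create);
  /* create.handle now refers to the new object, placed in system memory. */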
I915_GEM_CREATE_EXT_MEMORY_REGIONS
----------------------------------
Implemented as an extension for gem_create_ext, we would now allow userspace to
optionally provide an immutable list of preferred placements at creation time,
in priority order, for a given buffer object. For the placements we expect
them each to use the class/instance encoding, as per the output of the regions
query. Having the list in priority order will be useful in the future when
placing an object, say during eviction.
.. kernel-doc:: include/uapi/drm/i915_drm.h
:functions: drm_i915_gem_create_ext_memory_regions
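Putting the region query and this extension together, a hedged sketch of
allocating a buffer that prefers local-memory but may fall back to
system-memory (hypothetical helper; the structure and ioctl names are taken
from the kernel-doc above, error handling omitted)::

  static __u32 gem_create_in_lmem(int fd, __u64 size)
  {
          /* Priority order: device local-memory first, then system memory. */
          struct drm_i915_gem_memory_class_instance placements[] = {
                  { I915_MEMORY_CLASS_DEVICE, 0 },
                  { I915_MEMORY_CLASS_SYSTEM, 0 },
          };
          struct drm_i915_gem_create_ext_memory_regions regions = {
                  .base.name = I915_GEM_CREATE_EXT_MEMORY_REGIONS,
                  .num_regions = 2,
                  .regions = (uintptr_t)placements,
          };
          struct drm_i915_gem_create_ext create = {
                  .size = size,
                  .extensions = (uintptr_t)&regions,
          };

          drmIoctl(fd, DRM_IOCTL_I915_GEM_CREATE_EXT, &create);
          return create.handle;
  }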
One fair criticism here is that this seems a little over-engineered[2]. If we
just consider DG1 then yes, a simple gem_create.flags or something is totally
all that's needed to tell the kernel to allocate the buffer in local-memory or
whatever. However looking to the future we need uAPI which can also support
upcoming Xe HP multi-tile architecture in a sane way, where there can be
multiple local-memory instances for a given device, and so using both class and
instance in our uAPI to describe regions is desirable, although specifically
for DG1 it's uninteresting, since we only have a single local-memory instance.
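To make the multi-tile point concrete, on a hypothetical part with two
local-memory instances the placement list from the previous sketch would simply
grow by one entry, with no new uAPI needed (the second DEVICE instance here is
purely illustrative)::

  /* Hypothetical two-tile device: both local-memory instances, then smem. */
  struct drm_i915_gem_memory_class_instance placements[] = {
          { I915_MEMORY_CLASS_DEVICE, 0 },
          { I915_MEMORY_CLASS_DEVICE, 1 },
          { I915_MEMORY_CLASS_SYSTEM, 0 },
  };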
Existing uAPI issues
====================
Some potential issues we still need to resolve.
I915 MMAP
---------
In i915 there are multiple ways to mmap a GEM object, including mapping the same
object using different mapping types (WC vs WB), i.e. multiple active mmaps per
object. TTM expects one MMAP at most for the lifetime of the object. If it
turns out that we have to backpedal here, there might be some potential
userspace fallout.
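To make the concern concrete, here is a hedged sketch of how userspace can
today end up with two live CPU mappings of the same object in different modes
via the mmap_offset ioctl (illustrative only; fd, handle and size are assumed
to exist, error handling omitted)::

  #include <sys/mman.h>
  #include <xf86drm.h>
  #include <drm/i915_drm.h>

  struct drm_i915_gem_mmap_offset wc = {
          .handle = handle,
          .flags = I915_MMAP_OFFSET_WC,
  };
  struct drm_i915_gem_mmap_offset wb = {
          .handle = handle,
          .flags = I915_MMAP_OFFSET_WB,
  };

  drmIoctl(fd, DRM_IOCTL_I915_GEM_MMAP_OFFSET, &wc);
  drmIoctl(fd, DRM_IOCTL_I915_GEM_MMAP_OFFSET, &wb);

  /* Two simultaneous mappings of one BO: one write-combined, one write-back. */
  void *map_wc = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, wc.offset);
  void *map_wb = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, wb.offset);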
I915 SET/GET CACHING
--------------------
In i915 we have the set/get_caching ioctls. TTM doesn't let us change this, but
DG1 doesn't support non-snooped pcie transactions, so we can just always
allocate as WB for smem-only buffers. If/when our hw gains support for
non-snooped pcie transactions then we must fix this mode at allocation time as
a new GEM extension.
This is related to the mmap problem, because in general (meaning, when we're
not running on intel cpus) the cpu mmap must not, ever, be inconsistent with
allocation mode.
A possible idea is to let the kernel pick the mmap mode for userspace from the
following table:
smem-only: WB. Userspace does not need to call clflush.
smem+lmem: We only ever allow a single mode, so simply allocate this as uncached
memory, and always give userspace a WC mapping. GPU still does snooped access
here (assuming we can't turn it off like on DG1), which is a bit inefficient.
lmem only: always WC
This means on discrete you only get a single mmap mode, all others must be
rejected. That's probably going to be a new default mode or something like
that.
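For reference, the existing interface under discussion is the pair of caching
ioctls; a minimal sketch of their current use (illustrative only; on discrete
parts the set call is what would be rejected or fixed at allocation time)::

  struct drm_i915_gem_caching caching = {
          .handle = handle,
          .caching = I915_CACHING_CACHED,  /* ask for WB */
  };

  drmIoctl(fd, DRM_IOCTL_I915_GEM_SET_CACHING, &caching);

  caching.caching = 0;
  drmIoctl(fd, DRM_IOCTL_I915_GEM_GET_CACHING, &caching);
  /* caching.caching now reports the object's current caching mode. */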
Links
=====
[1] https://patchwork.freedesktop.org/series/86798/
[2] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/5599#note_553791
@@ -5746,16 +5746,18 @@ static void bdw_set_pipemisc(const struct intel_crtc_state *crtc_state)
 	switch (crtc_state->pipe_bpp) {
 	case 18:
-		val |= PIPEMISC_DITHER_6_BPC;
+		val |= PIPEMISC_6_BPC;
 		break;
 	case 24:
-		val |= PIPEMISC_DITHER_8_BPC;
+		val |= PIPEMISC_8_BPC;
 		break;
 	case 30:
-		val |= PIPEMISC_DITHER_10_BPC;
+		val |= PIPEMISC_10_BPC;
 		break;
 	case 36:
-		val |= PIPEMISC_DITHER_12_BPC;
+		/* Port output 12BPC defined for ADLP+ */
+		if (DISPLAY_VER(dev_priv) > 12)
+			val |= PIPEMISC_12_BPC_ADLP;
 		break;
 	default:
 		MISSING_CASE(crtc_state->pipe_bpp);
@@ -5808,15 +5810,27 @@ int bdw_get_pipemisc_bpp(struct intel_crtc *crtc)
 	tmp = intel_de_read(dev_priv, PIPEMISC(crtc->pipe));
-	switch (tmp & PIPEMISC_DITHER_BPC_MASK) {
-	case PIPEMISC_DITHER_6_BPC:
+	switch (tmp & PIPEMISC_BPC_MASK) {
+	case PIPEMISC_6_BPC:
 		return 18;
-	case PIPEMISC_DITHER_8_BPC:
+	case PIPEMISC_8_BPC:
 		return 24;
-	case PIPEMISC_DITHER_10_BPC:
+	case PIPEMISC_10_BPC:
 		return 30;
-	case PIPEMISC_DITHER_12_BPC:
-		return 36;
+	/*
+	 * PORT OUTPUT 12 BPC defined for ADLP+.
+	 *
+	 * TODO:
+	 * For previous platforms with DSI interface, bits 5:7
+	 * are used for storing pipe_bpp irrespective of dithering.
+	 * Since the value of 12 BPC is not defined for these bits
+	 * on older platforms, need to find a workaround for 12 BPC
+	 * MIPI DSI HW readout.
+	 */
+	case PIPEMISC_12_BPC_ADLP:
+		if (DISPLAY_VER(dev_priv) > 12)
+			return 36;
+		fallthrough;
 	default:
 		MISSING_CASE(tmp);
 		return 0;
......
@@ -3149,6 +3149,7 @@ static int init_bdw_mmio_info(struct intel_gvt *gvt)
 	MMIO_DFH(_MMIO(0xb100), D_BDW, F_CMD_ACCESS, NULL, NULL);
 	MMIO_DFH(_MMIO(0xb10c), D_BDW, F_CMD_ACCESS, NULL, NULL);
 	MMIO_D(_MMIO(0xb110), D_BDW);
+	MMIO_D(GEN9_SCRATCH_LNCF1, D_BDW_PLUS);
 	MMIO_F(_MMIO(0x24d0), 48, F_CMD_ACCESS | F_CMD_WRITE_PATCH, 0, 0,
 	       D_BDW_PLUS, NULL, force_nonpriv_write);
......
@@ -105,6 +105,8 @@ static struct engine_mmio gen9_engine_mmio_list[] __cacheline_aligned = {
 	{RCS0, COMMON_SLICE_CHICKEN2, 0xffff, true}, /* 0x7014 */
 	{RCS0, GEN9_CS_DEBUG_MODE1, 0xffff, false}, /* 0x20ec */
 	{RCS0, GEN8_L3SQCREG4, 0, false}, /* 0xb118 */
+	{RCS0, GEN9_SCRATCH1, 0, false}, /* 0xb11c */
+	{RCS0, GEN9_SCRATCH_LNCF1, 0, false}, /* 0xb008 */
 	{RCS0, GEN7_HALF_SLICE_CHICKEN1, 0xffff, true}, /* 0xe100 */
 	{RCS0, HALF_SLICE_CHICKEN2, 0xffff, true}, /* 0xe180 */
 	{RCS0, HALF_SLICE_CHICKEN3, 0xffff, true}, /* 0xe184 */
......
@@ -727,9 +727,18 @@ static void err_print_gt(struct drm_i915_error_state_buf *m,
 	if (GRAPHICS_VER(m->i915) >= 12) {
 		int i;

-		for (i = 0; i < GEN12_SFC_DONE_MAX; i++)
+		for (i = 0; i < GEN12_SFC_DONE_MAX; i++) {
+			/*
+			 * SFC_DONE resides in the VD forcewake domain, so it
+			 * only exists if the corresponding VCS engine is
+			 * present.
+			 */
+			if (!HAS_ENGINE(gt->_gt, _VCS(i * 2)))
+				continue;
+
 			err_printf(m, " SFC_DONE[%d]: 0x%08x\n", i,
 				   gt->sfc_done[i]);
+		}

 		err_printf(m, " GAM_DONE: 0x%08x\n", gt->gam_done);
 	}
@@ -1581,6 +1590,14 @@ static void gt_record_regs(struct intel_gt_coredump *gt)
 	if (GRAPHICS_VER(i915) >= 12) {
 		for (i = 0; i < GEN12_SFC_DONE_MAX; i++) {
+			/*
+			 * SFC_DONE resides in the VD forcewake domain, so it
+			 * only exists if the corresponding VCS engine is
+			 * present.
+			 */
+			if (!HAS_ENGINE(gt->_gt, _VCS(i * 2)))
+				continue;
+
 			gt->sfc_done[i] =
 				intel_uncore_read(uncore, GEN12_SFC_DONE(i));
 		}
......
@@ -6163,11 +6163,17 @@ enum {
 #define PIPEMISC_HDR_MODE_PRECISION (1 << 23) /* icl+ */
 #define PIPEMISC_OUTPUT_COLORSPACE_YUV (1 << 11)
 #define PIPEMISC_PIXEL_ROUNDING_TRUNC REG_BIT(8) /* tgl+ */
-#define PIPEMISC_DITHER_BPC_MASK (7 << 5)
-#define PIPEMISC_DITHER_8_BPC (0 << 5)
-#define PIPEMISC_DITHER_10_BPC (1 << 5)
-#define PIPEMISC_DITHER_6_BPC (2 << 5)
-#define PIPEMISC_DITHER_12_BPC (3 << 5)
+/*
+ * For Display < 13, Bits 5-7 of PIPE MISC represent DITHER BPC with
+ * valid values of: 6, 8, 10 BPC.
+ * ADLP+, the bits 5-7 represent PORT OUTPUT BPC with valid values of:
+ * 6, 8, 10, 12 BPC.
+ */
+#define PIPEMISC_BPC_MASK (7 << 5)
+#define PIPEMISC_8_BPC (0 << 5)
+#define PIPEMISC_10_BPC (1 << 5)
+#define PIPEMISC_6_BPC (2 << 5)
+#define PIPEMISC_12_BPC_ADLP (4 << 5) /* adlp+ */
 #define PIPEMISC_DITHER_ENABLE (1 << 4)
 #define PIPEMISC_DITHER_TYPE_MASK (3 << 2)
 #define PIPEMISC_DITHER_TYPE_SP (0 << 2)
......