Commit e96434e1 authored by Matthew Auld, committed by Chris Wilson

drm/i915/selftest: assert we get 2M GTT pages

For the LMEM case, if we have suitable alignment and 2M physical pages, we
should always get 2M GTT pages within the constraints of the hugepages
selftest. If we don't, then something might be wrong in our construction
of the backing pages.

References: 330b7d33 ("drm/i915/region: fix order when adding blocks")
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20201130141809.65330-2-matthew.auld@intel.com
parent 77acab40
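
To make the invariant from the commit message concrete, here is a minimal, standalone C sketch (a userspace analogue, not the kernel selftest itself). struct fake_vma and lmem_2m_gtt_ok() are hypothetical stand-ins for vma->node.start, vma->page_sizes.sg and vma->page_sizes.gtt; SZ_2M and IS_ALIGNED() are redefined locally to mirror the kernel helpers.

/*
 * Userspace sketch of the invariant asserted by the selftest: if an LMEM
 * binding starts on a 2M-aligned GTT address and its backing store contains
 * 2M physical pages, then 2M GTT pages should have been used as well.
 * All names here are hypothetical stand-ins, not i915 code.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SZ_2M            (1u << 21)
#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

struct fake_vma {
	uint64_t start;		/* GTT address of the binding */
	unsigned int sg;	/* mask of page sizes in the backing store */
	unsigned int gtt;	/* mask of page sizes used for the GTT entries */
};

/* Mirror of the new check: only complain when we were entitled to 2M pages. */
static bool lmem_2m_gtt_ok(const struct fake_vma *vma)
{
	if (IS_ALIGNED(vma->start, SZ_2M) && (vma->sg & SZ_2M))
		return vma->gtt >= SZ_2M;
	return true;
}

int main(void)
{
	struct fake_vma good = { .start = 4ull * SZ_2M, .sg = SZ_2M, .gtt = SZ_2M };
	struct fake_vma bad  = { .start = 4ull * SZ_2M, .sg = SZ_2M, .gtt = 1u << 12 };

	printf("good: %s\n", lmem_2m_gtt_ok(&good) ? "ok" : "2M GTT pages expected");
	printf("bad:  %s\n", lmem_2m_gtt_ok(&bad)  ? "ok" : "2M GTT pages expected");
	return 0;
}
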
@@ -368,6 +368,27 @@ static int igt_check_page_sizes(struct i915_vma *vma)
 		err = -EINVAL;
 	}
 
+	/*
+	 * The dma-api is like a box of chocolates when it comes to the
+	 * alignment of dma addresses, however for LMEM we have total control
+	 * and so can guarantee alignment, likewise when we allocate our blocks
+	 * they should appear in descending order, and if we know that we align
+	 * to the largest page size for the GTT address, we should be able to
+	 * assert that if we see 2M physical pages then we should also get 2M
+	 * GTT pages. If we don't then something might be wrong in our
+	 * construction of the backing pages.
+	 *
+	 * Maintaining alignment is required to utilise huge pages in the ppGTT.
+	 */
+	if (i915_gem_object_is_lmem(obj) &&
+	    IS_ALIGNED(vma->node.start, SZ_2M) &&
+	    vma->page_sizes.sg & SZ_2M &&
+	    vma->page_sizes.gtt < SZ_2M) {
+		pr_err("gtt pages mismatch for LMEM, expected 2M GTT pages, sg(%u), gtt(%u)\n",
+		       vma->page_sizes.sg, vma->page_sizes.gtt);
+		err = -EINVAL;
+	}
+
 	if (obj->mm.page_sizes.gtt) {
 		pr_err("obj->page_sizes.gtt(%u) should never be set\n",
 		       obj->mm.page_sizes.gtt);
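
A note on the bit-tests in the new check: the per-vma page-size fields are bitmasks whose set bits are the page sizes themselves (4K, 64K and 2M for the i915 GTT), so sg & SZ_2M asks whether any 2M pages back the object, and gtt < SZ_2M holds exactly when the 2M bit is clear, since the sum of all smaller page-size bits stays below 2M. The standalone sketch below demonstrates this with locally defined constants; the values are assumptions here, chosen to match the driver's I915_GTT_PAGE_SIZE_* definitions as I read them.

/*
 * Demonstrates why comparing the page-size bitmasks against SZ_2M works:
 * each bit's numeric value is the page size it represents, so any mask
 * without the 2M bit set is numerically smaller than 2M.
 */
#include <assert.h>
#include <stdio.h>

#define SZ_4K	(1u << 12)
#define SZ_64K	(1u << 16)
#define SZ_2M	(1u << 21)

int main(void)
{
	unsigned int with_2m    = SZ_4K | SZ_64K | SZ_2M;
	unsigned int without_2m = SZ_4K | SZ_64K;

	assert(with_2m & SZ_2M);	/* 2M pages present */
	assert(!(with_2m < SZ_2M));	/* mask value is >= 2M */

	assert(!(without_2m & SZ_2M));	/* no 2M pages */
	assert(without_2m < SZ_2M);	/* 0x11000 < 0x200000 */

	printf("page-size mask comparisons behave as the check assumes\n");
	return 0;
}
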