Commit 0a3f6b34 authored by Lu Baolu, committed by Will Deacon

iommu/vt-d: Fix aligned pages in calculate_psi_aligned_address()

The helper calculate_psi_aligned_address() is used to convert an arbitrary
range into a size-aligned one.

The aligned_pages variable is calculated from the input start and end, but it
is not recalculated when the start pfn is unaligned and the mask is widened,
so the helper returns an incorrect number of pages.

The number of pages is used by qi_flush_piotlb() to flush caches for the
first-stage translation. With the wrong number of pages, the cache is not
synchronized, leading to inconsistencies in some cases.

Fixes: c4d27ffa ("iommu/vt-d: Add cache tag invalidation helpers")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20240709152643.28109-3-baolu.lu@linux.intel.com
Signed-off-by: Will Deacon <will@kernel.org>
parent c420a2b4
@@ -246,6 +246,7 @@ static unsigned long calculate_psi_aligned_address(unsigned long start,
 		 */
 		shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
 		mask = shared_bits ? __ffs(shared_bits) : MAX_AGAW_PFN_WIDTH;
+		aligned_pages = 1UL << mask;
 	}
 
 	*_pages = aligned_pages;
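To see the arithmetic concretely, below is a minimal userspace sketch of the
alignment logic, not the kernel code itself: the roundup and ffs helpers are
simplified stand-ins for __roundup_pow_of_two() and __ffs(), and the example
pfn values are chosen only for illustration. It shows that once the mask is
widened for an unaligned start pfn, the page count must be recomputed from the
mask, which is exactly the one-line fix above.

/*
 * Standalone model of the alignment logic in calculate_psi_aligned_address()
 * (simplified, userspace-only; assumes GCC/Clang for __builtin_ctzl).
 */
#include <stdio.h>

#define MAX_AGAW_PFN_WIDTH	52	/* upper bound stand-in */

static unsigned long roundup_pow_of_two(unsigned long n)
{
	unsigned long v = 1;

	while (v < n)
		v <<= 1;
	return v;
}

/* Index of the lowest set bit, like the kernel's __ffs(). */
static unsigned long ffs_ul(unsigned long x)
{
	return (unsigned long)__builtin_ctzl(x);
}

int main(void)
{
	/* Example: a 2-page range starting at an odd pfn, pfn 3..4. */
	unsigned long pfn = 3, pages = 2;
	unsigned long end_pfn = pfn + pages - 1;

	unsigned long aligned_pages = roundup_pow_of_two(pages);	/* 2 */
	unsigned long bitmask = aligned_pages - 1;
	unsigned long mask, shared_bits;

	/* Start pfn is not aligned to aligned_pages, so widen the mask. */
	shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
	mask = shared_bits ? ffs_ul(shared_bits) : MAX_AGAW_PFN_WIDTH;

	printf("pages before the fix: %lu\n", aligned_pages);	/* 2 */
	aligned_pages = 1UL << mask;				/* the fix */
	printf("pages after the fix:  %lu\n", aligned_pages);	/* 8 */

	return 0;
}

In this example the flush address is aligned down to pfn 0 because the mask
was widened to 3, but before the fix the page count stayed at 2, so only
pfn 0..1 would be flushed while the modified pfns 3..4 were left stale in the
cache.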