Commit 7ba8f2b2 authored by Ard Biesheuvel, committed by Will Deacon

arm64: mm: use a 48-bit ID map when possible on 52-bit VA builds

52-bit VA kernels can run on hardware that is only 48-bit capable, but
configure the ID map as 52-bit by default. This was not a problem until
recently, because the special T0SZ value for a 52-bit VA space was never
programmed into the TCR register anyway, and because a 52-bit ID map
happens to use the same number of translation levels as a 48-bit one.

This behavior was changed by commit 1401bef7 ("arm64: mm: Always update
TCR_EL1 from __cpu_set_tcr_t0sz()"), which causes the unsupported T0SZ
value for a 52-bit VA to be programmed into TCR_EL1. While some hardware
simply ignores this, Mark reports that Amberwing systems choke on this,
resulting in a broken boot. But even before that commit, the unsupported
idmap_t0sz value was exposed to KVM and used to program TCR_EL2 incorrectly
as well.

Given that we already have to deal with address spaces being either 48-bit
or 52-bit in size, the cleanest approach seems to be to simply default to
a 48-bit VA ID map, and only switch to a 52-bit one if the placement of the
kernel in DRAM requires it. This is guaranteed not to happen unless the
system is actually 52-bit VA capable.

Fixes: 90ec95cd ("arm64: mm: Introduce VA_BITS_MIN")
Reported-by: Mark Salter <msalter@redhat.com>
Link: http://lore.kernel.org/r/20210310003216.410037-1-msalter@redhat.com
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210310171515.416643-2-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
parent 7bb8bc6e
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -65,10 +65,7 @@ extern u64 idmap_ptrs_per_pgd;
 
 static inline bool __cpu_uses_extended_idmap(void)
 {
-	if (IS_ENABLED(CONFIG_ARM64_VA_BITS_52))
-		return false;
-
-	return unlikely(idmap_t0sz != TCR_T0SZ(VA_BITS));
+	return unlikely(idmap_t0sz != TCR_T0SZ(vabits_actual));
 }
 
 /*
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -319,7 +319,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 	 */
 	adrp	x5, __idmap_text_end
 	clz	x5, x5
-	cmp	x5, TCR_T0SZ(VA_BITS)	// default T0SZ small enough?
+	cmp	x5, TCR_T0SZ(VA_BITS_MIN) // default T0SZ small enough?
 	b.ge	1f			// .. then skip VA range extension
 
 	adr_l	x6, idmap_t0sz
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -40,7 +40,7 @@
 #define NO_BLOCK_MAPPINGS	BIT(0)
 #define NO_CONT_MAPPINGS	BIT(1)
 
-u64 idmap_t0sz = TCR_T0SZ(VA_BITS);
+u64 idmap_t0sz = TCR_T0SZ(VA_BITS_MIN);
 u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
 u64 __section(".mmuoff.data.write") vabits_actual;