Commit 7559b7d7 authored by Mark Brown, committed by Catalin Marinas

arm64/sve: Better handle failure to allocate SVE register storage

Currently we "handle" failure to allocate the SVE register storage with a
BUG_ON() and hope for the best. This is obviously not great, and the
memory allocation failure will already be loud enough without the
BUG_ON(). As the comment says this is a corner case, but let's try to do a
bit better: remove the BUG_ON() and add code to handle the failure in the
callers.

For the ptrace and signal code we can return -ENOMEM gracefully; however,
we have no real error reporting path available to us for the SVE access
trap, so instead we generate a SIGKILL if the allocation fails there. This
at least means that we won't try to soldier on and end up accessing the
nonexistent state. While it's obviously not ideal for userspace, SIGKILL
doesn't allow any handling, so it minimises the ABI impact, making it
easier to improve the interface later if we come up with a better idea.
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210824153417.18371-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
parent e3849765
@@ -520,12 +520,6 @@ void sve_alloc(struct task_struct *task)
 	/* This is a small allocation (maximum ~8KB) and Should Not Fail. */
 	task->thread.sve_state =
 		kzalloc(sve_state_size(task), GFP_KERNEL);
-
-	/*
-	 * If future SVE revisions can have larger vectors though,
-	 * this may cease to be true:
-	 */
-	BUG_ON(!task->thread.sve_state);
 }
@@ -945,6 +939,10 @@ void do_sve_acc(unsigned int esr, struct pt_regs *regs)
 	}

 	sve_alloc(current);
+	if (!current->thread.sve_state) {
+		force_sig(SIGKILL);
+		return;
+	}

 	get_cpu_fpsimd_context();
@@ -845,6 +845,11 @@ static int sve_set(struct task_struct *target,
 	}

 	sve_alloc(target);
+	if (!target->thread.sve_state) {
+		ret = -ENOMEM;
+		clear_tsk_thread_flag(target, TIF_SVE);
+		goto out;
+	}

 	/*
 	 * Ensure target->thread.sve_state is up to date with target's
@@ -289,6 +289,11 @@ static int restore_sve_fpsimd_context(struct user_ctxs *user)
 	/* From now, fpsimd_thread_switch() won't touch thread.sve_state */
 	sve_alloc(current);
+	if (!current->thread.sve_state) {
+		clear_thread_flag(TIF_SVE);
+		return -ENOMEM;
+	}
+
 	err = __copy_from_user(current->thread.sve_state,
 			       (char __user const *)user->sve +
 			       SVE_SIG_REGS_OFFSET,