Commit 52c7418e authored by David Miller, committed by Greg Kroah-Hartman

Fix compat futex hangs.

[FUTEX]: Fix address computation in compat code.

[ Upstream commit: 3c5fd9c7 ]

compat_exit_robust_list() computes a pointer to the
futex entry in userspace as follows:

	(void __user *)entry + futex_offset

'entry' is a 'struct robust_list __user *', and
'futex_offset' is a 'compat_long_t' (typically a 's32').

Things explode if the 32-bit sign bit is set in futex_offset.

Type promotion sign extends futex_offset to a 64-bit value before
adding it to 'entry'.
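
To see the failure concretely, here is a minimal userspace sketch (plain
C, not kernel code; the pointer value and offset below are made up,
chosen only so that the broken result matches the fault address quoted
further down):

	#include <stdio.h>
	#include <stdint.h>
	#include <inttypes.h>

	int main(void)
	{
		/* Hypothetical 32-bit user pointer ('entry') and a
		 * compat_long_t-style offset whose 32-bit sign bit is set. */
		uint64_t entry  = 0x0000000000001000ULL;
		int32_t  offset = (int32_t)0xf7f15bd8;

		/* Broken: the s32 offset is sign-extended to 64 bits before
		 * the addition, so the upper half of the result fills with ones. */
		uint64_t broken = entry + offset;

		/* Fixed (what the futex_uaddr() helper in the patch below
		 * effectively does): add in 32-bit arithmetic, then zero-extend,
		 * which is how compat_ptr() forms the user address. */
		uint64_t fixed = (uint32_t)entry + (uint32_t)offset;

		printf("broken: 0x%016" PRIx64 "\n", broken); /* fffffffff7f16bd8 */
		printf("fixed:  0x%016" PRIx64 "\n", fixed);  /* 00000000f7f16bd8 */
		return 0;
	}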

This triggered a problem on sparc64 running 32-bit applications which
would lock up a cpu looping forever in the fault handling for the
userspace load in handle_futex_death().

Compat userspace runs with address masking (wherein the cpu zeros out
the top 32 bits of every effective address given to a memory operation
instruction), so the sparc64 fault handler accounts for this by
zeroing out the top 32 bits of the fault address too.

Since the kernel properly uses the compat_uptr interfaces, kernel-side
accesses to compat userspace work too, because they only ever use
addresses with the top 32 bits clear.
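
For reference, the compat pointer helpers that keep those addresses in
the low 32 bits are conceptually just a zero-extension and a truncation.
A simplified sketch (modeled on the kernel's generic definitions; the
names carry a _sketch suffix because this is illustration, not the
actual kernel API):

	#include <stdint.h>

	typedef uint32_t compat_uptr_t;

	/* Zero-extend a 32-bit compat user pointer; the result can never
	 * have any of its top 32 bits set. */
	static inline void *compat_ptr_sketch(compat_uptr_t uptr)
	{
		return (void *)(unsigned long)uptr;
	}

	/* Truncate a kernel-held user pointer back to its 32-bit form. */
	static inline compat_uptr_t ptr_to_compat_sketch(void *uptr)
	{
		return (compat_uptr_t)(unsigned long)uptr;
	}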

Because of this compat futex layer bug we get into the following loop
when executing the get_user() load near the top of handle_futex_death():

1) load from address '0xfffffffff7f16bd8', FAULT
2) fault handler clears upper 32-bits, processes fault
   for address '0xf7f16bd8' which succeeds
3) goto #1

I want to thank Bernd Zeimetz, Josip Rodin, and Fabio Massimo Di Nitto
for their tireless efforts helping me track down this bug.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
parent 06283bea

@@ -29,6 +29,15 @@ fetch_robust_entry(compat_uptr_t *uentry, struct robust_list __user **entry,
 	return 0;
 }
 
+static void __user *futex_uaddr(struct robust_list *entry,
+				compat_long_t futex_offset)
+{
+	compat_uptr_t base = ptr_to_compat(entry);
+	void __user *uaddr = compat_ptr(base + futex_offset);
+
+	return uaddr;
+}
+
 /*
  * Walk curr->robust_list (very carefully, it's a userspace list!)
  * and mark any locks found there dead, and notify any waiters.
@@ -61,18 +70,23 @@ void compat_exit_robust_list(struct task_struct *curr)
 	if (fetch_robust_entry(&upending, &pending,
 			       &head->list_op_pending, &pip))
 		return;
-	if (pending)
-		handle_futex_death((void __user *)pending + futex_offset, curr, pip);
+	if (pending) {
+		void __user *uaddr = futex_uaddr(pending,
+						 futex_offset);
+		handle_futex_death(uaddr, curr, pip);
+	}
 
 	while (entry != (struct robust_list __user *) &head->list) {
 		/*
 		 * A pending lock might already be on the list, so
 		 * dont process it twice:
 		 */
-		if (entry != pending)
-			if (handle_futex_death((void __user *)entry + futex_offset,
-						curr, pi))
+		if (entry != pending) {
+			void __user *uaddr = futex_uaddr(entry,
+							 futex_offset);
+			if (handle_futex_death(uaddr, curr, pi))
 				return;
+		}
 
 		/*
 		 * Fetch the next entry in the list:
...