Commit 3a732b46 authored by Jeremy Kerr, committed by David S. Miller

mctp: prevent double key removal and unref

Currently, we have a bug where a simultaneous DROPTAG ioctl and socket
close may race: we attempt to remove a key from the lists twice, and
perform an unref for each removal. This may result in a use-after-free
when we attempt the second unref.

This change fixes the race by making __mctp_key_remove tolerant of being
called on a key that has already been removed from the socket/net lists,
performing the unref only when we do the actual removal. We also need to
hold the list lock on the ioctl cleanup path.
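
As an illustration of the approach, here is a minimal userspace sketch
(made-up names, not the kernel code itself) of the idiom the patch relies
on: hlist_del_init() leaves a node in the unhashed state, so a later
hlist_unhashed() check lets the remove-and-unref path run a second time
as a no-op.

#include <stdio.h>
#include <stdlib.h>

/* Minimal stand-in for struct hlist_node: pprev == NULL means "unhashed". */
struct hnode {
    struct hnode *next, **pprev;
};

static int hnode_unhashed(const struct hnode *n)
{
    return !n->pprev;
}

static void hnode_del_init(struct hnode *n)
{
    if (hnode_unhashed(n))
        return;
    *n->pprev = n->next;
    if (n->next)
        n->next->pprev = n->pprev;
    n->next = NULL;
    n->pprev = NULL;    /* back to the unhashed state */
}

struct key {
    struct hnode hlist;
    int refs;
};

static void key_put(struct key *k)
{
    if (--k->refs == 0) {
        printf("key freed\n");
        free(k);
    }
}

/* Safe to call from two racing cleanup paths: only the caller that actually
 * unlinks the key drops the list reference, so there is no double unref. */
static void key_remove(struct key *k)
{
    if (!hnode_unhashed(&k->hlist)) {
        hnode_del_init(&k->hlist);
        key_put(k);    /* unref for the list */
    }
}

int main(void)
{
    struct hnode *head = NULL;              /* a list holding one key */
    struct key *k = calloc(1, sizeof(*k));

    k->refs = 2;                            /* one ref for the list, one local */
    k->hlist.pprev = &head;
    head = &k->hlist;

    key_remove(k);    /* e.g. the DROPTAG ioctl path */
    key_remove(k);    /* e.g. socket close: now a no-op */
    key_put(k);       /* drop the local reference; frees the key */
    return 0;
}

Here both key_remove() calls are safe; only the first one drops the list
reference.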

This fix is based on a bug report and comprehensive analysis from
butt3rflyh4ck <butterflyhuangxx@gmail.com>, found via syzkaller.

Cc: stable@vger.kernel.org
Fixes: 63ed1aab ("mctp: Add SIOCMCTP{ALLOC,DROP}TAG ioctls for tag control")
Reported-by: butt3rflyh4ck <butterflyhuangxx@gmail.com>
Signed-off-by: Jeremy Kerr <jk@codeconstruct.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent ed5d1f61
@@ -295,11 +295,12 @@ __must_hold(&net->mctp.keys_lock)
 	mctp_dev_release_key(key->dev, key);
 	spin_unlock_irqrestore(&key->lock, flags);
 
-	hlist_del(&key->hlist);
-	hlist_del(&key->sklist);
-
-	/* unref for the lists */
-	mctp_key_unref(key);
+	if (!hlist_unhashed(&key->hlist)) {
+		hlist_del_init(&key->hlist);
+		hlist_del_init(&key->sklist);
+		/* unref for the lists */
+		mctp_key_unref(key);
+	}
 
 	kfree_skb(skb);
 }

@@ -373,9 +374,17 @@ static int mctp_ioctl_alloctag(struct mctp_sock *msk, unsigned long arg)
 
 	ctl.tag = tag | MCTP_TAG_OWNER | MCTP_TAG_PREALLOC;
 	if (copy_to_user((void __user *)arg, &ctl, sizeof(ctl))) {
-		spin_lock_irqsave(&key->lock, flags);
-		__mctp_key_remove(key, net, flags, MCTP_TRACE_KEY_DROPPED);
+		unsigned long fl2;
+		/* Unwind our key allocation: the keys list lock needs to be
+		 * taken before the individual key locks, and we need a valid
+		 * flags value (fl2) to pass to __mctp_key_remove, hence the
+		 * second spin_lock_irqsave() rather than a plain spin_lock().
+		 */
+		spin_lock_irqsave(&net->mctp.keys_lock, flags);
+		spin_lock_irqsave(&key->lock, fl2);
+		__mctp_key_remove(key, net, fl2, MCTP_TRACE_KEY_DROPPED);
 		mctp_key_unref(key);
+		spin_unlock_irqrestore(&net->mctp.keys_lock, flags);
 		return -EFAULT;
 	}

@@ -228,12 +228,12 @@ __releases(&key->lock)
 
 	if (!key->manual_alloc) {
 		spin_lock_irqsave(&net->mctp.keys_lock, flags);
-		hlist_del(&key->hlist);
-		hlist_del(&key->sklist);
+		if (!hlist_unhashed(&key->hlist)) {
+			hlist_del_init(&key->hlist);
+			hlist_del_init(&key->sklist);
+			mctp_key_unref(key);
+		}
 		spin_unlock_irqrestore(&net->mctp.keys_lock, flags);
-
-		/* unref for the lists */
-		mctp_key_unref(key);
 	}
 
 	/* and one for the local reference */