Commit ae4ea9a2 authored by Junichi Nomura, committed by Corey Minyard

ipmi: Remove smi_msg from waiting_rcv_msgs list before handle_one_recv_msg()

Commit 7ea0ed2b ("ipmi: Make the message handler easier to use for
SMI interfaces") changed handle_new_recv_msgs() to call handle_one_recv_msg()
for a smi_msg while the smi_msg is still connected to the waiting_rcv_msgs
list.  That could lead to the following list corruption problems:

1) A low-level function treats the smi_msg as not connected to any list

  handle_one_recv_msg() could end up calling smi_send(), which
  assumes the msg is not connected to any list.

  For example, the following sequence could corrupt the lists by
  doing list_add_tail() on an entry that is still connected to another
  list (a user-space sketch of the resulting corruption follows the
  sequence):

    handle_new_recv_msgs()
      msg = list_entry(waiting_rcv_msgs)
      handle_one_recv_msg(msg)
        handle_ipmb_get_msg_cmd(msg)
          smi_send(msg)
            spin_lock(xmit_msgs_lock)
            list_add_tail(msg)
            spin_unlock(xmit_msgs_lock)
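
  The effect can be reproduced with a small user-space sketch.  This is
  only an illustration, not driver code: the struct and list_add_tail()
  helper mimic the kernel's <linux/list.h>, and the names waiting/xmit/msg
  stand in for waiting_rcv_msgs, the transmit list and the smi_msg.

    #include <stdio.h>

    /* Minimal stand-in for the kernel's doubly-linked list. */
    struct list_head { struct list_head *next, *prev; };

    #define LIST_HEAD_INIT(name) { &(name), &(name) }

    static void list_add_tail(struct list_head *new, struct list_head *head)
    {
            /* Overwrites new->next/new->prev unconditionally. */
            new->prev = head->prev;
            new->next = head;
            head->prev->next = new;
            head->prev = new;
    }

    int main(void)
    {
            struct list_head waiting = LIST_HEAD_INIT(waiting);
            struct list_head xmit = LIST_HEAD_INIT(xmit);
            struct list_head msg;

            list_add_tail(&msg, &waiting); /* msg queued on "waiting" */
            list_add_tail(&msg, &xmit);    /* bug: msg is still on "waiting" */

            /*
             * "waiting" still points at msg, but msg now points into "xmit":
             * the lists are cross-linked and a later list_del() splices the
             * wrong nodes.
             */
            printf("%d\n", waiting.next == &msg); /* 1 */
            printf("%d\n", msg.next == &waiting); /* 0 */
            printf("%d\n", msg.next == &xmit);    /* 1 */
            return 0;
    }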

2) race between multiple handle_new_recv_msgs() instances

  handle_new_recv_msgs() releases waiting_rcv_msgs_lock before calling
  handle_one_recv_msg(), then retakes the lock and calls list_del() on the
  message.

  If another instance of handle_new_recv_msgs() runs during the window
  shown below, list_del() will be done twice for the same smi_msg (a
  user-space sketch of the resulting corruption follows the diagram):

  handle_new_recv_msgs()
    spin_lock(waiting_rcv_msgs_lock)
    msg = list_entry(waiting_rcv_msgs)
    spin_unlock(waiting_rcv_msgs_lock)
  |
  | handle_one_recv_msg(msg)
  |
    spin_lock(waiting_rcv_msgs_lock)
    list_del(msg)
    spin_unlock(waiting_rcv_msgs_lock)
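
  This too can be reproduced in user space.  Again the helpers only mimic
  <linux/list.h> and m1/m2/m3 are illustrative messages; in the kernel the
  second list_del() would typically hit the LIST_POISON pointers and oops,
  but the underlying stale-pointer splice is the same.

    #include <stdio.h>

    struct list_head { struct list_head *next, *prev; };

    #define LIST_HEAD_INIT(name) { &(name), &(name) }

    static void list_add(struct list_head *new, struct list_head *head)
    {
            new->next = head->next;
            new->prev = head;
            head->next->prev = new;
            head->next = new;
    }

    static void list_del(struct list_head *entry)
    {
            /* Uses entry->prev/entry->next, stale after the first call. */
            entry->prev->next = entry->next;
            entry->next->prev = entry->prev;
    }

    int main(void)
    {
            struct list_head waiting = LIST_HEAD_INIT(waiting);
            struct list_head m1, m2, m3;

            list_add(&m2, &waiting); /* waiting: m2 */
            list_add(&m1, &waiting); /* waiting: m1, m2 */
            list_del(&m1);           /* instance A:  waiting: m2 */
            list_add(&m3, &waiting); /* new message: waiting: m3, m2 */
            list_del(&m1);           /* instance B repeats the delete with
                                      * m1's stale pointers: m3 silently
                                      * falls off the list */

            printf("%d\n", waiting.next == &m3); /* 0 */
            printf("%d\n", waiting.next == &m2); /* 1 */
            return 0;
    }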

Fixes: 7ea0ed2b ("ipmi: Make the message handler easier to use for SMI interfaces")
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
[Added a comment to describe why this works.]
Signed-off-by: Corey Minyard <cminyard@mvista.com>
Cc: stable@vger.kernel.org # 3.19
Tested-by: Ye Feng <yefeng.yl@alibaba-inc.com>
parent dc03c0f9
@@ -3820,6 +3820,7 @@ static void handle_new_recv_msgs(ipmi_smi_t intf)
 	while (!list_empty(&intf->waiting_rcv_msgs)) {
 		smi_msg = list_entry(intf->waiting_rcv_msgs.next,
 				     struct ipmi_smi_msg, link);
+		list_del(&smi_msg->link);
 		if (!run_to_completion)
 			spin_unlock_irqrestore(&intf->waiting_rcv_msgs_lock,
 					       flags);
@@ -3829,11 +3830,14 @@ static void handle_new_recv_msgs(ipmi_smi_t intf)
 		if (rv > 0) {
 			/*
 			 * To preserve message order, quit if we
-			 * can't handle a message.
+			 * can't handle a message.  Add the message
+			 * back at the head, this is safe because this
+			 * tasklet is the only thing that pulls the
+			 * messages.
 			 */
+			list_add(&smi_msg->link, &intf->waiting_rcv_msgs);
 			break;
 		} else {
-			list_del(&smi_msg->link);
 			if (rv == 0)
 				/* Message handled */
 				ipmi_free_smi_msg(smi_msg);
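
Taken together, the patched loop follows a pop-before-processing pattern:
remove the message from the list while holding the lock, handle it unlocked,
and push it back at the head only if it must be retried.  A self-contained
user-space sketch of that pattern follows; every name in it (msg, queue,
queue_lock, handle_one, drain_queue) is illustrative and not taken from
ipmi_msghandler.c.

    #include <pthread.h>
    #include <stddef.h>
    #include <stdio.h>

    struct list_head { struct list_head *next, *prev; };

    #define LIST_HEAD_INIT(name) { &(name), &(name) }
    #define list_entry(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))

    static void list_add(struct list_head *new, struct list_head *head)
    {
            new->next = head->next;
            new->prev = head;
            head->next->prev = new;
            head->next = new;
    }

    static void list_del(struct list_head *entry)
    {
            entry->prev->next = entry->next;
            entry->next->prev = entry->prev;
    }

    static int list_empty(const struct list_head *head)
    {
            return head->next == head;
    }

    struct msg { struct list_head link; int id; };

    static struct list_head queue = LIST_HEAD_INIT(queue);
    static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Illustrative handler: 0 means handled, > 0 means retry later. */
    static int handle_one(struct msg *m)
    {
            printf("handling msg %d\n", m->id);
            return m->id == 2; /* pretend msg 2 cannot be handled yet */
    }

    static void drain_queue(void)
    {
            pthread_mutex_lock(&queue_lock);
            while (!list_empty(&queue)) {
                    struct msg *m = list_entry(queue.next, struct msg, link);
                    int rv;

                    /*
                     * Unlink before dropping the lock: no other instance
                     * can see or delete this entry while it is handled.
                     */
                    list_del(&m->link);
                    pthread_mutex_unlock(&queue_lock);

                    rv = handle_one(m);

                    pthread_mutex_lock(&queue_lock);
                    if (rv > 0) {
                            /*
                             * Could not handle it: put it back at the head
                             * to preserve ordering.  Safe only because this
                             * is the sole consumer of the queue.
                             */
                            list_add(&m->link, &queue);
                            break;
                    }
                    /* rv == 0: handled; the real code frees the msg here. */
            }
            pthread_mutex_unlock(&queue_lock);
    }

    int main(void)
    {
            static struct msg msgs[3] = { { .id = 1 }, { .id = 2 }, { .id = 3 } };
            int i;

            /* Add in reverse so the queue reads 1, 2, 3 from the head. */
            for (i = 2; i >= 0; i--)
                    list_add(&msgs[i].link, &queue);

            drain_queue(); /* handles msg 1, defers msg 2, leaves msg 3 queued */
            return 0;
    }

Because drain_queue() is the only consumer of the queue, the entry it popped
cannot be seen, deleted or re-queued by anyone else while the lock is dropped,
which is exactly why the list_add() back at the head in the patch is safe.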