1. 18 May, 2014 6 commits
    • netfilter: Can't fail and free after table replacement · 8ed40c12
      Thomas Graf authored
      commit c58dd2dd upstream.
      
      All xtables variants suffer from the defect that the copy_to_user()
      to copy the counters to user memory may fail after the table has
      already been exchanged and thus exposed. Returning an error at this
      point would result in freeing the already exposed table, and any
      subsequent packet processing would result in a kernel panic.
      
      We can't copy the counters before exposing the new table, as we
      want to provide the counter state after the old table has been
      unhooked. Therefore, convert this into a silent error.
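      
      A minimal sketch of the resulting pattern (identifiers abridged
      from the xtables do_replace paths, not the verbatim patch; it uses
      a net_warn_ratelimited()-style helper, see the following commit):
      
        /* The new table has already been swapped in and is visible to
         * packet processing; a failing copy_to_user() must therefore
         * not be propagated, or the caller would free the live table. */
        if (copy_to_user(user_counters, counters, counters_size) != 0) {
                /* Silent error: the replacement itself succeeded. */
                net_warn_ratelimited("xtables: counters copy to user failed while replacing table\n");
        }
        return 0;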
      
      Cc: Florian Westphal <fw@strlen.de>
      Signed-off-by: Thomas Graf <tgraf@suug.ch>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • net: Add net_ratelimited_function and net_<level>_ratelimited macros · 4ede126e
      Joe Perches authored
      commit 3a3bfb61 upstream.
      
      __ratelimit() can be considered an inverted bool test because
      it returns true when not ratelimited.  Several tests in the
      kernel tree use this __ratelimit() function incorrectly.
      
      No uses of net_ratelimit are currently incorrect, though.
      
      Most uses of net_ratelimit are to log something via printk or
      pr_<level>.
      
      In order to minimize the uses of net_ratelimit, and to start
      standardizing the code style used for __ratelimit() and net_ratelimit(),
      add a net_ratelimited_function() macro and net_<level>_ratelimited()
      logging macros similar to pr_<level>_ratelimited that use the global
      net_ratelimit instead of a static per-call-site "struct ratelimit_state".
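      
      As a sketch, the added macros take roughly this shape (abridged to
      one level; upstream defines a wrapper per pr_<level>):
      
        #define net_ratelimited_function(function, ...)            \
        do {                                                       \
                if (net_ratelimit())                               \
                        function(__VA_ARGS__);                     \
        } while (0)
      
        #define net_warn_ratelimited(fmt, ...)                     \
                net_ratelimited_function(pr_warn, fmt, ##__VA_ARGS__)
      
      A call site then shrinks from "if (net_ratelimit()) pr_warn(...)"
      to a single net_warn_ratelimited(...) line.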
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • netfilter: nf_conntrack: reserve two bytes for nf_ct_ext->len · 4b87f408
      Andrey Vagin authored
      commit 223b02d9 upstream.
      
      "len" contains sizeof(nf_ct_ext) and size of extensions. In a worst
      case it can contain all extensions. Bellow you can find sizes for all
      types of extensions. Their sum is definitely bigger than 256.
      
      nf_ct_ext_types[0]->len = 24
      nf_ct_ext_types[1]->len = 32
      nf_ct_ext_types[2]->len = 24
      nf_ct_ext_types[3]->len = 32
      nf_ct_ext_types[4]->len = 152
      nf_ct_ext_types[5]->len = 2
      nf_ct_ext_types[6]->len = 16
      nf_ct_ext_types[7]->len = 8
      
      I have seen "len" up to 280, and my host crashes without this patch.
      
      The right way to fix this problem is to reduce the size of the
      ecache extension (4), and Florian is going to do this, but those
      changes would be too large to be appropriate for a stable tree.
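      
      As a sketch of the stopgap (structure abridged, and assuming the
      offset array and len are widened together):
      
        struct nf_ct_ext {
                struct rcu_head rcu;
                u16 offset[NF_CT_EXT_NUM];  /* was u8 */
                u16 len;                    /* was u8; wrapped once past 255 */
                char data[0];
        };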
      
      Fixes: 5b423f6a (netfilter: nf_conntrack: fix racy timer handling with reliable)
      Cc: Pablo Neira Ayuso <pablo@netfilter.org>
      Cc: Patrick McHardy <kaber@trash.net>
      Cc: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
      Cc: "David S. Miller" <davem@davemloft.net>
      Signed-off-by: Andrey Vagin <avagin@openvz.org>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • blktrace: fix accounting of partially completed requests · 14eee5bd
      Roman Pen authored
      commit af5040da upstream.
      
      trace_block_rq_complete does not take into account that a request
      can be partially completed, so we can get the following incorrect
      output from blkparse:
      
        C   R 232 + 240 [0]
        C   R 240 + 232 [0]
        C   R 248 + 224 [0]
        C   R 256 + 216 [0]
      
      but should be:
      
        C   R 232 + 8 [0]
        C   R 240 + 8 [0]
        C   R 248 + 8 [0]
        C   R 256 + 8 [0]
      
      Also, the summary statistics for completed requests and the final
      throughput will be incorrect.
      
      This patch takes the real completion size of the request into
      account and fixes the wrong completion accounting.
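      
      Conceptually (a call-site sketch, arguments abridged):
      
        /* before: the tracepoint sized the completion from the request,
         * i.e. everything still pending, not what this pass completed */
        trace_block_rq_complete(q, rq);
      
        /* after: the caller passes the bytes completed in this pass */
        trace_block_rq_complete(q, rq, nr_bytes);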
      Signed-off-by: Roman Pen <r.peniaev@gmail.com>
      CC: Steven Rostedt <rostedt@goodmis.org>
      CC: Frederic Weisbecker <fweisbec@gmail.com>
      CC: Ingo Molnar <mingo@redhat.com>
      CC: linux-kernel@vger.kernel.org
      Signed-off-by: Jens Axboe <axboe@fb.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • n_tty: Fix n_tty_write crash when echoing in raw mode · 664c0fc6
      Peter Hurley authored
      commit 4291086b upstream.
      
      The tty atomic_write_lock does not provide an exclusion guarantee for
      the tty driver if the termios settings are LECHO & !OPOST.  And since
      it is unexpected and not allowed to call TTY buffer helpers like
      tty_insert_flip_string concurrently, this may lead to crashes when
      concurrent writers call pty_write. In that case the following two
      writers:
      * the ECHOing from a workqueue and
      * pty_write from the process
      race and can overflow the corresponding TTY buffer as follows.
      
      If we look into tty_insert_flip_string_fixed_flag, there is:
        int space = __tty_buffer_request_room(port, goal, flags);
        struct tty_buffer *tb = port->buf.tail;
        ...
        memcpy(char_buf_ptr(tb, tb->used), chars, space);
        ...
        tb->used += space;
      
      so the race of the two can result in something like this:
                    A                                B
      __tty_buffer_request_room
                                        __tty_buffer_request_room
      memcpy(buf(tb->used), ...)
      tb->used += space;
                                        memcpy(buf(tb->used), ...) ->BOOM
      
      B's memcpy is past the tty_buffer due to the previous A's tb->used
      increment.
      
      Since the N_TTY line discipline input processing can output
      concurrently with a tty write, obtain the N_TTY ldisc output_lock to
      serialize echo output with normal tty writes.  This ensures the tty
      buffer helper tty_insert_flip_string is not called concurrently and
      everything is fine.
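      
      A minimal sketch of the echo path after the fix, following the
      upstream shape (per the backport note below, in 3.2/3.4 the lock
      is a member of struct tty_struct rather than struct n_tty_data):
      
        mutex_lock(&ldata->output_lock);
        echoed = __process_echoes(tty);
        mutex_unlock(&ldata->output_lock);
      
        if (echoed && tty->ops->flush_chars)
                tty->ops->flush_chars(tty);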
      
      Note that this is nicely reproducible by an ordinary user using
      forkpty and some setup around that (raw termios + ECHO). And it is
      present in kernels at least after commit
      d945cb9c (pty: Rework the pty layer to
      use the normal buffering logic) in 2.6.31-rc3.
      
      js: add more info to the commit log
      js: switch to bool
      js: lock unconditionally
      js: lock only the tty->ops->write call
      
      References: CVE-2014-0196
      Reported-and-tested-by: Jiri Slaby <jslaby@suse.cz>
      Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Signed-off-by: default avatarGreg Kroah-Hartman <gregkh@linuxfoundation.org>
      [bwh: Backported to 3.2: output_lock is a member of struct tty_struct]
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
    • SCSI: megaraid: missing bounds check in mimd_to_kioc() · 3307c63b
      Dan Carpenter authored
      commit 3de22601 upstream.
      
      pthru32->dataxferlen comes from the user, so we need to check that
      it is not too large, so that we don't overflow the buffer.
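      
      The shape of the added check (the limit must match the kernel
      buffer size; the constant here is illustrative):
      
        if (pthru32->dataxferlen > 0x20000)
                return -EINVAL;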
      Reported-by: Nico Golde <nico@ngolde.de>
      Reported-by: Fabian Yamaguchi <fabs@goesec.de>
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Acked-by: Sumit Saxena <sumit.saxena@lsi.com>
      Signed-off-by: James Bottomley <JBottomley@Parallels.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  2. 13 May, 2014 23 commits
  3. 06 May, 2014 11 commits
    • Linux 3.4.89 · d89a13cf
      Greg Kroah-Hartman authored
    • USB: pl2303: add ids for Hewlett-Packard HP POS pole displays · 1361b538
      Aaron Sanders authored
      commit b16c02fb upstream.
      
      Add device ids to pl2303 for the Hewlett-Packard HP POS pole displays:
      
      LD960: 03f0:0B39
      LCM220: 03f0:3139
      LCM960: 03f0:3239
      
      [ Johan: fix indentation and sort PIDs numerically ]
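      
      For illustration, the corresponding id-table entries would look
      like this (macro names follow the pl2303 convention but are
      assumptions here; the values are from the list above):
      
        #define HP_VENDOR_ID          0x03f0
        #define HP_LD960_PRODUCT_ID   0x0B39
        #define HP_LCM220_PRODUCT_ID  0x3139
        #define HP_LCM960_PRODUCT_ID  0x3239
      
        static const struct usb_device_id id_table[] = {
                { USB_DEVICE(HP_VENDOR_ID, HP_LD960_PRODUCT_ID) },
                { USB_DEVICE(HP_VENDOR_ID, HP_LCM220_PRODUCT_ID) },
                { USB_DEVICE(HP_VENDOR_ID, HP_LCM960_PRODUCT_ID) },
                { }     /* terminating entry */
        };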
      Signed-off-by: Aaron Sanders <aaron.sanders@hp.com>
      Signed-off-by: Johan Hovold <jhovold@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • ext4: use i_size_read in ext4_unaligned_aio() · 6b2b2314
      Theodore Ts'o authored
      commit 6e6358fc upstream.
      
      We haven't taken i_mutex yet, so we need to use i_size_read().
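      
      A two-line sketch of the difference:
      
        loff_t size = i_size_read(inode);   /* tear-free without i_mutex */
        /* NOT: loff_t size = inode->i_size;   can tear on 32-bit SMP */
      
      i_size_read() uses a seqcount (or disables preemption) on 32-bit
      configurations so a concurrent i_size_write() is never observed
      half-updated.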
      Signed-off-by: default avatar"Theodore Ts'o" <tytso@mit.edu>
      Signed-off-by: default avatarGreg Kroah-Hartman <gregkh@linuxfoundation.org>
    • ocfs2: do not put bh when buffer_uptodate failed · 982daeb4
      alex chen authored
      commit f7cf4f5b upstream.
      
      Do not put the bh when buffer_uptodate fails in ocfs2_write_block
      and ocfs2_write_super_or_backup, because the bh has already been
      put in b_end_io. Otherwise it will hit the warning "VFS: brelse:
      Trying to free free buffer".
      Signed-off-by: Alex Chen <alex.chen@huawei.com>
      Reviewed-by: Joseph Qi <joseph.qi@huawei.com>
      Reviewed-by: Srinivas Eeda <srinivas.eeda@oracle.com>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Acked-by: Joel Becker <jlbec@evilplan.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • ocfs2: dlm: fix recovery hung · 8917a40d
      Junxiao Bi authored
      commit ded2cf71 upstream.
      
      There is a race window in dlm_do_recovery() between
      dlm_remaster_locks() and dlm_reset_recovery() when the recovery
      master has nearly finished the recovery process for a dead node.
      After the master sends the FINALIZE_RECO message in
      dlm_remaster_locks(), another node may become the recovery master
      for a second dead node and send the BEGIN_RECO message to all
      nodes, including the old master. In the old master's handler for
      this message, dlm_begin_reco_handler(), dlm->reco.dead_node and
      dlm->reco.new_master will be set to the second dead node and the
      new master. Then, in dlm_reset_recovery(), these two variables
      will be reset to their default values. This prevents the new
      recovery master from finishing the recovery process, and
      eventually the whole cluster hangs in recovery.
      
      old recovery master:                                 new recovery master:
      dlm_remaster_locks()
                                                        become recovery master for
                                                        another dead node.
                                                        dlm_send_begin_reco_message()
      dlm_begin_reco_handler()
      {
       if (dlm->reco.state & DLM_RECO_STATE_FINALIZE) {
        return -EAGAIN;
       }
       dlm_set_reco_master(dlm, br->node_idx);
       dlm_set_reco_dead_node(dlm, br->dead_node);
      }
      dlm_reset_recovery()
      {
       dlm_set_reco_dead_node(dlm, O2NM_INVALID_NODE_NUM);
       dlm_set_reco_master(dlm, O2NM_INVALID_NODE_NUM);
      }
                                                        will hang in dlm_remaster_locks(),
                                                        waiting for dlm lock info
      
      Before sending the FINALIZE_RECO message, the recovery master
      should set DLM_RECO_STATE_FINALIZE for itself and clear it after
      the recovery is done. This closes the race window, since
      BEGIN_RECO messages will not be handled until the
      DLM_RECO_STATE_FINALIZE flag is cleared.
      
      A similar race may happen between the new recovery master and a
      normal node which is in dlm_finalize_reco_handler(); fix that as
      well.
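      
      A sketch of the ordering on the recovery-master side (locking and
      helper names follow the surrounding dlmrecovery.c code; abridged):
      
        spin_lock(&dlm->spinlock);
        dlm->reco.state |= DLM_RECO_STATE_FINALIZE;
        spin_unlock(&dlm->spinlock);
      
        ret = dlm_send_finalize_reco_message(dlm);
        /* ... recovery completes ... */
        dlm_reset_recovery(dlm);
      
        spin_lock(&dlm->spinlock);
        dlm->reco.state &= ~DLM_RECO_STATE_FINALIZE;
        spin_unlock(&dlm->spinlock);
      
      With the flag set, a concurrent dlm_begin_reco_handler() returns
      -EAGAIN (see the trace above) instead of clobbering
      dlm->reco.dead_node and dlm->reco.new_master.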
      Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
      Reviewed-by: Srinivas Eeda <srinivas.eeda@oracle.com>
      Reviewed-by: Wengang Wang <wen.gang.wang@oracle.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • ocfs2: dlm: fix lock migration crash · 09400fe4
      Junxiao Bi authored
      commit 34aa8dac upstream.
      
      This issue was introduced by commit 800deef3 ("ocfs2: use
      list_for_each_entry where benefical") in 2007, where it replaced
      list_for_each with list_for_each_entry. The variable "lock" will
      point to invalid data if the "tmpq" list is empty, and a panic
      will be triggered because of this. Sunil advised reverting it, but
      the old version was not right either: at the end of the outer for
      loop, list_for_each_entry will also leave "lock" pointing to
      invalid data, and then in the next iteration, if the "tmpq" list
      is empty, "lock" will be stale invalid data and cause the panic.
      So revert back to list_for_each and reset "lock" to NULL to fix
      this issue.
      
      Another concern is that this seems like it cannot happen, because
      the "tmpq" list should not be empty. Let me describe how it can.
      
      old lock resource owner(node 1):                                  migration target(node 2):
      imagine there's a lockres with an EX lock from node 2 in the
      granted list, and an NR lock from node x with convert_type EX in
      the converting list.
      dlm_empty_lockres() {
       dlm_pick_migration_target() {
         pick node 2 as target as its lock is the first one
         in granted list.
       }
       dlm_migrate_lockres() {
         dlm_mark_lockres_migrating() {
           res->state |= DLM_LOCK_RES_BLOCK_DIRTY;
           wait_event(dlm->ast_wq, !dlm_lockres_is_dirty(dlm, res));
      	 //after the above code, we can not dirty lockres any more,
           // so dlm_thread shuffle list will not run
                                                                         downconvert lock from EX to NR
                                                                         upconvert lock from NR to EX
      <<< migration may schedule out here, then
      <<< node 2 send down convert request to convert type from EX to
      <<< NR, then send up convert request to convert type from NR to
      <<< EX, at this time, lockres granted list is empty, and two locks
      <<< in the converting list, node x up convert lock followed by
      <<< node 2 up convert lock.
      
      	 // will set lockres RES_MIGRATING flag, the following
      	 // lock/unlock can not run
           dlm_lockres_release_ast(dlm, res);
         }
      
         dlm_send_one_lockres()
                                                                       dlm_process_recovery_data()
                                                                         for (i=0; i<mres->num_locks; i++)
                                                                           if (ml->node == dlm->node_num)
                                                                             for (j = DLM_GRANTED_LIST; j <= DLM_BLOCKED_LIST; j++) {
                                                                              list_for_each_entry(lock, tmpq, list)
                                                                              if (lock) break; <<< lock is invalid as grant list is empty.
                                                                             }
                                                                             if (lock->ml.node != ml->node)
                                                                               BUG() >>> crash here
       }
      
      I saw the above lock status in a vmcore from our internal bug.
      Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
      Reviewed-by: Wengang Wang <wen.gang.wang@oracle.com>
      Cc: Sunil Mushran <sunil.mushran@gmail.com>
      Reviewed-by: Srinivas Eeda <srinivas.eeda@oracle.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • hung_task: check the value of "sysctl_hung_task_timeout_secs" · 96f6aea2
      Liu Hua authored
      commit 80df2847 upstream.
      
      As sysctl_hung_task_timeout_secs is unsigned long, when this value
      is larger than LONG_MAX/HZ, the function
      schedule_timeout_interruptible in the watchdog will return
      immediately without sleeping and print:
      
        schedule_timeout: wrong timeout value ffffffffffffff83
      
      and then the watchdog function will call
      schedule_timeout_interruptible again and again. The screen will be
      filled with
      
      	"schedule_timeout: wrong timeout value ffffffffffffff83"
      
      This patch adds a check and correction in sysctl so that
      schedule_timeout_interruptible always gets a valid parameter.
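      
      A sketch of the sysctl-side clamp (table entry abridged; assuming
      the maximum is wired up via .extra2, as for other clamped
      sysctls):
      
        static unsigned long hung_task_timeout_max = (LONG_MAX / HZ);
      
        {
                .procname     = "hung_task_timeout_secs",
                .data         = &sysctl_hung_task_timeout_secs,
                .maxlen       = sizeof(unsigned long),
                .mode         = 0644,
                .proc_handler = proc_dohung_task_timeout_secs,
                .extra2       = &hung_task_timeout_max,
        },
      
      With timeout * HZ bounded by LONG_MAX,
      schedule_timeout_interruptible() always receives a value in its
      valid range.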
      Signed-off-by: Liu Hua <sdu.liu@huawei.com>
      Tested-by: Satoru Takeuchi <satoru.takeuchi@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • mm: hugetlb: fix softlockup when a large number of hugepages are freed. · af4acfaf
      Mizuma, Masayoshi authored
      commit 55f67141 upstream.
      
      When I decrease the value of nr_hugepages in procfs by a lot, a
      softlockup happens. That is because there is no chance of a
      context switch during this process.
      
      On the other hand, when I allocate a large number of hugepages,
      there is some chance of a context switch, so a softlockup doesn't
      happen during that process. It is therefore necessary to add a
      context switch to the freeing process, as in the allocating
      process, to avoid the softlockup; see the sketch below.
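      
      The fix is essentially one added line in the freeing loop of
      set_max_huge_pages() (a sketch, abridged):
      
        while (min_count < persistent_huge_pages(h)) {
                if (!free_pool_huge_page(h, nodes_allowed, 0))
                        break;
                cond_resched_lock(&hugetlb_lock);   /* let others run */
        }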
      
      When I freed 12 TB of hugepages with kernel-2.6.32-358.el6, the
      freeing process occupied a CPU for over 150 seconds and the
      following softlockup message appeared twice or more.
      
      $ echo 6000000 > /proc/sys/vm/nr_hugepages
      $ cat /proc/sys/vm/nr_hugepages
      6000000
      $ grep ^Huge /proc/meminfo
      HugePages_Total:   6000000
      HugePages_Free:    6000000
      HugePages_Rsvd:        0
      HugePages_Surp:        0
      Hugepagesize:       2048 kB
      $ echo 0 > /proc/sys/vm/nr_hugepages
      
      BUG: soft lockup - CPU#16 stuck for 67s! [sh:12883] ...
      Pid: 12883, comm: sh Not tainted 2.6.32-358.el6.x86_64 #1
      Call Trace:
        free_pool_huge_page+0xb8/0xd0
        set_max_huge_pages+0x128/0x190
        hugetlb_sysctl_handler_common+0x113/0x140
        hugetlb_sysctl_handler+0x1e/0x20
        proc_sys_call_handler+0x97/0xd0
        proc_sys_write+0x14/0x20
        vfs_write+0xb8/0x1a0
        sys_write+0x51/0x90
        __audit_syscall_exit+0x265/0x290
        system_call_fastpath+0x16/0x1b
      
      I have not confirmed this problem with upstream kernels because I
      am not able to prepare a machine equipped with 12 TB of memory
      right now. However, I confirmed that the time required was
      directly proportional to the number of hugepages being freed.
      
      I measured the required times on a smaller machine. It showed that
      130-145 hugepages were freed per millisecond.
      
        Amount of decreasing     Required time      Decreasing rate
        hugepages                     (msec)         (pages/msec)
        ------------------------------------------------------------
        10,000 pages == 20GB         70 -  74          135-142
        30,000 pages == 60GB        208 - 229          131-144
      
      At this rate, freeing 6 TB of hugepages will trigger a softlockup
      with the default threshold of 20 sec.
      Signed-off-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com>
      Cc: Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • sh: fix format string bug in stack tracer · 5650bff7
      Matt Fleming authored
      commit a0c32761 upstream.
      
      Kees reported the following error:
      
         arch/sh/kernel/dumpstack.c: In function 'print_trace_address':
         arch/sh/kernel/dumpstack.c:118:2: error: format not a string literal and no format arguments [-Werror=format-security]
      
      Use the "%s" format so that it's impossible to interpret 'data' as a
      format string.
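      
      The fix in print_trace_address() is one line (a sketch):
      
        printk(data);                  /* before: data parsed as a format */
        printk("%s", (char *)data);    /* after: data is only an argument */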
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      Reported-by: Kees Cook <keescook@chromium.org>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • USB: unbind all interfaces before rebinding any · d6f6fc7a
      Alan Stern authored
      commit 6aec044c upstream.
      
      When a driver doesn't have pre_reset, post_reset, or reset_resume
      methods, the USB core unbinds that driver when its device undergoes a
      reset or a reset-resume, and then rebinds it afterward.
      
      The existing straightforward implementation can lead to problems,
      because each interface gets unbound and rebound before the next
      interface is handled.  If a driver claims additional interfaces, the
      claim may fail because the old binding instance may still own the
      additional interface when the new instance tries to claim it.
      
      This patch fixes the problem by first unbinding all the interfaces
      that are marked (i.e., their needs_binding flag is set) and then
      rebinding all of them.
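      
      A sketch of the two-pass structure (helper names illustrative,
      not the verbatim patch):
      
        for (i = 0; i < config->desc.bNumInterfaces; ++i) {
                intf = config->interface[i];
                if (intf->dev.driver && intf->needs_binding)
                        usb_forced_unbind_intf(intf);   /* pass 1: unbind all */
        }
        for (i = 0; i < config->desc.bNumInterfaces; ++i) {
                intf = config->interface[i];
                if (intf->needs_binding)
                        usb_rebind_intf(intf);          /* pass 2: rebind all */
        }
      
      Because no rebind starts until every marked interface is unbound,
      a driver claiming extra interfaces never races its own old binding
      instance.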
      
      The patch also makes the helper functions in driver.c a little more
      uniform and adjusts some out-of-date comments.
      Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
      Reported-and-tested-by: "Poulain, Loic" <loic.poulain@intel.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • hvc: ensure hvc_init is only ever called once in hvc_console.c · 216583b5
      Paul Gortmaker authored
      commit f76a1cbe upstream.
      
      Commit 3e6c6f63 ("Delay creation of
      khcvd thread") moved the call of hvc_init from being a device_initcall
      into hvc_alloc, and used a non-NULL hvc_driver as an indication of
      whether hvc_init had already been called.
      
      The problem with this is that hvc_driver is only assigned a value
      at the bottom of hvc_init, so there is a window where multiple
      hvc_alloc calls can be in progress at the same time and hence try
      to call hvc_init multiple times. Previously, the use of
      device_initcall guaranteed that hvc_init was only called once.
      
      This manifests itself as sporadic instances of two hvc_init calls
      racing each other, with the loser of the race getting -EBUSY from
      tty_register_driver() and hence that virtual console failing:
      
          Couldn't register hvc console driver
          virtio-ports vport0p1: error -16 allocating hvc for port
      
      Here we add an atomic_t to guarantee we'll never run hvc_init twice.
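      
      A sketch of a once-only guard (names illustrative, not the
      verbatim patch): the first hvc_alloc() caller to win the cmpxchg
      runs hvc_init(); concurrent callers skip it instead of racing.
      
        static atomic_t hvc_needs_init = ATOMIC_INIT(0);
      
        if (atomic_cmpxchg(&hvc_needs_init, 0, 1) == 0) {
                int err = hvc_init();
                if (err)
                        return ERR_PTR(err);
        }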
      
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Fixes: 3e6c6f63 ("Delay creation of khcvd thread")
      Reported-by: Jim Somerville <Jim.Somerville@windriver.com>
      Tested-by: Jim Somerville <Jim.Somerville@windriver.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>