1. 29 Aug, 2013 5 commits
    • drm/i915: Invalidate TLBs for the rings after a reset · 05dd7086
      Chris Wilson authored
      commit 884020bf upstream.
      
      After any "soft gfx reset" we must manually invalidate the TLBs
      associated with each ring. Empirically, it seems that a
      suspend/resume or D3-D0 cycle counts as a "soft reset". The symptom
      is that the hardware fails to note the new address for its status
      page, and so it continues to write the shadow registers and
      breadcrumbs into the old physical address (now used by something
      completely different, scary). Since the driver reads the new status
      page and never sees any progress, the GPU appears to hang
      immediately upon resume.
      
      Based on a patch by naresh kumar kachhi <naresh.kumar.kacchi@intel.com>
      Reported-by: Thiago Macieira <thiago@kde.org>
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=64725
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Tested-by: Thiago Macieira <thiago@kde.org>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xen/events: initialize local per-cpu mask for all possible events · a3479446
      David Vrabel authored
      commit 84ca7a8e upstream.
      
      The sizeof() argument in init_evtchn_cpu_bindings() is incorrect,
      resulting in only the first 64 ports (32 in 32-bit guests) having
      their bindings initialized to VCPU 0.
      
      In most cases this does not cause a problem, as request_irq() will
      set the irq affinity, which will set the correct local per-cpu
      mask. However, if request_irq() is called on a VCPU other than 0,
      there is a window between the unmasking of the event and the
      affinity being set where an event may be lost because it is not
      locally unmasked on any VCPU. If request_irq() is called on VCPU 0,
      local irqs are disabled during the window and the race does not
      occur.
      
      Fix this by initializing all NR_EVENT_CHANNELS bits in the local
      per-cpu masks.
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • zd1201: do not use stack as URB transfer_buffer · 523578f7
      Jussi Kivilinna authored
      commit 1206ff4f upstream.
      
      Fix zd1201 so that it does not use the stack as a URB
      transfer_buffer. URB buffers must be DMA-able, which stack memory
      is not.
      
      The patch is only compile-tested.
      Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
      Signed-off-by: John W. Linville <linville@tuxdriver.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • workqueue: consider work function when searching for busy work items · 55e3e1f4
      Tejun Heo authored
      commit a2c1c57b upstream.
      
      To avoid executing the same work item concurrently, workqueue
      hashes currently busy workers according to their current work items
      and looks up the table when it wants to execute a new work item.
      If there already is a worker which is executing the new work item,
      the new item is queued to the found worker so that it gets executed
      only after the current execution finishes.
      
      Unfortunately, a work item may be freed while being executed and thus
      recycled for different purposes.  If it gets recycled for a different
      work item and queued while the previous execution is still in
      progress, workqueue may make the new work item wait for the old one
      although the two aren't really related in any way.
      
      In extreme cases, this false dependency may lead to deadlock although
      it's extremely unlikely given that there aren't too many self-freeing
      work item users and they usually don't wait for other work items.
      
      To alleviate the problem, record the current work function in each
      busy worker and match it together with the work item address in
      find_worker_executing_work().  While this isn't complete, it
      ensures that unrelated work items don't interact with each other,
      and in the very unlikely case where a twisted wq user triggers it,
      the collision is always with itself, making the culprit easy to
      spot.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Andrey Isakov <andy51@gmx.ru>
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=51701
      [lizf: Backported to 3.4:
       - Adjust context
       - Incorporate earlier logging cleanup in process_one_work() from
         044c782c ('workqueue: fix checkpatch issues')]
      Signed-off-by: Li Zefan <lizefan@huawei.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • workqueue: fix possible stall on try_to_grab_pending() of a delayed work item · 31eafff4
      Lai Jiangshan authored
      commit 3aa62497 upstream.
      
      Currently, when try_to_grab_pending() grabs a delayed work item, it
      leaves its linked work items alone on the delayed_works list.  The
      linked work items are always NO_COLOR and will cause future
      cwq_activate_first_delayed() calls to increase cwq->nr_active
      incorrectly, which may stall the whole cwq.  For example,
      
      state: cwq->max_active = 1, cwq->nr_active = 1
             one work in cwq->pool, many in cwq->delayed_works.
      
      step1: try_to_grab_pending() removes a work item from delayed_works
             but leaves its NO_COLOR linked work items on it.
      
      step2: Later on, cwq_activate_first_delayed() activates the linked
             work item increasing ->nr_active.
      
      step3: cwq->nr_active = 1, but all activated work items of the cwq
             are NO_COLOR.  When they finish, cwq->nr_active will not be
             decreased due to NO_COLOR, and no further work items will be
             activated from cwq->delayed_works.  The cwq stalls.
      
      Fix it by ensuring the target work item is activated before stealing
      PENDING in try_to_grab_pending().  This ensures that all the linked
      work items are activated without incorrectly bumping cwq->nr_active.
      
      tj: Updated comment and description.
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      [lizf: backported to 3.4: adjust context]
      Signed-off-by: Li Zefan <lizefan@huawei.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  2. 20 Aug, 2013 34 commits
  3. 15 Aug, 2013 1 commit