28 Oct, 2015: 40 commits
    • ipvs: skb_orphan in case of forwarding · 4cf3ff31
      Alex Gartrell authored
      [ Upstream commit 71563f34 ]
      
      It is possible that we bind against a local socket in early_demux when we
      are actually going to want to forward it.  In this case, the socket serves
      no purpose and only serves to confuse things (particularly functions which
      implicitly expect sk_fullsock to be true, like ip_local_out).
      Additionally, skb_set_owner_w is totally broken for non full-socks.
      Signed-off-by: Alex Gartrell <agartrell@fb.com>
      Fixes: 41063e9d ("ipv4: Early TCP socket demux.")
      Acked-by: Julian Anastasov <ja@ssi.bg>
      Signed-off-by: Simon Horman <horms@verge.net.au>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      4cf3ff31
    • ipvs: fix crash if scheduler is changed · c803fddd
      Julian Anastasov authored
      [ Upstream commit 05f00505 ]
      
      I overlooked the svc->sched_data usage from schedulers
      when the services were converted to RCU in 3.10. Now
      the rare ipvsadm -E command can change the scheduler
      but due to the reverse order of ip_vs_bind_scheduler
      and ip_vs_unbind_scheduler we provide new sched_data
      to the old scheduler resulting in a crash.
      
      To fix it without changing the scheduler methods we
      have to use synchronize_rcu() only for the editing case.
      It means all svc->scheduler readers should expect a
      NULL value. To avoid breakage for the service listing
      and ipvsadm -R we can use the "none" name to indicate
      that scheduler is not assigned, a state when we drop
      new connections.
      Reported-by: Alexander Vasiliev <a.vasylev@404-group.com>
      Fixes: ceec4c38 ("ipvs: convert services to rcu")
      Signed-off-by: Julian Anastasov <ja@ssi.bg>
      Signed-off-by: Simon Horman <horms@verge.net.au>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      c803fddd
    • ipvs: do not use random local source address for tunnels · e89e6533
      Julian Anastasov authored
      [ Upstream commit 4754957f ]
      
      Michael Vallaly reports a wrong source address being used in rare
      cases for tunneled traffic. It looks like __ip_vs_get_out_rt in
      3.10+ provides an uninitialized dest_dst->dst_saddr.ip because
      ip_vs_dest_dst_alloc uses kmalloc. While we retry after seeing
      EINVAL from routing for data that does not look like a valid local
      address, the lookup can still succeed when this memory was
      previously used by other dests with different local addresses. As a
      result, we can end up using a valid local address that is not
      suitable for our real server.
      
      Fix it by providing 0.0.0.0 every time our cache is refreshed.
      This way we will get the preferred source address from routing.
      Reported-by: Michael Vallaly <lvs@nolatency.com>
      Fixes: 026ace06 ("ipvs: optimize dst usage for real server")
      Signed-off-by: Julian Anastasov <ja@ssi.bg>
      Signed-off-by: Simon Horman <horms@verge.net.au>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      e89e6533
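      A minimal sketch of the idea described above, using a hypothetical
      stand-in structure rather than the real ip_vs_dest_dst: clear the
      cached source address whenever the destination cache entry is
      (re)initialized, so the subsequent route lookup fills in the
      preferred source instead of stale kmalloc() contents.

        /* Hypothetical simplification, not the actual ipvs code. */
        struct dest_cache {
                __be32 saddr;           /* cached local (source) address */
                /* ... cached dst_entry, cookie, ... */
        };

        static void dest_cache_refresh(struct dest_cache *c)
        {
                c->saddr = 0;           /* 0.0.0.0: let routing choose */
                /* ... route lookup then stores the preferred saddr ... */
        }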
    • sched/fair: Prevent throttling in early pick_next_task_fair() · d6a27fd8
      Ben Segall authored
      [ Upstream commit 54d27365 ]
      
      The optimized task selection logic optimistically selects a new task
      to run without first doing a full put_prev_task(). This is so that we
      can avoid a put/set on the common ancestors of the old and new task.
      
      Similarly, we should only call check_cfs_rq_runtime() to throttle
      eligible groups if they're part of the common ancestry, otherwise it
      is possible to end up with no eligible task in the simple task
      selection.
      
      Imagine:
      		/root
      	/prev		/next
      	/A		/B
      
      If our optimistic selection ends up throttling /next, we goto simple
      and our put_prev_task() ends up throttling /prev, after which we're
      going to bug out in set_next_entity() because there aren't any tasks
      left.
      
      Avoid this scenario by only throttling common ancestors.
      Reported-by: Mohammed Naser <mnaser@vexxhost.com>
      Reported-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Signed-off-by: Ben Segall <bsegall@google.com>
      [ munged Changelog ]
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Roman Gushchin <klamm@yandex-team.ru>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: pjt@google.com
      Fixes: 678d5718 ("sched/fair: Optimize cgroup pick_next_task_fair()")
      Link: http://lkml.kernel.org/r/xm26wq1oswoq.fsf@sword-of-the-dawn.mtv.corp.google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      d6a27fd8
    • Initialize msg/shm IPC objects before doing ipc_addid() · b5495ddc
      Linus Torvalds authored
      [ Upstream commit b9a53227 ]
      
      As reported by Dmitry Vyukov, we really shouldn't do ipc_addid() before
      having initialized the IPC object state.  Yes, we initialize the IPC
      object in a locked state, but with all the lockless RCU lookup work,
      that IPC object lock no longer means that the state cannot be seen.
      
      We already did this for the IPC semaphore code (see commit e8577d1f:
      "ipc/sem.c: fully initialize sem_array before making it visible") but we
      clearly forgot about msg and shm.
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Manfred Spraul <manfred@colorfullife.com>
      Cc: Davidlohr Bueso <dbueso@suse.de>
      Cc: stable@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      b5495ddc
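      A minimal sketch of the initialize-before-publish pattern this fix
      restores for msg and shm, with hypothetical names (add_to_idr stands
      in for ipc_addid): once the object is inserted into the RCU-visible
      lookup structure, lockless readers may find it immediately, so every
      field must already be initialized.

        struct obj {
                int owner;
                struct list_head waiters;
        };

        static int publish_obj(struct obj *o)
        {
                /* 1. Fully initialize while the object is still private. */
                o->owner = -1;
                INIT_LIST_HEAD(&o->waiters);

                /* 2. Only then make it visible to lockless RCU lookups.  */
                return add_to_idr(o);   /* stand-in for ipc_addid()       */
        }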
    • usb: xhci: Add support for URB_ZERO_PACKET to bulk/sg transfers · df8a261d
      Reyad Attiyat authored
      [ Upstream commit 4758dcd1 ]
      
      This commit checks for the URB_ZERO_PACKET flag and creates an extra
      zero-length td if the urb transfer length is a multiple of the endpoint's
      max packet length.
      Signed-off-by: Reyad Attiyat <reyad.attiyat@gmail.com>
      Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      df8a261d
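      A sketch of the check the commit describes (not the exact driver
      code): when URB_ZERO_PACKET is set and the transfer length is an
      exact multiple of the endpoint's max packet size, reserve one extra
      zero-length TD so the device sees a terminating short packet.

        unsigned int maxp = usb_endpoint_maxp(&urb->ep->desc);
        bool need_zero_td = (urb->transfer_flags & URB_ZERO_PACKET) &&
                            urb->transfer_buffer_length > 0 &&
                            (urb->transfer_buffer_length % maxp == 0);

        if (need_zero_td)
                num_trbs++;     /* room for the trailing zero-length TD */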
    • xhci: init command timeout timer earlier to avoid deleting it uninitialized · aafb9ef3
      Mathias Nyman authored
      [ Upstream commit cc8e4fc0 ]
      
      Don't check whether the timer is running with timer_pending() before
      deleting it with del_timer_sync(); this defeats the whole point of
      the sync variant and opens a race.
      
      Instead we just want to make sure the timer is initialized early enough
      before we have a chance to delete it.
      
      Cc: <stable@vger.kernel.org>
      Reported-by: Oliver Neukum <oneukum@suse.com>
      Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      aafb9ef3
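      A sketch of the resulting pattern, with hypothetical helper names
      (the real code lives in the xhci setup/teardown paths): initialize
      the timer as soon as the command machinery is allocated, so teardown
      can call del_timer_sync() unconditionally.

        static int cmd_machinery_alloc(struct xhci_hcd *xhci)
        {
                /* init early, before anything could arm or delete it */
                setup_timer(&xhci->cmd_timer, xhci_handle_command_timeout,
                            (unsigned long)xhci);
                /* ... allocate the command ring itself ... */
                return 0;
        }

        static void cmd_machinery_free(struct xhci_hcd *xhci)
        {
                del_timer_sync(&xhci->cmd_timer); /* safe even if never armed */
                /* ... free the command ring ... */
        }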
    • xhci: change xhci 1.0 only restrictions to support xhci 1.1 · bf5b2951
      Mathias Nyman authored
      [ Upstream commit dca77945 ]
      
      Some changes between the xhci 0.96 and xhci 1.0 specifications forced
      us to check the hci version in code; some of these checks were
      implemented as hci_version == 1.0, which will not work with new
      xhci 1.1 controllers.
      
      xhci 1.1 behaves similarly to xhci 1.0 in these cases, so change
      these checks to hci_version >= 1.0.
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      bf5b2951
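      For reference, the version field is BCD-encoded, so the comparison
      the commit describes looks roughly like this (illustrative only):

        /* 0x100 is "1.0" in the BCD-encoded HCIVERSION register */
        if (xhci->hci_version >= 0x100) {
                /* behavior shared by xHCI 1.0 and 1.1 controllers */
        }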
    • usb: xhci: exit early in xhci_setup_device() if we're halted or dying · ef2b6a7e
      Roger Quadros authored
      [ Upstream commit 448116bf ]
      
      During quick plug/removal of OTG adapter during dual-role testing
      it can happen that xhci_alloc_device() is called for the newly
      detected device after the DRD library has called xhci_stop to
      remove the HCD.
      
      If that is the case, just fail early to prevent the following warning.
      
      [  154.732649] hub 4-0:1.0: USB hub found
      [  154.742204] hub 4-0:1.0: 1 port detected
      [  154.824458] hub 3-0:1.0: state 7 ports 1 chg 0002 evt 0000
      [  154.854609] hub 4-0:1.0: state 7 ports 1 chg 0000 evt 0000
      [  154.944430] usb 3-1: new high-speed USB device number 2 using xhci-hcd
      [  154.951009] xhci-hcd xhci-hcd.0.auto: xhci_setup_device
      [  155.038191] xhci-hcd xhci-hcd.0.auto: remove, state 4
      [  155.043315] usb usb4: USB disconnect, device number 1
      [  155.055270] xhci-hcd xhci-hcd.0.auto: xhci_stop
      [  155.060094] xhci-hcd xhci-hcd.0.auto: USB bus 4 deregistered
      [  155.066576] xhci-hcd xhci-hcd.0.auto: remove, state 1
      [  155.071710] usb usb3: USB disconnect, device number 1
      [  155.077124] xhci-hcd xhci-hcd.0.auto: xhci_setup_device
      [  155.082389] ------------[ cut here ]------------
      [  155.087690] WARNING: CPU: 0 PID: 72 at drivers/usb/host/xhci.c:3800 xhci_setup_device+0x410/0x484 [xhci_hcd]()
      [  155.097861] Modules linked in: sd_mod usb_storage scsi_mod usb_f_ss_lb g_zero libcomposite ipv6 xhci_plat_hcd xhci_hcd usbcore dwc3 udc_core evdev ti_am335x_adc joydev kfifo_buf industrialio snd_soc_simple_cc
      [  155.146734] CPU: 0 PID: 72 Comm: kworker/0:3 Tainted: G        W       4.1.4-00834-gcd9380b-dirty #50
      [  155.156073] Hardware name: Generic AM43 (Flattened Device Tree)
      [  155.162117] Workqueue: usb_hub_wq hub_event [usbcore]
      [  155.167249] Backtrace:
      [  155.169751] [<c0012af0>] (dump_backtrace) from [<c0012c8c>] (show_stack+0x18/0x1c)
      [  155.177390]  r6:c089d4a4 r5:ffffffff r4:00000000 r3:ee46c000
      [  155.183137] [<c0012c74>] (show_stack) from [<c05f7c14>] (dump_stack+0x84/0xd0)
      [  155.190446] [<c05f7b90>] (dump_stack) from [<c00439ac>] (warn_slowpath_common+0x80/0xbc)
      [  155.198605]  r7:00000009 r6:00000ed8 r5:bf27eb70 r4:00000000
      [  155.204348] [<c004392c>] (warn_slowpath_common) from [<c0043a0c>] (warn_slowpath_null+0x24/0x2c)
      [  155.213202]  r8:ee49f000 r7:ee7c0004 r6:00000000 r5:ee7c0158 r4:ee7c0000
      [  155.220051] [<c00439e8>] (warn_slowpath_null) from [<bf27eb70>] (xhci_setup_device+0x410/0x484 [xhci_hcd])
      [  155.229816] [<bf27e760>] (xhci_setup_device [xhci_hcd]) from [<bf27ec10>] (xhci_address_device+0x14/0x18 [xhci_hcd])
      [  155.240415]  r10:ee598200 r9:00000001 r8:00000002 r7:00000001 r6:00000003 r5:00000002
      [  155.248363]  r4:ee49f000
      [  155.250978] [<bf27ebfc>] (xhci_address_device [xhci_hcd]) from [<bf20cb94>] (hub_port_init+0x1b8/0xa9c [usbcore])
      [  155.261403] [<bf20c9dc>] (hub_port_init [usbcore]) from [<bf2101e0>] (hub_event+0x738/0x1020 [usbcore])
      [  155.270874]  r10:ee598200 r9:ee7c0000 r8:ee7c0038 r7:ee518800 r6:ee49f000 r5:00000001
      [  155.278822]  r4:00000000
      [  155.281426] [<bf20faa8>] (hub_event [usbcore]) from [<c005754c>] (process_one_work+0x128/0x340)
      [  155.290196]  r10:00000000 r9:00000003 r8:00000000 r7:fedfa000 r6:eeec5400 r5:ee598314
      [  155.298151]  r4:ee434380
      [  155.300718] [<c0057424>] (process_one_work) from [<c00578f8>] (worker_thread+0x158/0x49c)
      [  155.308963]  r10:ee434380 r9:00000003 r8:eeec5400 r7:00000008 r6:ee434398 r5:eeec5400
      [  155.316913]  r4:eeec5414
      [  155.319482] [<c00577a0>] (worker_thread) from [<c005cc40>] (kthread+0xdc/0xf8)
      [  155.326765]  r10:00000000 r9:00000000 r8:00000000 r7:c00577a0 r6:ee434380 r5:ee4441c0
      [  155.334713]  r4:00000000 r3:00000000
      [  155.338341] [<c005cb64>] (kthread) from [<c000fc08>] (ret_from_fork+0x14/0x2c)
      [  155.345626]  r7:00000000 r6:00000000 r5:c005cb64 r4:ee4441c0
      [  155.356108] ---[ end trace a58d34c223b190e6 ]---
      [  155.360783] xhci-hcd xhci-hcd.0.auto: Virt dev invalid for slot_id 0x1!
      [  155.574404] xhci-hcd xhci-hcd.0.auto: xhci_setup_device
      [  155.579667] ------------[ cut here ]------------
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Roger Quadros <rogerq@ti.com>
      Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      ef2b6a7e
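      A simplified sketch of the early exit (the exact error code and flag
      test in the upstream patch may differ slightly):

        /* at the top of xhci_setup_device() */
        if (xhci->xhc_state & (XHCI_STATE_DYING | XHCI_STATE_HALTED))
                return -ESHUTDOWN;   /* controller is being torn down */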
    • usb: xhci: Clear XHCI_STATE_DYING on start · c48a27a4
      Roger Quadros authored
      [ Upstream commit e5bfeab0 ]
      
      If, for whatever reason, XHCI died in a previous instance, it will
      never recover on the next xhci_start unless we clear the DYING
      flag.
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Roger Quadros <rogerq@ti.com>
      Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      c48a27a4
    • USB: whiteheat: fix potential null-deref at probe · b57a9f68
      Johan Hovold authored
      [ Upstream commit cbb4be65 ]
      
      Fix potential null-pointer dereference at probe by making sure that the
      required endpoints are present.
      
      The whiteheat driver assumes there are at least five pairs of bulk
      endpoints, of which the final pair is used for the "command port". An
      attempt to bind to an interface with fewer bulk endpoints would
      currently lead to an oops.
      
      Fixes CVE-2015-5257.
      Reported-by: Moein Ghasemzadeh <moein@istuary.com>
      Cc: stable <stable@vger.kernel.org>
      Signed-off-by: Johan Hovold <johan@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      b57a9f68
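      A sketch of the kind of check described above (the constant and
      helper name are illustrative, not the exact patch):

        #define WH_MIN_BULK_PAIRS 5    /* last pair is the command port */

        static int whiteheat_check_endpoints(struct usb_serial *serial)
        {
                if (serial->num_bulk_in < WH_MIN_BULK_PAIRS ||
                    serial->num_bulk_out < WH_MIN_BULK_PAIRS) {
                        dev_err(&serial->interface->dev,
                                "missing required bulk endpoints\n");
                        return -ENODEV;
                }
                return 0;
        }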
    • drm/amdgpu: Restore LCD backlight level on resume · c2be986b
      Michel Dänzer authored
      [ Upstream commit 74b3112e ]
      
      Instead of only enabling the backlight (which seems to set it to max
      brightness), just re-set the current backlight level, which also takes
      care of enabling the backlight if necessary.
      
      Port of radeon commit:
      drm/radeon: Restore LCD backlight level on resume (>= R5xx)
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      c2be986b
    • drm: Reject DRI1 hw lock ioctl functions for kms drivers · a15b34af
      Daniel Vetter authored
      [ Upstream commit da168d81 ]
      
      I've done some extensive history digging across libdrm, mesa and
      xf86-video-{intel,nouveau,ati}. The only potential user of this with
      kms drivers I could find was ttmtest, which once used drmGetLock
      still. But that mistake was quickly fixed up. Even the intel xvmc
      library (which otherwise was really good with using dri1 stuff in kms
      mode) managed to never take the hw lock for dri2 (and hence kms).
      
      Hence it should be safe to unconditionally disallow this.
      
      Cc: Peter Antoine <peter.antoine@intel.com>
      Reviewed-by: Peter Antoine <peter.antoine@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      a15b34af
    • drm/i915/bios: handle MIPI Sequence Block v3+ gracefully · 0548f19d
      Jani Nikula authored
      [ Upstream commit cd67d226 ]
      
      The VBT MIPI Sequence Block version 3 has forward incompatible changes:
      
      First, the block size in the header has been specified as reserved, and
      the actual size is a separate 32-bit value within the block. The current
      find_section() function will only look at the size in the block
      header, and, depending on what's in that now-reserved size field,
      continue looking for other sections in the wrong place.
      
      Fix this by taking the new block size field into account. This will
      ensure that the lookups for other sections will work properly, as long
      as the new 32-bit size does not go beyond the opregion VBT mailbox size.
      
      Second, the contents of the block have been completely
      changed. Gracefully refuse parsing the yet unknown data version.
      
      Cc: Deepak M <m.deepak@intel.com>
      Cc: stable@vger.kernel.org
      Reviewed-by: Deepak M <m.deepak@intel.com>
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      0548f19d
    • drm/qxl: recreate the primary surface when the bo is not primary · f2e976bc
      Fabiano Fidêncio authored
      [ Upstream commit 8d0d9401 ]
      
      When disabling/enabling a crtc the primary area must be updated
      independently of which crtc has been disabled/enabled.
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1264735
      Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      f2e976bc
    • drm/qxl: only report first monitor as connected if we have no state · 13923ed3
      Dave Airlie authored
      [ Upstream commit 69e5d3f8 ]
      
      If the server isn't new enough to give us state, report the first
      monitor as always connected, otherwise believe the server side.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      13923ed3
    • [SMB3] Do not fall back to SMBWriteX in set_file_size error cases · ec9dec9f
      Steve French authored
      [ Upstream commit 646200a0 ]
      
      The error paths in set_file_size for cifs and smb3 are incorrect.
      
      In the unlikely event that a server did not support set file info
      of the file size, the code incorrectly falls back to trying SMBWriteX
      (note that only the original core SMB Write, used for example by DOS,
      can set the file size this way - this actually does not work for the
      more recent SMBWriteX).  The idea was that, since the old DOS SMB Write
      could set the file size when you write zero bytes at that offset, it
      could be used if the server rejects the normal set file info call.
      
      Fortunately the SMBWriteX will never be sent on the wire (except when
      file size is zero) since the length and offset fields were reversed
      in the two places in this function that call SMBWriteX causing
      the fall back path to return an error. It is also important to never call
      an SMB request from an SMB2/SMB3 session (which theoretically would
      be possible, and can cause a brief session drop, although the client
      recovers) so this should be fixed.  In practice this path does not happen
      with modern servers but the error fall back to SMBWriteX is clearly wrong.
      
      Remove the calls to SMBWriteX in the error paths in cifs_set_file_size.
      
      Pointed out by PaX/grsecurity team
      Signed-off-by: Steve French <steve.french@primarydata.com>
      Reported-by: PaX Team <pageexec@freemail.hu>
      CC: Emese Revfy <re.emese@gmail.com>
      CC: Brad Spengler <spender@grsecurity.net>
      CC: Stable <stable@vger.kernel.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      ec9dec9f
    • disabling oplocks/leases via module parm enable_oplocks broken for SMB3 · 4409cd18
      Steve French authored
      [ Upstream commit e0ddde9d ]
      
      Leases (oplocks) were always requested for SMB2/SMB3, even when oplocks
      were disabled via the cifs.ko module parameter.
      Signed-off-by: Steve French <steve.french@primarydata.com>
      Reviewed-by: Chandrika Srinivasan <chandrika.srinivasan@citrix.com>
      CC: Stable <stable@vger.kernel.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      4409cd18
    • nfs: fix pg_test page count calculation · f67da137
      Peng Tao authored
      [ Upstream commit 048883e0 ]
      
      We really want sizeof(struct page *) instead. Otherwise we limit
      maximum IO size to 64 pages rather than 512 pages on a 64bit system.
      
      Fixes 2e11f829(nfs: cap request size to fit a kmalloced page array).
      
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Peng Tao <tao.peng@primarydata.com>
      Fixes: 2e11f829 ("nfs: cap request size to fit a kmalloced page array")
      Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      f67da137
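      A quick back-of-the-envelope check of the numbers in the changelog,
      using typical x86-64 sizes (4 KiB pages, 8-byte pointers, and a
      struct page of roughly 64 bytes):

        #include <stdio.h>

        int main(void)
        {
                const unsigned long page_size = 4096;
                const unsigned long ptr_size = 8;          /* sizeof(struct page *) */
                const unsigned long struct_page_size = 64; /* approx sizeof(struct page) */

                /* correct: the kmalloc'd page holds an array of page pointers */
                printf("max pages (correct): %lu\n", page_size / ptr_size);
                /* buggy: dividing by the size of struct page itself */
                printf("max pages (buggy):   %lu\n", page_size / struct_page_size);
                return 0;
        }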
    • netfilter: nf_log: don't zap all loggers on unregister · 8bf6c729
      Florian Westphal authored
      [ Upstream commit 205ee117 ]
      
      like nf_log_unset, nf_log_unregister must not reset the list of loggers.
      Otherwise, a call to nf_log_unregister() will render loggers of other nf
      protocols unusable:
      
      iptables -A INPUT -j LOG
      modprobe nf_log_arp ; rmmod nf_log_arp
      iptables -A INPUT -j LOG
      iptables: No chain/target/match by that name
      
      Fixes: 30e0c6a6 ("netfilter: nf_log: prepare net namespace support for loggers")
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      8bf6c729
    • netfilter: nf_log: Introduce nft_log_dereference() macro · 2f6e5594
      Marcelo Leitner authored
      [ Upstream commit 0c26ed1c ]
      
      Wrap up a common call pattern in an easier to handle call.
      Signed-off-by: Marcelo Ricardo Leitner <mleitner@redhat.com>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      2f6e5594
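      The wrapped pattern is essentially a mutex-protected RCU dereference;
      a sketch of what such a macro looks like (the exact upstream body may
      differ slightly):

        /* Dereference a logger pointer while holding nf_log_mutex,
         * documenting the locking requirement for lockdep instead of
         * open-coding rcu_dereference_protected() at every call site. */
        #define nft_log_dereference(logger) \
                rcu_dereference_protected(logger, lockdep_is_held(&nf_log_mutex))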
    • netfilter: nft_compat: skip family comparison in case of NFPROTO_UNSPEC · 98a38395
      Pablo Neira Ayuso authored
      [ Upstream commit ba378ca9 ]
      
      Fix lookup of existing match/target structures in the corresponding list
      by skipping the family check if NFPROTO_UNSPEC is used.
      
      This is resulting in the allocation and insertion of one match/target
      structure for each use of them. So this not only bloats memory
      consumption but also severely affects the time to reload the ruleset
      from the iptables-compat utility.
      
      After this patch, iptables-compat-restore and iptables-compat take
      almost the same time to reload large rulesets.
      
      Fixes: 0ca743a5 ("netfilter: nf_tables: add compatibility layer for x_tables")
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      98a38395
    • netfilter: nf_log: wait for rcu grace after logger unregistration · 8dafc993
      Pablo Neira Ayuso authored
      [ Upstream commit ad5001cc ]
      
      The nf_log_unregister() function needs to call synchronize_rcu() to make sure
      that the objects are not dereferenced anymore on module removal.
      
      Fixes: 5962815a ("netfilter: nf_log: use an array of loggers instead of list")
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      8dafc993
    • netfilter: ctnetlink: put back references to master ct and expect objects · ba1fa01d
      Pablo Neira Ayuso authored
      [ Upstream commit 95dd8653 ]
      
      We have to put back the references to the master conntrack and the expectation
      that we just created, otherwise we'll leak them.
      
      Fixes: 0ef71ee1 ("netfilter: ctnetlink: refactor ctnetlink_create_expect")
      Reported-by: Tim Wiess <Tim.Wiess@watchguard.com>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      ba1fa01d
    • netfilter: nf_conntrack: Support expectations in different zones · f17d9f15
      Joe Stringer authored
      [ Upstream commit 4b31814d ]
      
      When zones were originally introduced, the expectation functions were
      all extended to perform lookup using the zone. However, insertion was
      not modified to check the zone. This means that two expectations which
      are intended to apply for different connections that have the same tuple
      but exist in different zones cannot both be tracked.
      
      Fixes: 5d0aa2cc (netfilter: nf_conntrack: add support for "conntrack zones")
      Signed-off-by: Joe Stringer <joestringer@nicira.com>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      f17d9f15
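      Conceptually, the insertion-time clash check needs to compare zones
      as well as tuples; a simplified sketch of that condition (names from
      memory, not the exact patch):

        /* two expectations collide only if tuple AND zone both match */
        if (nf_ct_tuple_equal(&existing->tuple, &new->tuple) &&
            nf_ct_zone(existing->master) == nf_ct_zone(new->master)) {
                /* same expectation: replace/refresh the existing one */
        }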
    • netfilter: nfnetlink: work around wrong endianess in res_id field · 3ad1bd82
      Pablo Neira Ayuso authored
      [ Upstream commit a9de9777 ]
      
      The convention in nfnetlink is to use network byte order in every header field
      as well as in the attribute payload. The initial version of the batching
      infrastructure assumes that res_id comes in host byte order though.
      
      The only client of the batching infrastructure is nf_tables, so let's add a
      workaround to address this inconsistency. We currently have 11 nfnetlink
      subsystems according to NFNL_SUBSYS_COUNT, so we can assume that the subsystem
      2560, ie. htons(10), will not be allocated anytime soon, so it can be an alias
      of nf_tables from the nfnetlink batching path when interpreting the res_id
      field.
      
      Based on original patch from Florian Westphal.
      Reported-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      3ad1bd82
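      A sketch of the workaround described above (close to, but not
      necessarily identical to, the upstream code): if userspace stored the
      id in host byte order, the raw field compares equal to
      NFNL_SUBSYS_NFTABLES; otherwise interpret it as the network-order
      value it should have been.

        if (nfgenmsg->res_id == NFNL_SUBSYS_NFTABLES)
                res_id = NFNL_SUBSYS_NFTABLES;     /* old nft: host order */
        else
                res_id = ntohs(nfgenmsg->res_id);  /* correct: network order */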
    • dm raid: fix round up of default region size · 545d2525
      Mikulas Patocka authored
      [ Upstream commit 042745ee ]
      
      Commit 3a0f9aae ("dm raid: round region_size to power of two")
      intended to make sure that the default region size is a power of two.
      However, the logic in that commit is incorrect and sets the variable
      region_size to 0 or 1, depending on whether min_region_size is a power
      of two.
      
      Fix this logic, using roundup_pow_of_two(), so that region_size is
      properly rounded up to the next power of two.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Fixes: 3a0f9aae ("dm raid: round region_size to power of two")
      Cc: stable@vger.kernel.org # v3.8+
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      545d2525
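      The corrected rounding boils down to a single helper from
      <linux/log2.h> (illustrative, not the full context of the function):

        /* roundup_pow_of_two() returns its argument unchanged if it is
         * already a power of two, so one call covers both cases. */
        region_size = roundup_pow_of_two(min_region_size);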
    • USB: option: add ZTE PIDs · 798191c2
      Liu.Zhao authored
      [ Upstream commit 19ab6bc5 ]
      
      This adds ZTE device PIDs to the kernel.
      Signed-off-by: Liu.Zhao <lzsos369@163.com>
      Cc: stable <stable@vger.kernel.org>
      [johan: sort the new entries ]
      Signed-off-by: Johan Hovold <johan@kernel.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      798191c2
    • staging: ion: fix corruption of ion_import_dma_buf · 606c9512
      Shawn Lin authored
      [ Upstream commit 6fa92e2b ]
      
      We found this issue and it still exists in the latest kernel. Simply
      keep ion_handle_create under mutex_lock to avoid this race.
      
      WARNING: CPU: 2 PID: 2648 at drivers/staging/android/ion/ion.c:512 ion_handle_add+0xb4/0xc0()
      ion_handle_add: buffer already found.
      Modules linked in: iwlmvm iwlwifi mac80211 cfg80211 compat
      CPU: 2 PID: 2648 Comm: TimedEventQueue Tainted: G        W    3.14.0 #7
       00000000 00000000 9a3efd2c 80faf273 9a3efd6c 9a3efd5c 80935dc9 811d7fd3
       9a3efd88 00000a58 812208a0 00000200 80e128d4 80e128d4 8d4ae00c a8cd8600
       a8cd8094 9a3efd74 80935e0e 00000009 9a3efd6c 811d7fd3 9a3efd88 9a3efd9c
      Call Trace:
        [<80faf273>] dump_stack+0x48/0x69
        [<80935dc9>] warn_slowpath_common+0x79/0x90
        [<80e128d4>] ? ion_handle_add+0xb4/0xc0
        [<80e128d4>] ? ion_handle_add+0xb4/0xc0
        [<80935e0e>] warn_slowpath_fmt+0x2e/0x30
        [<80e128d4>] ion_handle_add+0xb4/0xc0
        [<80e144cc>] ion_import_dma_buf+0x8c/0x110
        [<80c517c4>] reg_init+0x364/0x7d0
        [<80993363>] ? futex_wait+0x123/0x210
        [<80992e0e>] ? get_futex_key+0x16e/0x1e0
        [<8099308f>] ? futex_wake+0x5f/0x120
        [<80c51e19>] vpu_service_ioctl+0x1e9/0x500
        [<80994aec>] ? do_futex+0xec/0x8e0
        [<80971080>] ? prepare_to_wait_event+0xc0/0xc0
        [<80c51c30>] ? reg_init+0x7d0/0x7d0
        [<80a22562>] do_vfs_ioctl+0x2d2/0x4c0
        [<80b198ad>] ? inode_has_perm.isra.41+0x2d/0x40
        [<80b199cf>] ? file_has_perm+0x7f/0x90
        [<80b1a5f7>] ? selinux_file_ioctl+0x47/0xf0
        [<80a227a8>] SyS_ioctl+0x58/0x80
        [<80fb45e8>] syscall_call+0x7/0x7
        [<80fb0000>] ? mmc_do_calc_max_discard+0xab/0xe4
      
      Fixes: 83271f62 ("ion: hold reference to handle...")
      Signed-off-by: Shawn Lin <shawn.lin@rock-chips.com>
      Reviewed-by: Laura Abbott <labbott@redhat.com>
      Cc: stable <stable@vger.kernel.org> # 3.14+
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      606c9512
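      A simplified sketch of the locking change (helper names modeled on
      ion.c of that era, with error handling omitted; check the actual fix
      for the exact flow):

        mutex_lock(&client->lock);
        handle = ion_handle_lookup(client, buffer);
        if (!IS_ERR(handle)) {
                ion_handle_get(handle);          /* already imported */
        } else {
                handle = ion_handle_create(client, buffer);
                if (!IS_ERR(handle))
                        ion_handle_add(client, handle);  /* still under the lock */
        }
        mutex_unlock(&client->lock);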
    • dm btree: add ref counting ops for the leaves of top level btrees · f2670858
      Joe Thornber authored
      [ Upstream commit b0dc3c8b ]
      
      When using nested btrees, the top leaves of the top levels contain
      block addresses for the root of the next tree down.  If we shadow a
      shared leaf node the leaf values (sub tree roots) should be incremented
      accordingly.
      
      This is only an issue if there is metadata sharing in the top levels.
      Which only occurs if metadata snapshots are being used (as is possible
      with dm-thinp).  And could result in a block from the thinp metadata
      snap being reused early, thus corrupting the thinp metadata snap.
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      f2670858
    • svcrdma: Fix send_reply() scatter/gather set-up · e8b81595
      Chuck Lever authored
      [ Upstream commit 9d11b51c ]
      
      The Linux NFS server returns garbage in the data payload of inline
      NFS/RDMA READ replies. These are READs of under 1000 bytes or so
      where the client has not provided either a reply chunk or a write
      list.
      
      The NFS server delivers the data payload for an NFS READ reply to
      the transport in an xdr_buf page list. If the NFS client did not
      provide a reply chunk or a write list, send_reply() is supposed to
      set up a separate sge for the page containing the READ data, and
      another sge for XDR padding if needed, then post all of the sges via
      a single SEND Work Request.
      
      The problem is send_reply() does not advance through the xdr_buf
      when setting up scatter/gather entries for SEND WR. It always calls
      dma_map_xdr with xdr_off set to zero. When there's more than one
      sge, dma_map_xdr() sets up the SEND sge's so they all point to the
      xdr_buf's head.
      
      The current Linux NFS/RDMA client always provides a reply chunk or
      a write list when performing an NFS READ over RDMA. Therefore, it
      does not exercise this particular case. The Linux server has never
      had to use more than one extra sge for building RPC/RDMA replies
      with a Linux client.
      
      However, an NFS/RDMA client _is_ allowed to send small NFS READs
      without setting up a write list or reply chunk. The NFS READ reply
      fits entirely within the inline reply buffer in this case. This is
      perhaps a more efficient way of performing NFS READs that the Linux
      NFS/RDMA client may some day adopt.
      
      Fixes: b432e6b3 ('svcrdma: Change DMA mapping logic to . . .')
      BugLink: https://bugzilla.linux-nfs.org/show_bug.cgi?id=285
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      e8b81595
    • ath10k: fix dma_mapping_error() handling · fa1b77ba
      Michal Kazior authored
      [ Upstream commit 5e55e3cb ]
      
      The function returns 1 when DMA mapping fails. The
      driver would return bogus values and could
      possibly confuse itself if DMA failed.
      
      Fixes: 767d34fc ("ath10k: remove DMA mapping wrappers")
      Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Michal Kazior <michal.kazior@tieto.com>
      Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      fa1b77ba
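      The general pattern being fixed, sketched (dma_mapping_error()
      returns a non-zero flag on failure, not a negative errno, so it must
      be translated rather than returned directly):

        paddr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, paddr))
                return -EIO;    /* translate, don't return the raw value */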
    • Btrfs: update fix for read corruption of compressed and shared extents · 089699ed
      Filipe Manana authored
      [ Upstream commit 808f80b4 ]
      
      My previous fix in commit 005efedf ("Btrfs: fix read corruption of
      compressed and shared extents") was effective only if the compressed
      extents cover a file range with a length that is not a multiple of 16
      pages. That's because the detection of when we reached a different range
      of the file that shares the same compressed extent as the previously
      processed range was done at extent_io.c:__do_contiguous_readpages(),
      which covers subranges with a length up to 16 pages, because
      extent_readpages() groups the pages in clusters no larger than 16 pages.
      So fix this by tracking the start of the previously processed file
      range's extent map at extent_readpages().
      
      The following test case for fstests reproduces the issue:
      
        seq=`basename $0`
        seqres=$RESULT_DIR/$seq
        echo "QA output created by $seq"
        tmp=/tmp/$$
        status=1	# failure is the default!
        trap "_cleanup; exit \$status" 0 1 2 3 15
      
        _cleanup()
        {
            rm -f $tmp.*
        }
      
        # get standard environment, filters and checks
        . ./common/rc
        . ./common/filter
      
        # real QA test starts here
        _need_to_be_root
        _supported_fs btrfs
        _supported_os Linux
        _require_scratch
        _require_cloner
      
        rm -f $seqres.full
      
        test_clone_and_read_compressed_extent()
        {
            local mount_opts=$1
      
            _scratch_mkfs >>$seqres.full 2>&1
            _scratch_mount $mount_opts
      
            # Create our test file with a single extent of 64Kb that is going to
            # be compressed no matter which compression algo is used (zlib/lzo).
            $XFS_IO_PROG -f -c "pwrite -S 0xaa 0K 64K" \
                $SCRATCH_MNT/foo | _filter_xfs_io
      
            # Now clone the compressed extent into an adjacent file offset.
            $CLONER_PROG -s 0 -d $((64 * 1024)) -l $((64 * 1024)) \
                $SCRATCH_MNT/foo $SCRATCH_MNT/foo
      
            echo "File digest before unmount:"
            md5sum $SCRATCH_MNT/foo | _filter_scratch
      
            # Remount the fs or clear the page cache to trigger the bug in
            # btrfs. Because the extent has an uncompressed length that is a
            # multiple of 16 pages, all the pages belonging to the second range
            # of the file (64K to 128K), which points to the same extent as the
            # first range (0K to 64K), had their contents full of zeroes instead
            # of the byte 0xaa. This was a bug exclusively in the read path of
            # compressed extents, the correct data was stored on disk, btrfs
            # just failed to fill in the pages correctly.
            _scratch_remount
      
            echo "File digest after remount:"
            # Must match the digest we got before.
            md5sum $SCRATCH_MNT/foo | _filter_scratch
        }
      
        echo -e "\nTesting with zlib compression..."
        test_clone_and_read_compressed_extent "-o compress=zlib"
      
        _scratch_unmount
      
        echo -e "\nTesting with lzo compression..."
        test_clone_and_read_compressed_extent "-o compress=lzo"
      
        status=0
        exit
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Tested-by: Timofey Titovets <nefelim4ag@gmail.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      089699ed
    • Btrfs: fix read corruption of compressed and shared extents · 3c62114f
      Filipe Manana authored
      [ Upstream commit 005efedf ]
      
      If a file has a range pointing to a compressed extent, followed by
      another range that points to the same compressed extent and a read
      operation attempts to read both ranges (either completely or part of
      them), the pages that correspond to the second range are incorrectly
      filled with zeroes.
      
      Consider the following example:
      
        File layout
        [0 - 8K]                      [8K - 24K]
            |                             |
            |                             |
         points to extent X,         points to extent X,
         offset 4K, length of 8K     offset 0, length 16K
      
        [extent X, compressed length = 4K uncompressed length = 16K]
      
      If a readpages() call spans the 2 ranges, a single bio to read the extent
      is submitted - extent_io.c:submit_extent_page() would only create a new
      bio to cover the second range pointing to the extent if the extent it
      points to had a different logical address than the extent associated with
      the first range. This has the consequence that the compressed read end
      io handler (compression.c:end_compressed_bio_read()) finishes once the extent
      is decompressed into the pages covering the first range, leaving the
      remaining pages (belonging to the second range) filled with zeroes (done
      by compression.c:btrfs_clear_biovec_end()).
      
      So fix this by submitting the current bio whenever we find a range
      pointing to a compressed extent that was preceded by a range with a
      different extent map. This is the simplest solution for this corner
      case. Making the end io callback populate both ranges (or more, if we
      have multiple pointing to the same extent) is a much more complex
      solution since each bio is tightly coupled with a single extent map and
      the extent maps associated to the ranges pointing to the shared extent
      can have different offsets and lengths.
      
      The following test case for fstests triggers the issue:
      
        seq=`basename $0`
        seqres=$RESULT_DIR/$seq
        echo "QA output created by $seq"
        tmp=/tmp/$$
        status=1	# failure is the default!
        trap "_cleanup; exit \$status" 0 1 2 3 15
      
        _cleanup()
        {
            rm -f $tmp.*
        }
      
        # get standard environment, filters and checks
        . ./common/rc
        . ./common/filter
      
        # real QA test starts here
        _need_to_be_root
        _supported_fs btrfs
        _supported_os Linux
        _require_scratch
        _require_cloner
      
        rm -f $seqres.full
      
        test_clone_and_read_compressed_extent()
        {
            local mount_opts=$1
      
            _scratch_mkfs >>$seqres.full 2>&1
            _scratch_mount $mount_opts
      
            # Create a test file with a single extent that is compressed (the
            # data we write into it is highly compressible no matter which
            # compression algorithm is used, zlib or lzo).
            $XFS_IO_PROG -f -c "pwrite -S 0xaa 0K 4K"        \
                            -c "pwrite -S 0xbb 4K 8K"        \
                            -c "pwrite -S 0xcc 12K 4K"       \
                            $SCRATCH_MNT/foo | _filter_xfs_io
      
            # Now clone our extent into an adjacent offset.
            $CLONER_PROG -s $((4 * 1024)) -d $((16 * 1024)) -l $((8 * 1024)) \
                $SCRATCH_MNT/foo $SCRATCH_MNT/foo
      
            # Same as before but for this file we clone the extent into a lower
            # file offset.
            $XFS_IO_PROG -f -c "pwrite -S 0xaa 8K 4K"         \
                            -c "pwrite -S 0xbb 12K 8K"        \
                            -c "pwrite -S 0xcc 20K 4K"        \
                            $SCRATCH_MNT/bar | _filter_xfs_io
      
            $CLONER_PROG -s $((12 * 1024)) -d 0 -l $((8 * 1024)) \
                $SCRATCH_MNT/bar $SCRATCH_MNT/bar
      
            echo "File digests before unmounting filesystem:"
            md5sum $SCRATCH_MNT/foo | _filter_scratch
            md5sum $SCRATCH_MNT/bar | _filter_scratch
      
            # Evicting the inode or clearing the page cache before reading
            # again the file would also trigger the bug - reads were returning
            # all bytes in the range corresponding to the second reference to
            # the extent with a value of 0, but the correct data was persisted
            # (it was a bug exclusively in the read path). The issue happened
            # only if the same readpages() call targeted pages belonging to the
            # first and second ranges that point to the same compressed extent.
            _scratch_remount
      
            echo "File digests after mounting filesystem again:"
            # Must match the same digests we got before.
            md5sum $SCRATCH_MNT/foo | _filter_scratch
            md5sum $SCRATCH_MNT/bar | _filter_scratch
        }
      
        echo -e "\nTesting with zlib compression..."
        test_clone_and_read_compressed_extent "-o compress=zlib"
      
        _scratch_unmount
      
        echo -e "\nTesting with lzo compression..."
        test_clone_and_read_compressed_extent "-o compress=lzo"
      
        status=0
        exit
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
      Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      3c62114f
    • btrfs: skip waiting on ordered range for special files · b0849b6a
      Jeff Mahoney authored
      [ Upstream commit a30e577c ]
      
      In btrfs_evict_inode, we properly truncate the page cache for evicted
      inodes but then we call btrfs_wait_ordered_range for every inode as well.
      It's the right thing to do for regular files but results in incorrect
      behavior for device inodes for block devices.
      
      filemap_fdatawrite_range gets called with inode->i_mapping which gets
      resolved to the block device inode before getting passed to
      wbc_attach_fdatawrite_inode and ultimately to inode_to_bdi.  What happens
      next depends on whether there's an open file handle associated with the
      inode.  If there is, we write to the block device, which is unexpected
      behavior.  If there isn't, we fall through normally and inode->i_data is used.
      We can also end up racing against open/close which can result in crashes
      when i_mapping points to a block device inode that has been closed.
      
      Since there can't be any page cache associated with special file inodes,
      it's safe to skip the btrfs_wait_ordered_range call entirely and avoid
      the problem.
      
      Cc: <stable@vger.kernel.org>
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=100911
      Tested-by: Christoph Biedl <linux-kernel.bfrz@manchmal.in-ulm.de>
      Signed-off-by: Jeff Mahoney <jeffm@suse.com>
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      b0849b6a
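      The essence of the fix, sketched (special_file() covers block, char,
      fifo and socket inodes; the call site described above is
      btrfs_evict_inode):

        /* special files have no ordered extents or page cache of their
         * own, so skip the wait and never touch the bdev mapping here */
        if (!special_file(inode->i_mode))
                btrfs_wait_ordered_range(inode, 0, (u64)-1);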
    • ASoC: dwc: correct irq clear method · b03abc8b
      Yitian Bu authored
      [ Upstream commit 4873867e ]
      
      Per the Designware I2S datasheet, the TX/RX XRUN IRQ is cleared by
      reading the TOR/ROR registers, rather than by writing into them.
      Signed-off-by: Yitian Bu <yitian.bu@tangramtek.com>
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Cc: stable@vger.kernel.org
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      b03abc8b
    • ASoC: fix broken pxa SoC support · 8df65445
      Robert Jarzmik authored
      [ Upstream commit 3c8f7710 ]
      
      The previous fix of pxa library support, which was introduced to fix the
      library dependency, broke the previous SoC behavior, where a machine
      code binding pxa2xx-ac97 with a codec relied on:
       - sound/soc/pxa/pxa2xx-ac97.c
       - sound/soc/codecs/XXX.c
      
      For example, the mioa701_wm9713.c machine code is currently broken. The
      "select ARM" statement wrongly selects the soc/arm/pxa2xx-ac97 for
      compilation since, by an unfortunate coincidence, SND_PXA2XX_AC97 is
      declared both in sound/arm/Kconfig and in sound/soc/pxa/Kconfig.
      
      Fix this by ensuring that SND_PXA2XX_SOC correctly triggers the correct
      pxa2xx-ac97 compilation.
      
      Fixes: 846172df ("ASoC: fix SND_PXA2XX_LIB Kconfig warning")
      Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      8df65445
    • ASoC: pxa: pxa2xx-ac97: fix dma requestor lines · b496c804
      Robert Jarzmik authored
      [ Upstream commit 8811191f ]
      
      PCM receive and transmit DMA requestor lines were reversed, breaking the
      PCM playback interface for PXA platforms using the sound/soc/ variant
      instead of the sound/arm variant.
      
      The commit below shows the inversion in the requestor lines.
      
      Fixes: d65a1458 ("ASoC: pxa: use snd_dmaengine_dai_dma_data")
      Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Cc: stable@vger.kernel.org
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      b496c804
    • ALSA: hda - Apply SPDIF pin ctl to MacBookPro 12,1 · acd1288e
      John Flatness authored
      [ Upstream commit e8ff581f ]
      
      The MacBookPro 12,1 has the same setup as the 11 for controlling the
      status of the optical audio light. Simply apply the existing workaround
      to the subsystem ID for the 12,1.
      
      [sorted the fixup entry by tiwai]
      
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=105401
      Signed-off-by: John Flatness <john@zerocrates.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      acd1288e
    • ALSA: hda: Add dock support for ThinkPad T550 · 91b15aa1
      Laura Abbott authored
      [ Upstream commit d05ea7da ]
      
      Much like all the other Lenovo laptops, add a quirk to make
      sound work with docking.
      
      Reported-and-tested-by: lacknerflo@gmail.com
      Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      91b15aa1