- 22 Oct, 2015 40 commits
-
Julian Anastasov authored
commit 4754957f upstream. Michael Vallaly reports a wrong source address being used in rare cases for tunneled traffic. It looks like __ip_vs_get_out_rt in 3.10+ is providing uninitialized dest_dst->dst_saddr.ip because ip_vs_dest_dst_alloc uses kmalloc. While we retry after seeing EINVAL from routing for data that does not look like a valid local address, the lookup still succeeded when this memory was previously used by other dests with different local addresses. As a result, we can use a valid local address that is not suitable for our real server. Fix it by providing 0.0.0.0 every time our cache is refreshed; this way we will get the preferred source address from routing. Reported-by: Michael Vallaly <lvs@nolatency.com> Fixes: 026ace06 ("ipvs: optimize dst usage for real server") Signed-off-by: Julian Anastasov <ja@ssi.bg> Signed-off-by: Simon Horman <horms@verge.net.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
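A minimal sketch of the idea, assuming the allocation helper and field names quoted in the commit text (ip_vs_dest_dst_alloc, dest_dst->dst_saddr.ip); the actual upstream diff may differ:

  /* Zero the freshly allocated cache entry's source address so the next
   * route lookup fills in the preferred source address instead of stale
   * kmalloc contents. */
  static struct ip_vs_dest_dst *ip_vs_dest_dst_alloc(void)
  {
          struct ip_vs_dest_dst *dest_dst;

          dest_dst = kmalloc(sizeof(*dest_dst), GFP_ATOMIC);
          if (dest_dst)
                  dest_dst->dst_saddr.ip = 0;     /* 0.0.0.0 until routing decides */
          return dest_dst;
  }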
-
Dave Martin authored
pl011_tx_softirq() currently uses spin_{,un}lock(), which are not sufficient to inhibit pl011_int() from being triggered by a local IRQ and trying to re-take the same lock. This can lead to deadlocks. This patch uses the _irq() locking variants instead to ensure that pl011_int() handling for a given port is deferred until any pl011_tx_softirq() work for that port is complete. Notes for stable: This patch fixes an issue that is addressed upstream by the following commit, a more substantial rewrite of the affected code that fixes multiple, mostly more minor, issues: 1e84d223 ("serial/amba-pl011: Refactor and simplify TX FIFO handling"). The upstream patch was rejected for stable on the reasonable grounds that it was too big and complex. The original buggy code was merged in v4.1, and the rewrite was merged in v4.2, leaving only v4.1 affected. This patch stands in for 1e84d223 on 4.1.x only. Fixes: 734745ca ("serial/amba-pl011: Activate TX IRQ passively") Signed-off-by: Dave Martin <Dave.Martin@arm.com> Tested-by: Robin Murphy <robin.murphy@arm.com> Tested-by: Stefan Wahren <stefan.wahren@i2se.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
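A sketch of the locking change only, assuming the 4.1 driver's work member is named tx_softirq_work; the body of the function is elided:

  static void pl011_tx_softirq(struct work_struct *work)
  {
          struct uart_amba_port *uap =
                  container_of(work, struct uart_amba_port, tx_softirq_work);

          spin_lock_irq(&uap->port.lock);         /* was: spin_lock() */
          /* ... feed the TX FIFO as before ... */
          spin_unlock_irq(&uap->port.lock);       /* was: spin_unlock() */
  }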
-
Ben Segall authored
commit 54d27365 upstream. The optimized task selection logic optimistically selects a new task to run without first doing a full put_prev_task(). This is so that we can avoid a put/set on the common ancestors of the old and new task. Similarly, we should only call check_cfs_rq_runtime() to throttle eligible groups if they're part of the common ancestry, otherwise it is possible to end up with no eligible task in the simple task selection. Imagine a hierarchy /root with children /prev and /next, which have children /A and /B respectively. If our optimistic selection ends up throttling /next, we goto simple and our put_prev_task() ends up throttling /prev, after which we're going to bug out in set_next_entity() because there aren't any tasks left. Avoid this scenario by only throttling common ancestors. Reported-by: Mohammed Naser <mnaser@vexxhost.com> Reported-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru> Signed-off-by: Ben Segall <bsegall@google.com> [ munged Changelog ] Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Roman Gushchin <klamm@yandex-team.ru> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: pjt@google.com Fixes: 678d5718 ("sched/fair: Optimize cgroup pick_next_task_fair()") Link: http://lkml.kernel.org/r/xm26wq1oswoq.fsf@sword-of-the-dawn.mtv.corp.google.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Linus Torvalds authored
commit b9a53227 upstream. As reported by Dmitry Vyukov, we really shouldn't do ipc_addid() before having initialized the IPC object state. Yes, we initialize the IPC object in a locked state, but with all the lockless RCU lookup work, that IPC object lock no longer means that the state cannot be seen. We already did this for the IPC semaphore code (see commit e8577d1f: "ipc/sem.c: fully initialize sem_array before making it visible") but we clearly forgot about msg and shm. Reported-by: Dmitry Vyukov <dvyukov@google.com> Cc: Manfred Spraul <manfred@colorfullife.com> Cc: Davidlohr Bueso <dbueso@suse.de> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
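A hedged sketch of the ordering fix for the msg case (field names as in ipc/msg.c, not the literal diff): finish initializing the object before ipc_addid() makes it visible to lockless RCU lookups.

  msq->q_stime = msq->q_rtime = 0;
  msq->q_ctime = get_seconds();
  msq->q_cbytes = msq->q_qnum = 0;
  msq->q_qbytes = ns->msg_ctlmnb;
  msq->q_lspid = msq->q_lrpid = 0;
  INIT_LIST_HEAD(&msq->q_messages);
  INIT_LIST_HEAD(&msq->q_receivers);
  INIT_LIST_HEAD(&msq->q_senders);

  /* only now publish the object for lookup */
  retval = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni);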
-
Reyad Attiyat authored
commit 4758dcd1 upstream. This commit checks for the URB_ZERO_PACKET flag and creates an extra zero-length td if the urb transfer length is a multiple of the endpoint's max packet length. Signed-off-by: Reyad Attiyat <reyad.attiyat@gmail.com> Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Cc: Oliver Neukum <oneukum@suse.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
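A sketch of the check, under the assumption that it runs while counting TRBs for a bulk transfer; the helper/variable names are illustrative:

  unsigned int max_packet = usb_endpoint_maxp(&urb->ep->desc);
  bool need_zero_td = (urb->transfer_flags & URB_ZERO_PACKET) &&
                      urb->transfer_buffer_length > 0 &&
                      (urb->transfer_buffer_length % max_packet) == 0;

  if (need_zero_td)
          num_trbs++;     /* reserve room for a trailing zero-length TD */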
-
Mathias Nyman authored
commit cc8e4fc0 upstream. Don't check whether the timer is running with timer_pending() before deleting it with del_timer_sync(); this defies the whole point of the sync part and can cause a race. Instead we just want to make sure the timer is initialized early enough, before we have a chance to delete it. Reported-by: Oliver Neukum <oneukum@suse.com> Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
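A sketch of the pattern, assuming the xHCI command timer and its handler keep their 4.1-era names (cmd_timer, xhci_handle_command_timeout):

  /* at init time: always set up the timer, so deleting it is always safe */
  setup_timer(&xhci->cmd_timer, xhci_handle_command_timeout,
              (unsigned long)xhci);

  /* at teardown: no timer_pending() check, del_timer_sync() handles it */
  del_timer_sync(&xhci->cmd_timer);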
-
Mathias Nyman authored
commit dca77945 upstream. Some changes between the xhci 0.96 and xhci 1.0 specifications forced us to check the hci version in code; some of these checks were implemented as hci_version == 1.0, which will not work with new xhci 1.1 controllers. xhci 1.1 behaves similarly to xhci 1.0 in these cases, so change these checks to hci_version >= 1.0. Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
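The shape of the change (hci_version is stored as BCD, so 1.0 is 0x100); a sketch:

  if (xhci->hci_version >= 0x100) {       /* was: == 0x100 */
          /* xHCI 1.0 and 1.1 behaviour */
  } else {
          /* xHCI 0.96 behaviour */
  }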
-
Roger Quadros authored
commit 448116bf upstream. During quick plug/removal of OTG adapter during dual-role testing it can happen that xhci_alloc_device() is called for the newly detected device after the DRD library has called xhci_stop to remove the HCD. If that is the case, just fail early to prevent the following warning. [ 154.732649] hub 4-0:1.0: USB hub found [ 154.742204] hub 4-0:1.0: 1 port detected [ 154.824458] hub 3-0:1.0: state 7 ports 1 chg 0002 evt 0000 [ 154.854609] hub 4-0:1.0: state 7 ports 1 chg 0000 evt 0000 [ 154.944430] usb 3-1: new high-speed USB device number 2 using xhci-hcd [ 154.951009] xhci-hcd xhci-hcd.0.auto: xhci_setup_device [ 155.038191] xhci-hcd xhci-hcd.0.auto: remove, state 4 [ 155.043315] usb usb4: USB disconnect, device number 1 [ 155.055270] xhci-hcd xhci-hcd.0.auto: xhci_stop [ 155.060094] xhci-hcd xhci-hcd.0.auto: USB bus 4 deregistered [ 155.066576] xhci-hcd xhci-hcd.0.auto: remove, state 1 [ 155.071710] usb usb3: USB disconnect, device number 1 [ 155.077124] xhci-hcd xhci-hcd.0.auto: xhci_setup_device [ 155.082389] ------------[ cut here ]------------ [ 155.087690] WARNING: CPU: 0 PID: 72 at drivers/usb/host/xhci.c:3800 xhci_setup_device+0x410/0x484 [xhci_hcd]() [ 155.097861] Modules linked in: sd_mod usb_storage scsi_mod usb_f_ss_lb g_zero libcomposite ipv6 xhci_plat_hcd xhci_hcd usbcore dwc3 udc_core evdev ti_am335x_adc joydev kfifo_buf industrialio snd_soc_simple_cc [ 155.146734] CPU: 0 PID: 72 Comm: kworker/0:3 Tainted: G W 4.1.4-00834-gcd9380b-dirty #50 [ 155.156073] Hardware name: Generic AM43 (Flattened Device Tree) [ 155.162117] Workqueue: usb_hub_wq hub_event [usbcore] [ 155.167249] Backtrace: [ 155.169751] [<c0012af0>] (dump_backtrace) from [<c0012c8c>] (show_stack+0x18/0x1c) [ 155.177390] r6:c089d4a4 r5:ffffffff r4:00000000 r3:ee46c000 [ 155.183137] [<c0012c74>] (show_stack) from [<c05f7c14>] (dump_stack+0x84/0xd0) [ 155.190446] [<c05f7b90>] (dump_stack) from [<c00439ac>] (warn_slowpath_common+0x80/0xbc) [ 155.198605] r7:00000009 r6:00000ed8 r5:bf27eb70 r4:00000000 [ 155.204348] [<c004392c>] (warn_slowpath_common) from [<c0043a0c>] (warn_slowpath_null+0x24/0x2c) [ 155.213202] r8:ee49f000 r7:ee7c0004 r6:00000000 r5:ee7c0158 r4:ee7c0000 [ 155.220051] [<c00439e8>] (warn_slowpath_null) from [<bf27eb70>] (xhci_setup_device+0x410/0x484 [xhci_hcd]) [ 155.229816] [<bf27e760>] (xhci_setup_device [xhci_hcd]) from [<bf27ec10>] (xhci_address_device+0x14/0x18 [xhci_hcd]) [ 155.240415] r10:ee598200 r9:00000001 r8:00000002 r7:00000001 r6:00000003 r5:00000002 [ 155.248363] r4:ee49f000 [ 155.250978] [<bf27ebfc>] (xhci_address_device [xhci_hcd]) from [<bf20cb94>] (hub_port_init+0x1b8/0xa9c [usbcore]) [ 155.261403] [<bf20c9dc>] (hub_port_init [usbcore]) from [<bf2101e0>] (hub_event+0x738/0x1020 [usbcore]) [ 155.270874] r10:ee598200 r9:ee7c0000 r8:ee7c0038 r7:ee518800 r6:ee49f000 r5:00000001 [ 155.278822] r4:00000000 [ 155.281426] [<bf20faa8>] (hub_event [usbcore]) from [<c005754c>] (process_one_work+0x128/0x340) [ 155.290196] r10:00000000 r9:00000003 r8:00000000 r7:fedfa000 r6:eeec5400 r5:ee598314 [ 155.298151] r4:ee434380 [ 155.300718] [<c0057424>] (process_one_work) from [<c00578f8>] (worker_thread+0x158/0x49c) [ 155.308963] r10:ee434380 r9:00000003 r8:eeec5400 r7:00000008 r6:ee434398 r5:eeec5400 [ 155.316913] r4:eeec5414 [ 155.319482] [<c00577a0>] (worker_thread) from [<c005cc40>] (kthread+0xdc/0xf8) [ 155.326765] r10:00000000 r9:00000000 r8:00000000 r7:c00577a0 r6:ee434380 r5:ee4441c0 [ 155.334713] r4:00000000 r3:00000000 [ 155.338341] [<c005cb64>] (kthread) from 
[<c000fc08>] (ret_from_fork+0x14/0x2c) [ 155.345626] r7:00000000 r6:00000000 r5:c005cb64 r4:ee4441c0 [ 155.356108] ---[ end trace a58d34c223b190e6 ]--- [ 155.360783] xhci-hcd xhci-hcd.0.auto: Virt dev invalid for slot_id 0x1! [ 155.574404] xhci-hcd xhci-hcd.0.auto: xhci_setup_device [ 155.579667] ------------[ cut here ]------------ Signed-off-by: Roger Quadros <rogerq@ti.com> Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Roger Quadros authored
commit e5bfeab0 upstream. If XHCI died previously, for whatever reason, it will never recover on the next xhci_start unless we clear the DYING flag. Signed-off-by: Roger Quadros <rogerq@ti.com> Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
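A minimal sketch of the idea in xhci_start(); the exact placement in the upstream patch may differ:

  if (xhci->xhc_state & XHCI_STATE_DYING)
          xhci->xhc_state &= ~XHCI_STATE_DYING;   /* allow a fresh start */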
-
Roger Quadros authored
commit 85ac90f8 upstream. Else it races with xhci_setup_device Signed-off-by: Roger Quadros <rogerq@ti.com> Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mathias Nyman authored
commit a6809ffd upstream. We want to give the command abortion an additional try to stop the command ring before we completely hose xhci. Tested-by: Vincent Pelletier <plr.vincent@gmail.com> Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Johan Hovold authored
commit cbb4be65 upstream. Fix potential null-pointer dereference at probe by making sure that the required endpoints are present. The whiteheat driver assumes there are at least five pairs of bulk endpoints, of which the final pair is used for the "command port". An attempt to bind to an interface with fewer bulk endpoints would currently lead to an oops. Fixes CVE-2015-5257. Reported-by: Moein Ghasemzadeh <moein@istuary.com> Signed-off-by: Johan Hovold <johan@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
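A sketch of such a probe-time check, counting bulk endpoints before binding; COMMAND_PORT is the driver's index of the command port (4), so five bulk IN/OUT pairs are required. Details may differ from the upstream patch.

  static int whiteheat_probe(struct usb_serial *serial,
                             const struct usb_device_id *id)
  {
          struct usb_host_interface *iface_desc = serial->interface->cur_altsetting;
          unsigned int num_bulk_in = 0, num_bulk_out = 0;
          unsigned int i;

          for (i = 0; i < iface_desc->desc.bNumEndpoints; i++) {
                  struct usb_endpoint_descriptor *ep = &iface_desc->endpoint[i].desc;

                  if (usb_endpoint_is_bulk_in(ep))
                          num_bulk_in++;
                  if (usb_endpoint_is_bulk_out(ep))
                          num_bulk_out++;
          }

          if (num_bulk_in < COMMAND_PORT + 1 || num_bulk_out < COMMAND_PORT + 1)
                  return -ENODEV; /* not enough endpoints for the command port */

          return 0;
  }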
-
Dave Airlie authored
commit 274d8352 upstream. Since 9eb1e57f drm/dp/mst: make sure mst_primary mstb is valid in work function we validate the mstb structs in the work function, and doing that takes a reference. So we should never get here with the work function running using the mstb device, only if the work function hasn't run yet or is running for another mstb. So we don't need to sync the work here, this was causing lockdep spew as below. [ +0.000160] ============================================= [ +0.000001] [ INFO: possible recursive locking detected ] [ +0.000002] 3.10.0-320.el7.rhel72.stable.backport.3.x86_64.debug #1 Tainted: G W ------------ [ +0.000001] --------------------------------------------- [ +0.000001] kworker/4:2/1262 is trying to acquire lock: [ +0.000001] ((&mgr->work)){+.+.+.}, at: [<ffffffff810b29a5>] flush_work+0x5/0x2e0 [ +0.000007] but task is already holding lock: [ +0.000001] ((&mgr->work)){+.+.+.}, at: [<ffffffff810b57e4>] process_one_work+0x1b4/0x710 [ +0.000004] other info that might help us debug this: [ +0.000001] Possible unsafe locking scenario: [ +0.000002] CPU0 [ +0.000000] ---- [ +0.000001] lock((&mgr->work)); [ +0.000002] lock((&mgr->work)); [ +0.000001] *** DEADLOCK *** [ +0.000001] May be due to missing lock nesting notation [ +0.000002] 2 locks held by kworker/4:2/1262: [ +0.000001] #0: (events_long){.+.+.+}, at: [<ffffffff810b57e4>] process_one_work+0x1b4/0x710 [ +0.000004] #1: ((&mgr->work)){+.+.+.}, at: [<ffffffff810b57e4>] process_one_work+0x1b4/0x710 [ +0.000003] stack backtrace: [ +0.000003] CPU: 4 PID: 1262 Comm: kworker/4:2 Tainted: G W ------------ 3.10.0-320.el7.rhel72.stable.backport.3.x86_64.debug #1 [ +0.000001] Hardware name: LENOVO 20EGS0R600/20EGS0R600, BIOS GNET71WW (2.19 ) 02/05/2015 [ +0.000008] Workqueue: events_long drm_dp_mst_link_probe_work [drm_kms_helper] [ +0.000001] ffffffff82c26c90 00000000a527b914 ffff88046399bae8 ffffffff816fe04d [ +0.000004] ffff88046399bb58 ffffffff8110f47f ffff880461438000 0001009b840fc003 [ +0.000002] ffff880461438a98 0000000000000000 0000000804dc26e1 ffffffff824a2c00 [ +0.000003] Call Trace: [ +0.000004] [<ffffffff816fe04d>] dump_stack+0x19/0x1b [ +0.000004] [<ffffffff8110f47f>] __lock_acquire+0x115f/0x1250 [ +0.000002] [<ffffffff8110fd49>] lock_acquire+0x99/0x1e0 [ +0.000002] [<ffffffff810b29a5>] ? flush_work+0x5/0x2e0 [ +0.000002] [<ffffffff810b29ee>] flush_work+0x4e/0x2e0 [ +0.000002] [<ffffffff810b29a5>] ? flush_work+0x5/0x2e0 [ +0.000004] [<ffffffff81025905>] ? native_sched_clock+0x35/0x80 [ +0.000002] [<ffffffff81025959>] ? sched_clock+0x9/0x10 [ +0.000002] [<ffffffff810da1f5>] ? local_clock+0x25/0x30 [ +0.000002] [<ffffffff8110dca9>] ? mark_held_locks+0xb9/0x140 [ +0.000003] [<ffffffff810b4ed5>] ? __cancel_work_timer+0x95/0x160 [ +0.000002] [<ffffffff810b4ee8>] __cancel_work_timer+0xa8/0x160 [ +0.000002] [<ffffffff810b4fb0>] cancel_work_sync+0x10/0x20 [ +0.000007] [<ffffffffa0160d17>] drm_dp_destroy_mst_branch_device+0x27/0x120 [drm_kms_helper] [ +0.000006] [<ffffffffa0163968>] drm_dp_mst_link_probe_work+0x78/0xa0 [drm_kms_helper] [ +0.000002] [<ffffffff810b5850>] process_one_work+0x220/0x710 [ +0.000002] [<ffffffff810b57e4>] ? process_one_work+0x1b4/0x710 [ +0.000005] [<ffffffff810b5e5b>] worker_thread+0x11b/0x3a0 [ +0.000003] [<ffffffff810b5d40>] ? process_one_work+0x710/0x710 [ +0.000002] [<ffffffff810beced>] kthread+0xed/0x100 [ +0.000003] [<ffffffff810bec00>] ? insert_kthread_work+0x80/0x80 [ +0.000003] [<ffffffff817121d8>] ret_from_fork+0x58/0x90 v2: add flush_work. 
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Michel Dänzer authored
commit 4281f46e upstream. Instead of only enabling the backlight (which seems to set it to max brightness), just re-set the current backlight level, which also takes care of enabling the backlight if necessary. Only the radeon_atom_encoder_dpms_dig part was tested on a Kaveri laptop; the radeon_atom_encoder_dpms_avivo part is only compile tested. Signed-off-by: Michel Dänzer <michel.daenzer@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Daniel Vetter authored
commit da168d81 upstream. I've done some extensive history digging across libdrm, mesa and xf86-video-{intel,nouveau,ati}. The only potential user of this with kms drivers I could find was ttmtest, which at one point still used drmGetLock. But that mistake was quickly fixed up. Even the intel xvmc library (which otherwise was really good at using dri1 stuff in kms mode) managed to never take the hw lock for dri2 (and hence kms). Hence it should be safe to unconditionally disallow this. Cc: Peter Antoine <peter.antoine@intel.com> Reviewed-by: Peter Antoine <peter.antoine@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
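The guard amounts to rejecting the legacy lock ioctl for modesetting drivers; a sketch (the unlock path gets the same check):

  int drm_legacy_lock(struct drm_device *dev, void *data,
                      struct drm_file *file_priv)
  {
          if (drm_core_check_feature(dev, DRIVER_MODESET))
                  return -EINVAL; /* KMS drivers never use the DRI1 hw lock */

          /* ... legacy (UMS) locking path unchanged ... */
  }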
-
Jani Nikula authored
commit cd67d226 upstream. The VBT MIPI Sequence Block version 3 has forward incompatible changes: First, the block size in the header has been specified as reserved, and the actual size is a separate 32-bit value within the block. The current find_section() function will only look at the size in the block header and, depending on what's in that now-reserved size field, continue looking for other sections in the wrong place. Fix this by taking the new block size field into account. This will ensure that the lookups for other sections will work properly, as long as the new 32-bit size does not go beyond the opregion VBT mailbox size. Second, the contents of the block have been completely changed. Gracefully refuse to parse the as-yet-unknown data version. Cc: Deepak M <m.deepak@intel.com> Reviewed-by: Deepak M <m.deepak@intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Fabiano Fidêncio authored
commit 8d0d9401 upstream. When disabling/enabling a crtc the primary area must be updated independently of which crtc has been disabled/enabled. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1264735 Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com> Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Dave Airlie authored
commit 69e5d3f8 upstream. If the server isn't new enough to give us state, report the first monitor as always connected, otherwise believe the server side. Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Steve French authored
commit 646200a0 upstream. The error paths in set_file_size for cifs and smb3 are incorrect. In the unlikely event that a server did not support set file info of the file size, the code incorrectly falls back to trying SMBWriteX (note that only the original core SMB Write, used for example by DOS, can set the file size this way - this actually does not work for the more recent SMBWriteX). The idea was that, since the old DOS SMB Write could set the file size if you write zero bytes at that offset, we could use that if the server rejects the normal set file info call. Fortunately the SMBWriteX will never be sent on the wire (except when the file size is zero) since the length and offset fields were reversed in the two places in this function that call SMBWriteX, causing the fallback path to return an error. It is also important to never send an SMB request from an SMB2/SMB3 session (which theoretically would be possible, and can cause a brief session drop, although the client recovers), so this should be fixed. In practice this path does not happen with modern servers, but the error fallback to SMBWriteX is clearly wrong. Remove the calls to SMBWriteX in the error paths in cifs_set_file_size. Pointed out by the PaX/grsecurity team. Signed-off-by: Steve French <steve.french@primarydata.com> Reported-by: PaX Team <pageexec@freemail.hu> CC: Emese Revfy <re.emese@gmail.com> CC: Brad Spengler <spender@grsecurity.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Steve French authored
commit e0ddde9d upstream. Leases (oplocks) were always requested for SMB2/SMB3 even when oplocks were disabled in the cifs.ko module. Signed-off-by: Steve French <steve.french@primarydata.com> Reviewed-by: Chandrika Srinivasan <chandrika.srinivasan@citrix.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Steve French authored
commit ceb1b0b9 upstream. Kerberos, which is very important for security, was only enabled for CIFS mounts, not SMB2/SMB3 mounts (e.g. vers=3.0). Patch based on the information detailed in http://thread.gmane.org/gmane.linux.kernel.cifs/10081/focus=10307 to enable Kerberized SMB2/SMB3: a) SMB2_negotiate: enable/use decode_negTokenInit in SMB2_negotiate; b) SMB2_sess_setup: handle the Kerberos sectype and replicate the Kerberos SMB1 processing done in sess_auth_kerberos. Signed-off-by: Noel Power <noel.power@suse.com> Signed-off-by: Jim McDonough <jmcd@samba.org> Signed-off-by: Steve French <steve.french@primarydata.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Trond Myklebust authored
commit 8fa4592a upstream. If all other conditions in nfs_can_extend_write() are met, and there are no locks, then we should be able to assume close-to-open semantics and the ability to extend our write to cover the whole page. With this patch, the xfstests generic/074 test completes in 242s instead of >1400s on my test rig. Fixes: bd61e0a9 ("locks: convert posix locks to file_lock_context") Cc: Jeff Layton <jlayton@primarydata.com> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
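A sketch of the check enabled by the file_lock_context conversion (bd61e0a9): if the inode has no flock or POSIX locks, extending the write to the whole page is safe under close-to-open semantics.

  struct file_lock_context *flctx = inode->i_flctx;

  if (!flctx || (list_empty_careful(&flctx->flc_flock) &&
                 list_empty_careful(&flctx->flc_posix)))
          return 1;       /* no locks: close-to-open semantics apply */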
-
Peng Tao authored
commit 048883e0 upstream. We really want sizeof(struct page *) instead. Otherwise we limit the maximum IO size to 64 pages rather than 512 pages on a 64-bit system. Fixes 2e11f829 ("nfs: cap request size to fit a kmalloced page array"). Cc: Christoph Hellwig <hch@lst.de> Signed-off-by: Peng Tao <tao.peng@primarydata.com> Fixes: 2e11f829 ("nfs: cap request size to fit a kmalloced page array") Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
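The arithmetic behind the fix: the kmalloced page holds pointers to pages, not struct page objects (struct page is typically 64 bytes on 64-bit).

  /* was: PAGE_SIZE / sizeof(struct page)   ~ 4096 / 64 = 64 entries  */
  /* now: PAGE_SIZE / sizeof(struct page *) = 4096 / 8  = 512 entries */
  size_t max_pages = PAGE_SIZE / sizeof(struct page *);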
-
Kinglong Mee authored
commit 6f29b9bb upstream. There is a reference leak of layout segment after resetting pageio read/write to mds. Signed-off-by: Kinglong Mee <kinglongmee@gmail.com> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
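A hedged sketch of the pattern such a fix follows (read side shown): drop the layout segment reference held by the pageio descriptor before falling back to the MDS.

  if (pgio->pg_lseg) {
          pnfs_put_lseg(pgio->pg_lseg);   /* drop the stale layout segment */
          pgio->pg_lseg = NULL;
  }
  nfs_pageio_reset_read_mds(pgio);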
-
Johan Hedberg authored
commit d8949aad upstream. There are several actions that smp_conn_security() might make that do not require a valid SMP context (conn->smp pointer). One of these actions is to encrypt the link with an existing LTK. If the SMP context wasn't initialized properly we should still allow the independent actions to be done, i.e. the check for the context should only be done at the last possible moment. Reported-by: Chuck Ebbert <cebbert.lkml@gmail.com> Signed-off-by: Johan Hedberg <johan.hedberg@intel.com> Signed-off-by: Marcel Holtmann <marcel@holtmann.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Florian Westphal authored
commit 205ee117 upstream. Like nf_log_unset, nf_log_unregister must not reset the list of loggers. Otherwise, a call to nf_log_unregister() will render loggers of other nf protocols unusable: after "iptables -A INPUT -j LOG", a "modprobe nf_log_arp; rmmod nf_log_arp" cycle makes a subsequent "iptables -A INPUT -j LOG" fail with "iptables: No chain/target/match by that name". Fixes: 30e0c6a6 ("netfilter: nf_log: prepare net namespace support for loggers") Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Pablo Neira Ayuso authored
commit ba378ca9 upstream. Fix lookup of existing match/target structures in the corresponding list by skipping the family check if NFPROTO_UNSPEC is used. This is resulting in the allocation and insertion of one match/target structure for each use of them. So this not only bloats memory consumption but also severely affects the time to reload the ruleset from the iptables-compat utility. After this patch, iptables-compat-restore and iptables-compat take almost the same time to reload large rulesets. Fixes: 0ca743a5 ("netfilter: nf_tables: add compatibility layer for x_tables") Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
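A sketch of the relaxed comparison used when looking up an existing match (the target case is analogous); the helper name nft_match_cmp is illustrative:

  static bool nft_match_cmp(const struct xt_match *match,
                            const char *name, u32 rev, u32 family)
  {
          return strcmp(match->name, name) == 0 && match->revision == rev &&
                 (match->family == NFPROTO_UNSPEC || match->family == family);
  }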
-
Pablo Neira Ayuso authored
commit ad5001cc upstream. The nf_log_unregister() function needs to call synchronize_rcu() to make sure that the objects are not dereferenced anymore on module removal. Fixes: 5962815a ("netfilter: nf_log: use an array of loggers instead of list") Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
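A simplified sketch of nf_log_unregister() with the added grace period, assuming the logger-array layout introduced by the commit named in the Fixes tag:

  void nf_log_unregister(struct nf_logger *logger)
  {
          int i;

          mutex_lock(&nf_log_mutex);
          for (i = 0; i < NFPROTO_NUMPROTO; i++) {
                  if (rcu_access_pointer(loggers[i][logger->type]) == logger)
                          RCU_INIT_POINTER(loggers[i][logger->type], NULL);
          }
          mutex_unlock(&nf_log_mutex);

          synchronize_rcu();      /* added: no CPU may still use the logger */
  }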
-
Eric W. Biederman authored
commit fdab6a4c upstream. Currently, nf_tables chains added in one network namespace are being run in all network namespaces. The issues are myriad, the simplest being that an unprivileged user can cause any network packets to be dropped. Address this by simply not running nf_tables chains in the wrong network namespace. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Acked-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric W. Biederman authored
commit 8405a8ff upstream. Add code to nf_unregister_hook to flush the nf_queue when a hook is unregistered. This guarantees that the pointer that the nf_queue code retains into the nf_hook list will remain valid while a packet is queued. I tested what would happen if we do not flush queued packets and was trivially able to obtain the oops below. All that was required was to stop the nf_queue listening process, to delete all of the nf_tables, and to awaken the nf_queue listening process. > BUG: unable to handle kernel paging request at 0000000100000001 > IP: [<0000000100000001>] 0x100000001 > PGD b9c35067 PUD 0 > Oops: 0010 [#1] SMP > Modules linked in: > CPU: 0 PID: 519 Comm: lt-nfqnl_test Not tainted > task: ffff8800b9c8c050 ti: ffff8800ba9d8000 task.ti: ffff8800ba9d8000 > RIP: 0010:[<0000000100000001>] [<0000000100000001>] 0x100000001 > RSP: 0018:ffff8800ba9dba40 EFLAGS: 00010a16 > RAX: ffff8800bab48a00 RBX: ffff8800ba9dba90 RCX: ffff8800ba9dba90 > RDX: ffff8800b9c10128 RSI: ffff8800ba940900 RDI: ffff8800bab48a00 > RBP: ffff8800b9c10128 R08: ffffffff82976660 R09: ffff8800ba9dbb28 > R10: dead000000100100 R11: dead000000200200 R12: ffff8800ba940900 > R13: ffffffff8313fd50 R14: ffff8800b9c95200 R15: 0000000000000000 > FS: 00007fb91fc34700(0000) GS:ffff8800bfa00000(0000) knlGS:0000000000000000 > CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 > CR2: 0000000100000001 CR3: 00000000babfb000 CR4: 00000000000007f0 > Stack: > ffffffff8206ab0f ffffffff82982240 ffff8800bab48a00 ffff8800b9c100a8 > ffff8800b9c10100 0000000000000001 ffff8800ba940900 ffff8800b9c10128 > ffffffff8206bd65 ffff8800bfb0d5e0 ffff8800bab48a00 0000000000014dc0 > Call Trace: > [<ffffffff8206ab0f>] ? nf_iterate+0x4f/0xa0 > [<ffffffff8206bd65>] ? nf_reinject+0x125/0x190 > [<ffffffff8206dee5>] ? nfqnl_recv_verdict+0x255/0x360 > [<ffffffff81386290>] ? nla_parse+0x80/0xf0 > [<ffffffff8206c42c>] ? nfnetlink_rcv_msg+0x13c/0x240 > [<ffffffff811b2fec>] ? __memcg_kmem_get_cache+0x4c/0x150 > [<ffffffff8206c2f0>] ? nfnl_lock+0x20/0x20 > [<ffffffff82068159>] ? netlink_rcv_skb+0xa9/0xc0 > [<ffffffff820677bf>] ? netlink_unicast+0x12f/0x1c0 > [<ffffffff82067ade>] ? netlink_sendmsg+0x28e/0x650 > [<ffffffff81fdd814>] ? sock_sendmsg+0x44/0x50 > [<ffffffff81fde07b>] ? ___sys_sendmsg+0x2ab/0x2c0 > [<ffffffff810e8f73>] ? __wake_up+0x43/0x70 > [<ffffffff8141a134>] ? tty_write+0x1c4/0x2a0 > [<ffffffff81fde9f4>] ? __sys_sendmsg+0x44/0x80 > [<ffffffff823ff8d7>] ? system_call_fastpath+0x12/0x6a > Code: Bad RIP value. > RIP [<0000000100000001>] 0x100000001 > RSP <ffff8800ba9dba40> > CR2: 0000000100000001 > ---[ end trace 08eb65d42362793f ]--- Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Pablo Neira Ayuso authored
commit 95dd8653 upstream. We have to put back the references to the master conntrack and the expectation that we just created, otherwise we'll leak them. Fixes: 0ef71ee1 ("netfilter: ctnetlink: refactor ctnetlink_create_expect") Reported-by: Tim Wiess <Tim.Wiess@watchguard.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
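The cleanup amounts to releasing both references once the expectation has been handed to the core; a hedged sketch of the tail of the create path:

  err = nf_ct_expect_related_report(exp, portid, report);
  nf_ct_expect_put(exp);  /* drop our reference on the new expectation */
  nf_ct_put(ct);          /* drop the reference on the master conntrack */
  return err;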
-
Joe Stringer authored
commit 4b31814d upstream. When zones were originally introduced, the expectation functions were all extended to perform lookup using the zone. However, insertion was not modified to check the zone. This means that two expectations which are intended to apply for different connections that have the same tuple but exist in different zones cannot both be tracked. Fixes: 5d0aa2cc (netfilter: nf_conntrack: add support for "conntrack zones") Signed-off-by: Joe Stringer <joestringer@nicira.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
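A sketch of the insertion-time comparison extended with the zone, mirroring what the lookup side already did:

  static inline int expect_matches(const struct nf_conntrack_expect *a,
                                   const struct nf_conntrack_expect *b)
  {
          return a->master == b->master && a->class == b->class &&
                 nf_ct_tuple_equal(&a->tuple, &b->tuple) &&
                 nf_ct_tuple_mask_equal(&a->mask, &b->mask) &&
                 nf_ct_zone(a->master) == nf_ct_zone(b->master);
  }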
-
Pablo Neira Ayuso authored
commit bf798657 upstream. nft_type_to_reg() needs to return the register in the new 32 bit addressing, otherwise we hit EINVAL when using mappings. Fixes: 49499c3e ("netfilter: nf_tables: switch registers to 32 bit addressing") Reported-by: Andreas Schultz <aschultz@tpip.net> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Pablo Neira Ayuso authored
commit a9de9777 upstream. The convention in nfnetlink is to use network byte order in every header field as well as in the attribute payload. The initial version of the batching infrastructure assumes that res_id comes in host byte order though. The only client of the batching infrastructure is nf_tables, so let's add a workaround to address this inconsistency. We currently have 11 nfnetlink subsystems according to NFNL_SUBSYS_COUNT, so we can assume that subsystem 2560, i.e. htons(10), will not be allocated anytime soon, so it can be an alias of nf_tables from the nfnetlink batching path when interpreting the res_id field. Based on an original patch from Florian Westphal. Reported-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
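The workaround boils down to accepting both encodings of the nf_tables subsystem id in the batch header; a hedged sketch:

  u16 res_id;

  if (nfgenmsg->res_id == NFNL_SUBSYS_NFTABLES)
          res_id = NFNL_SUBSYS_NFTABLES;          /* host order from old userspace */
  else
          res_id = ntohs(nfgenmsg->res_id);       /* proper network byte order */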
-
Mikulas Patocka authored
commit 042745ee upstream. Commit 3a0f9aae ("dm raid: round region_size to power of two") intended to make sure that the default region size is a power of two. However, the logic in that commit is incorrect and sets the variable region_size to 0 or 1, depending on whether min_region_size is a power of two. Fix this logic, using roundup_pow_of_two(), so that region_size is properly rounded up to the next power of two. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Fixes: 3a0f9aae ("dm raid: round region_size to power of two") Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
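The corrected computation is essentially a one-liner; a sketch of the default-region-size path, using the variable names from the commit text:

  if (!region_size)
          region_size = roundup_pow_of_two(min_region_size);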
-
NeilBrown authored
commit 66eefe5d upstream. Calling e.g. blk_queue_max_hw_sectors() after calls to disk_stack_limits() discards the settings determined by disk_stack_limits(). So we need to make those calls first. Fixes: 199dc6ed ("md/raid0: update queue parameter in a safer location.") Reported-by: Jes Sorensen <Jes.Sorensen@redhat.com> Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
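A sketch of the corrected ordering in raid0: set the driver's own limits first, then stack in the component devices' limits so they are not discarded.

  blk_queue_max_hw_sectors(mddev->queue, mddev->chunk_sectors);
  blk_queue_max_write_same_sectors(mddev->queue, mddev->chunk_sectors);
  blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9);

  rdev_for_each(rdev, mddev)
          disk_stack_limits(mddev->gendisk, rdev->bdev,
                            rdev->data_offset << 9);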
-
NeilBrown authored
commit 199dc6ed upstream. When a RAID5 (for example) array is reshaped to RAID0, the updating of queue parameters (e.g. max number of sectors per bio) is done in the wrong place. It should be part of ->run, but it is actually part of ->takeover. This means it happens before level_store() calls blk_set_stacking_limits(&mddev->queue->limits); and so it is ineffective. This can lead to errors from underlying devices. So move all the relevant settings out of create_stripe_zones() and into raid0_run(). As this can lead to a bug-on, it is suitable for any -stable kernel which supports reshape to RAID0, i.e. 2.6.35 or later. As the bug has been present for five years there is no urgency, so no need to rush into -stable. Fixes: 9af204cf ("md: Add support for Raid5->Raid0 and Raid10->Raid0 takeover") Reported-by: Yi Zhang <yizhan@redhat.com> Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Liu.Zhao authored
commit 19ab6bc5 upstream. This adds ZTE device PIDs to the kernel. Signed-off-by: Liu.Zhao <lzsos369@163.com> [johan: sort the new entries] Signed-off-by: Johan Hovold <johan@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Shawn Lin authored
commit 6fa92e2b upstream. We found this issue and it still exists in the latest kernel. Simply keep ion_handle_create under the mutex lock to avoid this race. WARNING: CPU: 2 PID: 2648 at drivers/staging/android/ion/ion.c:512 ion_handle_add+0xb4/0xc0() ion_handle_add: buffer already found. Modules linked in: iwlmvm iwlwifi mac80211 cfg80211 compat CPU: 2 PID: 2648 Comm: TimedEventQueue Tainted: G W 3.14.0 #7 00000000 00000000 9a3efd2c 80faf273 9a3efd6c 9a3efd5c 80935dc9 811d7fd3 9a3efd88 00000a58 812208a0 00000200 80e128d4 80e128d4 8d4ae00c a8cd8600 a8cd8094 9a3efd74 80935e0e 00000009 9a3efd6c 811d7fd3 9a3efd88 9a3efd9c Call Trace: [<80faf273>] dump_stack+0x48/0x69 [<80935dc9>] warn_slowpath_common+0x79/0x90 [<80e128d4>] ? ion_handle_add+0xb4/0xc0 [<80e128d4>] ? ion_handle_add+0xb4/0xc0 [<80935e0e>] warn_slowpath_fmt+0x2e/0x30 [<80e128d4>] ion_handle_add+0xb4/0xc0 [<80e144cc>] ion_import_dma_buf+0x8c/0x110 [<80c517c4>] reg_init+0x364/0x7d0 [<80993363>] ? futex_wait+0x123/0x210 [<80992e0e>] ? get_futex_key+0x16e/0x1e0 [<8099308f>] ? futex_wake+0x5f/0x120 [<80c51e19>] vpu_service_ioctl+0x1e9/0x500 [<80994aec>] ? do_futex+0xec/0x8e0 [<80971080>] ? prepare_to_wait_event+0xc0/0xc0 [<80c51c30>] ? reg_init+0x7d0/0x7d0 [<80a22562>] do_vfs_ioctl+0x2d2/0x4c0 [<80b198ad>] ? inode_has_perm.isra.41+0x2d/0x40 [<80b199cf>] ? file_has_perm+0x7f/0x90 [<80b1a5f7>] ? selinux_file_ioctl+0x47/0xf0 [<80a227a8>] SyS_ioctl+0x58/0x80 [<80fb45e8>] syscall_call+0x7/0x7 [<80fb0000>] ? mmc_do_calc_max_discard+0xab/0xe4 Fixes: 83271f62 ("ion: hold reference to handle...") Signed-off-by: Shawn Lin <shawn.lin@rock-chips.com> Reviewed-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
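A sketch of the fixed import path with ion_handle_create() moved under client->lock (function names as in drivers/staging/android/ion/ion.c; surrounding code elided):

  mutex_lock(&client->lock);
  handle = ion_handle_lookup(client, buffer);
  if (!IS_ERR(handle)) {
          ion_handle_get(handle);         /* buffer already imported */
          mutex_unlock(&client->lock);
          goto end;
  }

  handle = ion_handle_create(client, buffer);     /* was done without the lock */
  if (IS_ERR(handle)) {
          mutex_unlock(&client->lock);
          goto end;
  }

  ret = ion_handle_add(client, handle);
  mutex_unlock(&client->lock);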
-
Joe Thornber authored
commit b0dc3c8b upstream. When using nested btrees, the top leaves of the top levels contain block addresses for the root of the next tree down. If we shadow a shared leaf node the leaf values (sub tree roots) should be incremented accordingly. This is only an issue if there is metadata sharing in the top levels. Which only occurs if metadata snapshots are being used (as is possible with dm-thinp). And could result in a block from the thinp metadata snap being reused early, thus corrupting the thinp metadata snap. Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-