- 13 Feb, 2014 2 commits
-
-
Steven Rostedt (Red Hat) authored
commit dced341b upstream. The trace buffer has a descriptor pointer that goes back to the trace array. But it was never assigned. Luckily, nothing uses it (yet), but it will in the future. Although nothing currently uses this, if any of the new features get backported to older kernels, and because this is such a simple change, I'm marking it for stable too. Fixes: 12883efb "tracing: Consolidate max_tr into main trace_array structure" Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
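The change itself is tiny; a minimal sketch of the idea (function and field names abbreviated from kernel/trace/trace.c, so treat the exact shape as illustrative):

  static int allocate_trace_buffer(struct trace_array *tr,
                                   struct trace_buffer *buf, int size)
  {
          buf->tr = tr;   /* the back-pointer that was never assigned */
          /* ... allocate the ring buffer and per-cpu data as before ... */
          return 0;
  }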
-
Tetsuo Handa authored
commit 8ed81460 upstream.

Hello. I got the leak below with linux-3.10.0-54.0.1.el7.x86_64.

  [  681.903890] kmemleak: 5538 new suspected memory leaks (see /sys/kernel/debug/kmemleak)

Below is a patch, but I don't know whether we need special handling for undoing the ebitmap_set_bit() call.

----------
>>From fe97527a90fe95e2239dfbaa7558f0ed559c0992 Mon Sep 17 00:00:00 2001
From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Date: Mon, 6 Jan 2014 16:30:21 +0900
Subject: SELinux: Fix memory leak upon loading policy

Commit 2463c26d "SELinux: put name based create rules in a hashtable" did not check the return value from hashtab_insert() in filename_trans_read(). It leaks memory if hashtab_insert() returns an error.

  unreferenced object 0xffff88005c9160d0 (size 8):
    comm "systemd", pid 1, jiffies 4294688674 (age 235.265s)
    hex dump (first 8 bytes):
      57 0b 00 00 6b 6b 6b a5  W...kkk.
    backtrace:
      [<ffffffff816604ae>] kmemleak_alloc+0x4e/0xb0
      [<ffffffff811cba5e>] kmem_cache_alloc_trace+0x12e/0x360
      [<ffffffff812aec5d>] policydb_read+0xd1d/0xf70
      [<ffffffff812b345c>] security_load_policy+0x6c/0x500
      [<ffffffff812a623c>] sel_write_load+0xac/0x750
      [<ffffffff811eb680>] vfs_write+0xc0/0x1f0
      [<ffffffff811ec08c>] SyS_write+0x4c/0xa0
      [<ffffffff81690419>] system_call_fastpath+0x16/0x1b
      [<ffffffffffffffff>] 0xffffffffffffffff

However, we should not return an EEXIST error to the caller, or systemd will show the message below and the boot sequence freezes.

  systemd[1]: Failed to load SELinux policy. Freezing.

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Acked-by: Eric Paris <eparis@redhat.com>
Signed-off-by: Paul Moore <pmoore@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
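For reference, the shape of the fix in filename_trans_read() is roughly the following (a sketch only; the allocation names are illustrative, not the exact policydb locals):

  rc = hashtab_insert(p->filename_trans, ft, otype);
  if (rc) {
          /*
           * Do not propagate -EEXIST: a duplicate filename transition
           * rule in the policy must not abort policy loading, otherwise
           * systemd refuses to boot. Free what was allocated for this
           * entry either way; any other error remains fatal.
           */
          kfree(ft);
          kfree(name);
          kfree(otype);
          if (rc != -EEXIST)
                  goto out;
          rc = 0;
  }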
-
- 06 Feb, 2014 38 commits
-
-
Greg Kroah-Hartman authored
-
Borislav Petkov authored
commit 3b564968 upstream.

This adds the workaround for erratum 793 as a precaution in case not every BIOS implements it. This addresses CVE-2013-6885.

Erratum text:

[Revision Guide for AMD Family 16h Models 00h-0Fh Processors, document 51810 Rev. 3.04 November 2013]

793 Specific Combination of Writes to Write Combined Memory Types and Locked Instructions May Cause Core Hang

Description
Under a highly specific and detailed set of internal timing conditions, a locked instruction may trigger a timing sequence whereby the write to a write combined memory type is not flushed, causing the locked instruction to stall indefinitely.

Potential Effect on System
Processor core hang.

Suggested Workaround
BIOS should set MSR C001_1020[15] = 1b.

Fix Planned
No fix planned

[ hpa: updated description, fixed typo in MSR name ]

Signed-off-by: Borislav Petkov <bp@suse.de>
Link: http://lkml.kernel.org/r/20140114230711.GS29865@pd.tnic
Tested-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
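The kernel-side workaround is roughly the following, applied during AMD CPU init (a sketch from memory of arch/x86/kernel/cpu/amd.c; MSR_AMD64_LS_CFG is the kernel name for MSR C001_1020):

  /* Erratum 793: apply the BIOS workaround ourselves if it is missing. */
  if (c->x86 == 0x16 && c->x86_model <= 0xf) {
          u64 val;

          rdmsrl(MSR_AMD64_LS_CFG, val);
          if (!(val & BIT(15)))
                  wrmsrl(MSR_AMD64_LS_CFG, val | BIT(15));
  }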
-
Paul Mackerras authored
commit 91b973f9 upstream. The code in remove_cache_dir() is supposed to remove the "cache" subdirectory from the sysfs directory for a CPU when that CPU is being offlined. It tries to do this by calling kobject_put() on the kobject for the subdirectory. However, the subdirectory only gets removed once the last reference goes away, and the reference being put here may well not be the last reference. That means that the "cache" subdirectory may still exist when the offlining operation has finished. If the same CPU subsequently gets onlined, the code tries to add a new "cache" subdirectory. If the old subdirectory has not yet been removed, we get a WARN_ON in the sysfs code, with stack trace, and an error message printed on the console. Further, we ultimately end up with an online cpu with no "cache" subdirectory. This fixes it by doing an explicit kobject_del() at the point where we want the subdirectory to go away. kobject_del() removes the sysfs directory even though the object still exists in memory. The object will get freed at some point in the future. A subsequent onlining operation can create a new sysfs directory, even if the old object still exists in memory, without causing any problems. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
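A minimal sketch of the difference between the two calls (names are illustrative, not the exact powerpc cacheinfo code):

  #include <linux/kobject.h>

  /* On CPU offline: tear down the "cache" directory immediately.
   * kobject_del() removes the sysfs entry even while other references
   * to the kobject are still held; kobject_put() only drops our
   * reference, so the directory could otherwise outlive the offline. */
  static void remove_cache_dir(struct kobject *cache_kobj)
  {
          kobject_del(cache_kobj);   /* sysfs entry is gone right now */
          kobject_put(cache_kobj);   /* memory freed when last ref drops */
  }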
-
Srivatsa S. Bhat authored
commit d4edc5b6 upstream.

On POWER platforms, the hypervisor can notify the guest kernel about dynamic changes in the cpu-numa associativity (VPHN topology update). Hence the cpu-to-node mappings that we got from the firmware during boot, may no longer be valid after such updates. This is handled using the arch_update_cpu_topology() hook in the scheduler, and the sched-domains are rebuilt according to the new mappings.

But unfortunately, at the moment, CPU hotplug ignores these updated mappings and instead queries the firmware for the cpu-to-numa relationships and uses them during CPU online. So the kernel can end up assigning wrong NUMA nodes to CPUs during subsequent CPU hotplug online operations (after booting).

Further, a particularly problematic scenario can result from this bug: On POWER platforms, the SMT mode can be switched between 1, 2, 4 (and even 8) threads per core. The switch to Single-Threaded (ST) mode is performed by offlining all except the first CPU thread in each core. Switching back to SMT mode involves onlining those other threads back, in each core.

Now consider this scenario:

 1. During boot, the kernel gets the cpu-to-node mappings from the firmware and assigns the CPUs to NUMA nodes appropriately, during CPU online.

 2. Later on, the hypervisor updates the cpu-to-node mappings dynamically and communicates this update to the kernel. The kernel in turn updates its cpu-to-node associations and rebuilds its sched domains. Everything is fine so far.

 3. Now, the user switches the machine from SMT to ST mode (say, by running ppc64_cpu --smt=1). This involves offlining all except 1 thread in each core.

 4. The user then tries to switch back from ST to SMT mode (say, by running ppc64_cpu --smt=4), and this involves onlining those threads back.

Since CPU hotplug ignores the new mappings, it queries the firmware and tries to associate the newly onlined sibling threads to the old NUMA nodes. This results in sibling threads within the same core getting associated with different NUMA nodes, which is incorrect.

The scheduler's build-sched-domains code gets thoroughly confused with this and enters an infinite loop and causes soft-lockups, as explained in detail in commit 3be7db6a (powerpc: VPHN topology change updates all siblings).

So to fix this, use the numa_cpu_lookup_table to remember the updated cpu-to-node mappings, and use them during CPU hotplug online operations. Further, we also need to ensure that all threads in a core are assigned to a common NUMA node, irrespective of whether all those threads were online during the topology update. To achieve this, we take care not to use cpu_sibling_mask() since it is not hotplug invariant. Instead, we use cpu_first_sibling_thread() and set up the mappings manually using the 'threads_per_core' value for that particular platform. This helps us ensure that we don't hit this bug with any combination of CPU hotplug and SMT mode switching.

Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
David Sterba authored
commit d0242061 upstream. Currently, any user can snapshot any subvolume if the path is accessible and thus indirectly create and keep files he does not own under his directories. This is not possible with traditional directories. In a security context, a user can snapshot the root filesystem and pin any potentially buggy binaries, even if the updates are applied. All the snapshots are visible to the administrator, so it's possible to verify if there are suspicious snapshots. Another more practical problem is that any user can pin the space used by e.g. root and cause ENOSPC. Original report: https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/484786
Signed-off-by: David Sterba <dsterba@suse.cz> Signed-off-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Chris Mason <clm@fb.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Wang Shilong authored
commit 90515e7f upstream. We may return early in btrfs_drop_snapshot(), we shouldn't call btrfs_std_err() for this case, fix it. Signed-off-by: Wang Shilong <wangsl.fnst@cn.fujitsu.com> Signed-off-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Chris Mason <clm@fb.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Andy Grover authored
commit ee291e63 upstream. When creating network portals rapidly, such as when restoring a configuration, LIO's code to reuse existing portals can return a false negative if the thread hasn't run yet and set np_thread_state to ISCSI_NP_THREAD_ACTIVE. This causes an error in the network stack when attempting to bind to the same address/port. This patch sets NP_THREAD_ACTIVE before the np is placed on g_np_list, so even if the thread hasn't run yet, iscsit_get_np will return the existing np. Also, convert np_lock -> np_mutex + hold across adding new net portal to g_np_list to prevent a race where two threads may attempt to create the same network portal, resulting in one of them failing. (nab: Add missing mutex_unlocks in iscsit_add_np failure paths) (DanC: Fix incorrect spin_unlock -> spin_unlock_bh) Signed-off-by: Andy Grover <agrover@redhat.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Asias He authored
commit f466f753 upstream. vqs are freed in virtscsi_freeze but the hotcpu_notifier is not unregistered. We will have a use-after-free usage when the notifier callback is called after virtscsi_freeze. Fixes: 285e71ea ("virtio-scsi: reset virtqueue affinity when doing cpu hotplug") Signed-off-by: Asias He <asias.hejun@gmail.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
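The shape of the fix, as a sketch (assuming the driver keeps its notifier_block in the per-host virtio_scsi structure and stashes the Scsi_Host pointer in vdev->priv at probe time, as virtio_scsi does; virtscsi_remove_vqs() is the driver-internal teardown helper):

  static int virtscsi_freeze(struct virtio_device *vdev)
  {
          struct Scsi_Host *sh = vdev->priv;            /* set at probe time */
          struct virtio_scsi *vscsi = shost_priv(sh);

          /* Stop the CPU hotplug callback before the vqs it touches go away. */
          unregister_hotcpu_notifier(&vscsi->nb);
          virtscsi_remove_vqs(vdev);
          return 0;
  }

Correspondingly, virtscsi_restore() has to re-register the notifier once the virtqueues have been re-created.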
-
Vijaya Mohan Guvva authored
commit dcaf9aed upstream. A bfa driver crash is observed while pushing the firmware onto the chinook quad port card, caused by an uninitialized bfi_image_ct2 access; bfi_image_ct2 gets initialized only for CT2 ASIC based cards after request_firmware(). For the quad port chinook (CT2 ASIC based), bfi_image_ct2 is not getting initialized because there is no check for the chinook PCI device ID before request_firmware(); instead bfi_image_cb is initialized, since that is the default case in the card type check. This patch includes changes to read the right firmware for the quad port chinook. Signed-off-by: Vijaya Mohan Guvva <vmohan@brocade.com> Signed-off-by: James Bottomley <JBottomley@Parallels.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Thomas Pugliese authored
commit 83e83ecb upstream. There is no need to skip querying the config and string descriptors for unauthorized WUSB devices when usb_new_device is called. It is allowed by WUSB spec. The only action that needs to be delayed until authorization time is the set config. This change allows user mode tools to see the config and string descriptors earlier in enumeration which is needed for some WUSB devices to function properly on Android systems. It also reduces the amount of divergent code paths needed for WUSB devices. Signed-off-by: Thomas Pugliese <thomas.pugliese@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Emmanuel Grumbach authored
commit 6960a059 upstream. We changed the timeout for the interrupt coalescing for calibration, but that wasn't effective since we changed that value back before loading the firmware. Since calibrations are notifications from the firmware and not Rx packets, this doesn't change anything anyway - the firmware will fire an interrupt straight away regardless of the interrupt coalescing value. Also, a HW issue has been discovered in the 7000 device series. The workaround is to disable the new interrupt coalescing timeout feature - do this by setting bit 31 in CSR_INT_COALESCING. This has been fixed in 7265, which means that we can't rely on the device family and must have a hint in the iwl_cfg structure. Fixes: 99cd4714 ("iwlwifi: add 7000 series device configuration") Reviewed-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Stephen Warren authored
(This is upstream 75fae117 "ALSA: hda/hdmi - allow PIN_OUT to be dynamically enabled", backported to stable 3.10 through 3.12. 3.13 and later can take the original patch.) Commit 384a48d7 "ALSA: hda: HDMI: Support codecs with fewer cvts than pins" dynamically enabled each pin widget's PIN_OUT only when the pin was actively in use. This was required on certain NVIDIA CODECs for correct operation. Specifically, if multiple pin widgets each had their mux input select the same audio converter widget and each pin widget had PIN_OUT enabled, then only one of the pin widgets would actually receive the audio, and often not the one the user wanted! However, this apparently broke some Intel systems, and commit 6169b673 "ALSA: hda - Always turn on pins for HDMI/DP" reverted the dynamic setting of PIN_OUT. This in turn broke the afore-mentioned NVIDIA CODECs. This change supports either dynamic or static handling of PIN_OUT, selected by a flag set up during CODEC initialization. This flag is enabled for all recent NVIDIA GPUs. Reported-by: Uosis <uosisl@gmail.com> Signed-off-by: Stephen Warren <swarren@nvidia.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Anssi Hannula authored
(This is a backport of *part* of upstream 611885bc "ALSA: hda - hdmi: Disallow unsupported 2ch remapping on NVIDIA codecs" to stable 3.10 through 3.12. Later stable already contain all of the original patch.) Mainline commit 611885bc "ALSA: hda - hdmi: Disallow unsupported 2ch remapping on NVIDIA codecs" introduces function patch_nvhdmi(). That function is edited by 75fae117 "ALSA: hda/hdmi - allow PIN_OUT to be dynamically enabled". In order to backport the PIN_OUT patch, I am first back-porting just the addition of function patch_nvhdmi(), so that the conflicts applying the PIN_OUT patch are simplified. Ideally, one might backport all of 611885bc. However, that commit doesn't apply to stable kernels, since it relies on a chain of other patches which implement new features. Signed-off-by: Anssi Hannula <anssi.hannula@iki.fi> Signed-off-by: Takashi Iwai <tiwai@suse.de> [swarren, extracted just a small part of the original patch] Signed-off-by: Stephen Warren <swarren@nvidia.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mihai Caraman authored
commit 70713fe3 upstream. Use gva_t instead of unsigned int for eaddr in deliver_tlb_miss(). Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Andreas Schwab authored
commit 48eaef05 upstream. Signed-off-by: Andreas Schwab <schwab@linux-m68k.org> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Helge Deller authored
commit 57737c49 upstream. Commit f8dae006 ("parisc: Ensure full cache coherency for kmap/kunmap") caused negative caching side-effects, e.g. hanging processes with expect and too many inequivalent alias messages from flush_dcache_page() on Debian 5 systems. This patch now partly reverts it and has been in production use on our Debian buildd make servers for a week without any major problems. Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: John David Anglin <dave.anglin@bell.net> Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Emmanuel Grumbach authored
commit 2d93aee1 upstream. Enabling the oscillator consumes slightly more power (100uA) but allows us to make sure that we exit from L1 on time. Not doing so might lead to a PCIe specification violation since we might wake up from L1 at the wrong time. This issue has been identified on 3160 and 7260 only. On older NICs, L1 off is not enabled; on newer NICs (7265), the issue is fixed. When the bug occurs the user sees that the NIC has disappeared from the PCI bridge, and any access to the device returns 0xff. This fixes: https://bugzilla.kernel.org/show_bug.cgi?id=64541 and has been extensively discussed here: http://markmail.org/thread/mfmpzqt3r333n4bo Fixes: 99cd4714 ("iwlwifi: add 7000 series device configuration") Reported-and-tested-by: wzyboy <wzyboy@wzyboy.org> Reviewed-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Nicolas Dichtel authored
[ No relevant upstream commit. ]

This problem was fixed upstream by commit 1e9f3d6f ("ip6tnl: fix use after free of fb_tnl_dev"). The upstream patch depends on upstream commit 0bd87628 ("ip6tnl: add x-netns support"), which was not backported into the 3.10 branch.

First, explain the problem: when the ip6_tunnel module is unloaded, ip6_tunnel_cleanup() is called.

  rmmod ip6_tunnel
   => ip6_tunnel_cleanup()
    => rtnl_link_unregister()
     => __rtnl_kill_links()
      => for_each_netdev(net, dev) {
             if (dev->rtnl_link_ops == ops)
                 ops->dellink(dev, &list_kill);
         }

At this point, the FB device is deleted (and all ip6tnl tunnels).

   => unregister_pernet_device()
    => unregister_pernet_operations()
     => ops_exit_list()
      => ip6_tnl_exit_net()
       => ip6_tnl_destroy_tunnels()
        => t = rtnl_dereference(ip6n->tnls_wc[0]);
           unregister_netdevice_queue(t->dev, &list);

We delete the FB device a second time here!

The previous fix removed these lines, which fixed this double free. But that patch introduced a memory leak when a netns is destroyed, because the FB device is never deleted. By adding an rtnl ops which deletes all ip6tnl devices except the FB device (see the sketch below), we can keep this explicit removal in ip6_tnl_destroy_tunnels().

CC: Steven Rostedt <rostedt@goodmis.org>
CC: Willem de Bruijn <willemb@google.com>
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Reported-by: Steven Rostedt <srostedt@redhat.com>
Tested-by: Steven Rostedt <srostedt@redhat.com> (and our entire MRG team)
Tested-by: "Luis Claudio R. Goncalves" <lgoncalv@redhat.com>
Tested-by: John Kacur <jkacur@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
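The rtnl dellink hook described above looks roughly like this (a sketch; the real patch also wires the op into the link ops structure):

  static void ip6_tnl_dellink(struct net_device *dev, struct list_head *head)
  {
          struct net *net = dev_net(dev);
          struct ip6_tnl_net *ip6n = net_generic(net, ip6_tnl_net_id);

          /* Never delete the fallback device here; ip6_tnl_exit_net()
           * removes it exactly once when the netns goes away. */
          if (dev != ip6n->fb_tnl_dev)
                  unregister_netdevice_queue(dev, head);
  }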
-
Nicolas Dichtel authored
[ No relevant upstream commit. ] This reverts commit 22c3ec55. This patch is not the right fix, it introduces a memory leak when a netns is destroyed (the FB device is never deleted). Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Reported-by: Steven Rostedt <srostedt@redhat.com> Tested-by: Steven Rostedt <srostedt@redhat.com> (and our entire MRG team) Tested-by: "Luis Claudio R. Goncalves" <lgoncalv@redhat.com> Tested-by: John Kacur <jkacur@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Nicolas Dichtel authored
[ No relevant upstream commit. ]

This problem was fixed upstream by commit 9434266f ("sit: fix use after free of fb_tunnel_dev"). The upstream patch depends on upstream commit 5e6700b3 ("sit: add support of x-netns"), which was not backported into the 3.10 branch.

First, explain the problem: when the sit module is unloaded, sit_cleanup() is called.

  rmmod sit
   => sit_cleanup()
    => rtnl_link_unregister()
     => __rtnl_kill_links()
      => for_each_netdev(net, dev) {
             if (dev->rtnl_link_ops == ops)
                 ops->dellink(dev, &list_kill);
         }

At this point, the FB device is deleted (and all sit tunnels).

   => unregister_pernet_device()
    => unregister_pernet_operations()
     => ops_exit_list()
      => sit_exit_net()
       => sit_destroy_tunnels()

In this function, no tunnel is found.

       => unregister_netdevice_queue(sitn->fb_tunnel_dev, &list);

We delete the FB device a second time here!

Because we cannot simply remove the second deletion (sit_exit_net() must remove the FB device when a netns is deleted), we add an rtnl ops which deletes all sit devices except the FB device, and thus we can keep the explicit deletion in sit_exit_net().

CC: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Reported-by: Steven Rostedt <srostedt@redhat.com>
Tested-by: Steven Rostedt <srostedt@redhat.com> (and our entire MRG team)
Tested-by: "Luis Claudio R. Goncalves" <lgoncalv@redhat.com>
Tested-by: John Kacur <jkacur@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Annie Li authored
[ Upstream commit cefe0078 ]

This patch removes the grant transfer releasing code from netfront, and uses gnttab_end_foreign_access to end grant access, since gnttab_end_foreign_access_ref may fail when the grant entry is currently used for reading or writing.

 * Clean up grant transfer code kept from the old netfront (2.6.18) which grants pages for access/map and transfer. Grant transfer is deprecated in the current netfront, so remove the corresponding release code for transfer.

 * Fix a resource leak: release grant access (through gnttab_end_foreign_access) and the skb for the tx/rx path, and use get_page to ensure the page is released when grant access is completed successfully.

Xen-blkfront/xen-tpmfront/xen-pcifront also have a similar issue, but patches for them will be created separately.

V6: Correct subject line and commit message.
V5: Remove unnecessary change in xennet_end_access.
V4: Revert put_page in gnttab_end_foreign_access, and keep the netfront change in a single patch.
V3: Changes as suggested by David Vrabel; ensure pages are not freed until grant access is ended.
V2: Improve patch comments.

Signed-off-by: Annie Li <annie.li@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Holger Eitzenberger authored
[ Upstream commit a452ce34 ]

I see a memory leak when using a transparent HTTP proxy using TPROXY together with TCP early demux and Kernel v3.8.13.15 (Ubuntu stable):

  unreferenced object 0xffff88008cba4a40 (size 1696):
    comm "softirq", pid 0, jiffies 4294944115 (age 8907.520s)
    hex dump (first 32 bytes):
      0a e0 20 6a 40 04 1b 37 92 be 32 e2 e8 b4 00 00  .. j@..7..2.....
      02 00 07 01 00 00 00 00 00 00 00 00 00 00 00 00  ................
    backtrace:
      [<ffffffff810b710a>] kmem_cache_alloc+0xad/0xb9
      [<ffffffff81270185>] sk_prot_alloc+0x29/0xc5
      [<ffffffff812702cf>] sk_clone_lock+0x14/0x283
      [<ffffffff812aaf3a>] inet_csk_clone_lock+0xf/0x7b
      [<ffffffff8129a893>] netlink_broadcast+0x14/0x16
      [<ffffffff812c1573>] tcp_create_openreq_child+0x1b/0x4c3
      [<ffffffff812c033e>] tcp_v4_syn_recv_sock+0x38/0x25d
      [<ffffffff812c13e4>] tcp_check_req+0x25c/0x3d0
      [<ffffffff812bf87a>] tcp_v4_do_rcv+0x287/0x40e
      [<ffffffff812a08a7>] ip_route_input_noref+0x843/0xa55
      [<ffffffff812bfeca>] tcp_v4_rcv+0x4c9/0x725
      [<ffffffff812a26f4>] ip_local_deliver_finish+0xe9/0x154
      [<ffffffff8127a927>] __netif_receive_skb+0x4b2/0x514
      [<ffffffff8127aa77>] process_backlog+0xee/0x1c5
      [<ffffffff8127c949>] net_rx_action+0xa7/0x200
      [<ffffffff81209d86>] add_interrupt_randomness+0x39/0x157

But there are many more, resulting in the machine going OOM after some days.

From looking at the TPROXY code, and with help from Florian, I see that the memory leak is introduced in tcp_v4_early_demux():

  void tcp_v4_early_demux(struct sk_buff *skb)
  {
          /* ... */

          iph = ip_hdr(skb);
          th = tcp_hdr(skb);

          if (th->doff < sizeof(struct tcphdr) / 4)
                  return;

          sk = __inet_lookup_established(dev_net(skb->dev), &tcp_hashinfo,
                                         iph->saddr, th->source,
                                         iph->daddr, ntohs(th->dest),
                                         skb->skb_iif);
          if (sk) {
                  skb->sk = sk;

where the socket is assigned unconditionally to skb->sk, also bumping the refcnt on it. This is problematic, because in our case the skb has already a socket assigned in the TPROXY target. This then results in the leak I see. The very same issue seems to be with IPv6, but haven't tested.

Reviewed-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Holger Eitzenberger <holger@eitzenberger.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
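The fix boils down to not running early demux at all when the skb already carries a socket reference. Sketched against the 3.10-era ip_rcv_finish() (treat the exact placement as an assumption; the IPv6 input path needs the same guard):

  /* Do not run early demux (which would overwrite skb->sk and leak the
   * old reference) when a socket has already been attached, e.g. by the
   * TPROXY target. */
  if (sysctl_ip_early_demux && !skb_dst(skb) && skb->sk == NULL) {
          const struct net_protocol *ipprot;
          int protocol = ip_hdr(skb)->protocol;

          ipprot = rcu_dereference(inet_protos[protocol]);
          if (ipprot && ipprot->early_demux)
                  ipprot->early_demux(skb);
  }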
-
Oliver Hartkopp authored
[ Upstream commit a0065f26 ] The two commits 0115e8e3 (net: remove delay at device dismantle) and 748e2d93 (net: reinstate rtnl in call_netdevice_notifiers()) silently removed a NULL pointer check for in_dev since Linux 3.7. This patch re-introduces this check as it causes crashing the kernel when setting small mtu values on non-ip capable netdevices. Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
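The re-introduced check is essentially a guard of this shape in the IPv4 netdev notifier path (illustrative placement, not the exact diff):

  struct in_device *in_dev = __in_dev_get_rtnl(dev);

  /* Devices that are not IP-capable (e.g. CAN interfaces) have no
   * in_device; bail out instead of dereferencing a NULL pointer when
   * events such as NETDEV_CHANGEMTU arrive for them. */
  if (!in_dev)
          return NOTIFY_DONE;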
-
Duan Jiong authored
[ Upstream commit 11c21a30 ] Commit a6222602 ("ip_tunnel: fix kernel panic with icmp_dest_unreach") cleared IPCB in ip_tunnel_xmit(), or else skb->cb[] may contain garbage from the GSO segmentation layer. But commit 0e6fbc5b ("ip_tunnels: extend iptunnel_xmit()") refactored the code, and it now clears IPCB only after the dst_link_failure() call. So clear IPCB in ip_tunnel_xmit() just like commit a6222602 ("ip_tunnel: fix kernel panic with icmp_dest_unreach") did. Signed-off-by: Duan Jiong <duanj.fnst@cn.fujitsu.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
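The fix amounts to wiping the control block early in the tunnel transmit path; IPCB() is the standard accessor for the IPv4 control block stored in skb->cb:

  /* skb->cb may still hold state from the GSO/IP layers; wipe it before
   * the tunnel xmit path (including any dst_link_failure() call inside
   * it) interprets it as IPCB. */
  memset(IPCB(skb), 0, sizeof(*IPCB(skb)));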
-
Heiko Carstens authored
[ Upstream commit 3af57f78 ] The s390 bpf jit compiler emits the signed divide instructions "dr" and "d" for unsigned divisions. This can cause problems: the dividend will be zero extended to a 64 bit value and the divisor is the 32 bit signed value specified by the A or X accumulator, even though A and X are supposed to be treated as unsigned values. The divide instructions will generate an exception if the result cannot be expressed with a 32 bit signed value. This is the case if e.g. the dividend is 0xffffffff and the divisor either 1 or also 0xffffffff (signed: -1). To avoid all these issues simply use unsigned divide instructions. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
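A small user-space illustration (not the JIT code itself) of why the signed instructions cannot be used for BPF's A/X registers:

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          uint32_t a = 0xffffffffu;   /* BPF accumulator */
          uint32_t x = 0xffffffffu;   /* BPF index register */

          /* What BPF semantics require: unsigned 32-bit division. */
          printf("unsigned: %u / %u = %u\n",
                 (unsigned)a, (unsigned)x, (unsigned)(a / x));   /* 1 */

          /* What a signed divide would have to produce: the divisor is
           * reinterpreted as -1, so the mathematically correct quotient
           * is -4294967295, which does not fit into a 32-bit signed
           * result. On s390, "dr"/"d" raise a fixed-point divide
           * exception in exactly this situation. */
          int64_t dividend = (int64_t)a;   /* zero-extended to 64 bit */
          int32_t divisor  = (int32_t)x;   /* -1 when viewed as signed */
          printf("signed view: %lld / %d needs more than 32 bits: %lld\n",
                 (long long)dividend, (int)divisor,
                 (long long)(dividend / divisor));
          return 0;
  }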
-
Eric Dumazet authored
[ Upstream commit aee636c4 ] At first Jakub Zawadzki noticed that some divisions by reciprocal_divide were not correct. (off by one in some cases) http://www.wireshark.org/~darkjames/reciprocal-buggy.c He could also show this with BPF: http://www.wireshark.org/~darkjames/set-and-dump-filter-k-bug.c The reciprocal divide in linux kernel is not generic enough, lets remove its use in BPF, as it is not worth the pain with current cpus. Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: Jakub Zawadzki <darkjames-ws@darkjames.pl> Cc: Mircea Gherzan <mgherzan@gmail.com> Cc: Daniel Borkmann <dxchgb@gmail.com> Cc: Hannes Frederic Sowa <hannes@stressinduktion.org> Cc: Matt Evans <matt@ozlabs.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: David S. Miller <davem@davemloft.net> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
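For the curious, the off-by-one is easy to reproduce in user space. The program below is a standalone re-implementation of the old reciprocal_value()/reciprocal_divide() helpers (R = ceil(2^32 / k), q = (x * R) >> 32), not kernel code:

  #include <stdio.h>
  #include <stdint.h>

  static uint32_t reciprocal_value(uint32_t k)
  {
          return (uint32_t)(((1ULL << 32) + k - 1) / k);
  }

  static uint32_t reciprocal_divide(uint32_t x, uint32_t r)
  {
          return (uint32_t)(((uint64_t)x * r) >> 32);
  }

  int main(void)
  {
          uint32_t x = 1431655770u, k = 7;
          uint32_t r = reciprocal_value(k);

          printf("exact     : %u / %u = %u\n", x, k, x / k);                  /* 204522252 */
          printf("reciprocal: %u / %u = %u\n", x, k, reciprocal_divide(x, r)); /* 204522253 */
          return 0;
  }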
-
Christoph Paasch authored
[ Upstream commit 77f99ad1 ] Because the tcp-metrics is an RCU-list, it may be that two soft-interrupts are inside __tcp_get_metrics() for the same destination-IP at the same time. If this destination-IP is not yet part of the tcp-metrics, both soft-interrupts will end up in tcpm_new and create a new entry for this IP. So, we will have two tcp-metrics with the same destination-IP in the list. This patch calls __tcp_get_metrics() twice: first without holding the lock, then while holding the lock. The second call confirms that the entry has not been added by another soft-irq while waiting for the spin-lock. Fixes: 51c5d0c4 (tcp: Maintain dynamic metrics in local cache.) Signed-off-by: Christoph Paasch <christoph.paasch@uclouvain.be> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
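This is the classic "check, lock, re-check" pattern for lazily populated lists; a sketch of its shape (names are illustrative, not the exact tcp_metrics code):

  static struct tcp_metrics_block *tcp_get_metrics(struct inetpeer_addr *addr,
                                                   struct net *net, bool create)
  {
          struct tcp_metrics_block *tm;
          unsigned int hash = tcp_metrics_hash(net, addr);   /* hypothetical helper */

          tm = __tcp_get_metrics(addr, net, hash);   /* lockless RCU walk */
          if (tm || !create)
                  return tm;

          spin_lock_bh(&tcp_metrics_lock);
          /* Another softirq may have inserted the entry while we waited. */
          tm = __tcp_get_metrics(addr, net, hash);
          if (!tm)
                  tm = tcpm_new(addr, net, hash);    /* only one creator wins */
          spin_unlock_bh(&tcp_metrics_lock);

          return tm;
  }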
-
Gerald Schaefer authored
[ Upstream commit c196403b ] commit ae4b46e9 "net: rds: use this_cpu_* per-cpu helper" broke per-cpu handling for rds. chpfirst is the result of __this_cpu_read(), so it is an absolute pointer and not __percpu. Therefore, __this_cpu_write() should not operate on chpfirst, but rather on cache->percpu->first, just like __this_cpu_read() did before. Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Richard Weinberger authored
[ Upstream commit a926592f ]

rhine_reset_task() fails to disable the tx scheduler upon reset; this can lead to a crash if work is still scheduled while we're resetting the tx queue.

Fixes:
  [   93.591707] BUG: unable to handle kernel NULL pointer dereference at 0000004c
  [   93.595514] IP: [<c119d10d>] rhine_napipoll+0x491/0x6

Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Hannes Frederic Sowa authored
[ Upstream commit 95f4a45d ] Bob Falken reported that after 4G packets, multicast forwarding stopped working. This was because of a rule reference counter overflow which freed the rule as soon as the overflow happened. This patch solves this by adding the FIB_LOOKUP_NOREF flag to fib_rules_lookup calls. This is safe even from non-rcu locked sections as in this case the flag only implies not taking a reference to the rule, which we don't need at all. Rules only hold references to the namespace, which are guaranteed to be available during the call of the non-rcu protected function reg_vif_xmit because of the interface reference which itself holds a reference to the net namespace. Fixes: f0ad0860 ("ipv4: ipmr: support multiple tables") Fixes: d1db275d ("ipv6: ip6mr: support multiple tables") Reported-by: Bob Falken <NetFestivalHaveFun@gmx.com> Cc: Patrick McHardy <kaber@trash.net> Cc: Thomas Graf <tgraf@suug.ch> Cc: Julian Anastasov <ja@ssi.bg> Cc: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Christian Engelmayer authored
[ Upstream commit 267d29a6 ] Fix a memory leak in the ieee802154_add_iface() error handling path. Detected by Coverity: CID 710490. Signed-off-by: Christian Engelmayer <cengelma@gmx.at> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Neal Cardwell authored
[ Based upon upstream commit 70315d22 ] Fix inet_diag_dump_icsk() to reflect the fact that both TIME_WAIT and FIN_WAIT2 connections are represented by inet_timewait_sock (not just TIME_WAIT). Thus: (a) We need to iterate through the time_wait buckets if the user wants either TIME_WAIT or FIN_WAIT2. (Before fixing this, "ss -nemoi state fin-wait-2" would not return any sockets, even if there were some in FIN_WAIT2.) (b) We need to check tw_substate to see if the user wants to dump sockets in the particular substate (TIME_WAIT or FIN_WAIT2) that a given connection is in. (Before fixing this, "ss -nemoi state time-wait" would actually return sockets in state FIN_WAIT2.) An analogous fix is in v3.13: 70315d22 ("inet_diag: fix inet_diag_dump_icsk() to use correct state for timewait sockets") but that patch is quite different because 3.13 code is very different in this area due to the unification of TCP hash tables in 05dbc7b5 ("tcp/dccp: remove twchain") in v3.13-rc1. I tested that this applies cleanly between v3.3 and v3.12, and tested that it works in both 3.3 and 3.12. It does not apply cleanly to 3.2 and earlier (though it makes semantic sense), and semantically is not the right fix for 3.13 and beyond (as mentioned above). Signed-off-by: Neal Cardwell <ncardwell@google.com> Cc: Eric Dumazet <edumazet@google.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Michal Schmidt authored
[ Upstream commit 95e92fd4 ]

bnx2x triggers warnings with CONFIG_DMA_API_DEBUG=y:

  WARNING: CPU: 0 PID: 2253 at lib/dma-debug.c:887 check_unmap+0xf8/0x920()
  bnx2x 0000:28:00.0: DMA-API: device driver frees DMA memory with different size
  [device address=0x00000000da2b389e] [map size=1490 bytes] [unmap size=66 bytes]

The reason is that bnx2x splits a TSO BD into two BDs (headers + data) using one DMA mapping for both, but it uses only the length of the first BD when unmapping. This patch fixes the bug by unmapping the whole length of the two BDs.

Signed-off-by: Michal Schmidt <mschmidt@redhat.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Acked-by: Dmitry Kravkov <dmitry@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
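The rule the DMA API debugging enforces is simply that dma_unmap_single() must be given the same length that dma_map_single() returned a mapping for. A generic sketch of the idea (not the bnx2x code, which tracks the split lengths in its own BD structures; pdev, data, hdr_len and payload_len are assumed locals):

  /* Map once for the whole TSO frame ... */
  dma_addr_t mapping = dma_map_single(&pdev->dev, data,
                                      hdr_len + payload_len,
                                      DMA_TO_DEVICE);

  /* ... describe it to the hardware as two BDs (headers + data), but
   * unmap with the *total* length, not just the first BD's length. */
  dma_unmap_single(&pdev->dev, mapping,
                   hdr_len + payload_len, DMA_TO_DEVICE);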
-
Takashi Iwai authored
commit b0ad4ff3 upstream. The DriveGuard chips on the new HP laptops are with a new PnP ID "HPQ6007". It should be compatible with older chips. Acked-by: Éric Piel <eric.piel@tremplin-utc.net> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Matthew Garrett <matthew.garrett@nebula.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Kent Overstreet authored
commit ef71ec00 upstream. The code that handles overlapping extents that we've just read back in from disk was depending on the behaviour of the code that handles overlapping extents as we're inserting into a btree node, in the case of an insert that forced an existing extent to be split: on insert, if we had to split we'd also insert a new extent to represent the top part of the old extent - and then that new extent would get written out. The code that read the extents back in thus did not bother with splitting extents - if it saw an extent that overlapped in the middle of an older extent, it would trim the old extent to only represent the bottom part, assuming that the original insert would've inserted a new extent to represent the top part. I still haven't figured out _how_ it can happen, but I'm now pretty convinced (and testing has confirmed) that there's some kind of an obscure corner case (probably involving extent merging, and multiple overwrites in different sets) that breaks this. The fix is to change the mergesort fixup code to split extents itself when required. Signed-off-by: Kent Overstreet <kmo@daterainc.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric W. Biederman authored
commit 260a459d upstream.

A bug was introduced with the is_mounted helper function in

  commit f7a99c5b
  Author: Al Viro <viro@zeniv.linux.org.uk>
  Date:   Sat Jun 9 00:59:08 2012 -0400

      get rid of ->mnt_longterm

      it's enough to set ->mnt_ns of internal vfsmounts to something
      distinct from all struct mnt_namespace out there; then we can just
      use the check for ->mnt_ns != NULL in the fast path of
      mntput_no_expire()

      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

The intent was to test if the real_mount(vfsmount)->mnt_ns was NULL_OR_ERR but the code is actually testing real_mount(vfsmount) and always returning true. The result is d_absolute_path returning paths it should be hiding.

Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
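The broken versus fixed helper, roughly (fs/mount.h, reproduced from memory, so treat it as a sketch):

  static inline int is_mounted(struct vfsmount *mnt)
  {
          /* Buggy: real_mount(mnt) is never NULL or an error pointer,
           * so this always returned true:
           *
           *      return !IS_ERR_OR_NULL(real_mount(mnt));
           *
           * Fixed: test the mount's namespace pointer instead. */
          return !IS_ERR_OR_NULL(real_mount(mnt)->mnt_ns);
  }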
-
Theodore Ts'o authored
commit 09c455aa upstream. A missing cast means that when we are truncating a file which is less than 60 bytes, we don't clear the correct area of memory, and in fact we can end up truncating the next inode in the inode table, or worse yet, some other kernel data structure. Addresses-Coverity-Id: #751987 Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
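The bug class is easy to reproduce in user space: without a byte-sized cast, pointer arithmetic on a struct pointer is scaled by sizeof(struct), so a memset can land far beyond the intended region. This is a generic illustration, not the ext4 code:

  #include <stdio.h>
  #include <string.h>

  struct raw_inode { char i_block[60]; };   /* stand-in for on-disk inode data */

  int main(void)
  {
          struct raw_inode table[2];        /* pretend inode-table slice */
          memset(table, 'A', sizeof(table));

          struct raw_inode *inode = &table[0];
          size_t new_size = 10;             /* truncate to 10 bytes */

          /* Wrong: "inode + new_size" advances by 10 *structs*, clearing
           * memory way past table[0] (here it would even overrun the
           * array, so it is shown only as a comment):
           *
           *      memset(inode + new_size, 0,
           *             sizeof(inode->i_block) - new_size);
           *
           * Right: index into the byte array so the offset is in bytes. */
          memset(inode->i_block + new_size, 0,
                 sizeof(inode->i_block) - new_size);

          printf("table[1] still intact: %c\n", table[1].i_block[0]);   /* 'A' */
          return 0;
  }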
-
Tejun Heo authored
commit ecd75ad5 upstream. For some reason, some early WD drives spin up and down drives erratically when the link is put into slumber mode which can reduce the life expectancy of the device significantly. Unfortunately, we don't have full list of devices and given the nature of the issue it'd be better to err on the side of false positives than the other way around. Let's disable LPM on all WD devices which match one of the known problematic model prefixes and are SATA-I. As horkage list doesn't support matching SATA capabilities, this is implemented as two horkages - WD_BROKEN_LPM and NOLPM. The former is set for the known prefixes and sets the latter if the matched device is SATA-I. Note that this isn't optimal as this disables all LPM operations and partial link power state reportedly works fine on these; however, the way LPM is implemented in libata makes it difficult to precisely map libata LPM setting to specific link power state. Well, these devices are already fairly outdated. Let's just disable whole LPM for now. Signed-off-by: Tejun Heo <tj@kernel.org> Reported-and-tested-by: Nikos Barkas <levelwol@gmail.com> Reported-and-tested-by: Ioannis Barkas <risc4all@yahoo.com> References: https://bugzilla.kernel.org/show_bug.cgi?id=57211
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-