- 28 Mar, 2013 26 commits
-
-
Steven Rostedt (Red Hat) authored
commit 740466bc upstream. Because function tracing is very invasive, and can even trace calls to rcu_read_lock(), RCU access in function tracing is done with preempt_disable_notrace(). This requires a synchronize_sched() for updates and not a synchronize_rcu(). Function probes (traceon, traceoff, etc.) must be freed with a synchronize_sched() after their entries have been removed from the hash. But call_rcu() is used. Fix this by using call_rcu_sched(). Also fix the removal to use hlist_del_rcu() instead of hlist_del(). Signed-off-by:
Steven Rostedt <rostedt@goodmis.org> Cc: Paul McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
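As a hedged illustration of the pattern this fix describes (the struct and function names below are hypothetical, not the actual ftrace code): the hash entry is unlinked with hlist_del_rcu() and the free is deferred through call_rcu_sched(), which waits for the preempt-disabled readers that function tracing uses instead of rcu_read_lock() readers.

    /* Hypothetical sketch -- names are illustrative, not from kernel/trace. */
    struct func_probe {
            struct hlist_node node;
            struct rcu_head rcu;
    };

    static void func_probe_free(struct rcu_head *head)
    {
            kfree(container_of(head, struct func_probe, rcu));
    }

    static void func_probe_remove(struct func_probe *p)
    {
            hlist_del_rcu(&p->node);                  /* not hlist_del(): readers may still walk the hash */
            call_rcu_sched(&p->rcu, func_probe_free); /* not call_rcu(): readers only disable preemption */
    }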
-
Steven Rostedt (Red Hat) authored
commit 2721e72d upstream. Although the swap is wrapped with a spin_lock, the assignment of the temp buffer used to swap is not within that lock. It needs to be moved into that lock, otherwise two swaps happening on two different CPUs can end up using the wrong temp buffer in the swap. Luckily, all current callers of the swap function appear to have their own locks. But in case something is added that allows two different callers to call the swap, then there's a chance that this race can trigger and corrupt the buffers. New code is coming soon that will allow for this race to trigger. I've Cc'd stable, so this bug will not show up if someone backports one of the changes that can trigger this bug. Signed-off-by:
Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
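A minimal, hypothetical sketch of the race being closed (names do not match the tracing code): the temporary pointer used for the swap must be assigned inside the spinlock, otherwise two CPUs can both write it and one swap completes with the other CPU's buffer.

    static void *swap_tmp;                   /* shared temporary, like the trace swap buffer */
    static DEFINE_SPINLOCK(swap_lock);

    static void buffer_swap(void **a, void **b)
    {
            spin_lock(&swap_lock);
            swap_tmp = *a;                   /* previously this assignment happened before taking the lock */
            *a = *b;
            *b = swap_tmp;
            spin_unlock(&swap_lock);
    }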
-
Kees Cook authored
commit 2563a452 upstream. Masks kernel address info-leak in object dumps with the %pK suffix, so they cannot be used to target kernel memory corruption attacks if the kptr_restrict sysctl is set. Signed-off-by:
Kees Cook <keescook@chromium.org> Signed-off-by:
Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Daniel Mack authored
commit 83ea5d18 upstream. Creation of individual mixer controls may fail, but that shouldn't cause the entire mixer creation to fail. Even worse, if the mixer creation fails, that will error out the entire device probing. All the functions called by parse_audio_unit() should return -EINVAL if they find descriptors that are unsupported or believed to be malformed, so we can safely handle this error code as a non-fatal condition in snd_usb_mixer_controls(). That fixes a long standing bug which is commonly worked around by adding quirks which make the driver ignore entire interfaces. Some of them might now be unnecessary. Signed-off-by:
Daniel Mack <zonque@gmail.com> Reported-and-tested-by:
Rodolfo Thomazelli <pe.soberbo@gmail.com> Signed-off-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
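A hedged sketch of the error handling described above (the loop bounds and state handling are hypothetical, only the treatment of -EINVAL is the point): -EINVAL from one unit means "skip this control", while any other error still fails the probe.

    /* Simplified fragment; iteration details are hypothetical. */
    int unitid, err;

    for (unitid = 0; unitid < num_units; unitid++) {
            err = parse_audio_unit(state, unitid);
            if (err < 0 && err != -EINVAL)
                    return err;      /* real failures still abort mixer creation */
            /* -EINVAL: unsupported or malformed descriptor -- skip this unit only */
    }
    return 0;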
-
Daniel Mack authored
commit 4d7b86c9 upstream. In check_input_term() and parse_audio_feature_unit(), propagate the error value that has been returned by a failing function instead of -EINVAL. That helps clean up the error paths in the mixer. Signed-off-by:
Daniel Mack <zonque@gmail.com> Signed-off-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Takashi Iwai authored
commit a686fd14 upstream. There is a typo in convert_to_spdif_status() about checking the emphasis IEC958 status bit. It should check the given value instead of the resultant value. Reported-by:
Martin Weishart <martin.weishart@telosalliance.com> Signed-off-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ben Hutchings authored
[ Upstream commit fae8563b ] Using TX push when notifying the NIC of multiple new descriptors in the ring will very occasionally cause the TX DMA engine to re-use an old descriptor. This can result in a duplicated or partly duplicated packet (new headers with old data), or an IOMMU page fault. This does not happen when the pushed descriptor is the only one written. TX push also provides little latency benefit when a packet requires more than one descriptor. Signed-off-by:
Ben Hutchings <bhutchings@solarflare.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ben Hutchings authored
[ Upstream commit 35205b21 ] efx_device_detach_sync() locks all TX queues before marking the device detached and thus disabling further TX scheduling. But it can still be interrupted by TX completions which then result in TX scheduling in soft interrupt context. This will deadlock when it tries to acquire a TX queue lock that efx_device_detach_sync() already acquired. To avoid deadlock, we must use netif_tx_{,un}lock_bh(). Signed-off-by:
Ben Hutchings <bhutchings@solarflare.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
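Based on the commit text, the helper looks roughly like this (sketch, not the verified driver source): the _bh variants disable bottom halves, so a TX-completion softirq on the same CPU cannot recurse into a queue lock the function already holds.

    static void efx_device_detach_sync(struct efx_nic *efx)
    {
            struct net_device *dev = efx->net_dev;

            /* Lock all TX queues with BH disabled, then mark the device
             * absent so no further TX can be scheduled. */
            netif_tx_lock_bh(dev);
            netif_device_detach(dev);
            netif_tx_unlock_bh(dev);
    }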
-
Ben Hutchings authored
[ Upstream commit 29c69a48 ] We must only ever stop TX queues when they are full or the net device is not 'ready' so far as the net core, and specifically the watchdog, is concerned. Otherwise, the watchdog may fire *immediately* if no packets have been added to the queue in the last 5 seconds. The device is ready if all the following are true: (a) It has a qdisc (b) It is marked present (c) It is running (d) The link is reported up (a) and (c) are normally true, and must not be changed by a driver. (d) is under our control, but fake link changes may disturb userland. This leaves (b). We already mark the device absent during reset and self-test, but we need to do the same during MTU changes and ring reallocation. We don't need to do this when the device is brought down because then (c) is already false. Signed-off-by:
Ben Hutchings <bhutchings@solarflare.com> [bwh: Backported to 3.0: adjust context] Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
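A hedged sketch of condition (b) being handled around an MTU change (helper names borrowed from the sfc driver, error handling and details elided; this is not the actual patch):

    static int efx_change_mtu_sketch(struct net_device *net_dev, int new_mtu)
    {
            struct efx_nic *efx = netdev_priv(net_dev);

            efx_device_detach_sync(efx);     /* clear "present": the watchdog ignores stopped queues */
            efx_stop_all(efx);
            net_dev->mtu = new_mtu;
            efx_start_all(efx);
            netif_device_attach(net_dev);    /* mark present again once the queues can run */
            return 0;
    }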
-
Ben Hutchings authored
[ Upstream commits 06e63c57, b590ace0 and c73e787a ] We assume that the mapping between DMA and virtual addresses is done on whole pages, so we can find the page offset of an RX buffer using the lower bits of the DMA address. However, swiotlb maps in units of 2K, breaking this assumption. Add an explicit page_offset field to struct efx_rx_buffer. Signed-off-by:
Ben Hutchings <bhutchings@solarflare.com> [bwh: Backported to 3.0: adjust context] Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ben Hutchings authored
[ Upstream commit 3a68f19d ] We may currently allocate two RX DMA buffers to a page, and only unmap the page when the second is completed. We do not sync the first RX buffer to be completed; this can result in packet loss or corruption if the last RX buffer completed in a NAPI poll is the first in a page and is not DMA-coherent. (In the middle of a NAPI poll, we will handle the following RX completion and unmap the page *before* looking at the content of the first buffer.) Signed-off-by:
Ben Hutchings <bhutchings@solarflare.com> [bwh: Backported to 3.0: adjust context] Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ben Hutchings authored
[ Upstream commit ebf98e79 ] efx_mcdi_poll() uses get_seconds() to read the current time and to implement a polling timeout. The use of this function was chosen partly because it could easily be replaced in a co-sim environment with a macro that read the simulated time. Unfortunately the real get_seconds() returns the system time (real time) which is subject to adjustment by e.g. ntpd. If the system time is adjusted forward during a polled MCDI operation, the effective timeout can be shorter than the intended 10 seconds, resulting in a spurious failure. It is also possible for a backward adjustment to delay detection of a real failure. Use jiffies instead, and change MCDI_RPC_TIMEOUT to be denominated in jiffies. Also correct rounding of the timeout: check time > finish (or rather time_after(time, finish)) and not time >= finish. Signed-off-by:
Ben Hutchings <bhutchings@solarflare.com> [bwh: Backported to 3.0: adjust context] Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
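A hedged sketch of the polling-loop change (the completion check and poll interval are hypothetical placeholders): jiffies gives a monotonic deadline unaffected by ntpd, and time_after() rather than ">=" gives the intended rounding.

    unsigned long finish = jiffies + MCDI_RPC_TIMEOUT;      /* timeout now denominated in jiffies */

    for (;;) {
            if (mcdi_request_done(efx))      /* hypothetical completion check */
                    break;
            if (time_after(jiffies, finish)) /* strictly after the deadline, not ">=" */
                    return -ETIMEDOUT;
            udelay(10);                      /* illustrative poll interval */
    }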
-
Daniel Pieczko authored
[ Upstream commit c2f3b8e3 ] The assertion of netif_device_present() at the top of efx_hard_start_xmit() may fail if we don't do this. Signed-off-by:
Ben Hutchings <bhutchings@solarflare.com> [bwh: Backported to 3.0: adjust context] Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ben Hutchings authored
[ Upstream commits a606f432, d5e8cc6c, 525d9e82 ] The TX DMA engine issues upstream read requests when there is room in the TX FIFO for the completion. However, the fetches for the rest of the packet might be delayed by any back pressure. Since a flush must wait for an EOP, the entire flush may be delayed by back pressure. Mitigate this by disabling flow control before the flushes are started. Since PF and VF flushes run in parallel, introduce fc_disable, a reference count of the number of flushes outstanding. The same principle could be applied to Falcon, but that would bring with it its own testing. We sometimes hit a "failed to flush" timeout on some TX queues, but the flushes have completed and the flush completion events seem to go missing. In this case, we can check the TX_DESC_PTR_TBL register and drain the queues if the flushes had finished. Signed-off-by:
Ben Hutchings <bhutchings@solarflare.com> [bwh: Backported to 3.0: - Call efx_nic_type::finish_flush() on both success and failure paths - Check the TX_DESC_PTR_TBL registers in the polling loop - Declare efx_mcdi_set_mac() extern] Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ben Hutchings authored
[ Upstream commit bfeed902 ] On big-endian systems the MTD partition names currently have mangled subtype numbers and are not recognised by the firmware update tool (sfupdate). Signed-off-by:
Ben Hutchings <bhutchings@solarflare.com> [bwh: Backported to 3.0: use old macros for length of firmware subtype array] Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Stuart Hodgson authored
[ Upstream commit 3dca9d2d ] efx_nic_fatal_interrupt() disables DMA before scheduling a reset. After this, we need not and *cannot* flush queues. Signed-off-by:
Ben Hutchings <bhutchings@solarflare.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Steve Hodgson authored
[ Upstream commit a659b2a9 ] [bwh: Use __force in the one place it's needed] Signed-off-by:
Ben Hutchings <bhutchings@solarflare.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ben Hutchings authored
[ Upstream commit 4017dbdc ] efx_filter_remove_filter() fails to remove inserted filters in some cases. For example: 1. Two filters A and B have specifications that result in an initial hash collision. 2. A is inserted first, followed by B. 3. An attempt to remove B first succeeds, but if A is removed first a subsequent attempt to remove B fails. When searching for an existing filter (!for_insert), efx_filter_search() must always continue to the maximum search depth for the given type rather than stopping at the first unused entry. Signed-off-by:
Ben Hutchings <bhutchings@solarflare.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Hannes Frederic Sowa authored
[ Upstream commit 5a3da1fe ] This patch introduces a constant limit on the fragment queue hash table bucket list lengths. Currently the limit of 128 is chosen somewhat arbitrarily and just ensures that we can fill up the fragment cache with empty packets up to the default ip_frag_high_thresh limits. It should just protect against list iteration eating considerable amounts of CPU time. If we reach the maximum length in one hash bucket, a warning is printed. This is implemented on the caller side of inet_frag_find() to distinguish between the different users of inet_fragment.c. I dropped the out-of-memory warning in the ipv4 fragment lookup path, because we already get a warning from the slab allocator. Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Jesper Dangaard Brouer <jbrouer@redhat.com> Signed-off-by:
Hannes Frederic Sowa <hannes@stressinduktion.org> Acked-by:
Eric Dumazet <edumazet@google.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Vlad Yasevich authored
[ Upstream commit a5b8db91 ] Range/validity checks on rta_type in rtnetlink_rcv_msg() do not account for flags that may be set. This causes the function to return -EINVAL when flags are set on the type (for example NLA_F_NESTED). Signed-off-by:
Vlad Yasevich <vyasevic@redhat.com> Acked-by:
Thomas Graf <tgraf@suug.ch> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
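A hedged sketch of the fix's shape (the attribute table name is simplified, and this is only a fragment of the parsing loop): mask off attribute flags such as NLA_F_NESTED before comparing the type against the valid range.

    unsigned int flavor = attr->rta_type & NLA_TYPE_MASK;   /* drop NLA_F_NESTED and friends */

    if (flavor) {
            if (flavor > rta_max)            /* range check the real type, not type|flags */
                    return -EINVAL;
            rta_buf[flavor - 1] = attr;      /* simplified attribute table */
    }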
-
Denis V. Lunev authored
[ Upstream commit 5b9e12db ] A long time ago, in commit 93456b6d ("[IPV4]: Unify access to the routing tables.", Denis V. Lunev <den@openvz.org>, Thu Jan 10 03:23:38 2008 -0800), the definition of the FIB_HASH_TABLE size acquired the wrong dependency: it should depend upon CONFIG_IP_MULTIPLE_TABLES (as it did in the original code), but instead it depended on CONFIG_IP_ROUTE_MULTIPATH. This patch returns the situation to the original state. The problem was spotted by Tingwei Liu. Signed-off-by:
Denis V. Lunev <den@openvz.org> CC: Tingwei Liu <tingw.liu@gmail.com> CC: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Xufeng Zhang authored
[ Upstream commit 2317f449 ] The sctp_assoc_lookup_tsn() function searches for the transport a certain TSN was sent on: if it is not found on the active_path transport, it then searches all the other transports in the peer's transport_addr_list. However, we should continue to the next entry rather than break out of the loop when we meet the active_path transport. Signed-off-by:
Xufeng Zhang <xufeng.zhang@windriver.com> Acked-by:
Neil Horman <nhorman@tuxdriver.com> Acked-by:
Vlad Yasevich <vyasevich@gmail.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
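A hedged sketch of the loop-control change (a fragment only; the per-transport walk of the transmitted list is collapsed into a hypothetical helper):

    list_for_each_entry(transport, &asoc->peer.transport_addr_list, transports) {
            if (transport == active)
                    continue;                /* was "break", which ended the search early */
            if (transport_has_tsn(transport, tsn)) {   /* hypothetical: walks transport->transmitted */
                    match = transport;
                    break;
            }
    }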
-
Veaceslav Falico authored
[ Upstream commit 3f315bef ] __netpoll_cleanup() is called in netconsole_netdev_event() while holding a spinlock. Release/acquire the spinlock before/after it and restart the loop. Also, disable the netconsole completely, because we won't have chance after the restart of the loop, and might end up in a situation where nt->enabled == 1 and nt->np.dev == NULL. Signed-off-by:
Veaceslav Falico <vfalico@redhat.com> Acked-by:
Neil Horman <nhorman@tuxdriver.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
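A hedged, simplified sketch of the sequence described above (wrapped in an illustrative function, not the actual netconsole notifier): disable the target, drop the spinlock around the sleeping __netpoll_cleanup(), then restart the walk because the list may have changed while the lock was released.

    static void netconsole_cleanup_dev(struct net_device *ndev)    /* illustrative wrapper */
    {
            struct netconsole_target *nt;
            unsigned long flags;

    restart:
            spin_lock_irqsave(&target_list_lock, flags);
            list_for_each_entry(nt, &target_list, list) {
                    if (nt->np.dev != ndev)
                            continue;
                    nt->enabled = 0;                 /* never leave enabled==1 with np.dev==NULL */
                    spin_unlock_irqrestore(&target_list_lock, flags);
                    __netpoll_cleanup(&nt->np);      /* may sleep: must not hold the spinlock */
                    dev_put(nt->np.dev);
                    nt->np.dev = NULL;
                    goto restart;                    /* list may have changed while unlocked */
            }
            spin_unlock_irqrestore(&target_list_lock, flags);
    }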
-
David Ward authored
[ Upstream commit 4660c7f4 ] This is needed in order to detect if the timestamp option appears more than once in a packet, to remove the option if the packet is fragmented, etc. My previous change neglected to store the option location when the router addresses were prespecified and Pointer > Length. But now the option location is also stored when Flag is an unrecognized value, to ensure these option handling behaviors are still performed. Signed-off-by:
David Ward <david.ward@ll.mit.edu> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tkhai Kirill authored
[ Upstream commit cb29529e ] If a machine has X (X < 4) sunsu ports and cmdline option "console=ttySY" is passed, where X < Y <= 4, then the following panic happens: Unable to handle kernel NULL pointer dereference TPC: <sunsu_console_setup+0x78/0xe0> RPC: <sunsu_console_setup+0x74/0xe0> I7: <register_console+0x378/0x3e0> Call Trace: [0000000000453a38] register_console+0x378/0x3e0 [0000000000576fa0] uart_add_one_port+0x2e0/0x340 [000000000057af40] su_probe+0x160/0x2e0 [00000000005b8a4c] platform_drv_probe+0xc/0x20 [00000000005b6c2c] driver_probe_device+0x12c/0x220 [00000000005b6da8] __driver_attach+0x88/0xa0 [00000000005b4df4] bus_for_each_dev+0x54/0xa0 [00000000005b5a54] bus_add_driver+0x154/0x260 [00000000005b7190] driver_register+0x50/0x180 [00000000006d250c] sunsu_init+0x18c/0x1e0 [00000000006c2668] do_one_initcall+0xe8/0x160 [00000000006c282c] kernel_init_freeable+0x12c/0x1e0 [0000000000603764] kernel_init+0x4/0x100 [0000000000405f64] ret_from_syscall+0x1c/0x2c [0000000000000000] (null) 1) Fix the panic; 2) Increment the registered port number on every successful probe. Signed-off-by:
Kirill Tkhai <tkhai@yandex.ru> CC: David Miller <davem@davemloft.net> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Greg Kroah-Hartman authored
This reverts commit 0319f990, which is commit feca7746 upstream. It shouldn't have gone into this stable release. Cc: Alan Stern <stern@rowland.harvard.edu> Cc: Joseph Salisbury <joseph.salisbury@canonical.com> Cc: Stephen Thirlwall <sdt@dr.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 20 Mar, 2013 14 commits
-
-
Greg Kroah-Hartman authored
-
Mathias Krause authored
[ Upstream commit 29cd8ae0 ] The dcb netlink interface leaks stack memory in various places:
* perm_addr[] buffer is only filled at max with 12 of the 32 bytes but copied completely,
* no in-kernel driver fills all fields of an IEEE 802.1Qaz subcommand, so we're leaking up to 58 bytes for ieee_ets structs, up to 136 bytes for ieee_pfc structs, etc.,
* the same is true for CEE -- no in-kernel driver fills the whole struct.
Prevent all of the above stack info leaks by properly initializing the buffers/structures involved. Signed-off-by:
Mathias Krause <minipli@googlemail.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mathias Krause authored
[ Upstream commit 84d73cd3 ] Initialize the mac address buffer with 0 as the driver specific function will probably not fill the whole buffer. In fact, all in-kernel drivers fill only ETH_ALEN of the MAX_ADDR_LEN bytes, i.e. 6 of the 32 possible bytes. Therefore we currently leak 26 bytes of stack memory to userland via the netlink interface. Signed-off-by:
Mathias Krause <minipli@googlemail.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
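A hedged sketch of the initialization (only a fragment; the surrounding RTM_GETLINK fill loop, the vf index, and error handling are elided): zero the whole ifla_vf_info before the driver callback fills it, since drivers typically set only ETH_ALEN of the MAX_ADDR_LEN mac bytes.

    struct ifla_vf_info ivi;

    memset(&ivi, 0, sizeof(ivi));            /* drivers fill only 6 of the 32 mac[] bytes */
    if (dev->netdev_ops->ndo_get_vf_config(dev, vf, &ivi))
            break;
    /* ivi is then copied to userland via netlink without leaking stack memory */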
-
Hannes Frederic Sowa authored
[ Upstream commit ddf64354 ] v2: a) used struct ipv6_addr_props v3: a) reverted changes for ipv6_addr_props v4: a) do not use __ipv6_addr_needs_scope_id Cc: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org> Signed-off-by:
Hannes Frederic Sowa <hannes@stressinduktion.org> Acked-by:
YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Cristian Bercaru authored
[ Upstream commit 3bc1b1ad ] The frames for which rx_handlers return RX_HANDLER_CONSUMED are no longer counted as dropped. They are counted as successfully received by 'netif_receive_skb'. This allows network interface drivers to correctly update their RX-OK and RX-DRP counters based on the result of 'netif_receive_skb'. Signed-off-by:
Cristian Bercaru <B43982@freescale.com> Signed-off-by:
Eric Dumazet <edumazet@google.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Paul Moore authored
[ Upstream commits 0c1233ab and a6a8fe95 ] When we have a large number of static label mappings that spill across the netlink message boundary we fail to properly save our state in the netlink_callback struct which causes us to repeat the same listings. This patch fixes this problem by saving the state correctly between calls to the NetLabel static label netlink "dumpit" routines. Signed-off-by:
Paul Moore <pmoore@redhat.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric Dumazet authored
[ Upstream commit f8af75f3 ] Dave reported following crash : general protection fault: 0000 [#1] SMP CPU 2 Pid: 25407, comm: qemu-kvm Not tainted 3.7.9-205.fc18.x86_64 #1 Hewlett-Packard HP Z400 Workstation/0B4Ch RIP: 0010:[<ffffffffa0399bd5>] [<ffffffffa0399bd5>] destroy_conntrack+0x35/0x120 [nf_conntrack] RSP: 0018:ffff880276913d78 EFLAGS: 00010206 RAX: 50626b6b7876376c RBX: ffff88026e530d68 RCX: ffff88028d158e00 RDX: ffff88026d0d5470 RSI: 0000000000000011 RDI: 0000000000000002 RBP: ffff880276913d88 R08: 0000000000000000 R09: ffff880295002900 R10: 0000000000000000 R11: 0000000000000003 R12: ffffffff81ca3b40 R13: ffffffff8151a8e0 R14: ffff880270875000 R15: 0000000000000002 FS: 00007ff3bce38a00(0000) GS:ffff88029fc40000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b CR2: 00007fd1430bd000 CR3: 000000027042b000 CR4: 00000000000027e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 Process qemu-kvm (pid: 25407, threadinfo ffff880276912000, task ffff88028c369720) Stack: ffff880156f59100 ffff880156f59100 ffff880276913d98 ffffffff815534f7 ffff880276913db8 ffffffff8151a74b ffff880270875000 ffff880156f59100 ffff880276913dd8 ffffffff8151a5a6 ffff880276913dd8 ffff88026d0d5470 Call Trace: [<ffffffff815534f7>] nf_conntrack_destroy+0x17/0x20 [<ffffffff8151a74b>] skb_release_head_state+0x7b/0x100 [<ffffffff8151a5a6>] __kfree_skb+0x16/0xa0 [<ffffffff8151a666>] kfree_skb+0x36/0xa0 [<ffffffff8151a8e0>] skb_queue_purge+0x20/0x40 [<ffffffffa02205f7>] __tun_detach+0x117/0x140 [tun] [<ffffffffa022184c>] tun_chr_close+0x3c/0xd0 [tun] [<ffffffff8119669c>] __fput+0xec/0x240 [<ffffffff811967fe>] ____fput+0xe/0x10 [<ffffffff8107eb27>] task_work_run+0xa7/0xe0 [<ffffffff810149e1>] do_notify_resume+0x71/0xb0 [<ffffffff81640152>] int_signal+0x12/0x17 Code: 00 00 04 48 89 e5 41 54 53 48 89 fb 4c 8b a7 e8 00 00 00 0f 85 de 00 00 00 0f b6 73 3e 0f b7 7b 2a e8 10 40 00 00 48 85 c0 74 0e <48> 8b 40 28 48 85 c0 74 05 48 89 df ff d0 48 c7 c7 08 6a 3a a0 RIP [<ffffffffa0399bd5>] destroy_conntrack+0x35/0x120 [nf_conntrack] RSP <ffff880276913d78> This is because tun_net_xmit() needs to call nf_reset() before queuing skb into receive_queue Reported-by:
Dave Jones <davej@redhat.com> Signed-off-by:
Eric Dumazet <edumazet@google.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
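A hedged, heavily trimmed sketch of where the one-line fix sits in tun_net_xmit() (filtering, queue-length checks, and wakeups elided): the conntrack reference must be dropped before the skb is parked on the receive queue, where it can outlive the conntrack entry.

    /* Fragment: only the ordering matters here. */
    nf_reset(skb);                           /* drop the conntrack reference now */
    skb_queue_tail(&tun->socket.sk->sk_receive_queue, skb);   /* skb may sit here a long time */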
-
Neal Cardwell authored
[ Upstream commit aab2b4bf ] We should not update ts_recent and call tcp_rcv_rtt_measure_ts() both before and after going to step5. That wastes CPU and double-counts the receiver-side RTT sample. Signed-off-by:
Neal Cardwell <ncardwell@google.com> Acked-by:
Eric Dumazet <edumazet@google.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Lorenzo Colitti authored
[ Upstream commit 3e8b0ac3 ] Setting net.ipv6.conf.<interface>.accept_ra=2 causes the kernel to accept RAs even when forwarding is enabled. However, enabling forwarding purges all default routes on the system, breaking connectivity until the next RA is received. Fix this by not purging default routes on interfaces that have accept_ra=2. Signed-off-by:
Lorenzo Colitti <lorenzo@google.com> Acked-by:
YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org> Acked-by:
Eric Dumazet <edumazet@google.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Cong Wang authored
[ Upstream commit ece6b0a2 ] Dave Jones reported the following bug: "When fed mangled socket data, rds will trust what userspace gives it, and tries to allocate enormous amounts of memory larger than what kmalloc can satisfy." WARNING: at mm/page_alloc.c:2393 __alloc_pages_nodemask+0xa0d/0xbe0() Hardware name: GA-MA78GM-S2H Modules linked in: vmw_vsock_vmci_transport vmw_vmci vsock fuse bnep dlci bridge 8021q garp stp mrp binfmt_misc l2tp_ppp l2tp_core rfcomm s Pid: 24652, comm: trinity-child2 Not tainted 3.8.0+ #65 Call Trace: [<ffffffff81044155>] warn_slowpath_common+0x75/0xa0 [<ffffffff8104419a>] warn_slowpath_null+0x1a/0x20 [<ffffffff811444ad>] __alloc_pages_nodemask+0xa0d/0xbe0 [<ffffffff8100a196>] ? native_sched_clock+0x26/0x90 [<ffffffff810b2128>] ? trace_hardirqs_off_caller+0x28/0xc0 [<ffffffff810b21cd>] ? trace_hardirqs_off+0xd/0x10 [<ffffffff811861f8>] alloc_pages_current+0xb8/0x180 [<ffffffff8113eaaa>] __get_free_pages+0x2a/0x80 [<ffffffff811934fe>] kmalloc_order_trace+0x3e/0x1a0 [<ffffffff81193955>] __kmalloc+0x2f5/0x3a0 [<ffffffff8104df0c>] ? local_bh_enable_ip+0x7c/0xf0 [<ffffffffa0401ab3>] rds_message_alloc+0x23/0xb0 [rds] [<ffffffffa04043a1>] rds_sendmsg+0x2b1/0x990 [rds] [<ffffffff810b21cd>] ? trace_hardirqs_off+0xd/0x10 [<ffffffff81564620>] sock_sendmsg+0xb0/0xe0 [<ffffffff810b2052>] ? get_lock_stats+0x22/0x70 [<ffffffff810b24be>] ? put_lock_stats.isra.23+0xe/0x40 [<ffffffff81567f30>] sys_sendto+0x130/0x180 [<ffffffff810b872d>] ? trace_hardirqs_on+0xd/0x10 [<ffffffff816c547b>] ? _raw_spin_unlock_irq+0x3b/0x60 [<ffffffff816cd767>] ? sysret_check+0x1b/0x56 [<ffffffff810b8695>] ? trace_hardirqs_on_caller+0x115/0x1a0 [<ffffffff81341d8e>] ? trace_hardirqs_on_thunk+0x3a/0x3f [<ffffffff816cd742>] system_call_fastpath+0x16/0x1b ---[ end trace eed6ae990d018c8b ]--- Reported-by:
Dave Jones <davej@redhat.com> Cc: Dave Jones <davej@redhat.com> Cc: David S. Miller <davem@davemloft.net> Cc: Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com> Signed-off-by:
Cong Wang <amwang@redhat.com> Acked-by:
Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Guillaume Nault authored
[ Upstream commit 8b82547e ] The sendmsg() syscall handler for PPPoL2TP doesn't decrease the socket reference counter after successful transmissions. Any successful sendmsg() call from userspace will then increase the reference counter forever, thus preventing the kernel's session and tunnel data from being freed later on. The problem only happens when writing directly on L2TP sockets. PPP sockets attached to L2TP are unaffected as the PPP subsystem uses pppol2tp_xmit() which symmetrically increase/decrease reference counters. This patch adds the missing call to sock_put() before returning from pppol2tp_sendmsg(). Signed-off-by:
Guillaume Nault <g.nault@alphalink.fr> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
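A hedged, heavily trimmed sketch of the refcount symmetry at stake (fragment of the sendmsg body, transmit work elided): the reference taken while resolving the session from the socket must be released on the success path too, not only on errors.

    sock_hold(sk);                           /* taken (indirectly) when looking up the session */
    /* ... build and transmit the L2TP frame (elided) ... */
    sock_put(sk);                            /* the missing release on successful sendmsg() */
    return total_len;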
-
Guo Chao authored
commit 5370019d upstream. bd_mutex and lo_ctl_mutex can be held in different orders.
Path #1: blkdev_open blkdev_get __blkdev_get (hold bd_mutex) lo_open (hold lo_ctl_mutex)
Path #2: blkdev_ioctl lo_ioctl (hold lo_ctl_mutex) lo_set_capacity (hold bd_mutex)
Lockdep does not report it, because path #2 actually holds a subclass of lo_ctl_mutex. This subclass seems to have crept into the code by mistake. The patch author actually just mentioned it in the changelog, see commit f028f3b2 ("loop: fix circular locking in loop_clr_fd()"), also see: http://marc.info/?l=linux-kernel&m=123806169129727&w=2 Path #2 holds bd_mutex to call bd_set_size(); I've protected it with i_mutex in a previous patch, so drop bd_mutex at this site. Signed-off-by:
Guo Chao <yan@linux.vnet.ibm.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Guo Chao <yan@linux.vnet.ibm.com> Cc: M. Hindess <hindessm@uk.ibm.com> Cc: Nikanth Karthikesan <knikanth@suse.de> Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Jens Axboe <axboe@kernel.dk> Acked-by:
Jeff Mahoney <jeffm@suse.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
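A generic, hypothetical illustration of the inversion (stand-in mutexes, not the loop driver's code): path #1 takes bd_mutex then lo_ctl_mutex, path #2 takes them in the opposite order, which lockdep misses only because path #2 holds a subclass of lo_ctl_mutex.

    static DEFINE_MUTEX(bd_mutex_like);      /* stands in for bdev->bd_mutex   */
    static DEFINE_MUTEX(lo_ctl_mutex_like);  /* stands in for lo->lo_ctl_mutex */

    static void path1_blkdev_open(void)      /* __blkdev_get() -> lo_open() */
    {
            mutex_lock(&bd_mutex_like);
            mutex_lock(&lo_ctl_mutex_like);
            mutex_unlock(&lo_ctl_mutex_like);
            mutex_unlock(&bd_mutex_like);
    }

    static void path2_lo_ioctl(void)         /* lo_ioctl() -> lo_set_capacity() */
    {
            mutex_lock(&lo_ctl_mutex_like);
            mutex_lock(&bd_mutex_like);       /* opposite order: potential AB-BA deadlock */
            mutex_unlock(&bd_mutex_like);
            mutex_unlock(&lo_ctl_mutex_like);
    }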
-
Guo Chao authored
commit d646a02a upstream. blkdev_ioctl(GETBLKSIZE) uses i_size_read() to read size of block device. If we update block size directly, reader may see intermediate result in some machines and configurations. Use i_size_write() instead. Signed-off-by:
Guo Chao <yan@linux.vnet.ibm.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Guo Chao <yan@linux.vnet.ibm.com> Cc: M. Hindess <hindessm@uk.ibm.com> Cc: Nikanth Karthikesan <knikanth@suse.de> Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Jens Axboe <axboe@kernel.dk> Acked-by:
Jeff Mahoney <jeffm@suse.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
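A hedged sketch of the change (the wrapper function is illustrative; serialization is shown with i_mutex, as the companion patch mentioned above uses): i_size_write() pairs with i_size_read() so 32-bit readers never see a torn 64-bit size.

    static void set_blockdev_size(struct block_device *bdev, loff_t size)   /* illustrative name */
    {
            struct inode *inode = bdev->bd_inode;

            mutex_lock(&inode->i_mutex);     /* i_size_write() requires a serialized writer */
            i_size_write(inode, size);       /* instead of assigning inode->i_size directly */
            mutex_unlock(&inode->i_mutex);
    }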
-
Ben Hutchings authored
Commit 3e78080f ('hwmon: (sht15) Check return value of regulator_enable()') depends on the use of devm_kmalloc() for automatic resource cleanup in the failure cases, which was introduced in 3.7. In older stable branches, explicit cleanup is needed. Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-