- 27 Mar, 2013 40 commits
-
Takashi Iwai authored
commit a86b1a2c upstream. The argument passed to snd_hda_attach_beep_device() is a widget NID while spec->beep_amp holds the composed value for amp controls. Signed-off-by: Takashi Iwai <tiwai@suse.de> [bwh: Backported to 3.2: adjust context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Alex Deucher authored
commit fa8d387d upstream. Fixes a segfault on ASICs without a blit callback. Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=62239 Reviewed-by: Michel Dänzer <michel.daenzer@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> [bwh: Backported to 3.2: s/copy\.blit/copy_blit/] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Dmitry Artamonow authored
commit 29f86e66 upstream. The device gets stuck on filesystem writes unless the following quirk is passed: echo 04e8:5136:m > /sys/module/usb_storage/parameters/quirks Add a corresponding entry to unusual_devs.h. Signed-off-by: Dmitry Artamonow <mad_soft@inbox.ru> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Hannes Reinecke authored
commit 00eed9c8 upstream. xhci has its own interrupt enabling routine, which will try to use MSI-X/MSI if present. So the usb core shouldn't try to enable legacy interrupts; on some machines the xhci legacy IRQ setting is invalid. v3: Be careful to not break XHCI_BROKEN_MSI workaround (by trenn) Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: Oliver Neukum <oneukum@suse.de> Cc: Thomas Renninger <trenn@suse.de> Cc: Yinghai Lu <yinghai@kernel.org> Cc: Frederik Himpe <fhimpe@vub.ac.be> Cc: David Haerdeman <david@hardeman.nu> Cc: Alan Stern <stern@rowland.harvard.edu> Acked-by: Sarah Sharp <sarah.a.sharp@linux.intel.com> Reviewed-by: Thomas Renninger <trenn@suse.de> Signed-off-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Steven Rostedt (Red Hat) authored
commit 613f04a0 upstream. The latency tracers require the buffers to be in overwrite mode, otherwise they get screwed up. Force the buffers to stay in overwrite mode when latency tracers are enabled. Added a flag_changed() method to the tracer structure to allow the tracers to see what flags are being changed, and also be able to prevent the change from happening. Signed-off-by: Steven Rostedt <rostedt@goodmis.org> [bwh: Backported to 3.2: - Adjust context - Drop some changes that are not needed because trace_set_options() is not separate from tracing_trace_options_write()] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Steven Rostedt (Red Hat) authored
commit 80902822 upstream. Changing the overwrite mode for the ring buffer via the trace option only sets the normal buffer. But the snapshot buffer could swap with it, and then the snapshot would be in non overwrite mode and the normal buffer would be in overwrite mode, even though the option flag states otherwise. Keep the two buffers overwrite modes in sync. Signed-off-by: Steven Rostedt <rostedt@goodmis.org> [bwh: Backported to 3.2: adjust context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Steven Rostedt (Red Hat) authored
commit 69d34da2 upstream. Seems that the tracer flags have never been protected from concurrent writes. Luckily, admins don't usually modify the tracing flags via two different tasks. But if scripts were to be used to modify them, then they could get corrupted. Move the trace_types_lock that protects against tracers changing to also protect the flags being set. Signed-off-by: Steven Rostedt <rostedt@goodmis.org> [bwh: Backported to 3.2: also move failure return in tracing_trace_options_write() after unlocking] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Steven Rostedt (Red Hat) authored
commit 740466bc upstream. Because function tracing is very invasive, and can even trace calls to rcu_read_lock(), RCU access in function tracing is done with preempt_disable_notrace(). This requires a synchronize_sched() for updates and not a synchronize_rcu(). Function probes (traceon, traceoff, etc) must be freed after a synchronize_sched() after their entries have been removed from the hash. But call_rcu() is used. Fix this by using call_rcu_sched(). Also fix the usage to use hlist_del_rcu() instead of hlist_del(). Cc: Paul McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Kees Cook authored
commit 3118a4f6 upstream. It is possible to wrap the counter used to allocate the buffer for relocation copies. This could lead to heap writing overflows. CVE-2013-0913 v3: collapse test, improve comment v2: move check into validate_exec_list Signed-off-by: Kees Cook <keescook@chromium.org> Reported-by: Pinkie Pie Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
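The general pattern of the fix, per the note above about moving the check into validate_exec_list, is to bound the running total of user-supplied relocation counts against INT_MAX divided by the entry size before that total is used to size a copy buffer. A minimal userspace sketch of that kind of check (illustrative names, not the actual i915 code):

    #include <limits.h>
    #include <stdio.h>

    struct reloc_entry { unsigned int pad[8]; };  /* stand-in for the real 32-byte entry */

    /* Sum per-buffer relocation counts while guarding against wrapping the
     * (signed) running total and the later size calculation.
     * Returns -1 on overflow, otherwise the total. */
    static int total_relocs(const unsigned int *counts, int n)
    {
        const int max = INT_MAX / (int)sizeof(struct reloc_entry);
        int total = 0;

        for (int i = 0; i < n; i++) {
            if (counts[i] > (unsigned int)(max - total))
                return -1;              /* would overflow the buffer size */
            total += (int)counts[i];
        }
        return total;
    }

    int main(void)
    {
        unsigned int ok[]  = { 16, 32, 8 };
        unsigned int bad[] = { UINT_MAX / 2, UINT_MAX / 2, UINT_MAX / 2 };

        printf("ok:  %d\n", total_relocs(ok, 3));   /* 56 */
        printf("bad: %d\n", total_relocs(bad, 3));  /* -1 */
        return 0;
    }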
-
Kees Cook authored
commit 2563a452 upstream. Masks kernel address info-leak in object dumps with the %pK suffix, so they cannot be used to target kernel memory corruption attacks if the kptr_restrict sysctl is set. Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> [bwh: Backported to 3.2: the rest of the format string is different] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Mateusz Guzik authored
commit 24261fc2 upstream. cifsFileInfo objects hold references to dentries and it is possible that these will still be around in workqueues when VFS decides to kill super block during unmount. This results in panics like this one:

BUG: Dentry ffff88001f5e76c0{i=66b4a,n=1M-2} still in use (1) [unmount of cifs cifs]
------------[ cut here ]------------
kernel BUG at fs/dcache.c:943!
[..]
Process umount (pid: 1781, threadinfo ffff88003d6e8000, task ffff880035eeaec0)
[..]
Call Trace:
 [<ffffffff811b44f3>] shrink_dcache_for_umount+0x33/0x60
 [<ffffffff8119f7fc>] generic_shutdown_super+0x2c/0xe0
 [<ffffffff8119f946>] kill_anon_super+0x16/0x30
 [<ffffffffa036623a>] cifs_kill_sb+0x1a/0x30 [cifs]
 [<ffffffff8119fcc7>] deactivate_locked_super+0x57/0x80
 [<ffffffff811a085e>] deactivate_super+0x4e/0x70
 [<ffffffff811bb417>] mntput_no_expire+0xd7/0x130
 [<ffffffff811bc30c>] sys_umount+0x9c/0x3c0
 [<ffffffff81657c19>] system_call_fastpath+0x16/0x1b

Fix this by making each cifsFileInfo object hold a reference to cifs super block, which implicitly keeps VFS super block around as well. Signed-off-by: Mateusz Guzik <mguzik@redhat.com> Reviewed-by: Jeff Layton <jlayton@redhat.com> Reported-and-Tested-by: Ben Greear <greearb@candelatech.com> Signed-off-by: Steve French <sfrench@us.ibm.com> [bwh: Backported to 3.2: adjust context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Steven Rostedt (Red Hat) authored
commit 2721e72d upstream. Although the swap is wrapped with a spin_lock, the assignment of the temp buffer used to swap is not within that lock. It needs to be moved into that lock, otherwise two swaps happening on two different CPUs, can end up using the wrong temp buffer to assign in the swap. Luckily, all current callers of the swap function appear to have their own locks. But in case something is added that allows two different callers to call the swap, then there's a chance that this race can trigger and corrupt the buffers. New code is coming soon that will allow for this race to trigger. I've Cc'd stable, so this bug will not show up if someone backports one of the changes that can trigger this bug. Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
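An illustrative userspace analogue of the race (a sketch, assuming nothing about the ftrace internals): if the temporary used for a pointer swap is shared and read outside the lock, two concurrent swappers can clobber each other; taking the temporary inside the locked region fixes it.

    #include <pthread.h>
    #include <stdio.h>

    static int buf_a = 1, buf_b = 2;
    static int *cur = &buf_a, *spare = &buf_b;
    static pthread_mutex_t swap_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Fixed version: the temporary is taken while the lock is held, so two
     * concurrent swaps cannot observe a stale value of 'cur'. */
    static void swap_buffers(void)
    {
        pthread_mutex_lock(&swap_lock);
        int *tmp = cur;          /* must happen inside the critical section */
        cur = spare;
        spare = tmp;
        pthread_mutex_unlock(&swap_lock);
    }

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++)
            swap_buffers();
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* With the temporary inside the lock, cur and spare always remain the
         * two distinct buffers. */
        printf("distinct=%s\n", cur != spare ? "yes" : "no");
        return 0;
    }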
-
Theodore Ts'o authored
commit 90ba983f upstream. A user who was using an 8TB+ file system and with a very large flexbg size (> 65536) could cause the atomic_t used in the struct flex_groups to overflow. This was detected by the PaX security patchset: http://forums.grsecurity.net/viewtopic.php?f=3&t=3289&p=12551#p12551 This bug was introduced in commit 9f24e420, so it's been around since 2.6.30. :-( Fix this by using an atomic64_t for struct orlov_stats's free_clusters. Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Reviewed-by: Lukas Czerner <lczerner@redhat.com> [bwh: Backported to 3.2: adjust context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
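A worked illustration of why the counter width matters (the group and free-cluster numbers below are assumed for the example, not taken from the report): a 32-bit atomic counter cannot hold the aggregate free-cluster count of a very large flex group, while a 64-bit one can.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical layout: a flex group spanning 131072 block groups,
         * each contributing 32768 free clusters. */
        const int64_t groups_per_flex = 131072;
        const int64_t free_per_group  = 32768;

        int64_t total = groups_per_flex * free_per_group;   /* 2^32 */

        printf("total free clusters       : %lld\n", (long long)total);
        printf("fits in a 32-bit atomic_t?: %s\n",
               total <= INT32_MAX ? "yes" : "no");           /* no */
        printf("truncated 32-bit value    : %u\n",
               (unsigned int)total);                         /* wraps to 0 */
        return 0;
    }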
-
Lukas Czerner authored
commit 810da240 upstream. We're using the macro EXT4_B2C() to convert a number of blocks to a number of clusters for bigalloc file systems. However, we should be using EXT4_NUM_B2C(). Signed-off-by: Lukas Czerner <lczerner@redhat.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> [bwh: Backported to 3.2: - Adjust context - Drop changes in ext4_setup_new_descs() and ext4_calculate_overhead()] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
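The distinction, sketched with simplified stand-ins for the two macros (the real ones take the filesystem's sb_info and derive the cluster ratio from the superblock): EXT4_B2C() maps a block number to a cluster number by shifting, whereas EXT4_NUM_B2C() converts a count of blocks to a count of clusters, rounding up.

    #include <stdio.h>

    /* Simplified stand-ins, assuming 16 blocks per cluster (cluster_bits = 4). */
    #define CLUSTER_BITS   4
    #define CLUSTER_RATIO  (1u << CLUSTER_BITS)

    #define B2C(blk)       ((blk) >> CLUSTER_BITS)                        /* block number -> cluster number */
    #define NUM_B2C(blks)  (((blks) + CLUSTER_RATIO - 1) >> CLUSTER_BITS) /* block count  -> cluster count  */

    int main(void)
    {
        unsigned int nblocks = 17;   /* a count of blocks, not a block number */

        printf("B2C(17)     = %u (wrong for a count: rounds down)\n", B2C(nblocks));
        printf("NUM_B2C(17) = %u (right: 17 blocks span 2 clusters)\n", NUM_B2C(nblocks));
        return 0;
    }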
-
Jan Kara authored
commit ad56edad upstream. jbd2_journal_dirty_metadata() didn't get a reference to the journal_head it was working with. This is OK in most of the cases since the journal head should be attached to a transaction, but in rare occasions when we are journalling data, __ext4_journalled_writepage() can race with jbd2_journal_invalidatepage() stripping buffers from a page, and thus the journal head can be freed from under jbd2_journal_dirty_metadata(). Fix the problem by getting our own journal head reference in jbd2_journal_dirty_metadata() (and also in jbd2_journal_set_triggers(), which can possibly have the same issue). Reported-by: Zheng Liu <gnehzuil.liu@gmail.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Zheng Liu authored
commit 3a225670 upstream. This commit fixes a wrong return value of the number of the allocated blocks in ext4_split_extent. When the length of blocks we want to allocate is greater than the length of the current extent, we return a wrong number. Let's see what happens in the following case when we call ext4_split_extent():

  map: [48, 72]
  ex:  [32, 64, u]

'ex' will be split into two parts:

  ex1: [32, 47, u]
  ex2: [48, 64, w]

'map->m_len' is returned from this function, and the value is 24. But the real length is 16. So it should be fixed. Meanwhile in this commit we use the right length of the allocated blocks when get_reserved_cluster_alloc in ext4_ext_handle_uninitialized_extents is called. Signed-off-by: Zheng Liu <wenqing.lz@taobao.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Cc: Dmitry Monakhov <dmonakhov@openvz.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Ben Hutchings authored
[ Upstream commit fae8563b ] Using TX push when notifying the NIC of multiple new descriptors in the ring will very occasionally cause the TX DMA engine to re-use an old descriptor. This can result in a duplicated or partly duplicated packet (new headers with old data), or an IOMMU page fault. This does not happen when the pushed descriptor is the only one written. TX push also provides little latency benefit when a packet requires more than one descriptor. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Ben Hutchings authored
[ Upstream commit 35205b21 ] efx_device_detach_sync() locks all TX queues before marking the device detached and thus disabling further TX scheduling. But it can still be interrupted by TX completions which then result in TX scheduling in soft interrupt context. This will deadlock when it tries to acquire a TX queue lock that efx_device_detach_sync() already acquired. To avoid deadlock, we must use netif_tx_{,un}lock_bh(). Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Ben Hutchings authored
[ Upstream commit 29c69a48 ] We must only ever stop TX queues when they are full or the net device is not 'ready' so far as the net core, and specifically the watchdog, is concerned. Otherwise, the watchdog may fire *immediately* if no packets have been added to the queue in the last 5 seconds. The device is ready if all the following are true:

(a) It has a qdisc
(b) It is marked present
(c) It is running
(d) The link is reported up

(a) and (c) are normally true, and must not be changed by a driver. (d) is under our control, but fake link changes may disturb userland. This leaves (b). We already mark the device absent during reset and self-test, but we need to do the same during MTU changes and ring reallocation. We don't need to do this when the device is brought down because then (c) is already false. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> [bwh: Backported to 3.2: adjust context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Ben Hutchings authored
[ Upstream commits 06e63c57, b590ace0 and c73e787a ] We assume that the mapping between DMA and virtual addresses is done on whole pages, so we can find the page offset of an RX buffer using the lower bits of the DMA address. However, swiotlb maps in units of 2K, breaking this assumption. Add an explicit page_offset field to struct efx_rx_buffer. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> [bwh: Backported to 3.2: adjust context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Ben Hutchings authored
[ Upstream commit 3a68f19d ] We may currently allocate two RX DMA buffers to a page, and only unmap the page when the second is completed. We do not sync the first RX buffer to be completed; this can result in packet loss or corruption if the last RX buffer completed in a NAPI poll is the first in a page and is not DMA-coherent. (In the middle of a NAPI poll, we will handle the following RX completion and unmap the page *before* looking at the content of the first buffer.) Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> [bwh: Backported to 3.2: adjust context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Ben Hutchings authored
[ Upstream commit ebf98e79 ] efx_mcdi_poll() uses get_seconds() to read the current time and to implement a polling timeout. The use of this function was chosen partly because it could easily be replaced in a co-sim environment with a macro that read the simulated time. Unfortunately the real get_seconds() returns the system time (real time) which is subject to adjustment by e.g. ntpd. If the system time is adjusted forward during a polled MCDI operation, the effective timeout can be shorter than the intended 10 seconds, resulting in a spurious failure. It is also possible for a backward adjustment to delay detection of a real failure. Use jiffies instead, and change MCDI_RPC_TIMEOUT to be denominated in jiffies. Also correct rounding of the timeout: check time > finish (or rather time_after(time, finish)) and not time >= finish. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> [bwh: Backported to 3.2: adjust context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
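The same principle in a userspace analogue (a sketch, not the sfc code): time a polling loop against a monotonic clock rather than wall-clock time, so adjustments by ntpd or settimeofday() cannot shorten or lengthen the timeout.

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    /* Poll 'ready' until it returns true or roughly 10 seconds have elapsed on
     * the monotonic clock, which is immune to wall-clock adjustments. */
    static bool poll_with_timeout(bool (*ready)(void))
    {
        struct timespec start, now;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (;;) {
            if (ready())
                return true;
            clock_gettime(CLOCK_MONOTONIC, &now);
            /* Strictly "after", mirroring time_after(), not ">=". */
            if (now.tv_sec - start.tv_sec > 10)
                return false;
            usleep(1000);
        }
    }

    static bool never_ready(void) { return false; }

    int main(void)
    {
        printf("result: %d\n", poll_with_timeout(never_ready));  /* 0 after ~10s */
        return 0;
    }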
-
Daniel Pieczko authored
[ Upstream commit c2f3b8e3 ] The assertion of netif_device_present() at the top of efx_hard_start_xmit() may fail if we don't do this. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> [bwh: Backported to 3.2: adjust context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Ben Hutchings authored
[ Upstream commits a606f432, d5e8cc6c, 525d9e82 ] The TX DMA engine issues upstream read requests when there is room in the TX FIFO for the completion. However, the fetches for the rest of the packet might be delayed by any back pressure. Since a flush must wait for an EOP, the entire flush may be delayed by back pressure. Mitigate this by disabling flow control before the flushes are started. Since PF and VF flushes run in parallel, introduce fc_disable, a reference count of the number of flushes outstanding. The same principle could be applied to Falcon, but that would bring with it its own testing. We sometimes hit a "failed to flush" timeout on some TX queues, but the flushes have completed and the flush completion events seem to go missing. In this case, we can check the TX_DESC_PTR_TBL register and drain the queues if the flushes had finished. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> [bwh: Backported to 3.2: - Call efx_nic_type::finish_flush() on both success and failure paths - Check the TX_DESC_PTR_TBL registers in the polling loop - Declare efx_mcdi_set_mac() extern] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Ben Hutchings authored
[ Upstream commit bfeed902 ] On big-endian systems the MTD partition names currently have mangled subtype numbers and are not recognised by the firmware update tool (sfupdate). Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> [bwh: Backported to 3.2: use old macros for length of firmware subtype array] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Stuart Hodgson authored
[ Upstream commit 3dca9d2d ] efx_nic_fatal_interrupt() disables DMA before scheduling a reset. After this, we need not and *cannot* flush queues. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Hannes Frederic Sowa authored
[ Upstream commit 5a3da1fe ] This patch introduces a constant limit on the fragment queue hash table bucket list lengths. Currently the limit of 128 is chosen somewhat arbitrarily and just ensures that we can fill up the fragment cache with empty packets up to the default ip_frag_high_thresh limits. It should just protect against list iteration eating considerable amounts of CPU time. If we reach the maximum length in one hash bucket a warning is printed. This is implemented on the caller side of inet_frag_find() to distinguish between the different users of inet_fragment.c. I dropped the out-of-memory warning in the ipv4 fragment lookup path, because we already get a warning from the slab allocator. Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Jesper Dangaard Brouer <jbrouer@redhat.com> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
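A minimal sketch of the technique (illustrative userspace code, not the inet_fragment.c implementation): cap how many entries a single hash bucket may hold, so one overloaded chain cannot consume unbounded CPU during lookups.

    #include <stdio.h>
    #include <stdlib.h>

    #define HASH_BUCKETS   64
    #define MAX_CHAIN_LEN  128   /* analogous to the patch's per-bucket limit */

    struct entry {
        struct entry *next;
        unsigned int key;
    };

    static struct entry *table[HASH_BUCKETS];

    /* Insert 'key', refusing once the bucket's chain has reached the limit,
     * instead of letting one list grow (and cost) without bound. */
    static int insert(unsigned int key)
    {
        unsigned int b = key % HASH_BUCKETS;
        unsigned int depth = 0;

        for (struct entry *e = table[b]; e; e = e->next)
            if (++depth >= MAX_CHAIN_LEN)
                return -1;   /* caller may log a rate-limited warning */

        struct entry *e = malloc(sizeof(*e));
        if (!e)
            return -1;
        e->key = key;
        e->next = table[b];
        table[b] = e;
        return 0;
    }

    int main(void)
    {
        int dropped = 0;

        for (unsigned int i = 0; i < 20000; i++)
            if (insert(i) < 0)
                dropped++;

        printf("accepted %d, dropped %d past the chain limit\n",
               20000 - dropped, dropped);
        return 0;
    }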
-
Vlad Yasevich authored
[ Upstream commit a5b8db91 ] Range/validity checks on rta_type in rtnetlink_rcv_msg() do not account for flags that may be set. This causes the function to return -EINVAL when flags are set on the type (for example NLA_F_NESTED). Signed-off-by: Vlad Yasevich <vyasevic@redhat.com> Acked-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
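The technique in a small userspace sketch: when validating a netlink attribute type, mask off the flag bits (NLA_F_NESTED, NLA_F_NET_BYTEORDER) with NLA_TYPE_MASK from linux/netlink.h before range-checking, rather than comparing the raw nla_type field. MY_ATTR_MAX is a hypothetical limit for the example.

    #include <linux/netlink.h>   /* struct nlattr, NLA_F_NESTED, NLA_TYPE_MASK */
    #include <stdbool.h>
    #include <stdio.h>

    #define MY_ATTR_MAX 7   /* hypothetical highest attribute type we accept */

    /* Validate an attribute type the way the fix does: strip the flag bits
     * before the range check, so e.g. a nested attribute is not rejected. */
    static bool attr_type_ok(const struct nlattr *nla)
    {
        unsigned int type = nla->nla_type & NLA_TYPE_MASK;

        return type > 0 && type <= MY_ATTR_MAX;
    }

    int main(void)
    {
        struct nlattr nested = {
            .nla_len  = NLA_HDRLEN,
            .nla_type = NLA_F_NESTED | 3,   /* type 3 with the nested flag set */
        };

        /* A raw comparison sees 0x8003 and rejects it; the masked check accepts. */
        printf("raw  <= MAX ? %s\n", nested.nla_type <= MY_ATTR_MAX ? "yes" : "no");
        printf("masked ok   ? %s\n", attr_type_ok(&nested) ? "yes" : "no");
        return 0;
    }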
-
Eric Dumazet authored
[ Upstream commit 16fad69c ] The Chrome OS team reported a crash on a Pixel ChromeBook in the TCP stack: https://code.google.com/p/chromium/issues/detail?id=182056 commit a21d4572 (tcp: avoid order-1 allocations on wifi and tx path) made a poor choice in adding an 'avail_size' field to the skb, while what we really needed was a 'reserved_tailroom' one. It would have avoided commit 22b4a4f2 (tcp: fix retransmit of partially acked frames) and this commit. The crash occurs because skb_split() is not aware of the 'avail_size' management (and should not be aware). Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: Mukesh Agrawal <quiche@chromium.org> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Denis V. Lunev authored
[ Upstream commit 5b9e12db ] A long time ago, commit 93456b6d ("[IPV4]: Unify access to the routing tables.", Denis V. Lunev, 10 Jan 2008) gave the definition of the FIB table hash size the wrong dependency: it should depend upon CONFIG_IP_MULTIPLE_TABLES (as in the original code), but it was made to depend on CONFIG_IP_ROUTE_MULTIPATH. This patch returns the situation to the original state. The problem was spotted by Tingwei Liu. Signed-off-by: Denis V. Lunev <den@openvz.org> CC: Tingwei Liu <tingw.liu@gmail.com> CC: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Xufeng Zhang authored
[ Upstream commit 2317f449 ] The sctp_assoc_lookup_tsn() function searches for the transport a certain TSN was sent on; if it is not found in the active_path transport, it then searches all the other transports in the peer's transport_addr_list. However, we should continue to the next entry rather than break the loop when we meet the active_path transport. Signed-off-by: Xufeng Zhang <xufeng.zhang@windriver.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Acked-by: Vlad Yasevich <vyasevich@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Vlad Yasevich authored
[ Upstream commit f2815633 ] When SCTP is done processing a duplicate cookie chunk, it tries to delete a newly created association. For that, it has to set the right association for the side-effect processing to work. However, when it uses the SCTP_CMD_NEW_ASOC command, that performs more work than really needed (like hashing the association and assigning it an id) and there is no point in doing that only to delete the association as a next step. In fact, it also creates an impossible condition where an association may be found by the getsockopt() call, and that association is empty. This causes a crash in some sctp getsockopts. The solution is rather simple. We simply use the SCTP_CMD_SET_ASOC command that doesn't have all the overhead and does exactly what we need. Reported-by: Karl Heiss <kheiss@gmail.com> Tested-by: Karl Heiss <kheiss@gmail.com> CC: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: Vlad Yasevich <vyasevich@gmail.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Veaceslav Falico authored
[ Upstream commit 876254ae ] bond_update_speed_duplex() might sleep while calling underlying slave's routines. Move it out of atomic context in bond_enslave() and remove it from bond_miimon_commit() - it was introduced by commit 546add79, however when the slave interfaces go up/change state it's their responsibility to fire NETDEV_UP/NETDEV_CHANGE events so that bonding can properly update their speed. I've tested it on all combinations of ifup/ifdown, autoneg/speed/duplex changes, remote-controlled and local, on (not) MII-based cards. All changes are visible. Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Veaceslav Falico authored
[ Upstream commit 3f315bef ] __netpoll_cleanup() is called in netconsole_netdev_event() while holding a spinlock. Release/acquire the spinlock before/after it and restart the loop. Also, disable the netconsole completely, because we won't have a chance after the restart of the loop, and might end up in a situation where nt->enabled == 1 and nt->np.dev == NULL. Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
David Ward authored
[ Upstream commit 4660c7f4 ] This is needed in order to detect if the timestamp option appears more than once in a packet, to remove the option if the packet is fragmented, etc. My previous change neglected to store the option location when the router addresses were prespecified and Pointer > Length. But now the option location is also stored when Flag is an unrecognized value, to ensure these option handling behaviors are still performed. Signed-off-by: David Ward <david.ward@ll.mit.edu> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Tkhai Kirill authored
[ Upstream commit cb29529e ] If a machine has X (X < 4) sunsu ports and the cmdline option "console=ttySY" is passed, where X < Y <= 4, then the following panic happens:

Unable to handle kernel NULL pointer dereference
TPC: <sunsu_console_setup+0x78/0xe0>
RPC: <sunsu_console_setup+0x74/0xe0>
I7: <register_console+0x378/0x3e0>
Call Trace:
 [0000000000453a38] register_console+0x378/0x3e0
 [0000000000576fa0] uart_add_one_port+0x2e0/0x340
 [000000000057af40] su_probe+0x160/0x2e0
 [00000000005b8a4c] platform_drv_probe+0xc/0x20
 [00000000005b6c2c] driver_probe_device+0x12c/0x220
 [00000000005b6da8] __driver_attach+0x88/0xa0
 [00000000005b4df4] bus_for_each_dev+0x54/0xa0
 [00000000005b5a54] bus_add_driver+0x154/0x260
 [00000000005b7190] driver_register+0x50/0x180
 [00000000006d250c] sunsu_init+0x18c/0x1e0
 [00000000006c2668] do_one_initcall+0xe8/0x160
 [00000000006c282c] kernel_init_freeable+0x12c/0x1e0
 [0000000000603764] kernel_init+0x4/0x100
 [0000000000405f64] ret_from_syscall+0x1c/0x2c
 [0000000000000000] (null)

1) Fix the panic;
2) Increment the registered port number on every successful probe.

Signed-off-by: Kirill Tkhai <tkhai@yandex.ru> CC: David Miller <davem@davemloft.net> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Mathias Krause authored
commit fe685aab upstream. For type 1 the parent_offset member in struct isofs_fid gets copied uninitialized to userland. Fix this by initializing it to 0. Signed-off-by: Mathias Krause <minipli@googlemail.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Mathias Krause authored
commit 0143fc5e upstream. For type 0x51 the udf.parent_partref member in struct fid gets copied uninitialized to userland. Fix this by initializing it to 0. Signed-off-by: Mathias Krause <minipli@googlemail.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Tomas Hozza authored
commit 95a69ada upstream. The source code without this patch caused hypervkvpd to exit when it processed a spoofed Netlink packet which has been sent from an untrusted local user. Now Netlink messages with a non-zero nl_pid source address are ignored and a warning is printed into the syslog. Signed-off-by: Tomas Hozza <thozza@redhat.com> Acked-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
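A sketch of the userspace-side guard (hypothetical variable and function names, not the actual hv_kvp_daemon code): after recvfrom() on the netlink socket, discard any message whose source address has a non-zero nl_pid, since kernel-originated netlink messages carry nl_pid == 0 and no ordinary process can bind to that port id.

    #include <linux/netlink.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <syslog.h>

    /* Receive one datagram from a bound netlink socket 'fd' and return its
     * length, or -1 if it did not come from the kernel. */
    static ssize_t recv_from_kernel(int fd, void *buf, size_t len, int flags)
    {
        struct sockaddr_nl addr;
        socklen_t addr_len = sizeof(addr);
        ssize_t n;

        memset(&addr, 0, sizeof(addr));
        n = recvfrom(fd, buf, len, flags, (struct sockaddr *)&addr, &addr_len);
        if (n < 0)
            return -1;

        if (addr.nl_pid != 0) {
            /* Spoofed message from another (unprivileged) process. */
            syslog(LOG_WARNING, "ignoring netlink message from pid %u",
                   addr.nl_pid);
            return -1;
        }
        return n;
    }

    int main(void)
    {
        char buf[4096];
        struct sockaddr_nl local = { .nl_family = AF_NETLINK };
        int fd = socket(AF_NETLINK, SOCK_DGRAM, NETLINK_ROUTE);

        if (fd < 0 || bind(fd, (struct sockaddr *)&local, sizeof(local)) < 0)
            return 1;
        /* Nothing has been requested on this socket, so with MSG_DONTWAIT this
         * simply returns -1; the point is the nl_pid check above. */
        printf("got %zd bytes\n", recv_from_kernel(fd, buf, sizeof(buf), MSG_DONTWAIT));
        return 0;
    }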
-
Dan Carpenter authored
commit 4502403d upstream. The call tree here is:

sk_clone_lock()                 <- takes bh_lock_sock(newsk);
  xfrm_sk_clone_policy()
    __xfrm_sk_clone_policy()
      clone_policy()            <- uses GFP_ATOMIC for allocations
        security_xfrm_policy_clone()
          security_ops->xfrm_policy_clone_security()
            selinux_xfrm_policy_clone()

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: James Morris <james.l.morris@oracle.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-