- 14 Sep, 2012 40 commits
-
-
Greg Kroah-Hartman authored
commit b9c4167c upstream. This structure needs to always stick around, even if CONFIG_HOTPLUG is disabled, otherwise we can oops when trying to probe a device that was added after the structure is thrown away. Thanks to Fengguang Wu and Bjørn Mork for tracking this issue down.
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Reported-by: Bjørn Mork <bjorn@mork.no>
CC: Christian Lamparter <chunkeey@googlemail.com>
CC: "John W. Linville" <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
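The usual culprit in this class of fix is a device ID table annotated with the old __devinit*/__devinitdata markers, whose section is discarded after init when CONFIG_HOTPLUG=n, so a later probe walks freed memory. A minimal sketch of the pattern, assuming a hypothetical USB driver (names and IDs are illustrative, not taken from the patch):

    /* Hypothetical driver snippet illustrating the fix pattern. */
    #include <linux/module.h>
    #include <linux/usb.h>

    /* Before: with CONFIG_HOTPLUG=n this table was thrown away after init,
     * so probing a hot-added device dereferenced freed memory:
     *
     *   static struct usb_device_id example_ids[] __devinitdata = { ... };
     *
     * After: keep the table around unconditionally. */
    static const struct usb_device_id example_ids[] = {
        { USB_DEVICE(0x1234, 0x5678) },  /* placeholder vendor/product IDs */
        { }
    };
    MODULE_DEVICE_TABLE(usb, example_ids);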
-
Greg Kroah-Hartman authored
commit ec063351 upstream. This structure needs to always stick around, even if CONFIG_HOTPLUG is disabled, otherwise we can oops when trying to probe a device that was added after the structure is thrown away. Thanks to Fengguang Wu and Bjørn Mork for tracking this issue down.
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Reported-by: Bjørn Mork <bjorn@mork.no>
CC: Hans de Goede <hdegoede@redhat.com>
CC: Mauro Carvalho Chehab <mchehab@infradead.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Greg Kroah-Hartman authored
commit e694d518 upstream. This structure needs to always stick around, even if CONFIG_HOTPLUG is disabled, otherwise we can oops when trying to probe a device that was added after the structure is thrown away. Thanks to Fengguang Wu and Bjørn Mork for tracking this issue down.
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Reported-by: Bjørn Mork <bjorn@mork.no>
CC: Hans de Goede <hdegoede@redhat.com>
CC: Mauro Carvalho Chehab <mchehab@infradead.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mike Frysinger authored
commit 515c7af8 upstream. Some of the arguments to {g,s}etsockopt are passed in userland pointers. If we try to use the 64bit entry point, we end up sometimes failing. For example, dhcpcd doesn't run in x32:

    # dhcpcd eth0
    dhcpcd[1979]: version 5.5.6 starting
    dhcpcd[1979]: eth0: broadcasting for a lease
    dhcpcd[1979]: eth0: open_socket: Invalid argument
    dhcpcd[1979]: eth0: send_raw_packet: Bad file descriptor

The code in particular is getting back EINVAL when doing:

    struct sock_fprog pf;
    setsockopt(s, SOL_SOCKET, SO_ATTACH_FILTER, &pf, sizeof(pf));

Diving into the kernel code, we can see:

include/linux/filter.h:
    struct sock_fprog {
        unsigned short len;
        struct sock_filter __user *filter;
    };

net/core/sock.c:
    case SO_ATTACH_FILTER:
        ret = -EINVAL;
        if (optlen == sizeof(struct sock_fprog)) {
            struct sock_fprog fprog;

            ret = -EFAULT;
            if (copy_from_user(&fprog, optval, sizeof(fprog)))
                break;

            ret = sk_attach_filter(&fprog, sk);
        }
        break;

arch/x86/syscalls/syscall_64.tbl:
    54 common setsockopt sys_setsockopt
    55 common getsockopt sys_getsockopt

So for x64, sizeof(sock_fprog) is 16 bytes. For x86/x32, it's 8 bytes. This comes down to the pointer being 32bit for x32, which means we need to do structure size translation. But since x32 comes in directly to sys_setsockopt, it doesn't get translated like x86. After changing the syscall table and rebuilding glibc with the new kernel headers, dhcp runs fine in an x32 userland. Oddly, it seems like Linus noted the same thing during the initial port, but I guess that was missed/lost along the way: https://lkml.org/lkml/2011/8/26/452

[ hpa: tagging for -stable since this is an ABI fix. ]

Bugzilla: https://bugs.gentoo.org/423649
Reported-by: Mads <mads@ab3.no>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Link: http://lkml.kernel.org/r/1345320697-15713-1-git-send-email-vapier@gentoo.org
Cc: H. J. Lu <hjl.tools@gmail.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
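A quick way to see the ABI mismatch the message describes is to print the structure size from userspace: with 64-bit pointers the embedded filter pointer pads the struct to 16 bytes, while a 32-bit-pointer ABI gets 8 bytes. A small illustrative program, not part of the patch:

    #include <stdio.h>
    #include <linux/filter.h>  /* struct sock_fprog: { unsigned short len; struct sock_filter *filter; } */

    int main(void)
    {
        /* Prints 16 when built for x86-64 and 8 when built with -mx32 or -m32,
         * which is why the 64-bit setsockopt path rejects the x32 layout with
         * EINVAL instead of translating it. */
        printf("sizeof(struct sock_fprog) = %zu\n", sizeof(struct sock_fprog));
        return 0;
    }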
-
Aaro Koskinen authored
commit 908d6d52 upstream. It seems commit 2098e95c (regulator: twl: adapt twl-regulator driver to dt) accidentally deleted VINTANA1. Also the same commit defines VINTANA2 twice with TWL4030_ADJUSTABLE_LDO and TWL4030_FIXED_LDO. This patch changes the fixed one to be VINTANA1. I noticed this when auditing my N900 boot logs. I could not notice any change in device behaviour, though, except that the boot logs are now like before:

    ...
    [ 0.282928] VDAC: 1800 mV normal standby
    [ 0.284027] VCSI: 1800 mV normal standby
    [ 0.285400] VINTANA1: 1500 mV normal standby
    [ 0.286865] VINTANA2: 2750 mV normal standby
    [ 0.288208] VINTDIG: 1500 mV normal standby
    [ 0.289978] VSDI_CSI: 1800 mV normal standby
    ...

Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexandre Bounine authored
commit 9a9a9a7a upstream. Fix unused variable compiler warning when built with CONFIG_RAPIDIO_DEBUG option off. This patch is applicable to kernel versions starting from v3.2.
Signed-off-by: Alexandre Bounine <alexandre.bounine@idt.com>
Cc: Matt Porter <mporter@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexandre Bounine authored
commit 3670e7e1 upstream. Make sure that there are no doorbell messages left behind due to disabled interrupts during inbound doorbell processing. The most common case for this bug is loss of rionet JOIN messages in systems with three or more rionet participants and MSI or MSI-X enabled. As a result, requests for packet transfers may finish with a "destination unreachable" error message. This patch is applicable to kernel versions starting from v3.2.
Signed-off-by: Alexandre Bounine <alexandre.bounine@idt.com>
Cc: Matt Porter <mporter@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jayakrishnan Memana authored
commit 8a3f0ede upstream. Buffers marked as erroneous are recycled immediately by the driver if the nodrop module parameter isn't set. The buffer payload size is reset to 0, but the buffer bytesused field isn't. This results in the buffer being immediately considered as complete, leading to an infinite loop in interrupt context. Fix the problem by resetting the bytesused field when recycling the buffer.
Signed-off-by: Jayakrishnan Memana <jayakrishnan.memana@maxim-ic.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
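A minimal sketch of the fix pattern, assuming a recycle path shaped roughly like uvcvideo's (function and field names are abridged for illustration, not copied from the patch):

    /* Sketch: when requeuing an erroneous buffer instead of handing it to
     * userspace, clear the accounting so it is not seen as already complete. */
    static void example_recycle_buffer(struct uvc_video_queue *queue,
                                       struct uvc_buffer *buf)
    {
        buf->error = 0;
        buf->state = UVC_BUF_STATE_QUEUED;
        buf->bytesused = 0;                     /* the missing reset */
        vb2_set_plane_payload(&buf->buf, 0, 0); /* payload size back to 0 */
    }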
-
Stanislaw Gruszka authored
commit bea6832c upstream. On architectures where cputime_t is a 64 bit type, it is possible to trigger a divide by zero on the do_div(temp, (__force u32) total) line, if total is a non-zero number but has the lower 32 bits zeroed. Removing the casting is not a good solution since some do_div() implementations do cast to u32 internally. This problem can be triggered in practice on very long lived processes:

    PID: 2331 TASK: ffff880472814b00 CPU: 2 COMMAND: "oraagent.bin"
    #0 [ffff880472a51b70] machine_kexec at ffffffff8103214b
    #1 [ffff880472a51bd0] crash_kexec at ffffffff810b91c2
    #2 [ffff880472a51ca0] oops_end at ffffffff814f0b00
    #3 [ffff880472a51cd0] die at ffffffff8100f26b
    #4 [ffff880472a51d00] do_trap at ffffffff814f03f4
    #5 [ffff880472a51d60] do_divide_error at ffffffff8100cfff
    #6 [ffff880472a51e00] divide_error at ffffffff8100be7b
       [exception RIP: thread_group_times+0x56]
       RIP: ffffffff81056a16 RSP: ffff880472a51eb8 RFLAGS: 00010046
       RAX: bc3572c9fe12d194 RBX: ffff880874150800 RCX: 0000000110266fad
       RDX: 0000000000000000 RSI: ffff880472a51eb8 RDI: 001038ae7d9633dc
       RBP: ffff880472a51ef8 R8: 00000000b10a3a64 R9: ffff880874150800
       R10: 00007fcba27ab680 R11: 0000000000000202 R12: ffff880472a51f08
       R13: ffff880472a51f10 R14: 0000000000000000 R15: 0000000000000007
       ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
    #7 [ffff880472a51f00] do_sys_times at ffffffff8108845d
    #8 [ffff880472a51f40] sys_times at ffffffff81088524
    #9 [ffff880472a51f80] system_call_fastpath at ffffffff8100b0f2
       RIP: 0000003808caac3a RSP: 00007fcba27ab6d8 RFLAGS: 00000202
       RAX: 0000000000000064 RBX: ffffffff8100b0f2 RCX: 0000000000000000
       RDX: 00007fcba27ab6e0 RSI: 000000000076d58e RDI: 00007fcba27ab6e0
       RBP: 00007fcba27ab700 R8: 0000000000000020 R9: 000000000000091b
       R10: 00007fcba27ab680 R11: 0000000000000202 R12: 00007fff9ca41940
       R13: 0000000000000000 R14: 00007fcba27ac9c0 R15: 00007fff9ca41940
       ORIG_RAX: 0000000000000064 CS: 0033 SS: 002b

Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120808092714.GA3580@redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
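The shape of the fix is to do the scaling in full 64-bit arithmetic instead of a do_div() whose divisor gets truncated to u32. A hedged sketch, assuming a helper along these lines (name and exact placement are not verified against the commit; relies on div_u64()/div64_u64() from linux/math64.h):

    /* Sketch: scale utime by rtime/total without truncating the divisor. */
    static cputime_t scale_utime(cputime_t utime, cputime_t rtime, cputime_t total)
    {
        u64 temp = (__force u64) rtime;

        temp *= (__force u64) utime;

        if (sizeof(cputime_t) == 4)
            /* 32-bit cputime_t: a u32 divisor cannot silently lose bits */
            temp = div_u64(temp, (__force u32) total);
        else
            /* 64-bit cputime_t: keep the full divisor, avoiding the
             * divide-by-zero when only the low 32 bits are zero */
            temp = div64_u64(temp, (__force u64) total);

        return (__force cputime_t) temp;
    }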
-
Mike Galbraith authored
commit 35cf4e50 upstream. With multiple instances of task_groups, for_each_rt_rq() is a noop, no task groups having been added to the rt.c list instance. This renders __enable/disable_runtime() and print_rt_stats() noop, the user (non) visible effect being that rt task groups are missing in /proc/sched_debug.
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1344308413.6846.7.camel@marge.simpson.net
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Hugh Dickins authored
commit 676ce6d5 upstream. Commit 91f68c89 ("block: fix infinite loop in __getblk_slow") is not good: a successful call to grow_buffers() cannot guarantee that the page won't be reclaimed before the immediate next call to __find_get_block(), which is why there was always a loop there.

Yesterday I got "EXT4-fs error (device loop0): __ext4_get_inode_loc:3595: inode #19278: block 664: comm cc1: unable to read itable block" on console, which pointed to this commit. I've been trying to bisect for weeks, why kbuild-on-ext4-on-loop-on-tmpfs sometimes fails from a missing header file, under memory pressure on ppc G5. I've never seen this on x86, and I've never seen it on 3.5-rc7 itself, despite that commit being in there: bisection pointed to an irrelevant pinctrl merge, but hard to tell when failure takes between 18 minutes and 38 hours (but so far it's happened quicker on 3.6-rc2). (I've since found such __ext4_get_inode_loc errors in /var/log/messages from previous weeks: why the message never appeared on console until yesterday morning is a mystery for another day.)

Revert 91f68c89, restoring __getblk_slow() to how it was (plus a checkpatch nitfix). Simplify the interface between grow_buffers() and grow_dev_page(), and avoid the infinite loop beyond end of device by instead checking init_page_buffers()'s end_block there (I presume that's more efficient than a repeated call to blkdev_max_block()), returning -ENXIO to __getblk_slow() in that case. And remove akpm's ten-year-old "__getblk() cannot fail ... weird" comment, but that is worrying: are all users of __getblk() really now prepared for a NULL bh beyond end of device, or will some oops??

Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
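For orientation, the restored shape of __getblk_slow() is a retry loop of the following form. This is a simplified sketch derived from the description above, not the literal diff (memory-reclaim details are omitted, and grow_buffers() is internal to fs/buffer.c):

    /* Sketch: grow_buffers() succeeding does not pin the page, so loop
     * until the lookup succeeds, and bail out past the end of the device. */
    static struct buffer_head *getblk_slow_sketch(struct block_device *bdev,
                                                  sector_t block, unsigned size)
    {
        for (;;) {
            struct buffer_head *bh;

            bh = __find_get_block(bdev, block, size);
            if (bh)
                return bh;

            /* returns -ENXIO when the block lies beyond end of device */
            if (grow_buffers(bdev, block, size) < 0)
                return NULL;
        }
    }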
-
Rafael J. Wysocki authored
commit 0b68c8e2 upstream. Commit dbf0e4c7 (PCI: EHCI: fix crash during suspend on ASUS computers) added a workaround for an ASUS suspend issue related to USB EHCI and a bug in a number of ASUS BIOSes that attempt to shut down the EHCI controller during system suspend if its PCI command register doesn't contain 0 at that time. It turns out that the same workaround is necessary in the analogous hibernation code path, so add it.
References: https://bugzilla.kernel.org/show_bug.cgi?id=45811
Reported-and-tested-by: Oleksij Rempel <bug-track@fisher-privat.net>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Lorenzo Bianconi authored
commit e1352fde upstream. ath_rx_tasklet() calls ath9k_rx_skb_preprocess() and ath9k_rx_skb_postprocess() in a loop over the received frames. The decrypt_error flag is initialized to false just outside the ath_rx_tasklet() loop. ath9k_rx_accept(), called by ath9k_rx_skb_preprocess(), only sets decrypt_error to true and never to false. Then ath_rx_tasklet() calls ath9k_rx_skb_postprocess() and passes decrypt_error to it. So, after a decryption error, ath9k_rx_skb_postprocess() can see a leftover value from another processed frame. In that case, the frame will not be marked with RX_FLAG_DECRYPTED even if it is decrypted correctly. When using CCMP encryption this issue can lead to a stuck connection because of CCMP PN corruption, and to wasted CPU time since mac80211 tries to decrypt an already deciphered frame with ieee80211_aes_ccm_decrypt. Fix the issue by initializing the decrypt_error flag at the beginning of the ath_rx_tasklet() loop.
Signed-off-by: Lorenzo Bianconi <lorenzo.bianconi83@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
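The structural change is small: move the flag reset inside the per-frame loop. A hedged sketch of that shape (argument lists abbreviated and surrounding DMA/skb handling elided; not the literal diff):

    /* Sketch: reset decrypt_error for every frame so an error on one frame
     * cannot prevent RX_FLAG_DECRYPTED from being set on the next one. */
    do {
        bool decrypt_error = false;   /* previously initialized once, outside the loop */

        retval = ath9k_rx_skb_preprocess(common, hw, hdr, &rs,
                                         rxs, &decrypt_error);
        if (retval)
            goto requeue;

        /* ... frame reception and skb handling elided ... */

        ath9k_rx_skb_postprocess(common, skb, &rs, rxs, decrypt_error);
    } while (1);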
-
Alex Deucher authored
commit 4f81f986 upstream. We need it in the radeon drm module to fetch and verify the vbios image on UEFI systems.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Stephen M. Cameron authored
commit b0cf0b11 upstream. Delete code which sets SCSI status incorrectly as it's already been set correctly above this incorrect code. The bug was introduced in 2009 by commit b0e15f6d ("cciss: fix typo that causes scsi status to be lost.")
Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Reported-by: Roel van Meer <roel.vanmeer@bokxing.nl>
Tested-by: Roel van Meer <roel.vanmeer@bokxing.nl>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
J. Bruce Fields authored
commit f06f00a2 upstream. svc_tcp_sendto sets XPT_CLOSE if we fail to transmit the entire reply. However, the XPT_CLOSE won't be acted on immediately. Meanwhile other threads could send further replies before the socket is really shut down. This can manifest as data corruption: for example, if a truncated read reply is followed by another rpc reply, that second reply will look to the client like further read data. Symptoms were data corruption preceded by svc_tcp_sendto logging something like:

    kernel: rpc-srv/tcp: nfsd: sent only 963696 when sending 1048708 bytes - shutting down socket

Reported-by: Malahal Naineni <malahal@us.ibm.com>
Tested-by: Malahal Naineni <malahal@us.ibm.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
J. Bruce Fields authored
commit d10f27a7 upstream. The rpc server tries to ensure that there will be room to send a reply before it receives a request. It does this by tracking, in xpt_reserved, an upper bound on the total size of the replies that it has already committed to for the socket. Currently it is adding in the estimate for a new reply *before* it checks whether there is space available. If it finds that there is not space, it then subtracts the estimate back out. This may lead the subsequent svc_xprt_enqueue to decide that there is space after all. The result is that svc_recv() will repeatedly return -EAGAIN, causing server threads to loop without doing any actual work.
Reported-by: Michael Tokarev <mjt@tls.msk.ru>
Tested-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
J. Bruce Fields authored
commit be1e4444 upstream. Examination of svc_tcp_clear_pages shows that it assumes sk_tcplen is consistent with sk_pages[] (in particular, sk_pages[n] can't be NULL if sk_tcplen would lead us to expect n pages of data). svc_tcp_restore_pages zeroes out sk_pages[] while leaving sk_tcplen. This is OK, since both functions are serialized by XPT_BUSY. However, that means the inconsistency must be repaired before dropping XPT_BUSY. Therefore we should be ensuring that svc_tcp_save_pages repairs the problem before exiting svc_tcp_recv_record on error. Symptoms were a BUG() in svc_tcp_clear_pages.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Deucher authored
commit 676bc2e1 upstream. This reverts commit d1c7871d. ttm_bo_init() destroys the BO on failure, so this patch makes the retry path work with freed memory. This ends up causing kernel panics when this path is hit.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alan Cox authored
commit f5869a83 upstream. If you do a page flip with no flags set, then event is NULL. If event is NULL, then the vmw_gfx driver likes to go digging into NULL and extracts NULL->base.file_priv. On a modern kernel with NULL mapping protection it's just another oops; without it there are some "intriguing" possibilities. What it should do is an open question, but that is for the driver owners to sort out.
Signed-off-by: Alan Cox <alan@linux.intel.com>
Reviewed-by: Jakob Bornecrantz <jakob@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
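The defensive pattern is a one-line guard before the dereference. A hedged sketch (variable names are illustrative, not the literal vmwgfx diff):

    /* Sketch: userspace that omits DRM_MODE_PAGE_FLIP_EVENT passes
     * event == NULL, so bail out (or skip the event path) rather than
     * dereferencing event->base.file_priv. */
    if (!event)
        return -EINVAL;

    file_priv = event->base.file_priv;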
-
Miklos Szeredi authored
commit a2140fc0 upstream. Refcounting of fsnotify_mark in audit tree is broken. E.g:

                                      refcount
    create_chunk
      alloc_chunk                     1
      fsnotify_add_mark               2
    untag_chunk
      fsnotify_get_mark               3
      fsnotify_destroy_mark
        audit_tree_freeing_mark       2
      fsnotify_put_mark               1
      fsnotify_put_mark               0
      via destroy_list
        fsnotify_mark_destroy        -1

This was reported by various people as triggering Oops when stopping auditd. We could just remove the put_mark from audit_tree_freeing_mark() but that would break freeing via inode destruction. So this patch simply omits a put_mark after calling destroy_mark or adds a get_mark before. The additional get_mark is necessary where there's no other put_mark after fsnotify_destroy_mark() since it assumes that the caller is holding a reference (or the inode is keeping the mark pinned, not the case here AFAICS).
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reported-by: Valentin Avram <aval13@gmail.com>
Reported-by: Peter Moody <pmoody@google.com>
Acked-by: Eric Paris <eparis@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Miklos Szeredi authored
commit 0fe33aae upstream. Don't do free_chunk() after fsnotify_add_mark(). That one does a delayed unref via the destroy list and this results in use-after-free.
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Acked-by: Eric Paris <eparis@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
bjschuma@gmail.com authored
commit 425e776d upstream. This allows distros to remove the line from their modprobe configuration.
Signed-off-by: Bryan Schumaker <bjschuma@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Szymon Janc authored
commit a9ea3ed9 upstream. Some devices, e.g. some Android based phones, don't do an SDP search before pairing and cancel legacy pairing when the ACL is disconnected. The PIN Code Request event, which changes the ACL timeout to HCI_PAIRING_TIMEOUT, is only received after the remote user has entered the PIN. In that case no L2CAP is connected, so the default HCI_DISCONN_TIMEOUT (2 seconds) is used to time out the ACL connection. This results in problems with legacy pairing, as the remote user has only a few seconds to enter the PIN before the ACL is disconnected. Increase the disconnect timeout for incoming connections to HCI_PAIRING_TIMEOUT if SSP is disabled and no link key exists. To avoid keeping the ACL alive for too long after the SDP search, set the ACL timeout back to HCI_DISCONN_TIMEOUT when L2CAP is connected.

    2012-07-19 13:24:43.413521 < HCI Command: Create Connection (0x01|0x0005) plen 13 bdaddr 00:02:72:D6:6A:3F ptype 0xcc18 rswitch 0x01 clkoffset 0x0000 Packet type: DM1 DM3 DM5 DH1 DH3 DH5
    2012-07-19 13:24:43.425224 > HCI Event: Command Status (0x0f) plen 4 Create Connection (0x01|0x0005) status 0x00 ncmd 1
    2012-07-19 13:24:43.885222 > HCI Event: Role Change (0x12) plen 8 status 0x00 bdaddr 00:02:72:D6:6A:3F role 0x01 Role: Slave
    2012-07-19 13:24:44.054221 > HCI Event: Connect Complete (0x03) plen 11 status 0x00 handle 42 bdaddr 00:02:72:D6:6A:3F type ACL encrypt 0x00
    2012-07-19 13:24:44.054313 < HCI Command: Read Remote Supported Features (0x01|0x001b) plen 2 handle 42
    2012-07-19 13:24:44.055176 > HCI Event: Page Scan Repetition Mode Change (0x20) plen 7 bdaddr 00:02:72:D6:6A:3F mode 0
    2012-07-19 13:24:44.056217 > HCI Event: Max Slots Change (0x1b) plen 3 handle 42 slots 5
    2012-07-19 13:24:44.059218 > HCI Event: Command Status (0x0f) plen 4 Read Remote Supported Features (0x01|0x001b) status 0x00 ncmd 0
    2012-07-19 13:24:44.062192 > HCI Event: Command Status (0x0f) plen 4 Unknown (0x00|0x0000) status 0x00 ncmd 1
    2012-07-19 13:24:44.067219 > HCI Event: Read Remote Supported Features (0x0b) plen 11 status 0x00 handle 42 Features: 0xbf 0xfe 0xcf 0xfe 0xdb 0xff 0x7b 0x87
    2012-07-19 13:24:44.067248 < HCI Command: Read Remote Extended Features (0x01|0x001c) plen 3 handle 42 page 1
    2012-07-19 13:24:44.071217 > HCI Event: Command Status (0x0f) plen 4 Read Remote Extended Features (0x01|0x001c) status 0x00 ncmd 1
    2012-07-19 13:24:44.076218 > HCI Event: Read Remote Extended Features (0x23) plen 13 status 0x00 handle 42 page 1 max 1 Features: 0x01 0x00 0x00 0x00 0x00 0x00 0x00 0x00
    2012-07-19 13:24:44.076249 < HCI Command: Remote Name Request (0x01|0x0019) plen 10 bdaddr 00:02:72:D6:6A:3F mode 2 clkoffset 0x0000
    2012-07-19 13:24:44.081218 > HCI Event: Command Status (0x0f) plen 4 Remote Name Request (0x01|0x0019) status 0x00 ncmd 1
    2012-07-19 13:24:44.105214 > HCI Event: Remote Name Req Complete (0x07) plen 255 status 0x00 bdaddr 00:02:72:D6:6A:3F name 'uw000951-0'
    2012-07-19 13:24:44.105284 < HCI Command: Authentication Requested (0x01|0x0011) plen 2 handle 42
    2012-07-19 13:24:44.111207 > HCI Event: Command Status (0x0f) plen 4 Authentication Requested (0x01|0x0011) status 0x00 ncmd 1
    2012-07-19 13:24:44.112220 > HCI Event: Link Key Request (0x17) plen 6 bdaddr 00:02:72:D6:6A:3F
    2012-07-19 13:24:44.112249 < HCI Command: Link Key Request Negative Reply (0x01|0x000c) plen 6 bdaddr 00:02:72:D6:6A:3F
    2012-07-19 13:24:44.115215 > HCI Event: Command Complete (0x0e) plen 10 Link Key Request Negative Reply (0x01|0x000c) ncmd 1 status 0x00 bdaddr 00:02:72:D6:6A:3F
    2012-07-19 13:24:44.116215 > HCI Event: PIN Code Request (0x16) plen 6 bdaddr 00:02:72:D6:6A:3F
    2012-07-19 13:24:48.099184 > HCI Event: Auth Complete (0x06) plen 3 status 0x13 handle 42 Error: Remote User Terminated Connection
    2012-07-19 13:24:48.179182 > HCI Event: Disconn Complete (0x05) plen 4 status 0x00 handle 42 reason 0x13 Reason: Remote User Terminated Connection

Signed-off-by: Szymon Janc <szymon.janc@tieto.com>
Acked-by: Johan Hedberg <johan.hedberg@intel.com>
Signed-off-by: Gustavo Padovan <gustavo.padovan@collabora.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ram Malovany authored
commit c3e7c0d9 upstream. When the name of the given entry is empty, the state needs to be updated accordingly.
Signed-off-by: Ram Malovany <ramm@ti.com>
Signed-off-by: Gustavo Padovan <gustavo.padovan@collabora.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ram Malovany authored
commit 7cc8380e upstream. If the device was not found in the list of found devices whose names are pending, do not continue resolving the next name. This may happen when the HCI Remote Name Request was sent as part of the incoming connection establishment procedure; in that case the next name will be resolved upon receiving another Remote Name Request Complete event. This fixes a kernel crash caused by trying to use this entry to resolve the next name.
Signed-off-by: Ram Malovany <ramm@ti.com>
Signed-off-by: Gustavo Padovan <gustavo.padovan@collabora.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ram Malovany authored
commit c810089c upstream. If the entry wasn't found by hci_inquiry_cache_lookup_resolve(), do not resolve the name. This fixes a kernel crash caused by dereferencing a NULL pointer.
Signed-off-by: Ram Malovany <ramm@ti.com>
Signed-off-by: Gustavo Padovan <gustavo.padovan@collabora.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
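The fix reduces to checking the result of the cache lookup before using it. A hedged sketch of that shape (state constant and surrounding code are from memory of the Bluetooth core and should be treated as illustrative):

    /* Sketch: skip name resolution when the inquiry cache lookup finds
     * nothing, instead of dereferencing a NULL entry. */
    struct inquiry_entry *e;

    e = hci_inquiry_cache_lookup_resolve(hdev, bdaddr, NAME_PENDING);
    if (!e)
        return;                 /* nothing to resolve */

    /* ... continue using e->data.bdaddr etc. ... */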
-
Artem Bityutskiy authored
commit 65b455b1 upstream. When debugging is enabled, we use a temporary on-stack buffer for formatting the key strings like "(11368871, direntry, 0xcd0750)". The buffer size is 32 bytes and sometimes it is not enough to fit the key string - e.g., when inode numbers are high. This is not fatal, but the key strings are incomplete and UBIFS complains like this:

    UBIFS assert failed in dbg_snprintf_key at 137 (pid 1)

This is a regression caused by "515315a1 UBIFS: fix key printing". Fix the issue by increasing the buffer to 48 bytes.
Reported-by: Michael Hench <michaelhench@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Tested-by: Michael Hench <michaelhench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
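The change amounts to widening the stack buffer handed to dbg_snprintf_key(). A hedged sketch of the pattern (the macro name is assumed for illustration, not copied from the patch):

    /* Sketch: 32 bytes can truncate "(<inode>, <type>, <offset>)" once inode
     * numbers grow large; 48 bytes leaves headroom for the longest key string. */
    #define DBG_KEY_BUF_LEN 48

    char key_buf[DBG_KEY_BUF_LEN];

    dbg_snprintf_key(c, key, key_buf, DBG_KEY_BUF_LEN);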
-
Bryan Schumaker authored
commit 12dfd080 upstream. This allows the normal error-paths to handle the error, rather than making a special call to complete_request_key() just for this instance.
Signed-off-by: Bryan Schumaker <bjschuma@netapp.com>
Tested-by: William Dauchy <wdauchy@gmail.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Bryan Schumaker authored
commit c5066945 upstream. idmap_pipe_downcall already clears this field if the upcall succeeds, but if it fails (rpc.idmapd isn't running) the field will still be set on the next call, triggering a BUG_ON(). This patch tries to handle all possible ways that the upcall could fail and clear the idmap key data for each one.
Signed-off-by: Bryan Schumaker <bjschuma@netapp.com>
Tested-by: William Dauchy <wdauchy@gmail.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Trond Myklebust authored
commit 47fbf797 upstream. Ever since commit 0a57cdac (NFSv4.1 send layoutreturn to fence disconnected data server) we've been sending layoutreturn calls while there is potentially still outstanding I/O to the data servers. The reason we do this is to avoid races between replayed writes to the MDS and the original writes to the DS. When this happens, the BUG_ON() in nfs4_layoutreturn_done can be triggered because it assumes that we would never call layoutreturn without knowing that all I/O to the DS is finished. The fix is to remove the BUG_ON() now that the assumptions behind the test are obsolete.
Reported-by: Boaz Harrosh <bharrosh@panasas.com>
Reported-by: Tigran Mkrtchyan <tigran.mkrtchyan@desy.de>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Idan Kedar authored
commit 8554116e upstream. We have encountered a bug whereby reading a lot of files (copying Fedora's /bin) from a pNFS mount and hitting Ctrl+C in the middle caused a general protection fault in xdr_shrink_bufhead. This function is called when decoding the response from LAYOUTGET. The decoding is done by a worker thread, and the caller of LAYOUTGET waits for the worker thread to complete. Hitting Ctrl+C caused the synchronous wait to end, and the next thing the caller does is to free the pages, so when the worker thread calls xdr_shrink_bufhead, the pages are gone. Therefore, the cleanup of these pages has been moved to nfs4_layoutget_release.
Signed-off-by: Idan Kedar <idank@tonian.com>
Signed-off-by: Benny Halevy <bhalevy@tonian.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Trond Myklebust authored
commit 08660043 upstream. If the rpc call to NFS3PROC_FSINFO fails, then we need to report that error so that the mount fails. Otherwise we can end up with a superblock with completely unusable values for block sizes, maxfilesize, etc.
Reported-by: Yuanming Chen <hikvision_linux@163.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Yi Zou authored
commit d0e27c88 upstream. I am hitting this bug when the target is low in memory and fails the alloc_page() for the newly submitted command. This is a sort of off-by-one bug causing a NULL pointer dereference in __free_page(), since 'i' here is really the counter of total pages that have been successfully allocated.
Signed-off-by: Yi Zou <yi.zou@intel.com>
Cc: Andy Grover <agrover@redhat.com>
Cc: Nicholas Bellinger <nab@linux-iscsi.org>
Cc: Open-FCoE.org <devel@open-fcoe.org>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
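The generic shape of the bug and fix: when alloc_page() fails at index i, page[i] was never allocated, so the unwind must free only the indices below i. A self-contained sketch under that assumption (names are illustrative, not the target code):

    #include <linux/gfp.h>
    #include <linux/mm.h>

    /* Allocate nr pages into pages[]; on failure free only what was allocated. */
    static int example_alloc_pages(struct page **pages, int nr)
    {
        int i;

        for (i = 0; i < nr; i++) {
            pages[i] = alloc_page(GFP_KERNEL);
            if (!pages[i])
                goto out_err;
        }
        return 0;

    out_err:
        /* 'i' counts successful allocations: free [0, i), never pages[i]. */
        while (i-- > 0)
            __free_page(pages[i]);
        return -ENOMEM;
    }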
-
David Henningsson authored
commit c41999a2 upstream. It's possible that these amps are settable somehow, e.g. through secret codec verbs, but for now, don't create the controls (as they won't be working anyway, and cause errors in amixer).
BugLink: https://bugs.launchpad.net/bugs/1038651
Signed-off-by: David Henningsson <david.henningsson@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Michal Hocko authored
commit eb48c071 upstream. Each page mapped in a process's address space must be correctly accounted for in _mapcount. Normally the rules for this are straightforward but hugetlbfs page table sharing is different. The page table pages at the PMD level are reference counted while the mapcount remains the same. If this accounting is wrong, it causes bugs like this one reported by Larry Woodman:

    kernel BUG at mm/filemap.c:135!
    invalid opcode: 0000 [#1] SMP
    CPU 22
    Modules linked in: bridge stp llc sunrpc binfmt_misc dcdbas microcode pcspkr acpi_pad acpi]
    Pid: 18001, comm: mpitest Tainted: G W 3.3.0+ #4 Dell Inc. PowerEdge R620/07NDJ2
    RIP: 0010:[<ffffffff8112cfed>] [<ffffffff8112cfed>] __delete_from_page_cache+0x15d/0x170
    Process mpitest (pid: 18001, threadinfo ffff880428972000, task ffff880428b5cc20)
    Call Trace:
      delete_from_page_cache+0x40/0x80
      truncate_hugepages+0x115/0x1f0
      hugetlbfs_evict_inode+0x18/0x30
      evict+0x9f/0x1b0
      iput_final+0xe3/0x1e0
      iput+0x3e/0x50
      d_kill+0xf8/0x110
      dput+0xe2/0x1b0
      __fput+0x162/0x240

During fork(), copy_hugetlb_page_range() detects if huge_pte_alloc() shared page tables with the check dst_pte == src_pte. The logic is if the PMD page is the same, they must be shared. This assumes that the sharing is between the parent and child. However, if the sharing is with a different process entirely then this check fails as in this diagram:

    parent
        |
        ------------>pmd
                      src_pte----------> data page
                                              ^
    other--------->pmd--------------------|
                    ^
    child-----------|
     dst_pte

For this situation to occur, it must be possible for Parent and Other to have faulted and failed to share page tables with each other. This is possible due to the following style of race.

    PROC A                              PROC B
    copy_hugetlb_page_range             copy_hugetlb_page_range
      src_pte == huge_pte_offset          src_pte == huge_pte_offset
      !src_pte so no sharing              !src_pte so no sharing

    (time passes)

    hugetlb_fault                       hugetlb_fault
      huge_pte_alloc                      huge_pte_alloc
        huge_pmd_share                      huge_pmd_share
          LOCK(i_mmap_mutex)
          find nothing, no sharing
          UNLOCK(i_mmap_mutex)
                                            LOCK(i_mmap_mutex)
                                            find nothing, no sharing
                                            UNLOCK(i_mmap_mutex)
        pmd_alloc                           pmd_alloc
        LOCK(instantiation_mutex)
        fault
        UNLOCK(instantiation_mutex)
                                        LOCK(instantiation_mutex)
                                        fault
                                        UNLOCK(instantiation_mutex)

These two processes are now pointing to the same data page but are not sharing page tables because the opportunity was missed. When either process later forks, the src_pte == dst_pte check is potentially insufficient. As the check falls through, the wrong PTE information is copied in (harmless but wrong) and the mapcount is bumped for a page mapped by a shared page table, leading to the BUG_ON. This patch addresses the issue by moving pmd_alloc into huge_pmd_share, which guarantees that the shared pud is populated in the same critical section as pmd. This also means that the huge_pte_offset test in huge_pmd_share is serialized correctly now, which in turn means that the success of the sharing will be higher as the racing tasks see the pud and pmd populated together. Race identified and changelog written mostly by Mel Gorman.

[akpm@linux-foundation.org: attempt to make the huge_pmd_share() comment comprehensible, clean up coding style]
Reported-by: Larry Woodman <lwoodman@redhat.com>
Tested-by: Larry Woodman <lwoodman@redhat.com>
Reviewed-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Michal Hocko <mhocko@suse.cz>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Ken Chen <kenchen@google.com>
Cc: Cong Wang <xiyou.wangcong@gmail.com>
Cc: Hillf Danton <dhillf@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Greg Kroah-Hartman authored
commit 43a34695 upstream. This structure needs to always stick around, even if CONFIG_HOTPLUG is disabled, otherwise we can oops when trying to probe a device that was added after the structure is thrown away. Thanks to Fengguang Wu and Bjørn Mork for tracking this issue down.
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Reported-by: Bjørn Mork <bjorn@mork.no>
CC: Pavel Machek <pavel@ucw.cz>
CC: Paul Gortmaker <paul.gortmaker@windriver.com>
CC: "John W. Linville" <linville@tuxdriver.com>
CC: Eliad Peller <eliad@wizery.com>
CC: Devendra Naga <devendra.aaru@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Michael Cree authored
commit a2fa3ccd upstream. Currently we export SOCK_NONBLOCK to user space but that conflicts with the definition from glibc, leading to compilation errors in user programs (e.g. see Debian bug #658460). The generic socket.h restricts the definition of SOCK_NONBLOCK to the kernel, as does the MIPS specific socket.h, so let's do the same on Alpha.
Signed-off-by: Michael Cree <mcree@orcon.net.nz>
Acked-by: Matt Turner <mattst88@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
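The pattern being copied from the generic and MIPS headers is simply a __KERNEL__ guard around the definition, so userspace builds see only glibc's SOCK_NONBLOCK. A hedged sketch (the value shown is a placeholder, not necessarily Alpha's actual constant):

    /* asm/socket.h (sketch): keep the flag kernel-only; userspace gets
     * SOCK_NONBLOCK from glibc's <sys/socket.h> instead. */
    #ifdef __KERNEL__
    #define SOCK_NONBLOCK   0x40000000
    #endif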
-
Mike Frysinger authored
commit 0be42186 upstream. After commit ec221208 ("Disintegrate asm/system.h for Alpha"), the fpu.h header which we install for userland started depending on special_insns.h which is not installed. However, fpu.h only uses that for __KERNEL__ code, so protect the inclusion the same way to avoid build breakage in glibc:

    /usr/include/asm/fpu.h:4:31: fatal error: asm/special_insns.h: No such file or directory

Reported-by: Matt Turner <mattst88@gentoo.org>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Michael Cree <mcree@orcon.net.nz>
Acked-by: Matt Turner <mattst88@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
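Since the header is exported to userland, the include itself has to sit behind the same guard as the kernel-only code that needs it; a minimal sketch of the pattern described above:

    /* asm/fpu.h (sketch): special_insns.h is not an installed header, so only
     * pull it in for kernel builds; userspace sees neither the include nor
     * the code that depends on it. */
    #ifdef __KERNEL__
    #include <asm/special_insns.h>
    #endif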
-
Miklos Szeredi authored
commit e68726ff upstream. Userspace can pass a weird create mode in open(2) that we canonicalize to "(mode & S_IALLUGO) | S_IFREG" in vfs_create(). The problem is that we use the uncanonicalized mode before calling vfs_create(), with unforeseen consequences. So do the canonicalization early in build_open_flags().
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
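A hedged sketch of the canonicalization being done early in build_open_flags() (field names follow fs/open.c as understood here; treat the exact placement as an assumption, not the literal diff):

    /* Sketch: mask the caller-supplied mode the same way vfs_create() would,
     * before anything else in the open path can look at it. */
    if (flags & O_CREAT)
        op->mode = (mode & S_IALLUGO) | S_IFREG;
    else
        op->mode = 0;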
-