- 09 Aug, 2012 40 commits
-
Greg Pearson authored
commit 6c4088ac upstream.

efi_setup_pcdp_console() is called during boot to parse the HCDP/PCDP EFI system table and set up an early console for printk output. The routine uses ioremap/iounmap to set up access to the HCDP/PCDP table information. The call to ioremap happens early in the boot process, which leads to a panic on x86_64 systems:

  panic+0x01ca
  do_exit+0x043c
  oops_end+0x00a7
  no_context+0x0119
  __bad_area_nosemaphore+0x0138
  bad_area_nosemaphore+0x000e
  do_page_fault+0x0321
  page_fault+0x0020
  reserve_memtype+0x02a1
  __ioremap_caller+0x0123
  ioremap_nocache+0x0012
  efi_setup_pcdp_console+0x002b
  setup_arch+0x03a9
  start_kernel+0x00d4
  x86_64_start_reservations+0x012c
  x86_64_start_kernel+0x00fe

This patch replaces the calls to ioremap/iounmap in efi_setup_pcdp_console() with calls to early_ioremap/early_iounmap, which can be used during early boot. It was tested on an x86_64 prototype system which uses the HCDP/PCDP table for early console setup.

Signed-off-by: Greg Pearson <greg.pearson@hp.com>
Acked-by: Khalid Aziz <khalid.aziz@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
NeilBrown authored
commit b7219ccb upstream. If a resync of a RAID1 array with 2 devices finds a known bad block on one device, it will neither read from nor write to that device for this block offset. So there will be one read_target (the other device) and zero write targets. This condition causes md/raid1 to abort the resync, assuming that it has finished - without known bad blocks this would be true. When there are no write targets because of the presence of bad blocks we should only skip over the area covered by the bad block. RAID10 already gets this right, raid1 doesn't. Or didn't. As this can cause a 'sync' to abort early and appear to have succeeded it could lead to some data corruption, so it is suitable for -stable. Reported-by: Alexander Lyakas <alex.bolshoy@gmail.com> Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Jeff Layton authored
commit 5cf02d09 upstream.

We've had some reports of a deadlock where rpciod ends up with a stack trace like this:

  PID: 2507   TASK: ffff88103691ab40  CPU: 14  COMMAND: "rpciod/14"
   #0 [ffff8810343bf2f0] schedule at ffffffff814dabd9
   #1 [ffff8810343bf3b8] nfs_wait_bit_killable at ffffffffa038fc04 [nfs]
   #2 [ffff8810343bf3c8] __wait_on_bit at ffffffff814dbc2f
   #3 [ffff8810343bf418] out_of_line_wait_on_bit at ffffffff814dbcd8
   #4 [ffff8810343bf488] nfs_commit_inode at ffffffffa039e0c1 [nfs]
   #5 [ffff8810343bf4f8] nfs_release_page at ffffffffa038bef6 [nfs]
   #6 [ffff8810343bf528] try_to_release_page at ffffffff8110c670
   #7 [ffff8810343bf538] shrink_page_list.clone.0 at ffffffff81126271
   #8 [ffff8810343bf668] shrink_inactive_list at ffffffff81126638
   #9 [ffff8810343bf818] shrink_zone at ffffffff8112788f
  #10 [ffff8810343bf8c8] do_try_to_free_pages at ffffffff81127b1e
  #11 [ffff8810343bf958] try_to_free_pages at ffffffff8112812f
  #12 [ffff8810343bfa08] __alloc_pages_nodemask at ffffffff8111fdad
  #13 [ffff8810343bfb28] kmem_getpages at ffffffff81159942
  #14 [ffff8810343bfb58] fallback_alloc at ffffffff8115a55a
  #15 [ffff8810343bfbd8] ____cache_alloc_node at ffffffff8115a2d9
  #16 [ffff8810343bfc38] kmem_cache_alloc at ffffffff8115b09b
  #17 [ffff8810343bfc78] sk_prot_alloc at ffffffff81411808
  #18 [ffff8810343bfcb8] sk_alloc at ffffffff8141197c
  #19 [ffff8810343bfce8] inet_create at ffffffff81483ba6
  #20 [ffff8810343bfd38] __sock_create at ffffffff8140b4a7
  #21 [ffff8810343bfd98] xs_create_sock at ffffffffa01f649b [sunrpc]
  #22 [ffff8810343bfdd8] xs_tcp_setup_socket at ffffffffa01f6965 [sunrpc]
  #23 [ffff8810343bfe38] worker_thread at ffffffff810887d0
  #24 [ffff8810343bfee8] kthread at ffffffff8108dd96
  #25 [ffff8810343bff48] kernel_thread at ffffffff8100c1ca

rpciod is trying to allocate memory for a new socket to talk to the server. The VM ends up calling ->releasepage to get more memory, and it tries to do a blocking commit. That commit can't succeed however without a connected socket, so we deadlock. Fix this by setting PF_FSTRANS on the workqueue task prior to doing the socket allocation, and having nfs_release_page check for that flag when deciding whether to do a commit call. Also, set PF_FSTRANS unconditionally in rpc_async_schedule since that function can also do allocations sometimes.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
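[Editor's illustration, not part of the commit: a minimal userspace sketch of the guard pattern described above. The real fix flags the rpciod task and has nfs_release_page test current->flags; here PF_FSTRANS_DEMO, current_flags and release_page() are mock stand-ins.]

  #include <stdio.h>
  #include <stdbool.h>

  #define PF_FSTRANS_DEMO 0x1

  static unsigned int current_flags;      /* stand-in for current->flags */

  static bool release_page(void)
  {
      if (current_flags & PF_FSTRANS_DEMO) {
          /* In a filesystem-transaction context: don't start a blocking
           * commit that would need the very socket being allocated.    */
          printf("skipping blocking commit\n");
          return false;
      }
      printf("doing blocking commit\n");
      return true;
  }

  static void setup_socket(void)
  {
      current_flags |= PF_FSTRANS_DEMO;
      release_page();                     /* pretend reclaim ran during the allocation */
      current_flags &= ~PF_FSTRANS_DEMO;
  }

  int main(void)
  {
      setup_socket();
      release_page();                     /* normal context: commit is fine */
      return 0;
  }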
-
Heiko Carstens authored
commit 008c2e8f upstream. Make sure the kernel does not incorrectly create a SIGBUS signal during user space accesses: For user space accesses in the switched addressing mode case the kernel may walk page tables and access user address space via the kernel mapping. If a page table entry is invalid the function __handle_fault() gets called in order to emulate a page fault and trigger all the usual actions like paging in a missing page etc. by calling handle_mm_fault(). If handle_mm_fault() returns with an error fixup handling is necessary. For the switched addressing mode case all errors need to be mapped to -EFAULT, so that the calling uaccess function can return -EFAULT to user space. Unfortunately the __handle_fault() incorrectly calls do_sigbus() if VM_FAULT_SIGBUS is set. This however should only happen if a page fault was triggered by a user space instruction. For kernel mode uaccesses the correct action is to only return -EFAULT. So user space may incorrectly see SIGBUS signals because of this bug. For current machines this would only be possible for the switched addressing mode case in conjunction with futex operations. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> [bwh: Backported to 3.2: do_exception() and do_sigbus() parameters differ] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Asias He authored
commit 2c95a329 upstream.

The block layer will allocate a spinlock for the queue if the driver does not provide one in blk_init_queue(). The reason to use the internal spinlock is that blk_cleanup_queue() will switch to the internal spinlock in the cleanup code path:

  if (q->queue_lock != &q->__queue_lock)
          q->queue_lock = &q->__queue_lock;

However, processes which are in D state might have taken the driver-provided spinlock; when those processes wake up, they would release the block-layer-provided spinlock:

  =====================================
  [ BUG: bad unlock balance detected! ]
  3.4.0-rc7+ #238 Not tainted
  -------------------------------------
  fio/3587 is trying to release lock (&(&q->__queue_lock)->rlock) at:
  [<ffffffff813274d2>] blk_queue_bio+0x2a2/0x380
  but there are no more locks to release!
  other info that might help us debug this:
  1 lock held by fio/3587:
   #0: (&(&vblk->lock)->rlock){......}, at: [<ffffffff8132661a>] get_request_wait+0x19a/0x250

Other drivers use the block layer provided spinlock as well, e.g. SCSI. Switching to the block layer provided spinlock saves a bit of memory and does not increase lock contention. A performance test shows no real difference before and after this patch.

Changes in v2: Improve commit log as Michael suggested.

Cc: virtualization@lists.linux-foundation.org
Cc: kvm@vger.kernel.org
Signed-off-by: Asias He <asias@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Alex Hung authored
commit 63a78bb1 upstream. According to responses from the BIOS team, ASUS_WMI_METHODID_DSTS2 (0x53545344) will be used as the future DSTS ID. In addition, calling asus_wmi_evaluate_method(ASUS_WMI_METHODID_DSTS2, 0, 0, NULL) returns ASUS_WMI_UNSUPPORTED_METHOD on new ASUS laptop PCs. This patch fixes the problem that no DSTS ID would be assigned in that case. Signed-off-by: Alex Hung <alex.hung@canonical.com> Signed-off-by: Matthew Garrett <mjg@redhat.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
H. Peter Anvin authored
commit d2e7c96a upstream. Mix in any architectural randomness in extract_buf() instead of xfer_secondary_buf(). This allows us to mix in more architectural randomness, and it also makes xfer_secondary_buf() faster, moving a tiny bit of additional CPU overhead to the process which is extracting the randomness. [ Commit description modified by tytso to remove an extended advertisement for the RDRAND instruction. ] Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Acked-by: Ingo Molnar <mingo@kernel.org> Cc: DJ Johnston <dj.johnston@intel.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Joe Thornber authored
commit 905386f8 upstream. Fix memory leak in process_prepared_mapping by always freeing the dm_thin_new_mapping structs from the mapping_pool mempool on the error paths. Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Alasdair G Kergon <agk@redhat.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Alasdair G Kergon authored
commit 7768ed33 upstream.

Reduce the slab size used for the dm_thin_endio_hook mempool. Allocation has been seen to fail on machines with smaller amounts of memory due to fragmentation:

  lvm: page allocation failure. order:5, mode:0xd0
  device-mapper: table: 253:38: thin-pool: Error creating pool's endio_hook mempool

Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Tony Luck authored
commit a1193655 upstream.

The following build error occurred during an ia64 build with swap-over-NFS patches applied:

  net/core/sock.c:274:36: error: initializer element is not constant
  net/core/sock.c:274:36: error: (near initialization for 'memalloc_socks')
  net/core/sock.c:274:36: error: initializer element is not constant

This is identical to a parisc build error. Fengguang Wu, Mel Gorman and James Bottomley did all the legwork to track the root cause of the problem. This fix and entire commit log is shamelessly copied from them, with one extra detail to change a dubious runtime use of ATOMIC_INIT() to atomic_set() in drivers/char/mspec.c.

Dave Anglin says:
> Here is the line in sock.i:
>
> struct static_key memalloc_socks = ((struct static_key) { .enabled =
> ((atomic_t) { (0) }) });

The above line contains two compound literals. It also uses a designated initializer to initialize the field enabled. A compound literal is not a constant expression. The location of the above statement isn't fully clear, but if a compound literal occurs outside the body of a function, the initializer list must consist of constant expressions.

Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
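[Editor's illustration, not part of the commit: a standalone C sketch of the distinction the build error hinges on. atomic_t, ATOMIC_INIT and atomic_set here are mocks, not the kernel's definitions; the point is that a plain brace initializer is a constant expression while a compound literal is not, and that a runtime atomic_set() sidesteps the issue.]

  #include <stdio.h>

  typedef struct { int counter; } atomic_t;        /* mock, not <linux/types.h>  */

  #define ATOMIC_INIT(i)     { (i) }               /* plain braces: constant     */
  #define ATOMIC_INIT_CL(i)  ((atomic_t){ (i) })   /* compound literal: not a
                                                      constant expression        */

  static void atomic_set(atomic_t *v, int i) { v->counter = i; }

  /* Fine at file scope: the brace initializer is a constant expression. */
  static atomic_t ok_counter = ATOMIC_INIT(0);

  /* The form that broke the ia64/parisc builds (kept as a comment so this
   * file still compiles):
   *   static atomic_t bad_counter = ATOMIC_INIT_CL(0);
   *   -> "initializer element is not constant"
   */

  int main(void)
  {
      atomic_t runtime_counter;

      /* The mspec.c-style alternative: initialize at runtime instead. */
      atomic_set(&runtime_counter, 0);

      printf("%d %d\n", ok_counter.counter, runtime_counter.counter);
      return 0;
  }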
-
Martin Schwidefsky authored
commit 0f6f281b upstream. The downgrade of the 4 level page table created by init_new_context is currently done only in start_thread31. If a 31 bit process forks the new mm uses a 4 level page table, including the task size of 2<<42 that goes along with it. This is incorrect as now a 31 bit process can map memory beyond 2GB. Define arch_dup_mmap to do the downgrade after fork. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> [bwh: Backported to 3.2: adjust context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Alan Cox authored
commit d6250a3f upstream. The Intel case falls through into the generic case which then changes the values. For cases like the P6 it doesn't do the right thing so this seems to be a screwup. Signed-off-by: Alan Cox <alan@linux.intel.com> Link: http://lkml.kernel.org/n/tip-lww2uirad4skzjlmrm0vru8o@git.kernel.org Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Takashi Iwai authored
commit bc733d49 upstream. The irq field of struct snd_mpu401 is supposed to be initialized to -1. Since it's set to zero as of now, a probing error before the irq installation results in a kernel warning "Trying to free already-free IRQ 0". Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=44821 Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Takashi Iwai authored
commit 6162552b upstream. We've got a bug report about the silent output from the headphone on a mobo with VT2021, and spotted out that this was because of the wrong D3 state on the DAC for the headphone output. The bug is triggered by the incomplete check for this DAC in set_widgets_power_state_vt1718S(). It checks only the connectivity of the primary output (0x27) but doesn't consider the path from the headphone pin (0x28). Now this patch fixes the problem by checking both pins for DAC 0x0b. Signed-off-by: Takashi Iwai <tiwai@suse.de> [bwh: Backported to 3.2: keep using snd_hda_codec_write() as update_power_state() is missing] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Seth Forshee authored
commit c0394506 upstream. The touchpad on the Acer Aspire One D250 will report out of range values in the extreme lower portion of the touchpad. These appear as abrupt changes in the values reported by the hardware from very low values to very high values, which can cause unexpected vertical jumps in the position of the mouse pointer. What seems to be happening is that the value is wrapping to a two's complement negative value of higher resolution than the 13-bit value reported by the hardware, with the high-order bits being truncated. This patch adds handling for these values by converting them to the appropriate negative values. The only tricky part about this is deciding when to treat a number as negative. It stands to reason that if out of range values can be reported on the low end then it could also happen on the high end, so not all out of range values should be treated as negative. The approach taken here is to split the difference between the maximum legitimate value for the axis and the maximum possible value that the hardware can report, treating values greater than this number as negative and all other values as positive. This can be tweaked later if hardware is found that operates outside of these parameters. BugLink: http://bugs.launchpad.net/bugs/1001251 Signed-off-by: Seth Forshee <seth.forshee@canonical.com> Reviewed-by: Daniel Kurtz <djkurtz@chromium.org> Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com> [bwh: Backported to 3.2: adjust context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
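[Editor's illustration, not part of the commit: a standalone sketch of the sign-fixing arithmetic described above. AXIS_MAX and the resulting threshold are assumed values for illustration, not the driver's actual constants.]

  #include <stdio.h>

  #define ABS_POS_BITS  13                              /* width of the raw report    */
  #define RAW_MAX       ((1 << ABS_POS_BITS) - 1)       /* 8191                       */
  #define AXIS_MAX      6143                            /* assumed legitimate maximum */
  /* Threshold halfway between the legitimate max and what the hardware can report. */
  #define MAX_POSITIVE  (AXIS_MAX + (RAW_MAX - AXIS_MAX) / 2)

  static int fix_wrapped(int raw)
  {
      /* Values above the threshold are treated as truncated negatives. */
      if (raw > MAX_POSITIVE)
          raw -= 1 << ABS_POS_BITS;
      return raw;
  }

  int main(void)
  {
      printf("8190 -> %d\n", fix_wrapped(8190));        /* -2: a wrapped value */
      printf(" 100 -> %d\n", fix_wrapped(100));         /* unchanged           */
      return 0;
  }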
-
Alexander Holler authored
commit 2fe2d9f4 upstream. Line 0 and 1 were both written to line 0 (on the display) and all subsequent lines had an offset of -1. The result was that the last line on the display was never overwritten by writes to /dev/fbN. The origin of this bug seems to have been udlfb. Signed-off-by: Alexander Holler <holler@ahsoftware.de> Signed-off-by: Florian Tobias Schandinat <FlorianSchandinat@gmx.de> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Darren Hart authored
commit 6f7b0a2a upstream. If uaddr == uaddr2, then we have broken the rule of only requeueing from a non-pi futex to a pi futex with this call. If we attempt this, as the trinity test suite manages to do, we miss early wakeups as q.key is equal to key2 (because they are the same uaddr). We will then attempt to dereference the pi_mutex (which would exist had the futex_q been properly requeued to a pi futex) and trigger a NULL pointer dereference. Signed-off-by: Darren Hart <dvhart@linux.intel.com> Cc: Dave Jones <davej@redhat.com> Link: http://lkml.kernel.org/r/ad82bfe7f7d130247fbe2b5b4275654807774227.1342809673.git.dvhart@linux.intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Darren Hart authored
commit f27071cb upstream. The WARN_ON in futex_wait_requeue_pi() for a NULL q.pi_state was testing the address (&q.pi_state) of the pointer instead of the value (q.pi_state) of the pointer. Correct it accordingly. Signed-off-by: Darren Hart <dvhart@linux.intel.com> Cc: Dave Jones <davej@redhat.com> Link: http://lkml.kernel.org/r/1c85d97f6e5f79ec389a4ead3e367363c74bd09a.1342809673.git.dvhart@linux.intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
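[Editor's illustration, not part of the commit: a tiny standalone demo of the difference, with a mock WARN_ON and a cut-down struct. The address of a field is never NULL, so the broken form can never fire; only the exact condition shape is assumed here.]

  #include <stdio.h>
  #include <stddef.h>

  #define WARN_ON(cond) do { if (cond) printf("WARN_ON: %s\n", #cond); } while (0)

  struct futex_q { void *pi_state; };

  int main(void)
  {
      struct futex_q q = { .pi_state = NULL };

      WARN_ON(!&q.pi_state);  /* broken: the field's address is never NULL, so this never fires */
      WARN_ON(!q.pi_state);   /* fixed: tests the pointer's value                              */
      return 0;
  }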
-
Darren Hart authored
commit b6070a8d upstream. If fixup_pi_state_owner() faults, pi_mutex may be NULL. Test for pi_mutex != NULL before testing the owner against current and possibly unlocking it. Signed-off-by: Darren Hart <dvhart@linux.intel.com> Cc: Dave Jones <davej@redhat.com> Cc: Dan Carpenter <dan.carpenter@oracle.com> Link: http://lkml.kernel.org/r/dc59890338fc413606f04e5c5b131530734dae3d.1342809673.git.dvhart@linux.intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Mark Brown authored
commit b8edf3e5 upstream. Otherwise if someone tries to use all four channels on AIF1 with the device in master mode we won't be able to clock out all the data. Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Mark Brown authored
commit 27130f0c upstream. wm831x devices contain a unique ID value. Feed this into the newly added device_add_randomness() to add some per device seed data to the pool. Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Mark Brown authored
commit 9dccf55f upstream. The tamper evident features of the RTC include the "write counter" which is a pseudo-random number regenerated whenever we set the RTC. Since this value is unpredictable it should provide some useful seeding to the random number generator. Only do this on boot since the goal is to seed the pool rather than add useful entropy. Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Theodore Ts'o authored
commit c2557a30 upstream. Create a new function, get_random_bytes_arch() which will use the architecture-specific hardware random number generator if it is present. Change get_random_bytes() to not use the HW RNG, even if it is available. The reason for this is that the hw random number generator is fast (if it is present), but it requires that we trust the hardware manufacturer to have not put in a back door. (For example, an increasing counter encrypted by an AES key known to the NSA.) It's unlikely that Intel (for example) was paid off by the US Government to do this, but it's impossible for them to prove otherwise
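[Editor's illustration, not part of the commit: a toy userspace sketch of the split described above. Every function here is a mock stand-in, not the kernel's implementation: the _arch variant tries a hardware generator first and falls back to the pool, while the plain variant never touches the hardware generator.]

  #include <stdio.h>
  #include <string.h>

  static int arch_get_random_long(unsigned long *v)
  {
      (void)v;
      return 0;                       /* pretend no HW RNG is present */
  }

  static void extract_from_pool(void *buf, size_t n)
  {
      memset(buf, 0xAA, n);           /* stand-in for real pool extraction */
  }

  /* Ordinary variant: deliberately never uses the HW RNG. */
  static void get_random_bytes_demo(void *buf, size_t n)
  {
      extract_from_pool(buf, n);
  }

  /* _arch variant: prefer the HW RNG, fall back to the pool. */
  static void get_random_bytes_arch_demo(void *buf, size_t n)
  {
      unsigned char *p = buf;

      while (n >= sizeof(unsigned long)) {
          unsigned long v;

          if (!arch_get_random_long(&v))
              break;                  /* no HW RNG: use the pool instead */
          memcpy(p, &v, sizeof(v));
          p += sizeof(v);
          n -= sizeof(v);
      }
      if (n)
          extract_from_pool(p, n);
  }

  int main(void)
  {
      unsigned char buf[16];

      get_random_bytes_demo(buf, sizeof(buf));
      get_random_bytes_arch_demo(buf, sizeof(buf));
      printf("filled %zu bytes\n", sizeof(buf));
      return 0;
  }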
-
Theodore Ts'o authored
commit e6d4947b upstream. If the CPU supports a hardware random number generator, use it in xfer_secondary_pool(), where it will significantly improve things and where we can afford it. Also, remove the use of the arch-specific rng in add_timer_randomness(), since the call is significantly slower than get_cycles(), and we're much better off using it in xfer_secondary_pool() anyway. Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Theodore Ts'o authored
commit 7bf23575 upstream. Cc: David Miller <davem@davemloft.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Theodore Ts'o authored
commit b04b3156 upstream. Send the USB device's serial, product, and manufacturer strings to the /dev/random driver to help seed its pools. Cc: Linus Torvalds <torvalds@linux-foundation.org> Acked-by: Greg KH <greg@kroah.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Linus Torvalds authored
commit a2080a67 upstream. Add a new interface, add_device_randomness() for adding data to the random pool that is likely to differ between two devices (or possibly even per boot). This would be things like MAC addresses or serial numbers, or the read-out of the RTC. This does *not* add any actual entropy to the pool, but it initializes the pool to different values for devices that might otherwise be identical and have very little entropy available to them (particularly common in the embedded world). [ Modified by tytso to mix in a timestamp, since there may be some variability caused by the time needed to detect/configure the hardware in question. ] Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
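[Editor's illustration, not part of the commit: a toy sketch of the mechanism described above. The pool, mixing function and constants are all invented for illustration; the point is that identifying but low-entropy device data, plus a timestamp, perturb the pool without any entropy being credited.]

  #include <stdio.h>
  #include <time.h>

  static unsigned long long pool = 0x9e3779b97f4a7c15ULL;   /* toy pool state */

  static void mix_pool_bytes(const void *buf, unsigned int len)
  {
      const unsigned char *p = buf;

      while (len--)
          pool = (pool ^ *p++) * 0x100000001b3ULL;          /* FNV-style stir */
  }

  static void add_device_randomness_demo(const void *buf, unsigned int len)
  {
      unsigned long long t = (unsigned long long)time(NULL);

      mix_pool_bytes(buf, len);
      mix_pool_bytes(&t, sizeof(t));    /* the timestamp mixed in per the note above */
  }

  int main(void)
  {
      unsigned char mac[6] = { 0x00, 0x16, 0x3e, 0x12, 0x34, 0x56 };

      add_device_randomness_demo(mac, sizeof(mac));
      printf("pool is now %#llx (entropy estimate unchanged)\n", pool);
      return 0;
  }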
-
Theodore Ts'o authored
commit 902c098a upstream. The real-time Linux folks don't like add_interrupt_randomness() taking a spinlock since it is called in the low-level interrupt routine. This also allows us to reduce the overhead in the fast path, for the random driver, which is the interrupt collection path. Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Theodore Ts'o authored
commit 775f4b29 upstream. We've been moving away from add_interrupt_randomness() for various reasons: it's too expensive to do on every interrupt, and flooding the CPU with interrupts could theoretically cause bogus floods of entropy from a somewhat externally controllable source. This solves both problems by limiting the actual randomness addition to just once a second or after 64 interrupts, whichever comes first. During that time, the interrupt cycle data is buffered up in a per-cpu pool. Also, we make sure the nonblocking pool used by urandom is initialized before we start feeding the normal input pool. This assures that /dev/urandom is returning unpredictable data as soon as possible. (Based on an original patch by Linus, but significantly modified by tytso.) Tested-by: Eric Wustrow <ewust@umich.edu> Reported-by: Eric Wustrow <ewust@umich.edu> Reported-by: Nadia Heninger <nadiah@cs.ucsd.edu> Reported-by: Zakir Durumeric <zakir@umich.edu> Reported-by: J. Alex Halderman <jhalderm@umich.edu> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
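[Editor's illustration, not part of the commit: a standalone sketch of the batching policy described above. The struct name, flush function and constants are stand-ins, the real per-CPU handling and mixing are omitted; only the "64 events or one second" gate is modelled.]

  #include <stdio.h>
  #include <time.h>

  struct fast_pool {
      unsigned int pool[4];
      unsigned int count;
      time_t       last;
  };

  static void mix_into_input_pool(struct fast_pool *f)
  {
      printf("flushing %u buffered events\n", f->count);   /* real mixing omitted */
  }

  static void add_interrupt_randomness_demo(struct fast_pool *f, unsigned int cycles)
  {
      time_t now = time(NULL);

      f->pool[f->count & 3] ^= cycles;     /* cheap per-event mixing */
      f->count++;

      if (f->count < 64 && now - f->last < 1)
          return;                          /* fast path: just buffer */

      mix_into_input_pool(f);              /* at most roughly once a second */
      f->count = 0;
      f->last = now;
  }

  int main(void)
  {
      struct fast_pool f = { { 0 }, 0, time(NULL) };
      unsigned int i;

      for (i = 0; i < 200; i++)
          add_interrupt_randomness_demo(&f, i * 2654435761u);
      return 0;
  }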
-
H. Peter Anvin authored
commit 2dac8e54 upstream. When we are initializing using arch_get_random_long() we only need to loop enough times to touch all the bytes in the buffer; using poolwords for that does twice the number of operations necessary on a 64-bit machine, since in the random number generator code "word" means 32 bits. Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Cc: "Theodore Ts'o" <tytso@mit.edu> Link: http://lkml.kernel.org/r/1324589281-31931-1-git-send-email-tytso@mit.edu Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
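[Editor's illustration, not part of the commit: the arithmetic made concrete in a few standalone lines. The pool size of 128 words is assumed for illustration; on a 64-bit machine, stepping per 32-bit word does twice as many arch_get_random_long() calls as stepping by sizeof(long).]

  #include <stdio.h>

  int main(void)
  {
      const unsigned int poolwords = 128;           /* assumed pool size in 32-bit words */
      const unsigned int poolbytes = poolwords * 4;

      printf("iterations per word: %u\n", poolwords);
      printf("iterations per long: %u\n", poolbytes / (unsigned int)sizeof(long));
      return 0;
  }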
-
Theodore Ts'o authored
commit 3e88bdff upstream. If there is an architecture-specific random number generator (such as RDRAND for Intel architectures), use it to initialize /dev/random's entropy stores. Even in the worst case, if RDRAND is something like AES(NSA_KEY, counter++), it won't hurt, and it will definitely help against any other adversaries. Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Link: http://lkml.kernel.org/r/1324589281-31931-1-git-send-email-tytso@mit.edu Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Linus Torvalds authored
commit cf833d0b upstream. We still don't use rdrand in /dev/random, which just seems stupid. We accept the *cycle*counter* as a random input, but we don't accept rdrand? That's just broken. Sure, people can do things in user space (write to /dev/random, use rdrand in addition to /dev/random themselves etc etc), but that *still* seems to be a particularly stupid reason for saying "we shouldn't bother to try to do better in /dev/random". And even if somebody really doesn't trust rdrand as a source of random bytes, it seems singularly stupid to trust the cycle counter *more*. So I'd suggest the attached patch. I'm not going to even bother arguing that we should add more bits to the entropy estimate, because that's not the point - I don't care if /dev/random fills up slowly or not, I think it's just stupid to not use the bits we can get from rdrand and mix them into the strong randomness pool. Link: http://lkml.kernel.org/r/CA%2B55aFwn59N1=m651QAyTy-1gO1noGbK18zwKDwvwqnravA84A@mail.gmail.com Acked-by: "David S. Miller" <davem@davemloft.net> Acked-by: "Theodore Ts'o" <tytso@mit.edu> Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Cc: Matt Mackall <mpm@selenic.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
J. Bruce Fields authored
commit 2930d381 upstream. Actually, xfs and jfs can optionally be case insensitive; we'll handle that case in later patches. Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Luis Henriques authored
commit b31b0219 upstream. commit 9ef449c6 ("[media] rc: Postpone ISR registration") fixed an early ISR registration on several drivers. It did, however, also introduce a bug by moving the invocation of pnp_port_start() to the end of the probe function. This patch fixes this issue by moving the invocation of pnp_port_start() to an earlier stage in the probe function. Cc: Jarod Wilson <jarod@redhat.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Mikael Pettersson authored
commit c6636005 upstream.

Booting a 3.2, 3.3, or 3.4-rc4 kernel on an Atari using the `nfeth' ethernet device triggers a WARN_ONCE() in generic irq handling code on the first irq for that device:

  WARNING: at kernel/irq/handle.c:146 handle_irq_event_percpu+0x134/0x142()
  irq 3 handler nfeth_interrupt+0x0/0x194 enabled interrupts
  Modules linked in:
  Call Trace:
  [<000299b2>] warn_slowpath_common+0x48/0x6a
  [<000299c0>] warn_slowpath_common+0x56/0x6a
  [<00029a4c>] warn_slowpath_fmt+0x2a/0x32
  [<0005b34c>] handle_irq_event_percpu+0x134/0x142
  [<0005b34c>] handle_irq_event_percpu+0x134/0x142
  [<0000a584>] nfeth_interrupt+0x0/0x194
  [<001ba0a8>] schedule_preempt_disabled+0x0/0xc
  [<0005b37a>] handle_irq_event+0x20/0x2c
  [<0005add4>] generic_handle_irq+0x2c/0x3a
  [<00002ab6>] do_IRQ+0x20/0x32
  [<0000289e>] auto_irqhandler_fixup+0x4/0x6
  [<00003144>] cpu_idle+0x22/0x2e
  [<001b8a78>] printk+0x0/0x18
  [<0024d112>] start_kernel+0x37a/0x386
  [<0003021d>] __do_proc_dointvec+0xb1/0x366
  [<0003021d>] __do_proc_dointvec+0xb1/0x366
  [<0024c31e>] _sinittext+0x31e/0x9c0

After invoking the irq's handler the kernel sees !irqs_disabled() and concludes that the handler erroneously enabled interrupts. However, debugging shows that !irqs_disabled() is true even before the handler is invoked, which indicates a problem in the platform code rather than the specific driver. The warning does not occur in 3.1 or older kernels.

It turns out that the ALLOWINT definition for Atari is incorrect. The Atari definition of ALLOWINT is ~0x400, the stated purpose of that is to avoid taking HSYNC interrupts. irqs_disabled() returns true if the 3-bit ipl & 4 is non-zero. The nfeth interrupt runs at ipl 3 (it's autovector 3), but 3 & 4 is zero so irqs_disabled() is false, and the warning above is generated.

When interrupts are explicitly disabled, ipl is set to 7. When they are enabled, ipl is masked with ALLOWINT. On Atari this will result in ipl = 3, which blocks interrupts at ipl 3 and below. So how come nfeth interrupts at ipl 3 are received at all? That's because ipl is reset to 2 by Atari-specific code in default_idle(), again with the stated purpose of blocking HSYNC interrupts. This discrepancy means that ipl 3 can remain blocked for longer than intended.

Both default_idle() and falcon_hblhandler() identify HSYNC with ipl 2, and the "Atari ST/.../F030 Hardware Register Listing" agrees, but ALLOWINT is defined as if HSYNC was ipl 3. [As an experiment I modified default_idle() to reset ipl to 3, and as expected that resulted in all nfeth interrupts being blocked.]

The fix is simple: define ALLOWINT as ~0x500 instead. This makes arch_local_irq_enable() consistent with default_idle(), and prevents the !irqs_disabled() problems for ipl 3 interrupts.

Tested on Atari running in an Aranym VM.

Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
Tested-by: Michael Schmitz <schmitzmic@googlemail.com> (on Falcon/CT60)
[Geert Uytterhoeven: This version applies to v3.2..v3.4.]
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
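[Editor's illustration, not part of the commit: the masking arithmetic from the commit message checked in a few lines of standalone C. The macros are illustrative stand-ins, not the kernel's headers; the 68k status register keeps the interrupt priority level in bits 8-10.]

  #include <stdio.h>

  #define IPL(sr)        (((sr) >> 8) & 7)
  #define ALLOWINT_OLD   (~0x400)        /* Atari's old definition */
  #define ALLOWINT_NEW   (~0x500)        /* the fix                */

  int main(void)
  {
      unsigned int sr = 0x0700;          /* interrupts disabled: ipl 7 */

      printf("enable with ~0x400: ipl %u\n", IPL(sr & ALLOWINT_OLD));   /* 3 */
      printf("enable with ~0x500: ipl %u\n", IPL(sr & ALLOWINT_NEW));   /* 2 */
      /* irqs_disabled() checks ipl & 4, so ipl 3 reports "enabled" even
       * though autovector-3 interrupts such as nfeth are still masked.  */
      return 0;
  }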
-
Liang Li authored
partial of commit 8e8b41f9 upstream.

As part of commit 463454b5 ("cfg80211: fix interface combinations check"), this extra check was introduced:

  if ((all_iftypes & used_iftypes) != used_iftypes)
          goto cont;

However, most wireless NIC drivers did not advertise ADHOC in wiphy.iface_combinations[i].limits[] and hence we'll get -EBUSY when we bring up an ADHOC wlan with commands similar to:

  # iwconfig wlan0 mode ad-hoc && ifconfig wlan0 up

In commit 8e8b41f9 ("cfg80211: enforce lack of interface combinations"), the change below fixes the issue:

  if (total == 1)
          return 0;

But it also introduces other dependencies for stable. For example, a full cherry pick of 8e8b41f9 would introduce additional regressions unless we also start cherry picking driver specific fixes like the following:

  9b4760e3  ath5k: add possible wiphy interface combinations
  1ae2fc25  mac80211_hwsim: advertise interface combinations
  20c8e8dc  ath9k: add possible wiphy interface combinations

And the purpose of the 'if (total == 1)' is to cover the specific use case (IBSS, adhoc) that was mentioned above. So we just pick the specific part out from 8e8b41f9 here. Doing so gives stable kernels a way to fix the change introduced by 463454b5, without having to make cherry picks specific to various NIC drivers.

Signed-off-by: Liang Li <liang.li@windriver.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
David Henningsson authored
commit 108cc108 upstream. Also add a model/fixup string "lenovo-dock", so that other Thinkpad users will be able to test this fixup easily, to see if it enables dock I/O for them as well. BugLink: https://bugs.launchpad.net/bugs/1026953 Tested-by: John McCarron <john.mccarron@canonical.com> Signed-off-by: David Henningsson <david.henningsson@canonical.com> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Paul Gortmaker authored
commit 2584f521 upstream. Also add information on where the respective trees are. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Acked-by: Rob Landley <rob@landley.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Jarod Wilson authored
commit 4b71ca6b upstream. For one, the driver device pointer needs to be filled in, or the lirc core will refuse to load the driver. And we really need to wire up all the platform_device bits. This has been tested via the lirc sourceforge tree and verified to work, been sitting there for months, finally getting around to sending it. :\ CC: Josh Boyer <jwboyer@redhat.com> Signed-off-by: Jarod Wilson <jarod@redhat.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Stefano Stabellini authored
commit b9e0d95c upstream.

When the frontend and the backend reside on the same domain, even if we add pages to the m2p_override, these pages will never be returned by mfn_to_pfn because the check "get_phys_to_machine(pfn) != mfn" will always fail, so the pfn of the frontend will be returned instead (resulting in a deadlock because the frontend pages are already locked).

  INFO: task qemu-system-i38:1085 blocked for more than 120 seconds.
  "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  qemu-system-i38 D ffff8800cfc137c0 0 1085 1 0x00000000
   ffff8800c47ed898 0000000000000282 ffff8800be4596b0 00000000000137c0
   ffff8800c47edfd8 ffff8800c47ec010 00000000000137c0 00000000000137c0
   ffff8800c47edfd8 00000000000137c0 ffffffff82213020 ffff8800be4596b0
  Call Trace:
  [<ffffffff81101ee0>] ? __lock_page+0x70/0x70
  [<ffffffff81a0fdd9>] schedule+0x29/0x70
  [<ffffffff81a0fe80>] io_schedule+0x60/0x80
  [<ffffffff81101eee>] sleep_on_page+0xe/0x20
  [<ffffffff81a0e1ca>] __wait_on_bit_lock+0x5a/0xc0
  [<ffffffff81101ed7>] __lock_page+0x67/0x70
  [<ffffffff8106f750>] ? autoremove_wake_function+0x40/0x40
  [<ffffffff811867e6>] ? bio_add_page+0x36/0x40
  [<ffffffff8110b692>] set_page_dirty_lock+0x52/0x60
  [<ffffffff81186021>] bio_set_pages_dirty+0x51/0x70
  [<ffffffff8118c6b4>] do_blockdev_direct_IO+0xb24/0xeb0
  [<ffffffff811e71a0>] ? ext3_get_blocks_handle+0xe00/0xe00
  [<ffffffff8118ca95>] __blockdev_direct_IO+0x55/0x60
  [<ffffffff811e71a0>] ? ext3_get_blocks_handle+0xe00/0xe00
  [<ffffffff811e91c8>] ext3_direct_IO+0xf8/0x390
  [<ffffffff811e71a0>] ? ext3_get_blocks_handle+0xe00/0xe00
  [<ffffffff81004b60>] ? xen_mc_flush+0xb0/0x1b0
  [<ffffffff81104027>] generic_file_aio_read+0x737/0x780
  [<ffffffff813bedeb>] ? gnttab_map_refs+0x15b/0x1e0
  [<ffffffff811038f0>] ? find_get_pages+0x150/0x150
  [<ffffffff8119736c>] aio_rw_vect_retry+0x7c/0x1d0
  [<ffffffff811972f0>] ? lookup_ioctx+0x90/0x90
  [<ffffffff81198856>] aio_run_iocb+0x66/0x1a0
  [<ffffffff811998b8>] do_io_submit+0x708/0xb90
  [<ffffffff81199d50>] sys_io_submit+0x10/0x20
  [<ffffffff81a18d69>] system_call_fastpath+0x16/0x1b

The explanation is in the comment within the code: We need to do this because the pages shared by the frontend (xen-blkfront) can be already locked (lock_page, called by do_read_cache_page); when the userspace backend tries to use them with direct_IO, mfn_to_pfn returns the pfn of the frontend, so do_blockdev_direct_IO is going to try to lock the same pages again, resulting in a deadlock.

A simplified call graph looks like this:

  pygrub                          QEMU
  -----------------------------------------------
  do_read_cache_page              io_submit
   |                               |
   lock_page                       ext3_direct_IO
                                    |
                                    bio_add_page
                                     |
                                     lock_page

Internally the xen-blkback uses m2p_add_override to swizzle (temporarily) a 'struct page' to have a different MFN (so that it can point to another guest). It also can easily find out whether another pfn corresponding to the mfn exists in the m2p, and can set the FOREIGN bit in the p2m, making sure that mfn_to_pfn returns the pfn of the backend. This allows the backend to perform direct_IO on these pages, but as a side effect prevents the frontend from using get_user_pages_fast on them while they are being shared with the backend.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-