- 17 Jul, 2014 23 commits
-
-
Theodore Ts'o authored
commit 61c219f5 upstream. The first time that we allocate from an uninitialized inode allocation bitmap, if the block allocation bitmap is also uninitialized, we need to get write access to the block group descriptor before we start modifying the block group descriptor flags and updating the free block count, etc. Otherwise, there is the potential of a bad journal checksum (if journal checksums are enabled), and of the file system becoming inconsistent if we crash at exactly the wrong time. Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Joe Thornber authored
commit 10f1d5d1 upstream. There's a race condition between the atomic_dec_and_test(&io->count) in dec_count() and the waking of the sync_io() thread. If the thread is spuriously woken immediately after the decrement it may exit, making the on stack io struct invalid, yet the dec_count could still be using it. Fix this race by using a completion in sync_io() and dec_count(). Reported-by: Minfei Huang <huangminfei@ucloud.cn> Signed-off-by: Joe Thornber <thornber@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Acked-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
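Below is a minimal userspace sketch of the pattern the fix adopts, not the dm-io code itself: a "completion" built from a pthread mutex, condition variable and done flag, so the waiter cannot return (and invalidate the on-stack io struct) on a spurious wakeup; it only returns once the final decrementer has signalled. All names, types and counts here are illustrative.

```c
#include <pthread.h>
#include <stdio.h>

struct completion {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int             done;
};

struct io {                          /* lives on the waiter's stack */
    int                count;        /* outstanding sub-requests */
    struct completion *wait;
};

static void complete(struct completion *c)
{
    pthread_mutex_lock(&c->lock);
    c->done = 1;
    pthread_cond_signal(&c->cond);
    pthread_mutex_unlock(&c->lock);
}

static void wait_for_completion(struct completion *c)
{
    pthread_mutex_lock(&c->lock);
    while (!c->done)                 /* tolerate spurious wakeups */
        pthread_cond_wait(&c->cond, &c->lock);
    pthread_mutex_unlock(&c->lock);
}

static void dec_count(struct io *io)
{
    if (__atomic_sub_fetch(&io->count, 1, __ATOMIC_ACQ_REL) == 0)
        complete(io->wait);          /* waiter can only return after this */
}

static void *worker(void *arg)
{
    dec_count(arg);
    return NULL;
}

int main(void)
{
    struct completion done = { PTHREAD_MUTEX_INITIALIZER,
                               PTHREAD_COND_INITIALIZER, 0 };
    struct io io = { .count = 1, .wait = &done };
    pthread_t t;

    pthread_create(&t, NULL, worker, &io);
    wait_for_completion(&done);      /* io stays valid at least until here */
    pthread_join(t, NULL);
    printf("all sub-requests finished\n");
    return 0;
}
```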
-
K. Y. Srinivasan authored
commit affb1aff upstream. Starting with Win8, we have implemented several optimizations to improve the scalability and performance of the VMBUS transport between the Host and the Guest. Some of the non-performance critical services cannot leverage these optimizations since they only read and process one message at a time. Make adjustments to the callback dispatch code to account for the way non-performance critical drivers handle reading of the channel. Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Thomas Gleixner authored
commit 15ebb052 upstream. The control register is at offset 0x10, not 0x0. This has been broken since commit 5df33a62 (SPEAr: Switch to common clock framework). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Mike Turquette <mturquette@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Colin Cross authored
commit fa2ec3ea upstream. include/linux/sched.h implements TASK_SIZE_OF as TASK_SIZE if it is not set by the architecture headers. TASK_SIZE uses the current task to determine the size of the virtual address space. On a 64-bit kernel this will cause reading /proc/pid/pagemap of a 64-bit process from a 32-bit process to return EOF when it reads past 0xffffffff. Implement TASK_SIZE_OF exactly the same as TASK_SIZE with test_tsk_thread_flag instead of test_thread_flag. Signed-off-by: Colin Cross <ccross@android.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
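A toy C sketch of the distinction (the flag name, sizes and task structure are invented here, not the arm64 headers): the per-task macro has to test the flag of the task being inspected rather than the flag of the caller.

```c
#include <stdio.h>

#define TIF_32BIT     (1u << 0)
#define TASK_SIZE_32  0x100000000ULL        /* illustrative values */
#define TASK_SIZE_64  0x8000000000ULL

struct task { unsigned int flags; };

static struct task *current;                /* the calling task */

/* Uses the caller's flag -- wrong when inspecting another task. */
#define TASK_SIZE \
    ((current->flags & TIF_32BIT) ? TASK_SIZE_32 : TASK_SIZE_64)

/* Uses the flag of the task being looked at -- the shape of the fix. */
#define TASK_SIZE_OF(tsk) \
    (((tsk)->flags & TIF_32BIT) ? TASK_SIZE_32 : TASK_SIZE_64)

int main(void)
{
    struct task target64 = { .flags = 0 };          /* 64-bit target task */
    struct task reader32 = { .flags = TIF_32BIT };  /* 32-bit reader */

    current = &reader32;    /* a 32-bit process reading the 64-bit target */
    printf("TASK_SIZE            = %#llx (reader's own limit)\n", TASK_SIZE);
    printf("TASK_SIZE_OF(target) = %#llx (target's limit)\n",
           TASK_SIZE_OF(&target64));
    return 0;
}
```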
-
Jussi Kivilinna authored
commit cfe82d4f upstream. The byte-to-bit-count computation is only partly converted to big-endian and mixes in CPU-endian values. The problem was noticed by sparse with the warning: CHECK arch/x86/crypto/sha512_ssse3_glue.c arch/x86/crypto/sha512_ssse3_glue.c:144:19: warning: restricted __be64 degrades to integer arch/x86/crypto/sha512_ssse3_glue.c:144:17: warning: incorrect type in assignment (different base types) arch/x86/crypto/sha512_ssse3_glue.c:144:17: expected restricted __be64 <noident> arch/x86/crypto/sha512_ssse3_glue.c:144:17: got unsigned long long Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi> Acked-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
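A small userspace sketch of the safe pattern, using glibc's htobe64() as a stand-in for the kernel's cpu_to_be64(): do the byte-to-bit arithmetic entirely in CPU endianness and convert only once at the end, instead of mixing converted and unconverted values.

```c
#include <endian.h>   /* htobe64(): stand-in for the kernel's cpu_to_be64() */
#include <stdint.h>
#include <stdio.h>

/* Build the SHA-512 length field: total message bits as big-endian 64-bit. */
static uint64_t length_field(uint64_t total_bytes)
{
    uint64_t bits = total_bytes << 3;   /* arithmetic in CPU endianness ... */
    return htobe64(bits);               /* ... single conversion at the end */
}

int main(void)
{
    uint64_t be_bits = length_field(3); /* e.g. "abc" -> 24 bits */
    printf("big-endian length field: %016llx\n",
           (unsigned long long)be_bits);
    return 0;
}
```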
-
Prabhakar Lad authored
commit 5a90af67 upstream. Commit 8a7b1227 (cpufreq: davinci: move cpufreq driver to drivers/cpufreq) added a dependency only on CONFIG_ARCH_DAVINCI_DA850, whereas the davinci_cpufreq_init() call is used by all davinci platforms. This patch fixes the following build error: arch/arm/mach-davinci/built-in.o: In function `davinci_init_late': :(.init.text+0x928): undefined reference to `davinci_cpufreq_init' make: *** [vmlinux] Error 1 Fixes: 8a7b1227 (cpufreq: davinci: move cpufreq driver to drivers/cpufreq) Signed-off-by: Lad, Prabhakar <prabhakar.csengg@gmail.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Joel Stanley authored
commit b50a6c58 upstream. On POWER8 when switching to a KVM guest we set bits in MMCR2 to freeze the PMU counters. Aside from on boot they are then never reset, resulting in stuck perf counters for any user in the guest or host. We now set MMCR2 to 0 whenever enabling the PMU, which provides a sane state for perf to use the PMU counters under either the guest or the host. This was manifesting as a bug with ppc64_cpu --frequency: $ sudo ppc64_cpu --frequency WARNING: couldn't run on cpu 0 WARNING: couldn't run on cpu 8 ... WARNING: couldn't run on cpu 144 WARNING: couldn't run on cpu 152 min: 18446744073.710 GHz (cpu -1) max: 0.000 GHz (cpu -1) avg: 0.000 GHz The command uses a perf counter to measure CPU cycles over a fixed amount of time, in order to approximate the frequency of the machine. The counters were returning zero once a guest was started, regardless of whether it was still running or had been shut down. By dumping the value of MMCR2, it was observed that once a guest is running MMCR2 is set to 1s - which stops counters from running: $ sudo sh -c 'echo p > /proc/sysrq-trigger' CPU: 0 PMU registers, ppmu = POWER8 n_counters = 6 PMC1: 5b635e38 PMC2: 00000000 PMC3: 00000000 PMC4: 00000000 PMC5: 1bf5a646 PMC6: 5793d378 PMC7: deadbeef PMC8: deadbeef MMCR0: 0000000080000000 MMCR1: 000000001e000000 MMCRA: 0000040000000000 MMCR2: fffffffffffffc00 EBBHR: 0000000000000000 EBBRR: 0000000000000000 BESCR: 0000000000000000 SIAR: 00000000000a51cc SDAR: c00000000fc40000 SIER: 0000000001000000 This is done unconditionally in book3s_hv_interrupts.S upon entering the guest, and the original value is only saved/restored if the host has indicated it was using the PMU. This is okay, however the user of the PMU needs to ensure that it is in a defined state when it starts using it. Fixes: e05b9b9e ("powerpc/perf: Power8 PMU support") Signed-off-by: Joel Stanley <joel@jms.id.au> Acked-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Joel Stanley authored
commit 4d9690dd upstream. Instead of separate bits for every POWER8 PMU feature, have a single one for v2.07 of the architecture. This saves us adding a MMCR2 define for a future patch. Signed-off-by: Joel Stanley <joel@jms.id.au> Acked-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Anton Blanchard authored
commit f5602941 upstream. We are seeing a lot of PMU warnings on POWER8: Can't find PMC that caused IRQ Looking closer, the active PMC is 0 at this point and we took a PMU exception on the transition from negative to 0. Some versions of POWER8 have an issue where they edge-detect rather than level-detect PMC overflows. A number of places program the PMC with (0x80000000 - period_left), where period_left can be negative. We can either fix all of these or just ensure that period_left is always >= 1. This patch takes the second option. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
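A minimal sketch of the second option with invented helper names: clamp the remaining period to at least 1 before computing the value programmed into the PMC, so the counter is never armed at or past the overflow point.

```c
#include <stdint.h>
#include <stdio.h>

/* Compute the PMC start value so it overflows after 'left' more events.
 * The fix: never let 'left' be zero or negative, otherwise the programmed
 * value is >= 0x80000000 and the edge-detecting parts miss the overflow. */
static uint32_t pmc_initial_value(int64_t left)
{
    if (left < 1)
        left = 1;
    return 0x80000000u - (uint32_t)left;
}

int main(void)
{
    printf("%#x\n", (unsigned int)pmc_initial_value(-5));  /* clamped: 0x7fffffff */
    printf("%#x\n", (unsigned int)pmc_initial_value(100)); /* 0x7fffff9c */
    return 0;
}
```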
-
Andy Whitcroft authored
commit 867f9d46 upstream. The recently merged change (in v3.14-rc6) to ACPI resource detection (below) causes all zero length ACPI resources to be elided from the table: commit b355cee8 Author: Zhang Rui <rui.zhang@intel.com> Date: Thu Feb 27 11:37:15 2014 +0800 ACPI / resources: ignore invalid ACPI device resources This change has caused a regression in (at least) serial port detection for a number of machines (see LP#1313981 [1]). These seem to represent their IO regions (presumably incorrectly) as a zero length region. Reverting the above commit restores these serial devices. Only elide zero length resources which lie at address 0. Fixes: b355cee8 (ACPI / resources: ignore invalid ACPI device resources) Signed-off-by: Andy Whitcroft <apw@canonical.com> Acked-by: Zhang Rui <rui.zhang@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
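A sketch of the narrowed check described above, with an invented structure standing in for the ACPI resource: a zero-length resource is only discarded when it also sits at address 0, so the odd-but-real zero-length serial port regions survive.

```c
#include <stdbool.h>
#include <stdio.h>

struct io_resource { unsigned long start; unsigned long len; };

/* Before the regression fix every zero-length resource was dropped;
 * now only those that are zero-length *and* start at address 0 are. */
static bool resource_is_invalid(const struct io_resource *r)
{
    return r->len == 0 && r->start == 0;
}

int main(void)
{
    struct io_resource serial = { .start = 0x3f8, .len = 0 };  /* kept */
    struct io_resource bogus  = { .start = 0,     .len = 0 };  /* dropped */

    printf("serial dropped: %d\n", resource_is_invalid(&serial));
    printf("bogus  dropped: %d\n", resource_is_invalid(&bogus));
    return 0;
}
```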
-
Axel Lin authored
commit c024044d upstream. The module test script for the adm1021 driver exposes a cache problem when writing temperature limits. temp_min and temp_max are expected to be stored in milli-degrees C but are stored in degrees C. Reported-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Axel Lin <axel.lin@ingics.com> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Axel Lin authored
commit 1035a9e3 upstream. Writing to fanX_div does not clear the cache. As a result, reading from fanX_div may return the old value for up to two seconds after writing a new value. This patch ensures the fan_div cache is updated in set_fan_div(). Reported-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Axel Lin <axel.lin@ingics.com> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Guenter Roeck authored
commit 145e74a4 upstream. The upper limit for write operations to temperature limit registers was clamped to a fractional value. However, limit registers do not support fractional values. As a result, upper limits of 127.5 degrees C or higher resulted in a rounded limit of 128 degrees C. Since limit registers are signed, this was stored as -128 degrees C. Clamp limits to (-55, +127) degrees C to solve the problem. Values written to auto_temp[12]_min and auto_temp[12]_max were not clamped at all, but masked. As a result, out-of-range writes resulted in a more or less arbitrary limit. Clamp those attributes to (0, 127) degrees C for more predictable results. Cc: Axel Lin <axel.lin@ingics.com> Reviewed-by: Jean Delvare <jdelvare@suse.de> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
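A standalone C illustration of the failure and the clamp, as an analogy rather than the driver code: rounding 127.5 degrees C to a whole degree yields 128, which a signed 8-bit limit register stores as -128, while clamping the millidegree input to (-55000, +127000) first keeps the stored value sane.

```c
#include <stdint.h>
#include <stdio.h>

/* Analogue of the kernel's clamp_val(). */
static long clamp_val(long v, long lo, long hi)
{
    return v < lo ? lo : v > hi ? hi : v;
}

/* Millidegrees C -> signed 8-bit register value, clamped first. */
static int8_t temp_limit_reg(long mC)
{
    return (int8_t)(clamp_val(mC, -55000, 127000) / 1000);
}

int main(void)
{
    /* Old behaviour: round 127500 mC to the nearest degree -> 128,
     * which wraps to -128 in the signed 8-bit register. */
    int8_t wrapped = (int8_t)((127500 + 500) / 1000);

    printf("unclamped 127.5C -> %d\n", wrapped);                 /* -128 */
    printf("clamped   127.5C -> %d\n", temp_limit_reg(127500));  /* 127  */
    return 0;
}
```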
-
Axel Lin authored
commit df86754b upstream. temp2_input should not be writable, fix it. Reported-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Axel Lin <axel.lin@ingics.com> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Yasuaki Ishimatsu authored
commit 5a6024f1 upstream. When hot-adding and onlining a CPU, a kernel panic occurs, showing the following call trace. BUG: unable to handle kernel paging request at 0000000000001d08 IP: [<ffffffff8114acfd>] __alloc_pages_nodemask+0x9d/0xb10 PGD 0 Oops: 0000 [#1] SMP ... Call Trace: [<ffffffff812b8745>] ? cpumask_next_and+0x35/0x50 [<ffffffff810a3283>] ? find_busiest_group+0x113/0x8f0 [<ffffffff81193bc9>] ? deactivate_slab+0x349/0x3c0 [<ffffffff811926f1>] new_slab+0x91/0x300 [<ffffffff815de95a>] __slab_alloc+0x2bb/0x482 [<ffffffff8105bc1c>] ? copy_process.part.25+0xfc/0x14c0 [<ffffffff810a3c78>] ? load_balance+0x218/0x890 [<ffffffff8101a679>] ? sched_clock+0x9/0x10 [<ffffffff81105ba9>] ? trace_clock_local+0x9/0x10 [<ffffffff81193d1c>] kmem_cache_alloc_node+0x8c/0x200 [<ffffffff8105bc1c>] copy_process.part.25+0xfc/0x14c0 [<ffffffff81114d0d>] ? trace_buffer_unlock_commit+0x4d/0x60 [<ffffffff81085a80>] ? kthread_create_on_node+0x140/0x140 [<ffffffff8105d0ec>] do_fork+0xbc/0x360 [<ffffffff8105d3b6>] kernel_thread+0x26/0x30 [<ffffffff81086652>] kthreadd+0x2c2/0x300 [<ffffffff81086390>] ? kthread_create_on_cpu+0x60/0x60 [<ffffffff815f20ec>] ret_from_fork+0x7c/0xb0 [<ffffffff81086390>] ? kthread_create_on_cpu+0x60/0x60 In my investigation, I found the root cause is wq_numa_possible_cpumask. All entries of wq_numa_possible_cpumask are allocated by alloc_cpumask_var_node(), and these entries are used without being initialized, so they hold wrong values. When hot-adding and onlining a CPU, wq_update_unbound_numa() is called. wq_update_unbound_numa() calls alloc_unbound_pwq(). And alloc_unbound_pwq() calls get_unbound_pool(). In get_unbound_pool(), worker_pool->node is set as follows: 3592 /* if cpumask is contained inside a NUMA node, we belong to that node */ 3593 if (wq_numa_enabled) { 3594 for_each_node(node) { 3595 if (cpumask_subset(pool->attrs->cpumask, 3596 wq_numa_possible_cpumask[node])) { 3597 pool->node = node; 3598 break; 3599 } 3600 } 3601 } But wq_numa_possible_cpumask[node] does not have the correct cpumask. So, a wrong node is selected. As a result, the kernel panic occurs. With this patch, all entries of wq_numa_possible_cpumask are allocated by zalloc_cpumask_var_node() so they are initialized, and the panic disappears. Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com> Signed-off-by: Tejun Heo <tj@kernel.org> Fixes: bce90380 ("workqueue: add wq_numa_tbl_len and wq_numa_possible_cpumask[]") Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
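A userspace analogy of why the zeroing matters, with malloc()/calloc() standing in for alloc_cpumask_var_node()/zalloc_cpumask_var_node(): a mask that starts out all-zero has a well-defined "no CPUs" meaning, while an uninitialized one makes any later subset test garbage.

```c
#include <stdio.h>
#include <stdlib.h>

#define NODE_MASK_BYTES 16   /* illustrative size of one per-node cpumask */

/* alloc_cpumask_var_node() analogue: contents are indeterminate. */
static unsigned char *alloc_mask(void)
{
    return malloc(NODE_MASK_BYTES);
}

/* zalloc_cpumask_var_node() analogue: contents start out all-zero. */
static unsigned char *zalloc_mask(void)
{
    return calloc(1, NODE_MASK_BYTES);
}

int main(void)
{
    unsigned char *m = zalloc_mask();

    if (!m)
        return 1;
    /* A subset test against an uninitialized mask would read garbage;
     * against a zeroed mask it is simply "empty", which is safe. */
    printf("first byte after zalloc: %u\n", m[0]);
    free(m);
    free(alloc_mask());   /* shown only for contrast; contents undefined */
    return 0;
}
```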
-
Gu Zheng authored
commit 391acf97 upstream. When running with the kernel (3.15-rc7+), the following bug occurs: [ 9969.258987] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:586 [ 9969.359906] in_atomic(): 1, irqs_disabled(): 0, pid: 160655, name: python [ 9969.441175] INFO: lockdep is turned off. [ 9969.488184] CPU: 26 PID: 160655 Comm: python Tainted: G A 3.15.0-rc7+ #85 [ 9969.581032] Hardware name: FUJITSU-SV PRIMEQUEST 1800E/SB, BIOS PRIMEQUEST 1000 Series BIOS Version 1.39 11/16/2012 [ 9969.706052] ffffffff81a20e60 ffff8803e941fbd0 ffffffff8162f523 ffff8803e941fd18 [ 9969.795323] ffff8803e941fbe0 ffffffff8109995a ffff8803e941fc58 ffffffff81633e6c [ 9969.884710] ffffffff811ba5dc ffff880405c6b480 ffff88041fdd90a0 0000000000002000 [ 9969.974071] Call Trace: [ 9970.003403] [<ffffffff8162f523>] dump_stack+0x4d/0x66 [ 9970.065074] [<ffffffff8109995a>] __might_sleep+0xfa/0x130 [ 9970.130743] [<ffffffff81633e6c>] mutex_lock_nested+0x3c/0x4f0 [ 9970.200638] [<ffffffff811ba5dc>] ? kmem_cache_alloc+0x1bc/0x210 [ 9970.272610] [<ffffffff81105807>] cpuset_mems_allowed+0x27/0x140 [ 9970.344584] [<ffffffff811b1303>] ? __mpol_dup+0x63/0x150 [ 9970.409282] [<ffffffff811b1385>] __mpol_dup+0xe5/0x150 [ 9970.471897] [<ffffffff811b1303>] ? __mpol_dup+0x63/0x150 [ 9970.536585] [<ffffffff81068c86>] ? copy_process.part.23+0x606/0x1d40 [ 9970.613763] [<ffffffff810bf28d>] ? trace_hardirqs_on+0xd/0x10 [ 9970.683660] [<ffffffff810ddddf>] ? monotonic_to_bootbased+0x2f/0x50 [ 9970.759795] [<ffffffff81068cf0>] copy_process.part.23+0x670/0x1d40 [ 9970.834885] [<ffffffff8106a598>] do_fork+0xd8/0x380 [ 9970.894375] [<ffffffff81110e4c>] ? __audit_syscall_entry+0x9c/0xf0 [ 9970.969470] [<ffffffff8106a8c6>] SyS_clone+0x16/0x20 [ 9971.030011] [<ffffffff81642009>] stub_clone+0x69/0x90 [ 9971.091573] [<ffffffff81641c29>] ? system_call_fastpath+0x16/0x1b The cause is that cpuset_mems_allowed() tries to take mutex_lock(&callback_mutex) under rcu_read_lock (which was held in __mpol_dup()). And in cpuset_mems_allowed(), the access to cpuset is under rcu_read_lock, so in __mpol_dup we can reduce the rcu_read_lock protection region to protect the access to cpuset only in current_cpuset_is_being_rebound(), so that we can avoid this bug. This patch is a temporary solution that just addresses the bug mentioned above; it cannot fix the long-standing issue about cpuset.mems rebinding on fork(): "When the forker's task_struct is duplicated (which includes ->mems_allowed) and it races with an update to cpuset_being_rebound in update_tasks_nodemask() then the task's mems_allowed doesn't get updated. And the child task's mems_allowed can be wrong if the cpuset's nodemask changes before the child has been added to the cgroup's tasklist." Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com> Acked-by: Li Zefan <lizefan@huawei.com> Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Maxime Bizon authored
commit bddbceb6 upstream. Uevents are suppressed during attributes registration, but never restored, so kobject_uevent() does nothing. Signed-off-by: Maxime Bizon <mbizon@freebox.fr> Signed-off-by: Tejun Heo <tj@kernel.org> Fixes: 226223ab Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Helge Deller authored
commit eadcc720 upstream. Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Michal Sojka authored
commit d8279a40 upstream. This adds support for Infineon TriBoard TC1798 [1]. Only interface 1 is used as serial line (see [2], Figure 8-6). [1] http://www.infineon.com/cms/de/product/microcontroller/development-tools-software-and-kits/tricore-tm-development-tools-software-and-kits/starterkits-and-evaluation-boards/starter-kit-tc1798/channel.html?channel=db3a304333b8a7ca0133cfa3d73e4268 [2] http://www.infineon.com/dgdl/TriBoardManual-TC1798-V10.pdf?folderId=db3a304412b407950112b409ae7c0343&fileId=db3a304333b8a7ca0133cfae99fe426a Signed-off-by: Michal Sojka <sojkam1@fel.cvut.cz> Cc: Johan Hovold <johan@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Bert Vermeulen authored
commit 5a7fbe7e upstream. This patch adds PID 0x0003 to the VID 0x128d (Testo). At least the Testo 435-4 uses this, likely other gear as well. Signed-off-by: Bert Vermeulen <bert@biot.com> Cc: Johan Hovold <johan@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Andras Kovacs authored
commit b9326057 upstream. Corsair USB Dongles are shipped with Corsair AXi series PSUs. These are cp210x serial usb devices, so make the driver detect them. I have a program that can get information from these PSUs. Tested with 2 different dongles shipped with Corsair AX860i and AX1200i units. Signed-off-by: Andras Kovacs <andras@sth.sze.hu> Signed-off-by: Johan Hovold <johan@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Bernd Wachter authored
commit 3d28bd84 upstream. Add the ID of the Telewell 4G v2 hardware to the option driver to get the legacy serial interface working. Signed-off-by: Bernd Wachter <bernd.wachter@jolla.com> Signed-off-by: Johan Hovold <johan@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 09 Jul, 2014 17 commits
-
-
Greg Kroah-Hartman authored
-
Hugh Dickins authored
commit d05f0cdc upstream. In v2.6.34 commit 9d8cebd4 ("mm: fix mbind vma merge problem") introduced vma merging to mbind(), but it should have also changed the convention of passing start vma from queue_pages_range() (formerly check_range()) to new_vma_page(): vma merging may have already freed that structure, resulting in BUG at mm/mempolicy.c:1738 and probably worse crashes. Fixes: 9d8cebd4 ("mm: fix mbind vma merge problem") Reported-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Tested-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Signed-off-by: Hugh Dickins <hughd@google.com> Acked-by: Christoph Lameter <cl@linux.com> Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Minchan Kim <minchan.kim@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Naoya Horiguchi authored
commit 4a705fef upstream. There's a race between fork() and hugepage migration, as a result we try to "dereference" a swap entry as a normal pte, causing kernel panic. The cause of the problem is that copy_hugetlb_page_range() can't handle "swap entry" family (migration entry and hwpoisoned entry) so let's fix it. [akpm@linux-foundation.org: coding-style fixes] Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Acked-by: Hugh Dickins <hughd@google.com> Cc: Christoph Lameter <cl@linux.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mikulas Patocka authored
commit fd1232b2 upstream. This patch fixes I/O errors with the sym53c8xx_2 driver when the disk returns QUEUE FULL status. When the controller encounters an error (including QUEUE FULL or BUSY status), it aborts all not yet submitted requests in the function sym_dequeue_from_squeue. This function aborts them with DID_SOFT_ERROR. If the disk has a full tag queue, the request that caused the overflow is aborted with QUEUE FULL status (and the scsi midlayer properly retries it until it is accepted by the disk), but the sym53c8xx_2 driver aborts the following requests with DID_SOFT_ERROR --- for them, the midlayer does just a few retries and then signals the error up to sd. The result is that a disk returning QUEUE FULL causes request failures. The error was reproduced on a 53c895 with a COMPAQ BD03685A24 disk (rebranded ST336607LC) with a command queue of 48 or 64 tags. The disk has 64 tags, but under some access patterns it returns QUEUE FULL when there are fewer than 64 pending tags. The SCSI specification allows returning QUEUE FULL anytime and it is up to the host to retry. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Cc: Matthew Wilcox <matthew@wil.cx> Cc: James Bottomley <JBottomley@Parallels.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Zhichuang SUN authored
commit fbc6c4a1 upstream. Function unifb_mmap calls functions which are defined in linux/mm.h and asm/pgtable.h. The related error (for unicore32 with unicore32_defconfig): CC drivers/video/fbdev/fb-puv3.o drivers/video/fbdev/fb-puv3.c: In function 'unifb_mmap': drivers/video/fbdev/fb-puv3.c:646: error: implicit declaration of function 'vm_iomap_memory' drivers/video/fbdev/fb-puv3.c:646: error: implicit declaration of function 'pgprot_noncached' Signed-off-by: Zhichuang Sun <sunzc522@gmail.com> Cc: Jean-Christophe Plagniol-Villard <plagnioj@jcrosoft.com> Cc: Tomi Valkeinen <tomi.valkeinen@ti.com> Cc: Jingoo Han <jg1.han@samsung.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Joe Perches <joe@perches.com> Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Cc: linux-fbdev@vger.kernel.org Acked-by: Xuetao Guan <gxt@mprc.pku.edu.cn> Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com> Cc: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Chen Gang authored
commit 1ff38c56 upstream. Need to include "asm/pgtable.h" to include "asm-generic/pgtable-nopmd.h", so that 'pmd_t' is defined. The related error with allmodconfig: CC arch/unicore32/mm/alignment.o In file included from arch/unicore32/mm/alignment.c:24: arch/unicore32/include/asm/tlbflush.h:135: error: expected ')' before '*' token arch/unicore32/include/asm/tlbflush.h:154: error: expected ')' before '*' token In file included from arch/unicore32/mm/alignment.c:27: arch/unicore32/mm/mm.h:15: error: expected '=', ',', ';', 'asm' or '__attribute__' before '*' token arch/unicore32/mm/mm.h:20: error: expected '=', ',', ';', 'asm' or '__attribute__' before '*' token arch/unicore32/mm/mm.h:25: error: expected '=', ',', ';', 'asm' or '__attribute__' before '*' token make[1]: *** [arch/unicore32/mm/alignment.o] Error 1 make: *** [arch/unicore32/mm] Error 2 Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com> Acked-by: Xuetao Guan <gxt@mprc.pku.edu.cn> Signed-off-by: Xuetao Guan <gxt@mprc.pku.edu.cn> Cc: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Sander Eikelenboom authored
commit b7a77235 upstream. This (widely used) construction: if (printk_ratelimit()) dev_dbg() causes the ratelimiting to spam the kernel log with the "callbacks suppressed" message below, even while the dev_dbg it is supposed to rate limit wouldn't print anything because DEBUG is not defined for this device. [ 533.803964] retire_playback_urb: 852 callbacks suppressed [ 538.807930] retire_playback_urb: 852 callbacks suppressed [ 543.811897] retire_playback_urb: 852 callbacks suppressed [ 548.815745] retire_playback_urb: 852 callbacks suppressed [ 553.819826] retire_playback_urb: 852 callbacks suppressed So use dev_dbg_ratelimited() instead of this construction. Signed-off-by: Sander Eikelenboom <linux@eikelenboom.it> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
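A self-contained sketch of why the construction spams, with a crude stand-in for printk_ratelimit(): the shared limiter burns its budget and reports suppressed callbacks even when the debug print compiles to nothing, whereas a *_ratelimited()-style form only consults the limiter when the message is actually enabled. The macro and function names here are invented for illustration.

```c
#include <stdio.h>
#include <time.h>

/* Crude printk_ratelimit() stand-in: at most one message per second, and
 * it reports how many calls it swallowed -- even if the caller then ends
 * up printing nothing at all. */
static int ratelimit(void)
{
    static time_t last;
    static int suppressed;
    time_t now = time(NULL);

    if (now == last) {
        suppressed++;
        return 0;
    }
    if (suppressed)
        fprintf(stderr, "%d callbacks suppressed\n", suppressed);
    suppressed = 0;
    last = now;
    return 1;
}

#ifdef DEBUG
#define dbg(...) fprintf(stderr, __VA_ARGS__)
#else
#define dbg(...) do { } while (0)   /* compiled out, like dev_dbg() */
#endif

/* Buggy pattern: the limiter runs (and complains) even though dbg()
 * expands to nothing when DEBUG is not defined. */
static void buggy(void)
{
    if (ratelimit())
        dbg("frame dropped\n");
}

/* dev_dbg_ratelimited()-style pattern: when the message is compiled out,
 * the limiter is never consulted, so nothing is logged at all. */
static void fixed(void)
{
#ifdef DEBUG
    if (ratelimit())
        dbg("frame dropped\n");
#endif
}

int main(void)
{
    for (int i = 0; i < 100000; i++)
        buggy();   /* may log "callbacks suppressed" even without DEBUG */
    for (int i = 0; i < 100000; i++)
        fixed();   /* silent unless built with -DDEBUG */
    return 0;
}
```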
-
Tim Gardner authored
commit a5065eb6 upstream. BugLink: http://bugs.launchpad.net/bugs/1305133 Malfunctioning or slow devices can cause a flood of dmesg SPAM. I've ignored checkpatch.pl complaints about the use of printk_ratelimit() in favour of prior art in sound/usb/pcm.c. WARNING: Prefer printk_ratelimited or pr_<level>_ratelimited to printk_ratelimit + if (printk_ratelimit() && Cc: Jaroslav Kysela <perex@perex.cz> Cc: Takashi Iwai <tiwai@suse.de> Cc: Eldad Zack <eldad@fogrefinery.com> Cc: Daniel Mack <zonque@gmail.com> Cc: Clemens Ladisch <clemens@ladisch.de> Signed-off-by: Tim Gardner <tim.gardner@canonical.com> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
James Hogan authored
commit 6979f8d2 upstream. Commit c49436b6 (serial: 8250_dw: Improve unwritable LCR workaround) caused a regression. It added a check that the LCR was written properly to detect and workaround the busy quirk, but the behaviour of bit 5 (UART_LCR_SPAR) differs between IP versions 3.00a and 3.14c per the docs. On older versions this caused the check to fail and it would repeatedly force idle and rewrite the LCR register, causing delays and preventing any input from serial being received. This is fixed by masking out UART_LCR_SPAR before making the comparison. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Jiri Slaby <jslaby@suse.cz> Cc: Tim Kryger <tim.kryger@linaro.org> Cc: Ezequiel Garcia <ezequiel.garcia@free-electrons.com> Cc: Matt Porter <matt.porter@linaro.org> Cc: Markus Mayer <markus.mayer@linaro.org> Tested-by: Tim Kryger <tim.kryger@linaro.org> Tested-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com> Tested-by: Heikki Krogerus <heikki.krogerus@linux.intel.com> Cc: Wang Nan <wangnan0@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tim Kryger authored
commit c49436b6 upstream. When configured with UART_16550_COMPATIBLE=NO or in versions prior to the introduction of this option, the Designware UART will ignore writes to the LCR if the UART is busy. The current workaround saves a copy of the last written LCR and re-writes it in the ISR for a special interrupt that is raised when a write was ignored. Unfortunately, interrupts are typically disabled prior to performing a sequence of register writes that include the LCR so the point at which the retry occurs is too late. An example is serial8250_do_set_termios() where an ignored LCR write results in the baud divisor not being set and instead a garbage character is sent out the transmitter. Furthermore, since serial_port_out() offers no way to indicate failure, a serious effort must be made to ensure that the LCR is actually updated before returning back to the caller. This is difficult, however, as a UART that was busy during the first attempt is likely to still be busy when a subsequent attempt is made unless some extra action is taken. This updated workaround reads back the LCR after each write to confirm that the new value was accepted by the hardware. Should the hardware ignore a write, the TX/RX FIFOs are cleared and the receive buffer read before attempting to rewrite the LCR out of the hope that doing so will force the UART into an idle state. While this may seem unnecessarily aggressive, writes to the LCR are used to change the baud rate, parity, stop bit, or data length so the data that may be lost is likely not important. Admittedly, this is far from ideal but it seems to be the best that can be done given the hardware limitations. Lastly, the revised workaround doesn't touch the LCR in the ISR, so it avoids the possibility of a "serial8250: too much work for irq" lock up. This problem is rare in real situations but can be reproduced easily by wiring up two UARTs and running the following commands. # stty -F /dev/ttyS1 echo # stty -F /dev/ttyS2 echo # cat /dev/ttyS1 & [1] 375 # echo asdf > /dev/ttyS1 asdf [ 27.700000] serial8250: too much work for irq96 [ 27.700000] serial8250: too much work for irq96 [ 27.710000] serial8250: too much work for irq96 [ 27.710000] serial8250: too much work for irq96 [ 27.720000] serial8250: too much work for irq96 [ 27.720000] serial8250: too much work for irq96 [ 27.730000] serial8250: too much work for irq96 [ 27.730000] serial8250: too much work for irq96 [ 27.740000] serial8250: too much work for irq96 Signed-off-by: Tim Kryger <tim.kryger@linaro.org> Reviewed-by: Matt Porter <matt.porter@linaro.org> Reviewed-by: Markus Mayer <markus.mayer@linaro.org> Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com> [wangnan: backport to 3.10.43: - adjust context - remove unneeded local var] Signed-off-by: Wang Nan <wangnan0@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
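A hedged sketch of the write-verify-retry idea against a simulated UART; the register model, accessor names and retry limit are invented for illustration, not lifted from 8250_dw.c: write the LCR, read it back, and if the value did not stick, clear the FIFOs and drain the receiver to force the UART idle and try again, instead of fixing it up later from the ISR.

```c
#include <stdint.h>
#include <stdio.h>

#define UART_LCR 3                /* line control register offset */

/* Simulated UART for the sketch: it ignores LCR writes while "busy". */
static uint8_t regs[8];
static int     busy = 3;          /* ignore the first few writes */

static void uart_write(int reg, uint8_t val)
{
    if (reg == UART_LCR && busy) { busy--; return; }   /* write ignored */
    regs[reg] = val;
}

static uint8_t uart_read(int reg) { return regs[reg]; }

static void uart_clear_fifos_and_drain_rx(void)
{
    /* In the driver: reset the TX/RX FIFOs and read the RX buffer so the
     * UART goes idle.  Here the simulated UART just becomes less busy. */
}

/* The workaround: write, read back, and if the value did not stick, force
 * the UART idle and retry until the hardware accepts it. */
static int write_lcr_checked(uint8_t value)
{
    for (int tries = 0; tries < 1000; tries++) {
        uart_write(UART_LCR, value);
        if (uart_read(UART_LCR) == value)
            return 0;                      /* accepted */
        uart_clear_fifos_and_drain_rx();
    }
    return -1;                             /* never went idle */
}

int main(void)
{
    if (write_lcr_checked(0x83) == 0)
        printf("LCR now %#x\n", uart_read(UART_LCR));
    return 0;
}
```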
-
Tim Kryger authored
commit 33acbb82 upstream. When a serial port is configured for RTS/CTS flow control, serial core will disable the transmitter if it observes CTS is de-asserted. This is perfectly reasonable and appropriate when the UART lacks the ability to automatically perform CTS flow control. However, if the UART hardware can manage flow control automatically, it is important that software not get involved. When the DesignWare UART enables 16C750 style auto-RTS/CTS it stops generating interrupts for changes in CTS state so software mostly stays out of the way. However, it does report the true state of CTS in the MSR so software may notice it is de-asserted and respond by improperly disabling the transmitter. Once this happens the transmitter will be blocked forever. To avoid this situation, we simply lie to the 8250 and serial core by reporting that CTS is asserted whenever auto-RTS/CTS mode is enabled. Signed-off-by: Tim Kryger <tim.kryger@linaro.org> Reviewed-by: Matt Porter <matt.porter@linaro.org> Reviewed-by: Markus Mayer <markus.mayer@linaro.org> Cc: Wang Nan <wangnan0@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
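A tiny sketch of the reporting trick (UART_MSR_CTS is the standard 8250 bit; the helper itself is invented): while hardware flow control is active, the modem status handed up to the serial core always shows CTS asserted, so software never disables the transmitter behind the hardware's back.

```c
#include <stdint.h>
#include <stdio.h>

#define UART_MSR_CTS 0x10   /* CTS state bit in the modem status register */

/* Report modem status to the 8250/serial core. */
static uint8_t modem_status(uint8_t raw_msr, int auto_flow_ctrl)
{
    if (auto_flow_ctrl)
        return raw_msr | UART_MSR_CTS;  /* pretend CTS is always asserted */
    return raw_msr;
}

int main(void)
{
    uint8_t msr = 0x00;     /* CTS genuinely de-asserted on the wire */

    printf("reported MSR (autoflow on):  %#x\n", modem_status(msr, 1));
    printf("reported MSR (autoflow off): %#x\n", modem_status(msr, 0));
    return 0;
}
```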
-
Micky Ching authored
commit 5027251e upstream. a27fbf2f ("mmc: add ignorance case for CMD13 CRC error") left a cmd.flags case unhandled in the realtek pci host driver. This makes the MMC card fail to initialize; this patch handles the new cmd.flags condition so the MMC card can be used. Signed-off-by: Micky Ching <micky_ching@realsil.com.cn> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Chris Ball <chris@printf.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Thomas Gleixner authored
commit 4f436603 upstream. The ras3 block on spear320 claims to have 3 interrupts. In fact it has one and 6 reserved interrupts. Account the 6 reserved interrupts to this block so it has 7 interrupts total. That matches the datasheet and the device tree entries. Broken since commit 80515a5a (ARM: SPEAr3xx: shirq: simplify and move the shared irq multiplexor to DT). Testing is overrated.... Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20140619212712.872379208@linutronix.de Fixes: 80515a5a ('ARM: SPEAr3xx: shirq: simplify and move the shared irq multiplexor to DT') Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Jason Cooper <jason@lakedaemon.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
NeilBrown authored
commit 133d4527 upstream. When we write to a degraded array which has a bitmap, we make sure the relevant bit in the bitmap remains set when the write completes (so a 're-add' can quickly rebuild a temporarily-missing device). If, immediately after such a write starts, we incorporate a spare, commence recovery, and skip over the region where the write is happening (because the 'needs recovery' flag isn't set yet), then that write will not get to the new device. Once the recovery finishes the new device will be trusted, but will have incorrect data, leading to possible corruption. We cannot set the 'needs recovery' flag when we start the write as we do not know easily if the write will be "degraded" or not. That depends on details of the particular raid level and particular write request. This patch fixes a long-standing corruption issue and so is suitable for any -stable kernel. It applied correctly to 3.0 at least and will need minor editing for earlier kernels. Reported-by: Bill <billstuff2001@sbcglobal.net> Tested-by: Bill <billstuff2001@sbcglobal.net> Link: http://lkml.kernel.org/r/53A518BB.60709@sbcglobal.net Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Steven Rostedt (Red Hat) authored
commit 099ed151 upstream. Disabling reading and writing to the trace file should not be able to disable all function tracing callbacks. There are other users today (like kprobes and perf). Reading a trace file should not stop those from happening. Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Michal Nazarewicz authored
commit f35f7124 upstream. It appears that no one ever ran ffs-test on a big-endian machine, since it used CPU endianness for the fs_count and hs_count fields, which should be in little-endian format. Fix by wrapping the numbers in cpu_to_le32. Signed-off-by: Michal Nazarewicz <mina86@mina86.com> Signed-off-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
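A small userspace example of the same idea, using glibc's htole32() in place of the kernel's cpu_to_le32(); the struct below only mimics the two descriptor-header fields named above and is not the functionfs header itself.

```c
#include <endian.h>   /* htole32(): stand-in for cpu_to_le32() */
#include <stdint.h>
#include <stdio.h>

/* Counts must be little-endian on the wire regardless of the host CPU. */
struct descs_header {
    uint32_t fs_count;    /* stored little-endian */
    uint32_t hs_count;    /* stored little-endian */
};

int main(void)
{
    struct descs_header hdr = {
        .fs_count = htole32(3),   /* correct on big- and little-endian hosts */
        .hs_count = htole32(3),
    };
    const uint8_t *b = (const uint8_t *)&hdr.fs_count;

    printf("fs_count bytes on the wire: %02x %02x %02x %02x\n",
           b[0], b[1], b[2], b[3]);
    return 0;
}
```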
-
J. Bruce Fields authored
commit 76f47128 upstream. An NFS operation that creates a new symlink includes the symlink data, which is xdr-encoded as a length followed by the data plus 0 to 3 bytes of zero-padding as required to reach a 4-byte boundary. The vfs, on the other hand, wants null-terminated data. The simple way to handle this would be by copying the data into a newly allocated buffer with space for the final null. The current nfsd_symlink code tries to be more clever by skipping that step in the (likely) case where the byte following the string is already 0. But that assumes that the byte following the string is ours to look at. In fact, it might be the first byte of a page that we can't read, or of some object that another task might modify. Worse, the NFSv4 code tries to fix the problem by actually writing to that byte. In the NFSv2/v3 cases this actually appears to be safe: - nfs3svc_decode_symlinkargs explicitly null-terminates the data (after first checking its length and copying it to a new page). - NFSv2 limits symlinks to 1k. The buffer holding the rpc request is always at least a page, and the link data (and previous fields) have maximum lengths that prevent the request from reaching the end of a page. In the NFSv4 case the CREATE op is potentially just one part of a long compound so can end up on the end of a page if you're unlucky. The minimal fix here is to copy and null-terminate in the NFSv4 case. The nfsd_symlink() interface here seems too fragile, though. It should really either do the copy itself every time or just require a null-terminated string. Reported-by: Jeff Layton <jlayton@primarydata.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
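A minimal sketch of the copy-and-terminate approach the NFSv4 fix takes, with invented function names: the length-delimited data is copied into a private buffer and terminated there, so no byte beyond the received data is ever read, let alone written.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Copy an XDR-style (length, bytes) symlink target into a fresh buffer and
 * null-terminate it there, instead of peeking at the byte after the data. */
static char *copy_symlink_target(const char *data, size_t len)
{
    char *path = malloc(len + 1);

    if (!path)
        return NULL;
    memcpy(path, data, len);
    path[len] = '\0';            /* terminator lives in our own buffer */
    return path;
}

int main(void)
{
    const char wire[] = { '/', 't', 'm', 'p', '/', 'x' };  /* no '\0' */
    char *path = copy_symlink_target(wire, sizeof(wire));

    if (path) {
        printf("target: %s\n", path);
        free(path);
    }
    return 0;
}
```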
-