- 07 Aug, 2014 18 commits
-
-
Silesh C V authored
commit aed8adb7 upstream. Commit 079148b9 ("coredump: factor out the setting of PF_DUMPCORE") cleaned up the setting of PF_DUMPCORE by removing it from all the linux_binfmt->core_dump() handlers and moving it to zap_threads(). But this ended up clearing all the previously set flags, which causes issues during core generation when tsk->flags is checked again (e.g. for PF_USED_MATH to dump floating point registers). Fix this. Signed-off-by:
Silesh C V <svellattu@mvista.com> Acked-by:
Oleg Nesterov <oleg@redhat.com> Cc: Mandeep Singh Baines <msb@chromium.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
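The fix described above boils down to OR-ing PF_DUMPCORE into task->flags rather than overwriting the whole flags word. A minimal userspace sketch of that difference (the struct and the PF_* values below are stand-ins, not the kernel's definitions):

```c
#include <stdio.h>

/* Stand-in flag values; the real PF_* constants live in the kernel headers. */
#define PF_USED_MATH 0x0001
#define PF_DUMPCORE  0x0002

struct task { unsigned int flags; };

int main(void)
{
	struct task tsk = { .flags = PF_USED_MATH };

	/* Buggy zap_threads(): plain assignment clobbers PF_USED_MATH,
	 * so the FP registers are later skipped while writing the core. */
	struct task buggy = tsk;
	buggy.flags = PF_DUMPCORE;

	/* Fixed zap_threads(): OR keeps the previously set flags. */
	struct task fixed = tsk;
	fixed.flags |= PF_DUMPCORE;

	printf("buggy: PF_USED_MATH %s\n",
	       (buggy.flags & PF_USED_MATH) ? "kept" : "lost");
	printf("fixed: PF_USED_MATH %s\n",
	       (fixed.flags & PF_USED_MATH) ? "kept" : "lost");
	return 0;
}
```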
-
Christian König authored
commit e8c214d2 upstream. We must mask out the overflow bit as well, otherwise the wptr will never match the rptr again and the interrupt handler will loop forever. Signed-off-by:
Christian König <christian.koenig@amd.com> Signed-off-by:
Alex Deucher <alexander.deucher@amd.com> Reviewed-by:
Michel Dänzer <michel.daenzer@amd.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
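As a rough illustration of why the overflow bit must be cleared: if the status bit stays set in the value used as the write pointer, it can never equal the read pointer and the handler spins. A userspace sketch with made-up bit positions and ring size (not the actual radeon register layout):

```c
#include <stdio.h>
#include <stdint.h>

#define IH_RING_SIZE       0x1000      /* made-up ring size (power of two) */
#define WPTR_OVERFLOW_BIT  (1u << 31)  /* made-up position of the overflow flag */

/* Derive the usable write pointer from a raw register value. */
static uint32_t ih_get_wptr(uint32_t raw)
{
	if (raw & WPTR_OVERFLOW_BIT) {
		/* acknowledge the overflow ... and, crucially, mask the bit out */
		raw &= ~WPTR_OVERFLOW_BIT;
	}
	return raw & (IH_RING_SIZE - 1);
}

int main(void)
{
	uint32_t raw = WPTR_OVERFLOW_BIT | 0x40;  /* overflow flag + offset 0x40 */
	uint32_t rptr = 0x40;

	/* Without masking, raw never equals rptr and the IRQ handler loops forever. */
	printf("unmasked wptr 0x%08x == rptr 0x%08x ? %s\n",
	       raw, rptr, raw == rptr ? "yes" : "no");
	printf("masked   wptr 0x%08x == rptr 0x%08x ? %s\n",
	       ih_get_wptr(raw), rptr, ih_get_wptr(raw) == rptr ? "yes" : "no");
	return 0;
}
```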
-
Tejun Heo authored
commit 1a112d10 upstream. 1871ee13 ("libata: support the ata host which implements a queue depth less than 32") directly used ata_port->scsi_host->can_queue from ata_qc_new() to determine the number of tags supported by the host; unfortunately, SAS controllers doing SATA don't initialize ->scsi_host leading to the following oops.

BUG: unable to handle kernel NULL pointer dereference at 0000000000000058
IP: [<ffffffff814e0618>] ata_qc_new_init+0x188/0x1b0
PGD 0
Oops: 0002 [#1] SMP
Modules linked in: isci libsas scsi_transport_sas mgag200 drm_kms_helper ttm
CPU: 1 PID: 518 Comm: udevd Not tainted 3.16.0-rc6+ #62
Hardware name: Intel Corporation S2600CO/S2600CO, BIOS SE5C600.86B.02.02.0002.122320131210 12/23/2013
task: ffff880c1a00b280 ti: ffff88061a000000 task.ti: ffff88061a000000
RIP: 0010:[<ffffffff814e0618>] [<ffffffff814e0618>] ata_qc_new_init+0x188/0x1b0
RSP: 0018:ffff88061a003ae8 EFLAGS: 00010012
RAX: 0000000000000001 RBX: ffff88000241ca80 RCX: 00000000000000fa
RDX: 0000000000000020 RSI: 0000000000000020 RDI: ffff8806194aa298
RBP: ffff88061a003ae8 R08: ffff8806194a8000 R09: 0000000000000000
R10: 0000000000000000 R11: ffff88000241ca80 R12: ffff88061ad58200
R13: ffff8806194aa298 R14: ffffffff814e67a0 R15: ffff8806194a8000
FS: 00007f3ad7fe3840(0000) GS:ffff880627620000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000058 CR3: 000000061a118000 CR4: 00000000001407e0
Stack:
 ffff88061a003b20 ffffffff814e96e1 ffff88000241ca80 ffff88061ad58200
 ffff8800b6bf6000 ffff880c1c988000 ffff880619903850 ffff88061a003b68
 ffffffffa0056ce1 ffff88061a003b48 0000000013d6e6f8 ffff88000241ca80
Call Trace:
 [<ffffffff814e96e1>] ata_sas_queuecmd+0xa1/0x430
 [<ffffffffa0056ce1>] sas_queuecommand+0x191/0x220 [libsas]
 [<ffffffff8149afee>] scsi_dispatch_cmd+0x10e/0x300
 [<ffffffff814a3bc5>] scsi_request_fn+0x2f5/0x550
 [<ffffffff81317613>] __blk_run_queue+0x33/0x40
 [<ffffffff8131781a>] queue_unplugged+0x2a/0x90
 [<ffffffff8131ceb4>] blk_flush_plug_list+0x1b4/0x210
 [<ffffffff8131d274>] blk_finish_plug+0x14/0x50
 [<ffffffff8117eaa8>] __do_page_cache_readahead+0x198/0x1f0
 [<ffffffff8117ee21>] force_page_cache_readahead+0x31/0x50
 [<ffffffff8117ee7e>] page_cache_sync_readahead+0x3e/0x50
 [<ffffffff81172ac6>] generic_file_read_iter+0x496/0x5a0
 [<ffffffff81219897>] blkdev_read_iter+0x37/0x40
 [<ffffffff811e307e>] new_sync_read+0x7e/0xb0
 [<ffffffff811e3734>] vfs_read+0x94/0x170
 [<ffffffff811e43c6>] SyS_read+0x46/0xb0
 [<ffffffff811e33d1>] ? SyS_lseek+0x91/0xb0
 [<ffffffff8171ee29>] system_call_fastpath+0x16/0x1b
Code: 00 00 00 88 50 29 83 7f 08 01 19 d2 83 e2 f0 83 ea 50 88 50 34 c6 81 1d 02 00 00 40 c6 81 17 02 00 00 00 5d c3 66 0f 1f 44 00 00 <89> 14 25 58 00 00 00

Fix it by introducing ata_host->n_tags which is initialized to ATA_MAX_QUEUE - 1 in ata_host_init() for SAS controllers and set to scsi_host_template->can_queue in ata_host_register() for !SAS ones. As SAS hosts are never registered, this will give them the same ATA_MAX_QUEUE - 1 as before. Note that we can't use scsi_host->can_queue directly for SAS hosts anyway as they can go higher than the libata maximum. Signed-off-by:
Tejun Heo <tj@kernel.org> Reported-by:
Mike Qiu <qiudayu@linux.vnet.ibm.com> Reported-by:
Jesse Brandeburg <jesse.brandeburg@gmail.com> Reported-by:
Peter Hurley <peter@hurleysoftware.com> Reported-by:
Peter Zijlstra <peterz@infradead.org> Tested-by:
Alexey Kardashevskiy <aik@ozlabs.ru> Fixes: 1871ee13 ("libata: support the ata host which implements a queue depth less than 32") Cc: Kevin Hao <haokexin@gmail.com> Cc: Dan Williams <dan.j.williams@intel.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
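For reference, a sketch of the n_tags initialization policy described above; the ATA_MAX_QUEUE value and the exact decision logic are assumptions reconstructed from the commit text, not copied from the upstream patch:

```c
#include <stdio.h>
#include <stdbool.h>

#define ATA_MAX_QUEUE 32   /* assumed libata NCQ queue depth limit */

/*
 * SAS hosts never go through ata_host_register(), so they keep the
 * ata_host_init() default; !SAS hosts take the SCSI host template's
 * can_queue, which may be smaller than the libata maximum.
 */
static int host_n_tags(bool is_sas, int sht_can_queue)
{
	if (is_sas)
		return ATA_MAX_QUEUE - 1;   /* default set in ata_host_init() */
	return sht_can_queue;               /* set in ata_host_register() */
}

int main(void)
{
	printf("SAS host:                n_tags = %d\n", host_n_tags(true, 0));
	printf("AHCI, can_queue=31:      n_tags = %d\n", host_n_tags(false, 31));
	printf("fsl sata, can_queue=16:  n_tags = %d\n", host_n_tags(false, 16));
	return 0;
}
```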
-
Chris Wilson authored
commit a0d036b0 upstream.

commit 4be17381
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Fri Jun 6 10:22:29 2014 +0100

    drm/i915: Reorder semaphore deadlock check

did the majority of the work, but it missed one crucial detail: The check for the unkickable deadlock on this ring must come after the check whether the ring that we are waiting on has already passed its target seqno.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=80709
Tested-by:
Stefan Huber <shuber@sthu.org> Signed-off-by:
Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@intel.com> Cc: Jani Nikula <jani.nikula@intel.com> Signed-off-by:
Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Tony Luck authored
commit 58d4e21e upstream. The "uptime" trace clock added in commit 8aacf017 ("tracing: Add "uptime" trace clock that uses jiffies") has wraparound problems when the system has been up more than 1 hour 11 minutes and 34 seconds. It converts jiffies to nanoseconds using: (u64)jiffies_to_usecs(jiffy) * 1000ULL but since jiffies_to_usecs() only returns a 32-bit value, it truncates at 2^32 microseconds. An additional problem on 32-bit systems is that the argument is "unsigned long", so fixing the return value only helps until 2^32 jiffies (49.7 days on a HZ=1000 system). Avoid these problems by using jiffies_64 as our basis, and not converting to nanoseconds (we do convert to clock_t because user-facing APIs must not depend on internal kernel HZ values). Link: http://lkml.kernel.org/p/99d63c5bfe9b320a3b428d773825a37095bf6a51.1405708254.git.tony.luck@intel.com Fixes: 8aacf017 "tracing: Add "uptime" trace clock that uses jiffies" Signed-off-by:
Tony Luck <tony.luck@intel.com> Signed-off-by:
Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
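The truncation described above is easy to reproduce in plain C: a conversion that returns only 32 bits caps the product at 2^32 microseconds (about 71.6 minutes). A standalone sketch of the wrap (HZ and the conversion are mocked; the real fix switches to jiffies_64 and clock_t rather than 64-bit nanoseconds):

```c
#include <stdio.h>
#include <stdint.h>

#define HZ 1000  /* assumed tick rate for the illustration */

/* Mock of a conversion that returns only 32 bits, like jiffies_to_usecs(). */
static uint32_t jiffies_to_usecs32(uint64_t jiffies)
{
	return (uint32_t)(jiffies * (1000000 / HZ));
}

int main(void)
{
	/* 1 hour 12 minutes of uptime at HZ=1000 */
	uint64_t jiffies = 72ULL * 60 * HZ;

	/* old "uptime" clock: truncated at 2^32 microseconds (~25 s printed) */
	uint64_t ns_truncated = (uint64_t)jiffies_to_usecs32(jiffies) * 1000ULL;

	/* doing the whole computation in 64 bits avoids the wrap (~4320 s) */
	uint64_t ns_correct = jiffies * (1000000000ULL / HZ);

	printf("truncated: %llu ns\n", (unsigned long long)ns_truncated);
	printf("correct:   %llu ns\n", (unsigned long long)ns_correct);
	return 0;
}
```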
-
Dmitry Torokhov authored
commit 50c5d36d upstream. We attempt to remove noise from coordinates reported by devices in input_handle_abs_event(); unfortunately, unless we were dropping the event altogether, we were ignoring the adjusted value and passing on the original value instead. Reviewed-by:
Andrew de los Reyes <adlr@chromium.org> Reviewed-by:
Benson Leung <bleung@chromium.org> Reviewed-by:
David Herrmann <dh.herrmann@gmail.com> Reviewed-by:
Henrik Rydberg <rydberg@euromail.se> Signed-off-by:
Dmitry Torokhov <dmitry.torokhov@gmail.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
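The bug pattern here is generic: a helper computes a de-noised value, but the caller keeps passing the original one. A userspace sketch of that pattern (the fuzz filter below is a simplified stand-in for what input_handle_abs_event() does, not the driver code itself):

```c
#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-in for the kernel's fuzz filtering of ABS coordinates:
 * small jitter around the previous value is suppressed. */
static int defuzz(int old_val, int value, int fuzz)
{
	if (fuzz && abs(value - old_val) <= fuzz)
		return old_val;   /* treat as noise, keep the old coordinate */
	return value;
}

int main(void)
{
	int old_val = 100, fuzz = 4;
	int raw = 102;            /* jitter within the fuzz range */

	/* Buggy pattern: the adjusted value is computed but ignored. */
	(void)defuzz(old_val, raw, fuzz);
	int reported_buggy = raw;

	/* Fixed pattern: the adjusted value is what gets reported. */
	int reported_fixed = defuzz(old_val, raw, fuzz);

	printf("buggy report: %d (noise passed through)\n", reported_buggy);
	printf("fixed report: %d (noise filtered)\n", reported_fixed);
	return 0;
}
```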
-
Romain Degez authored
commit b32bfc06 upstream. Add support for the Promise FastTrak TX8660 SATA HBA in AHCI mode by registering the board in the ahci_pci_tbl[]. Note: this HBA also provides a hardware RAID mode when activated in the BIOS, but specific drivers from the manufacturer are required in that case. Signed-off-by:
Romain Degez <romain.degez@gmail.com> Tested-by:
Romain Degez <romain.degez@gmail.com> Signed-off-by:
Tejun Heo <tj@kernel.org> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Martin Schwidefsky authored
commit dab6cf55 upstream. The PSW mask check of the PTRACE_POKEUSR_AREA command is incorrect. The PSW_MASK_USER define contains the PSW_MASK_ASC bits, so the ptrace interface accepts all combinations of the address-space-control bits. To protect kernel space, the PSW mask check in ptrace needs to reject the address-space-control bit combination for home space. Fixes CVE-2014-3534 Signed-off-by:
Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Hans de Goede authored
commit 242841d3 upstream. Tested-and-reported-by:
yullaw <yullaw@mageia.cz> Signed-off-by:
Hans de Goede <hdegoede@redhat.com> Signed-off-by:
Mauro Carvalho Chehab <m.chehab@samsung.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Antti Palosaari authored
commit db4175ae upstream. The only supported modulation for DVB-S is QPSK. The modulation parameter contains an invalid value for DVB-S in some cases, which leads to the driver refusing the tuning attempt. Because of that, hard-code the modulation to QPSK in the DVB-S case. Signed-off-by:
Antti Palosaari <crope@iki.fi> Signed-off-by:
Mauro Carvalho Chehab <m.chehab@samsung.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Kevin Hao authored
commit 1871ee13 upstream. The sata on fsl mpc8315e is broken after the commit 8a4aeec8 ("libata/ahci: accommodate tag ordered controllers"). The reason is that the ata controller on this SoC only implements a queue depth of 16. When issuing the commands in tag order, all the commands with tags 16 ~ 31 are mapped to tag 0 unconditionally, which then causes the sata malfunction. It makes no sense to use a 32-entry queue in software while the hardware has a smaller queue depth. So consider the queue depth implemented by the hardware when requesting a command tag. Fixes: 8a4aeec8 ("libata/ahci: accommodate tag ordered controllers") Signed-off-by:
Kevin Hao <haokexin@gmail.com> Acked-by:
Dan Williams <dan.j.williams@intel.com> Signed-off-by:
Tejun Heo <tj@kernel.org> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
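A small sketch of why the tag search has to respect the controller's real queue depth; the round-robin allocator below is illustrative only and does not mirror libata's ata_qc_new() exactly:

```c
#include <stdio.h>
#include <stdbool.h>
#include <string.h>

#define ATA_MAX_QUEUE 32   /* software maximum (assumed) */

struct port {
	unsigned int last_tag;
	bool in_use[ATA_MAX_QUEUE];
};

/* Pick the next free tag, but never hand out one the hardware can't queue. */
static int alloc_tag(struct port *ap, unsigned int hw_depth)
{
	unsigned int max = hw_depth < ATA_MAX_QUEUE ? hw_depth : ATA_MAX_QUEUE;

	for (unsigned int i = 0; i < max; i++) {
		unsigned int tag = (ap->last_tag + 1 + i) % max;
		if (!ap->in_use[tag]) {
			ap->in_use[tag] = true;
			ap->last_tag = tag;
			return (int)tag;
		}
	}
	return -1;   /* controller queue full */
}

int main(void)
{
	struct port ap;
	memset(&ap, 0, sizeof(ap));

	/* On a controller with a queue depth of 16 (like the fsl mpc8315e SATA),
	 * tags must stay in 0..15; issuing tags 16..31 would alias to tag 0. */
	for (int i = 0; i < 18; i++)
		printf("cmd %2d -> tag %d\n", i, alloc_tag(&ap, 16));
	return 0;
}
```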
-
Mikulas Patocka authored
commit 3b3a1814 upstream. This patch provides the compat BLKZEROOUT ioctl. The argument is a pointer to two uint64_t values, so there is no need to translate it. Signed-off-by:
Mikulas Patocka <mpatocka@redhat.com> Acked-by:
Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by:
Jens Axboe <axboe@fb.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
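The reason no translation layer is needed is that the ioctl argument is two 64-bit values, which have the same size and layout for 32-bit and 64-bit callers. A quick userspace check of that claim (the struct below is an illustrative stand-in for the BLKZEROOUT argument, a start/length pair in bytes, not a kernel UAPI definition):

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Illustrative stand-in for the BLKZEROOUT argument: a (start, length)
 * pair of byte values, both 64-bit. */
struct blk_zeroout_range {
	uint64_t start;
	uint64_t len;
};

int main(void)
{
	/* 16 bytes with fields at offsets 0 and 8 on both 32-bit and 64-bit
	 * ABIs, so the compat ioctl can pass the user pointer straight through. */
	printf("sizeof = %zu\n", sizeof(struct blk_zeroout_range));
	printf("offsetof(start) = %zu, offsetof(len) = %zu\n",
	       offsetof(struct blk_zeroout_range, start),
	       offsetof(struct blk_zeroout_range, len));
	return 0;
}
```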
-
Tejun Heo authored
commit 0b462c89 upstream. While a queue is being destroyed, all the blkgs are destroyed and its ->root_blkg pointer is set to NULL. If someone else starts to drain while the queue is in this state, the following oops happens.

NULL pointer dereference at 0000000000000028
IP: [<ffffffff8144e944>] blk_throtl_drain+0x84/0x230
PGD e4a1067 PUD b773067 PMD 0
Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
Modules linked in: cfq_iosched(-) [last unloaded: cfq_iosched]
CPU: 1 PID: 537 Comm: bash Not tainted 3.16.0-rc3-work+ #2
Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
task: ffff88000e222250 ti: ffff88000efd4000 task.ti: ffff88000efd4000
RIP: 0010:[<ffffffff8144e944>] [<ffffffff8144e944>] blk_throtl_drain+0x84/0x230
RSP: 0018:ffff88000efd7bf0 EFLAGS: 00010046
RAX: 0000000000000000 RBX: ffff880015091450 RCX: 0000000000000001
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffff88000efd7c10 R08: 0000000000000000 R09: 0000000000000001
R10: ffff88000e222250 R11: 0000000000000000 R12: ffff880015091450
R13: ffff880015092e00 R14: ffff880015091d70 R15: ffff88001508fc28
FS: 00007f1332650740(0000) GS:ffff88001fa80000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 0000000000000028 CR3: 0000000009446000 CR4: 00000000000006e0
Stack:
 ffffffff8144e8f6 ffff880015091450 0000000000000000 ffff880015091d80
 ffff88000efd7c28 ffffffff8144ae2f ffff880015091450 ffff88000efd7c58
 ffffffff81427641 ffff880015091450 ffffffff82401f00 ffff880015091450
Call Trace:
 [<ffffffff8144ae2f>] blkcg_drain_queue+0x1f/0x60
 [<ffffffff81427641>] __blk_drain_queue+0x71/0x180
 [<ffffffff81429b3e>] blk_queue_bypass_start+0x6e/0xb0
 [<ffffffff814498b8>] blkcg_deactivate_policy+0x38/0x120
 [<ffffffff8144ec44>] blk_throtl_exit+0x34/0x50
 [<ffffffff8144aea5>] blkcg_exit_queue+0x35/0x40
 [<ffffffff8142d476>] blk_release_queue+0x26/0xd0
 [<ffffffff81454968>] kobject_cleanup+0x38/0x70
 [<ffffffff81454848>] kobject_put+0x28/0x60
 [<ffffffff81427505>] blk_put_queue+0x15/0x20
 [<ffffffff817d07bb>] scsi_device_dev_release_usercontext+0x16b/0x1c0
 [<ffffffff810bc339>] execute_in_process_context+0x89/0xa0
 [<ffffffff817d064c>] scsi_device_dev_release+0x1c/0x20
 [<ffffffff817930e2>] device_release+0x32/0xa0
 [<ffffffff81454968>] kobject_cleanup+0x38/0x70
 [<ffffffff81454848>] kobject_put+0x28/0x60
 [<ffffffff817934d7>] put_device+0x17/0x20
 [<ffffffff817d11b9>] __scsi_remove_device+0xa9/0xe0
 [<ffffffff817d121b>] scsi_remove_device+0x2b/0x40
 [<ffffffff817d1257>] sdev_store_delete+0x27/0x30
 [<ffffffff81792ca8>] dev_attr_store+0x18/0x30
 [<ffffffff8126f75e>] sysfs_kf_write+0x3e/0x50
 [<ffffffff8126ea87>] kernfs_fop_write+0xe7/0x170
 [<ffffffff811f5e9f>] vfs_write+0xaf/0x1d0
 [<ffffffff811f69bd>] SyS_write+0x4d/0xc0
 [<ffffffff81d24692>] system_call_fastpath+0x16/0x1b

776687bc ("block, blk-mq: draining can't be skipped even if bypass_depth was non-zero") made it easier to trigger this bug by making blk_queue_bypass_start() drain even when it loses the first bypass test to blk_cleanup_queue(); however, the bug has always been there even before the commit, as blk_queue_bypass_start() could race against queue destruction, win the initial bypass test but perform the actual draining after blk_cleanup_queue() already destroyed all blkgs. Fix it by skipping the call into policy draining if all the blkgs are already gone. Signed-off-by:
Tejun Heo <tj@kernel.org> Reported-by:
Shirish Pargaonkar <spargaonkar@suse.com> Reported-by:
Sasha Levin <sasha.levin@oracle.com> Reported-by:
Jet Chen <jet.chen@intel.com> Tested-by:
Shirish Pargaonkar <spargaonkar@suse.com> Signed-off-by:
Jens Axboe <axboe@fb.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Christoph Hellwig authored
commit d45b3279 upstream. There is no inherent reason why the last put of a tag structure must be the one for the Scsi_Host, as device model objects can be held for arbitrary periods. Merge blk_free_tags and __blk_free_tags into a single function that just releases a reference, and get rid of the BUG() when the host reference wasn't the last. Signed-off-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Jens Axboe <axboe@fb.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Hans Verkuil authored
commit 3445857b upstream. When the audio encoding is changed the driver calls hdpvr_set_audio with the current opt->audio_input value. However, that should have been opt->audio_input + 1. So changing the audio encoding inadvertently changes the input as well. This bug has always been there. The second bug was introduced in kernel 3.10 and broke the default_audio_input module option handling: the audio encoding was never switched to AC3 if default_audio_input was set to 2 (SPDIF input). In addition, since 3.10 the audio encoding is always set at startup, so the first bug now always happens when the driver is loaded. In the past this bug would only surface if the user changed the audio encoding after the driver was loaded. Also fixes a small trivial typo (bufffer -> buffer). Signed-off-by:
Hans Verkuil <hans.verkuil@cisco.com> Reported-by:
Scott Doty <scott@corp.sonic.net> Signed-off-by:
Mauro Carvalho Chehab <m.chehab@samsung.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Rickard Strandqvist authored
commit f71920ef upstream. The wrong value was used in some cases for the aspect ratio. Signed-off-by:
Rickard Strandqvist <rickard_strandqvist@spectrumdigital.se> Acked-by:
Lad, Prabhakar <prabhakar.csengg@gmail.com> Signed-off-by:
Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by:
Mauro Carvalho Chehab <m.chehab@samsung.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Max Filippov authored
commit 17290231 upstream. There are two FIXMEs in the double exception handler 'for the extremely unlikely case'. This case gets hit by gcc during kernel build once in a few hours, resulting in an unrecoverable exception condition. Provide missing fixup routine to handle this case. Double exception literals now need 8 more bytes, add them to the linker script. Also replace bbsi instructions with bbsi.l as we're branching depending on 8th and 7th LSB-based bits of exception address. This may be tested by adding the explicit DTLB invalidation to window overflow handlers, like the following:

--- a/arch/xtensa/kernel/vectors.S
+++ b/arch/xtensa/kernel/vectors.S
@@ -592,6 +592,14 @@ ENDPROC(_WindowUnderflow4)
 ENTRY_ALIGN64(_WindowOverflow8)
 	s32e	a0, a9, -16
+	bbsi.l	a9, 31, 1f
+	rsr	a0, ccount
+	bbsi.l	a0, 4, 1f
+	pdtlb	a0, a9
+	idtlb	a0
+	movi	a0, 9
+	idtlb	a0
+1:
 	l32e	a0, a1, -12
 	s32e	a2, a9, -8
 	s32e	a1, a9, -12

Signed-off-by:
Max Filippov <jcmvbkbc@gmail.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Mikulas Patocka authored
commit 69461747 upstream. The patch 3e374919 is supposed to fix the problem where kmem_cache_create incorrectly reports a duplicate cache name and fails. The problem is described in the header of that patch. However, the patch doesn't really fix the problem, for these reasons:

* The logic to test for debugging is reversed. It was intended to perform the check only if slub debugging is enabled (which implies that caches with the same parameters are not merged). Therefore, there should be:
  #if !defined(CONFIG_SLUB) || defined(CONFIG_SLUB_DEBUG_ON)
  The current code has the condition reversed and performs the test if debugging is disabled.

* Slub debugging may be enabled or disabled based on the kernel command line; CONFIG_SLUB_DEBUG_ON is just the default setting. Therefore the test based on the definition of CONFIG_SLUB_DEBUG_ON is unreliable.

This patch fixes the problem by removing the test "!defined(CONFIG_SLUB_DEBUG_ON)". Therefore, duplicate names are never checked if the SLUB allocator is used. Note to stable kernel maintainers: when backporting this patch, please also backport the patch 3e374919. Acked-by:
David Rientjes <rientjes@google.com> Acked-by:
Christoph Lameter <cl@linux.com> Signed-off-by:
Mikulas Patocka <mpatocka@redhat.com> Signed-off-by:
Pekka Enberg <penberg@kernel.org> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
- 06 Aug, 2014 22 commits
-
-
Tomasz Figa authored
commit 29e697b1 upstream. Certain GIC implementations, namely those found on earlier, single-cluster Exynos SoCs, have registers mapped without per-CPU banking, which means that the driver needs to use a different offset for each CPU. Currently the driver calculates the offset by multiplying the value returned by cpu_logical_map() by the CPU offset parsed from DT. This is correct when the CPU topology is not specified in DT and the aforementioned function returns the core ID alone. However, when DT contains the CPU topology, the function changes to return the cluster ID as well, which is non-zero on the mentioned SoCs and so breaks the calculation in the GIC driver. This patch fixes this by masking out the cluster ID in the CPU offset calculation so that only the core ID is considered. Multi-cluster Exynos SoCs already have banked GIC implementations, so this simple fix should be enough. Reported-by:
Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Reported-by:
Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by:
Tomasz Figa <t.figa@samsung.com> Fixes: db0d4db2 ("ARM: gic: allow GIC to support non-banked setups") Link: https://lkml.kernel.org/r/1405610624-18722-1-git-send-email-t.figa@samsung.com Signed-off-by:
Jason Cooper <jason@lakedaemon.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
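The described fix masks the cluster field out of the value returned by cpu_logical_map() so that only the core number scales the per-CPU register offset. A sketch with an MPIDR-style encoding (the field widths, mask and offsets are assumptions for illustration, not the driver's exact code):

```c
#include <stdio.h>

/* MPIDR-style layout assumed for illustration:
 * bits [7:0]  = core ID within the cluster (affinity level 0)
 * bits [15:8] = cluster ID (affinity level 1)
 */
#define CORE_ID_MASK 0xffu

static unsigned long gic_cpu_offset(unsigned long hwid, unsigned long percpu_offset)
{
	return percpu_offset * (hwid & CORE_ID_MASK);   /* fixed: ignore the cluster ID */
}

static unsigned long gic_cpu_offset_buggy(unsigned long hwid, unsigned long percpu_offset)
{
	return percpu_offset * hwid;                    /* broken once cluster ID != 0 */
}

int main(void)
{
	unsigned long percpu_offset = 0x4000;   /* made-up per-CPU register stride */
	unsigned long hwid = 0x0101;            /* cluster 1, core 1 (from DT topology) */

	printf("buggy offset: 0x%lx\n", gic_cpu_offset_buggy(hwid, percpu_offset));
	printf("fixed offset: 0x%lx\n", gic_cpu_offset(hwid, percpu_offset));
	return 0;
}
```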
-
Gavin Guo authored
commit bb86cf56 upstream. When using a USB 3.0 pen drive with the [AMD] FCH USB XHCI Controller [1022:7814], the second hotplug results in the USB 3.0 pen drive being recognized as a high-speed device. After bisecting the kernel, I found that commit 41e7e056 (USB: Allow USB 3.0 ports to be disabled.) causes the bug. After doing some experiments, the bug can be fixed by avoiding executing the function hub_usb3_port_disable(). Because the port status with the [AMD] FCH USB XHCI Controller [1022:7814] is already in RxDetect (I tried printing out the port status before setting it to the Disabled state), it's reasonable to check the port status before really executing hub_usb3_port_disable(). Fixes: 41e7e056 (USB: Allow USB 3.0 ports to be disabled.) Signed-off-by:
Gavin Guo <gavin.guo@canonical.com> Acked-by:
Alan Stern <stern@rowland.harvard.edu> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Abbas Raza authored
commit 953c6646 upstream. There are 2 methods for ZLP (zero-length packet) generation:
1) In software
2) Automatic generation by the device controller

1) is implemented in the UDC driver and it attaches a ZLP to an IN packet if descriptor->size < wLength
2) can be enabled/disabled by setting the ZLT bit in the QH

When gadget ffs is connected to an ubuntu host, the host sends a get descriptor request and wLength in the setup packet is 255, while the size of the descriptor which will be sent by the gadget in the IN packet is 64 bytes. So the composite driver sets req->zero = 1. In the UDC driver the following code will then be executed:

	if (hwreq->req.zero && hwreq->req.length && (hwreq->req.length % hwep->ep.maxpacket == 0))
		add_td_to_list(hwep, hwreq, 0);

Case-A: So in case of the ubuntu host, the UDC driver will attach a ZLP to the IN packet. The ubuntu host will request 255 bytes in the IN request, the gadget will send 64 bytes with a ZLP and the host will come to know that there is no more data. But hold on: by default ZLT=0 for endpoint 0, so the hardware also tries to automatically generate the ZLP, which blocks enumeration for ~6 seconds due to an endpoint 0 STALL; NAKs are sent to the host for any requests (OUT/PING).

Case-B: When gadget ffs is connected to an Apple device, the Apple device sends a setup packet with wLength=64. So descriptor->size = 64 and wLength=64, therefore req->zero = 0 and the UDC driver will not attach any ZLP to the IN packet. The Apple device requests 64 bytes, gets 64 bytes and doesn't request any further IN data. But ZLT=0 by default for endpoint 0, so the hardware tries to automatically generate the ZLP, which blocks enumeration for ~6 seconds due to an endpoint 0 STALL; NAKs are sent to the host for any requests (OUT/PING).

According to the USB 2.0 specs:

8.5.3.2 Variable-length Data Stage
A control pipe may have a variable-length data phase in which the host requests more data than is contained in the specified data structure. When all of the data structure is returned to the host, the function should indicate that the Data stage is ended by returning a packet that is shorter than the MaxPacketSize for the pipe. If the data structure is an exact multiple of wMaxPacketSize for the pipe, the function will return a zero-length packet to indicate the end of the Data stage.

In Case-A mentioned above: If we disable software ZLP generation & ZLT=0 for endpoint 0, OR if software ZLP generation is not disabled but we set ZLT=1 for endpoint 0, then enumeration doesn't block for 6 seconds.

In Case-B mentioned above: If we disable software ZLP generation & ZLT=0 for the endpoint, then enumeration still blocks due to the ZLP automatically generated by hardware and the host not needing it. But if we keep software ZLP generation enabled and set ZLT=1 for endpoint 0, then enumeration doesn't block for 6 seconds.

So the proper solution for this issue seems to be to disable automatic ZLP generation by hardware (i.e. by setting ZLT=1 for endpoint 0) and let software (the UDC driver) handle the ZLP generation based on the req->zero field. Signed-off-by:
Abbas Raza <Abbas_Raza@mentor.com> Signed-off-by:
Peter Chen <peter.chen@freescale.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
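To make the interaction concrete, here is a userspace sketch of the software ZLP decision quoted above, evaluated for the two hosts described (the logic mirrors the quoted condition; the ZLT handling is hardware-side and not modeled):

```c
#include <stdio.h>
#include <stdbool.h>

/* Mirror of the quoted UDC condition: append a ZLP when the upper layer
 * asked for one (req->zero) and the transfer fills a whole number of
 * max-packet-size packets. */
static bool needs_sw_zlp(bool zero, unsigned int length, unsigned int maxpacket)
{
	return zero && length && (length % maxpacket == 0);
}

int main(void)
{
	unsigned int maxpacket = 64;   /* ep0 max packet size for a high-speed device */

	/* Case-A: host asked for 255 bytes, descriptor is 64 bytes -> req->zero = 1 */
	printf("Case-A (wLength=255): software ZLP? %s\n",
	       needs_sw_zlp(true, 64, maxpacket) ? "yes" : "no");

	/* Case-B: host asked for exactly 64 bytes -> req->zero = 0 */
	printf("Case-B (wLength=64):  software ZLP? %s\n",
	       needs_sw_zlp(false, 64, maxpacket) ? "yes" : "no");

	/* With the fix, ZLT=1 on ep0 disables the controller's automatic ZLP,
	 * so this software decision is the only source of ZLPs. */
	return 0;
}
```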
-
Alex Deucher authored
commit 201bb624 upstream. If the value in the scratch register is 0, set it to the max level. This fixes an issue where the console fb blanking code calls back into the backlight driver on unblank and then sets the backlight level to 0 after the driver has already set the mode and enabled the backlight.
Bugs:
https://bugs.freedesktop.org/show_bug.cgi?id=81382
https://bugs.freedesktop.org/show_bug.cgi?id=70207
Signed-off-by:
Alex Deucher <alexander.deucher@amd.com> Tested-by:
David Heidelberger <david.heidelberger@ixit.cz> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Alex Deucher authored
commit 0ac66eff upstream. In some cases we fetch the edid in the detect() callback in order to determine what sort of monitor is connected. If that happens, don't fetch the edid again in the get_modes() callback or we will leak the edid. Signed-off-by:
Alex Deucher <alexander.deucher@amd.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Viresh Kumar authored
commit 92c14bd9 upstream. This is only relevant to implementations with multiple clusters, where clusters have separate clock lines but all CPUs within a cluster share it. Consider a dual-cluster platform with 2 cores per cluster. During suspend we start hot unplugging CPUs in order 1 to 3. When CPU2 is removed, policy->kobj would be moved to CPU3, and when CPU3 goes down we wouldn't free the policy or its kobj as we want to retain permissions/values/etc. Now on resume, we will get CPU2 before CPU3 and will call __cpufreq_add_dev(). We will recover the old policy and update policy->cpu from 3 to 2 from update_policy_cpu(). But the kobj is still tied to CPU3 and isn't moved to CPU2. We wouldn't create a link for CPU2, but would try that for CPU3 while bringing it online, which will report errors as CPU3 already has a kobj assigned to it. This bug got introduced with commit 42f921a6, which overlooked this scenario. To fix this, let's move the kobj to the new policy->cpu while bringing the first CPU of a cluster back. Also do a WARN_ON() if kobject_move() fails, as we would reach here only for the first CPU of a non-boot cluster, and we can't recover from this situation if kobject_move() fails. Fixes: 42f921a6 (cpufreq: remove sysfs files for CPUs which failed to come back after resume) Reported-and-tested-by:
Bu Yitian <ybu@qti.qualcomm.com> Reported-by:
Saravana Kannan <skannan@codeaurora.org> Reviewed-by:
Srivatsa S. Bhat <srivatsa@mit.edu> Signed-off-by:
Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> [ kamal: backport to 3.13-stable (context) ] Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Guenter Roeck authored
commit de12d6f4 upstream. Temperature limit registers are signed. Limits therefore need to be clamped to (-128, 127) degrees C and not to (0, 255) degrees C. Without this fix, writing a limit of 128 degrees C sets the actual limit to -128 degrees C. Signed-off-by:
Guenter Roeck <linux@roeck-us.net> Reviewed-by:
Axel Lin <axel.lin@ingics.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
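Since the limit registers are signed 8-bit values, the clamp bounds matter: clamping to 0..255 lets 128 survive and wrap to -128 once it is stored in the register. A standalone sketch of the difference (generic clamp helper, not the driver's macro):

```c
#include <stdio.h>
#include <stdint.h>

/* Generic clamp helper standing in for the kernel's clamp_val(). */
static long clamp_long(long v, long lo, long hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

int main(void)
{
	long requested = 128;   /* degrees C requested via sysfs */

	/* Buggy bounds for a signed register: 128 survives the clamp and
	 * becomes -128 when written into the 8-bit register. */
	int8_t reg_buggy = (int8_t)clamp_long(requested, 0, 255);

	/* Correct bounds for a signed 8-bit temperature limit. */
	int8_t reg_fixed = (int8_t)clamp_long(requested, -128, 127);

	printf("buggy: stored limit = %d C\n", reg_buggy);
	printf("fixed: stored limit = %d C\n", reg_fixed);
	return 0;
}
```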
-
Jason Wang authored
commit fbb60fe3 upstream. Return IRQ_NONE if it was not our irq. This is necessary for the case when qxl is sharing an irq line with a device A in a crash kernel. If qxl is initialized before A and A's irq was raised during this gap, returning IRQ_HANDLED in this case will cause this irq to be raised again after EOI, since the kernel thinks it was handled but in fact it was not. Cc: Gerd Hoffmann <kraxel@redhat.com> Signed-off-by:
Jason Wang <jasowang@redhat.com> Signed-off-by:
Dave Airlie <airlied@redhat.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Peter Zijlstra authored
commit 4badad35 upstream. The optimistic spin code assumes regular stores and cmpxchg() play nice; this is found to not be true for at least: parisc, sparc32, tile32, metag-lock1, arc-!llsc and hexagon. There is further wreckage, but this in particular seemed easy to trigger, so blacklist this. Opt in for known good archs. Signed-off-by:
Peter Zijlstra <peterz@infradead.org> Reported-by:
Mikulas Patocka <mpatocka@redhat.com> Cc: David Miller <davem@davemloft.net> Cc: Chris Metcalf <cmetcalf@tilera.com> Cc: James Bottomley <James.Bottomley@hansenpartnership.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Jason Low <jason.low2@hp.com> Cc: Waiman Long <waiman.long@hp.com> Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> Cc: Paul McKenney <paulmck@linux.vnet.ibm.com> Cc: John David Anglin <dave.anglin@bell.net> Cc: James Hogan <james.hogan@imgtec.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Davidlohr Bueso <davidlohr@hp.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Cc: linuxppc-dev@lists.ozlabs.org Cc: sparclinux@vger.kernel.org Link: http://lkml.kernel.org/r/20140606175316.GV13930@laptop.programming.kicks-ass.net Signed-off-by:
Ingo Molnar <mingo@kernel.org> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Mateusz Guzik authored
commit b0ab99e7 upstream. proc_sched_show_task() does:

	if (nr_switches)
		do_div(avg_atom, nr_switches);

nr_switches is unsigned long and do_div truncates it to 32 bits, which means it can test non-zero on e.g. x86-64 and be truncated to zero for division. Fix the problem by using div64_ul() instead. As a side effect calculations of avg_atom for big nr_switches are now correct. Signed-off-by:
Mateusz Guzik <mguzik@redhat.com> Signed-off-by:
Peter Zijlstra <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: http://lkml.kernel.org/r/1402750809-31991-1-git-send-email-mguzik@redhat.com Signed-off-by:
Ingo Molnar <mingo@kernel.org> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
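The do_div() pitfall is that its divisor is 32-bit, so a 64-bit nr_switches can test non-zero yet truncate for the division. A userspace sketch with mock versions of do_div() and div64_ul() (signatures simplified, not the kernel macros; the mock sidesteps the divide-by-zero the real code would hit):

```c
#include <stdio.h>
#include <stdint.h>

/* Mock of do_div(): divides a 64-bit value by a divisor truncated to 32 bits. */
static uint64_t mock_do_div(uint64_t dividend, uint64_t divisor)
{
	uint32_t d32 = (uint32_t)divisor;        /* the truncation in question */
	return d32 ? dividend / d32 : dividend;  /* real code would fault on 0 */
}

/* Mock of div64_ul(): full 64-bit division. */
static uint64_t mock_div64_ul(uint64_t dividend, uint64_t divisor)
{
	return dividend / divisor;
}

int main(void)
{
	uint64_t sum_exec_runtime = 9000000000ULL;   /* ns, arbitrary */
	uint64_t nr_switches = 0x200000000ULL;       /* non-zero, but low 32 bits are 0 */

	if (nr_switches) {
		printf("do_div-style:   avg_atom = %llu\n",
		       (unsigned long long)mock_do_div(sum_exec_runtime, nr_switches));
		printf("div64_ul-style: avg_atom = %llu\n",
		       (unsigned long long)mock_div64_ul(sum_exec_runtime, nr_switches));
	}
	return 0;
}
```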
-
Martin Lau authored
commit 97b8ee84 upstream. ring_buffer_poll_wait() should always put the poll_table to its wait_queue even if there is immediate data available. Otherwise, the following epoll and read sequence will eventually hang forever:

1. Put some data to make the trace_pipe ring_buffer read ready first
2. epoll_ctl(efd, EPOLL_CTL_ADD, trace_pipe_fd, ee)
3. epoll_wait()
4. read(trace_pipe_fd) till EAGAIN
5. Add some more data to the trace_pipe ring_buffer
6. epoll_wait() -> this epoll_wait() will block forever

~ During the epoll_ctl(efd, EPOLL_CTL_ADD,...) call in step 2, ring_buffer_poll_wait() returns immediately without adding the poll_table, which has poll_table->_qproc pointing to ep_poll_callback(), to its wait_queue.
~ During the epoll_wait() calls in step 3 and step 6, ring_buffer_poll_wait() cannot add ep_poll_callback() to its wait_queue because poll_table->_qproc is NULL, and that is how epoll works.
~ When there is new data available in step 6, the ring_buffer does not know it has to call ep_poll_callback() because it is not in its wait queue. Hence, it blocks forever.

Other poll implementations seem to call poll_wait() unconditionally as the very first thing to do. For example, tcp_poll() in tcp.c. Link: http://lkml.kernel.org/p/20140610060637.GA14045@devbig242.prn2.facebook.com Fixes: 2a2cc8f7 "ftrace: allow the event pipe to be polled" Reviewed-by:
Chris Mason <clm@fb.com> Signed-off-by:
Martin Lau <kafai@fb.com> Signed-off-by:
Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
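A userspace mock of the poll contract being violated: the waiter must be registered on the wait queue before (and regardless of) the readiness check, otherwise a later data arrival has no callback to wake. Everything below is a simplified model, not the ring-buffer or epoll code:

```c
#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal model: a "wait queue" that can hold one registered callback. */
struct wait_queue { void (*callback)(void); };
struct poll_table { void (*qproc)(struct wait_queue *); };

static struct wait_queue wq;
static bool data_ready;

static void ep_poll_callback(void) { printf("waiter woken up\n"); }
static void register_waiter(struct wait_queue *q) { q->callback = ep_poll_callback; }

/* Buggy poll: returns early when data is ready, never registering the waiter. */
static bool poll_buggy(struct poll_table *pt)
{
	if (data_ready)
		return true;
	if (pt->qproc)
		pt->qproc(&wq);
	return false;
}

/* Fixed poll: always register first, then report readiness. */
static bool poll_fixed(struct poll_table *pt)
{
	if (pt->qproc)
		pt->qproc(&wq);
	return data_ready;
}

static void new_data_arrives(void)
{
	data_ready = true;
	if (wq.callback)
		wq.callback();           /* only wakes a registered waiter */
	else
		printf("no waiter registered -> nobody is woken\n");
}

int main(void)
{
	struct poll_table pt = { .qproc = register_waiter };

	data_ready = true;               /* step 1: buffer already readable */
	poll_buggy(&pt);                 /* step 2: EPOLL_CTL_ADD-time poll */
	data_ready = false;              /* step 4: reader drains the buffer */
	new_data_arrives();              /* step 5/6: the wakeup is lost */

	wq.callback = NULL;
	data_ready = true;
	poll_fixed(&pt);                 /* fixed poll registers unconditionally */
	data_ready = false;
	new_data_arrives();              /* waiter is woken as expected */
	return 0;
}
```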
-
Niu Yawei authored
commit d68aab6b upstream. Commit 1ab6c499 (fs: convert fs shrinkers to new scan/count API) accidentally removed locking from quota shrinker. Fix it - dqcache_shrink_scan() should use dq_list_lock to protect the scan on free_dquots list. Fixes: 1ab6c499 Signed-off-by:
Niu Yawei <yawei.niu@intel.com> Signed-off-by:
Jan Kara <jack@suse.cz> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Mike Snitzer authored
commit 048e5a07 upstream. The block size for the dm-cache's data device must remain fixed for the life of the cache. Disallow any attempt to change the cache's data block size. Signed-off-by:
Mike Snitzer <snitzer@redhat.com> Acked-by:
Joe Thornber <ejt@redhat.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Mike Snitzer authored
commit 9aec8629 upstream. The block size for the thin-pool's data device must remain fixed for the life of the thin-pool. Disallow any attempt to change the thin-pool's data block size. It should be noted that attempting to change the data block size via thin-pool table reload will be ignored as a side-effect of the thin-pool handover that the thin-pool target does during thin-pool table reload. Here is an example outcome of attempting to load a thin-pool table that reduced the thin-pool's data block size from 1024K to 512K.

Before:
kernel: device-mapper: thin: 253:4: growing the data device from 204800 to 409600 blocks

After:
kernel: device-mapper: thin metadata: changing the data block size (from 2048 to 1024) is not supported
kernel: device-mapper: table: 253:4: thin-pool: Error creating metadata object
kernel: device-mapper: ioctl: error adding target to table

Signed-off-by:
Mike Snitzer <snitzer@redhat.com> Acked-by:
Joe Thornber <ejt@redhat.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
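Both this and the dm-cache change above implement the same guard: compare the block size in the incoming table against the one recorded in the existing metadata and refuse a mismatch. A generic sketch of that check (names and return convention are illustrative, not dm's API):

```c
#include <stdio.h>
#include <errno.h>
#include <stdint.h>

/* Illustrative metadata record; dm keeps this in the pool/cache metadata. */
struct pool_metadata { uint64_t data_block_size; /* in 512-byte sectors */ };

/* Refuse any attempt to change the data block size after creation. */
static int check_data_block_size(const struct pool_metadata *md, uint64_t new_size)
{
	if (md->data_block_size && new_size != md->data_block_size) {
		fprintf(stderr,
			"changing the data block size (from %llu to %llu) is not supported\n",
			(unsigned long long)md->data_block_size,
			(unsigned long long)new_size);
		return -EINVAL;
	}
	return 0;
}

int main(void)
{
	struct pool_metadata md = { .data_block_size = 2048 };  /* 1024K blocks */

	printf("reload with same size:   %d\n", check_data_block_size(&md, 2048));
	printf("reload with smaller one: %d\n", check_data_block_size(&md, 1024));
	return 0;
}
```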
-
zhangwei(Jovi) authored
commit f0160a5a upstream. The TRACE_ITER_PRINTK check in __trace_puts/__trace_bputs is missing, so add it, to be consistent with __trace_printk/__trace_bprintk. Those functions are all called by the same function: trace_printk(). Link: http://lkml.kernel.org/p/51E7A7D6.8090900@huawei.com Signed-off-by:
zhangwei(Jovi) <jovi.zhangwei@huawei.com> Signed-off-by:
Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Steven Rostedt (Red Hat) authored
commit 5f8bf2d2 upstream. Running my ftrace tests on PowerPC, it failed the test that checks if the function_graph tracer is affected by the stack tracer. It was. Looking into this, I found that update_function_graph_func() must be called even if the trampoline function is not changed. This is because archs like PowerPC do not support ftrace_ops being passed by assembly and instead use a helper function (what the trampoline function points to). Since this function is not changed even when multiple ftrace_ops are added to the code, the test that falls out before calling update_function_graph_func() will miss that the update must still be done. Call update_function_graph_func() for all calls to update_ftrace_function(). Signed-off-by:
Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
zhangwei(Jovi) authored
commit 8abfb872 upstream. Currently the trace option stacktrace is not applicable for trace_printk with a constant string argument; the reason is that ftrace_trace_stack is missing in __trace_puts/__trace_bputs. In contrast, when using trace_printk with a non-constant string argument (which will call into __trace_printk/__trace_bprintk), the trace option stacktrace works, and this inconsistent result confuses users a lot. Link: http://lkml.kernel.org/p/51E7A7C9.9040401@huawei.com Signed-off-by:
zhangwei(Jovi) <jovi.zhangwei@huawei.com> Signed-off-by:
Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Takashi Iwai authored
commit 4da63c6f upstream. When the initialization of the Intel HDMI controller fails due to missing i915 kernel symbols (e.g. HD-audio is built in while i915 is a module), the driver discontinues the probe. However, since the probe was done asynchronously, the driver object still remains, thus the relevant PM ops are still called at suspend/resume. This results in bad accesses to the incomplete audio card object, eventually leading to an Oops or a stall at PM. This patch adds the missing checks of the chip->init_failed flag at each PM callback in order to fix the problem above. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=79561 Signed-off-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Takashi Iwai authored
commit 4320f6b1 upstream. The commit [247bc037: PM / Sleep: Mitigate race between the freezer and request_firmware()] introduced the finer state control, but it also leads to a new bug; for example, a bug report regarding the firmware loading of intel BT device at suspend/resume: https://bugzilla.novell.com/show_bug.cgi?id=873790 The root cause seems to be a small window between the process resume and the clear of usermodehelper lock. The request_firmware() function checks the UMH lock and gives up when it's in UMH_DISABLE state. This is for avoiding the invalid f/w loading during suspend/resume phase. The problem is, however, that usermodehelper_enable() is called at the end of thaw_processes(). Thus, a thawed process in between can kick off the f/w loader code path (in this case, via btusb_setup_intel()) even before the call of usermodehelper_enable(). Then usermodehelper_read_trylock() returns an error and request_firmware() spews WARN_ON() in the end. This oneliner patch fixes the issue just by setting to UMH_FREEZING state again before restarting tasks, so that the call of request_firmware() will be blocked until the end of this function instead of returning an error. Fixes: 247bc037 (PM / Sleep: Mitigate race between the freezer and request_firmware()) Link: https://bugzilla.novell.com/show_bug.cgi?id=873790 Signed-off-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Sasha Levin authored
commit 3cf521f7 upstream. The l2tp [get|set]sockopt() code has fallen back to the UDP functions for socket option levels != SOL_PPPOL2TP since day one, but that has never actually worked, since the l2tp socket isn't an inet socket. As David Miller points out: "If we wanted this to work, it'd have to look up the tunnel and then use tunnel->sk, but I wonder how useful that would be" Since this can never have worked, nobody could possibly have depended on that functionality, so just remove the broken code and return -EINVAL. Reported-by:
Sasha Levin <sasha.levin@oracle.com> Acked-by:
James Chapman <jchapman@katalix.com> Acked-by:
David Miller <davem@davemloft.net> Cc: Phil Turnbull <phil.turnbull@oracle.com> Cc: Vegard Nossum <vegard.nossum@oracle.com> Cc: Willy Tarreau <w@1wt.eu> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Axel Lin authored
commit 6b00f440 upstream. Dashes are not allowed in hwmon name attributes. Use "da9055" instead of "da9055-hwmon". Signed-off-by:
Axel Lin <axel.lin@ingics.com> Signed-off-by:
Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Axel Lin authored
commit ee14b644 upstream. Dashes are not allowed in hwmon name attributes. Use "da9052" instead of "da9052-hwmon". Signed-off-by:
Axel Lin <axel.lin@ingics.com> Signed-off-by:
Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-