- 02 Jul, 2014 40 commits
-
Mike Frysinger authored
commit 7fd44dac upstream. The io_setup takes a pointer to a context id of type aio_context_t. This in turn is typed to a __kernel_ulong_t. We could tweak the exported headers to define this as a 64bit quantity for specific ABIs, but since we already have a 32bit compat shim for the x86 ABI, let's just re-use that logic. The libaio package is also written to expect this as a pointer type, so a compat shim would simplify that. The io_submit func operates on an array of pointers to iocb structs. Padding out the array to be 64bit aligned is a huge pain, so convert it over to the existing compat shim too. We don't convert io_getevents to the compat func as its only purpose is to handle the timespec struct, and the x32 ABI uses 64bit times. With this change, the libaio package can now pass its testsuite when built for the x32 ABI. Signed-off-by: Mike Frysinger <vapier@gentoo.org> Link: http://lkml.kernel.org/r/1399250595-5005-1-git-send-email-vapier@gentoo.org Cc: H.J. Lu <hjl.tools@gmail.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
H. Peter Anvin authored
commit 246f2d2e upstream. It is not safe to use LAR to filter when to go down the espfix path, because the LDT is per-process (rather than per-thread) and another thread might change the descriptors behind our back. Fortunately it is always *safe* (if a bit slow) to go down the espfix path, and a 32-bit LDT stack segment is extremely rare. Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Link: http://lkml.kernel.org/r/1398816946-3351-1-git-send-email-hpa@linux.intel.com Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Nicholas A. Bellinger authored
[Note that a different patch to address the same issue went in during v3.15-rc1 (commit 4442dc8a), but includes a bunch of other changes that don't strictly apply to fixing the bug] This patch changes rd_allocate_sgl_table() to explicitly clear ramdisk_mcp backend memory pages by passing __GFP_ZERO into alloc_pages(). This addresses a potential security issue where reading from a ramdisk_mcp could return sensitive information, and follows what >= v3.15 does to explicitly clear ramdisk_mcp memory at backend device initialization time. Reported-by: Jorge Daniel Sequeira Matias <jdsm@tecnico.ulisboa.pt> Cc: Jorge Daniel Sequeira Matias <jdsm@tecnico.ulisboa.pt> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
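As a minimal illustration of what zeroing at allocation time looks like (a sketch, not the exact upstream hunk):

    /* sketch: hand out pre-zeroed backing pages so stale memory contents
     * can never be read back through the ramdisk_mcp backend */
    struct page *pg = alloc_pages(GFP_KERNEL | __GFP_ZERO, 0);

    if (!pg)
            return -ENOMEM;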
-
Roland Dreier authored
commit 2426bd45 upstream. When an initiator sends an allocation length bigger than what its command consumes, the target should only return the actual response data and set the residual length to the unused part of the allocation length. Add a helper function that command handlers (INQUIRY, READ CAPACITY, etc) can use to do this correctly, and use this code to get the correct residual for commands that don't use the full initiator allocation in the handlers for READ CAPACITY, READ CAPACITY(16), INQUIRY, MODE SENSE and REPORT LUNS. This addresses a handful of failures as reported by Christophe with the Windows Certification Kit: http://permalink.gmane.org/gmane.linux.scsi.target.devel/6515 Signed-off-by: Roland Dreier <roland@purestorage.com> Tested-by: Christophe Vu-Brugier <cvubrugier@yahoo.fr> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
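A sketch of what such a helper boils down to (the se_cmd field names follow target core; the helper name and its exact placement here are illustrative, not necessarily the upstream code):

    static void complete_cmd_with_length(struct se_cmd *cmd, u8 scsi_status, int length)
    {
            if (length < cmd->data_length) {
                    /* report the unused tail of the allocation as residual */
                    cmd->se_cmd_flags |= SCF_UNDERFLOW_BIT;
                    cmd->residual_count = cmd->data_length - length;
                    cmd->data_length = length;
            }
            target_complete_cmd(cmd, scsi_status);
    }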
-
Sagi Grimberg authored
commit 22c7aaa5 upstream. In case the transport is iser we should not include the iscsi target info in the sendtargets text response pdu; otherwise the sendtargets response includes the target info twice. Modify iscsit_build_sendtargets_response to filter transport types that don't match. Signed-off-by: Sagi Grimberg <sagig@mellanox.com> Reported-by: Slava Shwartsman <valyushash@gmail.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Nicholas Bellinger authored
commit bbc05048 upstream. This patch fixes an iscsi_queue_req memory leak when an ABORT_TASK response has been queued by TFO->queue_tm_rsp() -> lio_queue_tm_rsp() after a long-standing I/O completes, but the connection has already been reset and is waiting for cleanup to complete in the iscsit_release_commands_from_conn() -> transport_generic_free_cmd() -> transport_wait_for_tasks() code. It moves iscsit_free_queue_reqs_for_conn() after the per-connection command list has been released, so that the associated se_cmd tag can be completed + released by target-core before freeing any remaining iscsi_queue_req memory for the connection generated by lio_queue_tm_rsp(). Cc: Thomas Glanzmann <thomas@glanzmann.de> Cc: Charalampos Pournaris <charpour@gmail.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Nicholas Bellinger authored
commit a95d6511 upstream. This patch fixes a bug where multiple waiters on ->t_transport_stop_comp occur when a concurrent ABORT_TASK and session reset both invoke transport_wait_for_tasks() while waiting for the associated se_cmd descriptor backend processing to complete. For this case, complete_all() should be invoked in order to wake up both waiters in the core_tmr_abort_task() + transport_generic_free_cmd() process contexts. Cc: Thomas Glanzmann <thomas@glanzmann.de> Cc: Charalampos Pournaris <charpour@gmail.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
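Illustrative sketch of the wakeup side after the change (not the exact upstream hunk):

    /* ABORT_TASK and a session reset may both be sleeping in
     * transport_wait_for_tasks() on this completion, so wake every
     * waiter instead of just the first one */
    complete_all(&cmd->t_transport_stop_comp);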
-
Nicholas Bellinger authored
commit f15e9cd9 upstream. This patch fixes a bug where se_cmd descriptors associated with a Task Management Request (TMR) were not setting CMD_T_ACTIVE before being dispatched into target_tmr_work() process context. This is required in order for transport_generic_free_cmd() -> transport_wait_for_tasks() to wait on se_cmd->t_transport_stop_comp if a session reset event occurs while an ABORT_TASK is outstanding waiting for another I/O to complete. Cc: Thomas Glanzmann <thomas@glanzmann.de> Cc: Charalampos Pournaris <charpour@gmail.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Sagi Grimberg authored
commit f5ebec96 upstream. disconnected_handler works are scheduled on the system_wq. When attempting to unload, first make sure all such works have finished their cleanup. Signed-off-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Sagi Grimberg authored
commit 88c4015f upstream. There are 4 RDMA_CM events that all basically mean that the user should tear down the IB connection:

    - DISCONNECTED
    - ADDR_CHANGE
    - DEVICE_REMOVAL
    - TIMEWAIT_EXIT

Only for DISCONNECTED/ADDR_CHANGE does it make sense to call rdma_disconnect (send DREQ/DREP to our initiator). So we keep the same teardown handler for all of them but only indicate calling rdma_disconnect for the relevant events. This patch also removes redundant debug prints for each single event. v2 changes: - Call isert_disconnected_handler() for DEVICE_REMOVAL (Or + Sag) Signed-off-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Sagi Grimberg authored
commit 9d49f5e2 upstream. In ungraceful teardowns the isert close flows seem racy such that isert_wait_conn hangs, as RDMA_CM_EVENT_DISCONNECTED never gets invoked (no one called rdma_disconnect). Both graceful and ungraceful teardowns will have rx flush errors (isert posts a batch once the connection is established). Once all flush errors are consumed we invoke isert_wait_conn and it will be responsible for calling rdma_disconnect. This way it can be sure that rdma_disconnect was called and it won't wait forever. This patch also removes the logout_posted indicator: either the logout completion was consumed and there is no problem decrementing post_send_buf_count, or it was consumed as a flush error. There is no point in keeping it for isert_wait_conn, as there is no danger that isert_conn will be accidentally removed while it is running. (Drop unnecessary sleep_on_conn_wait_comp check in isert_cq_rx_comp_err - nab) Signed-off-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Sagi Grimberg authored
commit e346ab34 upstream. If the np_thread state is RESET/SHUTDOWN/EXIT, there is no point in isert stalling there, as we may get a hang if no one wakes it up later. Signed-off-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Jukka Taimisto authored
commit 8a96f3cd upstream.

-[0x01 Introduction

We have found a programming error causing a deadlock in the Bluetooth subsystem of the Linux kernel. The problem is caused by a missing release_sock() call when L2CAP connection creation fails due to a full accept queue. The issue can be reproduced with the 3.15-rc5 kernel and is also present in earlier kernels.

-[0x02 Details

The problem occurs when multiple L2CAP connections are created to a PSM which contains a listening socket (like SDP) and left pending, for example, in configuration (the underlying ACL link is not disconnected between connections). When an L2CAP connection request is received and a listening socket is found, the l2cap_sock_new_connection_cb() function (net/bluetooth/l2cap_sock.c) is called. This function locks the 'parent' socket and then checks if the accept queue is full.

    1178         lock_sock(parent);
    1179
    1180         /* Check for backlog size */
    1181         if (sk_acceptq_is_full(parent)) {
    1182                 BT_DBG("backlog full %d", parent->sk_ack_backlog);
    1183                 return NULL;
    1184         }

In case the accept queue is full, NULL is returned, but the 'parent' socket is not released. Thus when the next L2CAP connection request is received the code blocks on lock_sock() since the parent is still locked. Also note that for connections already established and waiting for configuration to complete a timeout will occur and l2cap_chan_timeout() (net/bluetooth/l2cap_core.c) will be called. All threads calling this function will also be blocked waiting for the channel mutex, since the thread which is waiting on lock_sock() already holds the channel mutex. We were able to reproduce this by continuously sending L2CAP connection requests followed by disconnection requests containing an invalid CID. This left the created connections pending configuration. After the deadlock occurs it is impossible to kill bluetoothd, btmon will not get any more data etc., requiring a reboot to recover.

-[0x03 Fix

Releasing the 'parent' socket when l2cap_sock_new_connection_cb() returns NULL seems to fix the issue.

Signed-off-by: Jukka Taimisto <jtt@codenomicon.com> Reported-by: Tommi Mäkilä <tmakila@codenomicon.com> Signed-off-by: Johan Hedberg <johan.hedberg@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
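For reference, the fix amounts to releasing the socket on that early-return path; a sketch (close to, but not necessarily identical to, the upstream hunk):

    /* Check for backlog size */
    if (sk_acceptq_is_full(parent)) {
            BT_DBG("backlog full %d", parent->sk_ack_backlog);
            release_sock(parent);   /* previously missing unlock */
            return NULL;
    }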
-
Felipe Balbi authored
commit da64c27d upstream. LDISCs shouldn't call tty->ops->write() from within ->write_wakeup(). ->write_wakeup() is called with the port lock taken and IRQs disabled; tty->ops->write() will try to acquire the same port lock and we will deadlock. Acked-by: Marcel Holtmann <marcel@holtmann.org> Reviewed-by: Peter Hurley <peter@hurleysoftware.com> Reported-by: Huang Shijie <b32955@freescale.com> Signed-off-by: Felipe Balbi <balbi@ti.com> Tested-by: Andreas Bießmann <andreas@biessmann.de> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Jianguo Wu authored
commit 86f40622 upstream. When enabling LPAE and big-endian on a Hisilicon board, while specifying mem=384M mem=512M@7680M, we get a bad page state:

    Freeing unused kernel memory: 180K (c0466000 - c0493000)
    BUG: Bad page state in process init  pfn:fa442
    page:c7749840 count:0 mapcount:-1 mapping: (null) index:0x0
    page flags: 0x40000400(reserved)
    Modules linked in:
    CPU: 0 PID: 1 Comm: init Not tainted 3.10.27+ #66
    [<c000f5f0>] (unwind_backtrace+0x0/0x11c) from [<c000cbc4>] (show_stack+0x10/0x14)
    [<c000cbc4>] (show_stack+0x10/0x14) from [<c009e448>] (bad_page+0xd4/0x104)
    [<c009e448>] (bad_page+0xd4/0x104) from [<c009e520>] (free_pages_prepare+0xa8/0x14c)
    [<c009e520>] (free_pages_prepare+0xa8/0x14c) from [<c009f8ec>] (free_hot_cold_page+0x18/0xf0)
    [<c009f8ec>] (free_hot_cold_page+0x18/0xf0) from [<c00b5444>] (handle_pte_fault+0xcf4/0xdc8)
    [<c00b5444>] (handle_pte_fault+0xcf4/0xdc8) from [<c00b6458>] (handle_mm_fault+0xf4/0x120)
    [<c00b6458>] (handle_mm_fault+0xf4/0x120) from [<c0013754>] (do_page_fault+0xfc/0x354)
    [<c0013754>] (do_page_fault+0xfc/0x354) from [<c0008400>] (do_DataAbort+0x2c/0x90)
    [<c0008400>] (do_DataAbort+0x2c/0x90) from [<c0008fb4>] (__dabt_usr+0x34/0x40)

The bad pfn:fa442 is not system memory (mem=384M mem=512M@7680M). After debugging, I found that in the page fault handler we get a wrong pfn from the pte just after setting the pte, as follows:

    do_anonymous_page()
    {
            ...
            set_pte_at(mm, address, page_table, entry);

            //debug code
            pfn = pte_pfn(entry);
            pr_info("pfn:0x%lx, pte:0x%llx\n", pfn, pte_val(entry));

            //read out the pte just set
            new_pte = pte_offset_map(pmd, address);
            new_pfn = pte_pfn(*new_pte);
            pr_info("new pfn:0x%lx, new pte:0x%llx\n", pfn, pte_val(entry));
            ...
    }

    pfn:     0x1fa4f5, pte:0xc00001fa4f575f
    new_pfn: 0xfa4f5,  new_pte:0xc00000fa4f5f5f   //new pfn/pte is wrong.

The bug happens in cpu_v7_set_pte_ext(ptep, pte): An LPAE PTE is a 64bit quantity, passed to cpu_v7_set_pte_ext in the r2 and r3 registers. On an LE kernel, r2 contains the LSB of the PTE, and r3 the MSB. On a BE kernel, the assignment is reversed. Unfortunately, the current code always assumes the LE case, leading to corruption of the PTE when clearing/setting bits. This patch fixes this issue much like it has been done already in the cpu_v7_switch_mm case. Signed-off-by: Jianguo Wu <wujianguo@huawei.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Russell King authored
commit 3683f44c upstream. While debugging the FEC ethernet driver using stacktrace, it was noticed that the stacktraces always begin as follows:

    [<c00117b4>] save_stack_trace_tsk+0x0/0x98
    [<c0011870>] save_stack_trace+0x24/0x28
    ...

This is because the stack trace code includes the stack frames for itself. This is incorrect behaviour, and also leads to "skip" doing the wrong thing (which is the number of stack frames to avoid recording.) Perversely, it does the right thing when passed a non-current thread. Fix this by ensuring that we have a known constant number of frames above the main stack trace function, and always skip these. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Olivier Langlois authored
commit 3b35fc81 upstream. Timestamps in v4l2 buffers returned to userspace are updated in uvc_video_clock_update(), which uses timestamps fetched from uvc_video_clock_decode() by unconditionally calling ktime_get_ts(). Hence setting the module clock param to realtime has no effect before this patch. This has been tested with ffmpeg:

    ffmpeg -y -f v4l2 -input_format yuyv422 -video_size 640x480 -framerate 30 -i /dev/video0 \
           -f alsa -acodec pcm_s16le -ar 16000 -ac 1 -i default \
           -c:v libx264 -preset ultrafast \
           -c:a libfdk_aac \
           out.mkv

and inspecting the v4l2 input starting timestamp. Signed-off-by: Olivier Langlois <olivier@trillion01.com> Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Lv Zheng authored
commit 73577d1d upstream. This patch fixes the following issue: If DSDT is customized, no local DSDT copy is needed. References: https://bugzilla.kernel.org/show_bug.cgi?id=69711 Signed-off-by: Enrico Etxe Arte <goitizena.generoa@gmail.com> Signed-off-by: Lv Zheng <lv.zheng@intel.com> [rjw: Subject] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
David Binderman authored
commit 5d42b0fa upstream. ACPICA BZ 1077. David Binderman. References: https://bugs.acpica.org/show_bug.cgi?id=1077 Signed-off-by: David Binderman <dcb314@hotmail.com> Signed-off-by: Bob Moore <robert.moore@intel.com> Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Ezequiel Garcia authored
commit 85ac1a17 upstream. Currently stk1160_read_reg() uses a stack-allocated char to get the read control value. This is wrong because usb_control_msg() requires a kmalloc-ed buffer. This commit fixes the issue by kmalloc'ing a 1-byte buffer to receive the read value. While here, let's remove the urb_buf array which was meant for a similar purpose, but never really used. Cc: Alan Stern <stern@rowland.harvard.edu> Reported-by: Sander Eikelenboom <linux@eikelenboom.it> Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com> Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
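A hedged sketch of the pattern (the request/value constants and the udev/reg/value names are placeholders, not the driver's real ones):

    int ret;
    u8 *buf = kmalloc(1, GFP_KERNEL);       /* heap buffer is safe for DMA */

    if (!buf)
            return -ENOMEM;
    ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
                          0x00 /* request: placeholder */,
                          USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
                          0x00 /* value */, reg /* index */, buf, 1,
                          1000 /* ms */);
    if (ret >= 0)
            *value = *buf;                  /* copy the byte out of the heap buffer */
    kfree(buf);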
-
Takashi Iwai authored
commit deb29e90 upstream. When the ivtv PCM device is accessed in a state where no firmware is loaded, it oopses like:

    BUG: unable to handle kernel NULL pointer dereference at 0000000000000050
    IP: [<ffffffffa049a881>] try_mailbox.isra.0+0x11/0x50 [ivtv]
    Call Trace:
      [<ffffffffa049aa20>] ivtv_api_call+0x160/0x6b0 [ivtv]
      [<ffffffffa049af86>] ivtv_api+0x16/0x40 [ivtv]
      [<ffffffffa049b10c>] ivtv_vapi+0xac/0xc0 [ivtv]
      [<ffffffffa049d40d>] ivtv_start_v4l2_encode_stream+0x19d/0x630 [ivtv]
      [<ffffffffa0530653>] snd_ivtv_pcm_capture_open+0x173/0x1c0 [ivtv_alsa]
      [<ffffffffa04526f1>] snd_pcm_open_substream+0x51/0x100 [snd_pcm]
      [<ffffffffa0452853>] snd_pcm_open+0xb3/0x260 [snd_pcm]
      [<ffffffffa0452a37>] snd_pcm_capture_open+0x37/0x50 [snd_pcm]
      [<ffffffffa033f557>] snd_open+0xa7/0x1e0 [snd]
      [<ffffffff8118a628>] chrdev_open+0x88/0x1d0
      [<ffffffff811840be>] do_dentry_open+0x1de/0x270
      [<ffffffff81193a73>] do_last+0x1c3/0xec0
      [<ffffffff81194826>] path_openat+0xb6/0x670
      [<ffffffff81195b65>] do_filp_open+0x35/0x80
      [<ffffffff81185449>] do_sys_open+0x129/0x210
      [<ffffffff815b782d>] system_call_fastpath+0x1a/0x1f

This patch adds a firmware check at the PCM open callback, like the other open callbacks of this driver. Bugzilla: https://apibugzilla.novell.com/show_bug.cgi?id=875440 Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Johan Hovold authored
commit c14829fa upstream. Only call usb_autopm_put_interface() if the corresponding usb_autopm_get_interface() was successful. This prevents a potential runtime PM counter imbalance should usb_autopm_get_interface() fail. Note that the USB PM usage counter is reset when the interface is unbound, but that the runtime PM counter may be left unbalanced. Also add comment on why we don't need to worry about racing resume/suspend on autopm_get failures. Fixes: d5fd650c ("usb: serial: prevent suspend/resume from racing against probe/remove") Signed-off-by: Johan Hovold <jhovold@gmail.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
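The general pattern looks roughly like this (a sketch, not the exact sierra hunk; serial->interface stands for the relevant usb_interface):

    /* only drop the PM reference if taking it succeeded */
    if (!usb_autopm_get_interface(serial->interface)) {
            /* ... device is resumed, do the work ... */
            usb_autopm_put_interface(serial->interface);
    }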
-
Johan Hovold authored
commit 80cc0fcb upstream. Make sure that needs_remote_wakeup is always set when there are open ports. Currently close() would unconditionally set needs_remote_wakeup to 0 even though there might still be open ports. This could lead to blocked input and possibly dropped data on devices that do not support remote wakeup (and which must therefore not be runtime suspended while open). Add an open_ports counter (protected by the susp_lock) and only clear needs_remote_wakeup when the last port is closed. Fixes: e6929a90 ("USB: support for autosuspend in sierra while online") Signed-off-by: Johan Hovold <jhovold@gmail.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
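Sketch of the close-side accounting under the susp_lock (field names follow the description above; layout illustrative):

    spin_lock_irq(&intfdata->susp_lock);
    if (--intfdata->open_ports == 0)
            serial->interface->needs_remote_wakeup = 0;     /* last port closed */
    spin_unlock_irq(&intfdata->susp_lock);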
-
Johan Hovold authored
commit 014333f7 upstream. The delayed-write queue was never emptied on disconnect, something which would lead to leaked urbs and transfer buffers if the device is disconnected before being runtime resumed due to a write. Fixes: e6929a90 ("USB: support for autosuspend in sierra while online") Signed-off-by: Johan Hovold <jhovold@gmail.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Johan Hovold authored
commit 7fdd26a0 upstream. Neither the transfer buffer nor the urb itself was released in the resume error path for delayed writes. Also on errors, the remainder of the queue was not even processed, which leads to further urb and buffer leaks. The same error path also failed to balance the outstanding-urb counter, something which results in degraded throughput or completely blocked writes. Fix this by releasing urb and buffer and balancing counters on errors, and by always processing the whole queue even when submission of one urb fails. Fixes: e6929a90 ("USB: support for autosuspend in sierra while online") Signed-off-by: Johan Hovold <jhovold@gmail.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Johan Hovold authored
commit 8452727d upstream. Fix use after free or NULL-pointer dereference during suspend and resume. The port data may never have been allocated (port probe failed) or may already have been released by port_remove (e.g. driver is unloaded) when suspend and resume are called. Fixes: e6929a90 ("USB: support for autosuspend in sierra while online") Signed-off-by: Johan Hovold <jhovold@gmail.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Johan Hovold authored
commit 353fe198 upstream. Fix AA deadlock in open error path that would call close() and try to grab the already held disc_mutex. Fixes: b9a44bc1 ("sierra: driver urb handling improvements") Signed-off-by: Johan Hovold <jhovold@gmail.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Johan Hovold authored
commit fb7ad4f9 upstream. Keep trying to submit urbs rather than bail out on first read-urb submission error, which would also prevent I/O for any further ports from being resumed. Instead keep an error count, for all types of failed submissions, and let USB core know that something went wrong. Also make sure to always clear the suspended flag. Currently a failed read-urb submission would prevent cached writes as well as any subsequent writes from being submitted until next suspend-resume cycle, something which may not even necessarily happen. Note that USB core currently only logs an error if an interface resume failed. Fixes: 383cedc3 ("USB: serial: full autosuspend support for the option driver") Signed-off-by: Johan Hovold <jhovold@gmail.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Johan Hovold authored
commit 9096f1fb upstream. The interrupt urb was submitted unconditionally at resume, something which could lead to a NULL-pointer dereference in the urb completion handler as resume may be called after the port and port data is gone. Fix this by making sure the interrupt urb is only submitted and active when the port is open. Fixes: 383cedc3 ("USB: serial: full autosuspend support for the option driver") Signed-off-by: Johan Hovold <jhovold@gmail.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Johan Hovold authored
commit 79eed03e upstream. The delayed-write queue was never emptied at shutdown (close), something which could lead to leaked urbs if the port is closed before being runtime resumed due to a write. When this happens the output buffer would not drain on close (closing_wait timeout), and after consecutive opens, writes could be corrupted with previously buffered data, transferred with reduced throughput or completely blocked. Note that unbusy_queued_urb() was simply moved out of CONFIG_PM. Fixes: 383cedc3 ("USB: serial: full autosuspend support for the option driver") Signed-off-by: Johan Hovold <jhovold@gmail.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Johan Hovold authored
commit 170fad9e upstream. Fix race between write() and suspend() which could lead to writes being dropped (or I/O while suspended) if the device is runtime suspended while a write request is being processed. Specifically, suspend() releases the susp_lock after determining the device is idle but before setting the suspended flag, thus leaving a window where a concurrent write() can submit an urb. Fixes: 383cedc3 ("USB: serial: full autosuspend support for the option driver") Signed-off-by: Johan Hovold <jhovold@gmail.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
xiao jin authored
commit d9e93c08 upstream. We found a race between write and resume. usb_wwan_resume runs play_delayed() and spin_unlock, but intfdata->suspended is still not set to zero. At this time usb_wwan_write is called and anchors the urb to the delay list. Then resume keeps running, but the delayed urb has no chance to be committed until the next resume. If the time of the next resume is far away, the tty will be blocked in tty_wait_until_sent during that time. The race can also lead to writes being reordered. This patch puts play_delayed() and the clearing of intfdata->suspended inside the same spinlock section, to avoid the write race during resume. Fixes: 383cedc3 ("USB: serial: full autosuspend support for the option driver") Signed-off-by: xiao jin <jin.xiao@intel.com> Signed-off-by: Zhang, Qi1 <qi1.zhang@intel.com> Reviewed-by: David Cohen <david.a.cohen@linux.intel.com> Signed-off-by: Johan Hovold <jhovold@gmail.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
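Roughly, the resume path after the fix looks like this (a sketch; variable names follow the description above):

    spin_lock_irq(&intfdata->susp_lock);
    for (i = 0; i < serial->num_ports; i++)
            play_delayed(serial->port[i]);  /* flush urbs queued while suspended */
    intfdata->suspended = 0;                /* cleared under the same lock, so a
                                             * concurrent write() cannot slip in between */
    spin_unlock_irq(&intfdata->susp_lock);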
-
xiao jin authored
commit db090473 upstream. When usb serial is enabled for modem data, sometimes the tty is blocked in tty_wait_until_sent because portdata->out_busy is always set and has no chance to be cleared. We found a bug in the write error path: usb_wwan_write sets portdata->out_busy first, then tries autopm async and fails. No out urb is submitted and there is no usb_wwan_outdat_callback for this write, so portdata->out_busy can't be cleared. This patch clears portdata->out_busy if usb_wwan_write fails the async autopm request. Fixes: 383cedc3 ("USB: serial: full autosuspend support for the option driver") Signed-off-by: xiao jin <jin.xiao@intel.com> Signed-off-by: Zhang, Qi1 <qi1.zhang@intel.com> Reviewed-by: David Cohen <david.a.cohen@linux.intel.com> Signed-off-by: Johan Hovold <jhovold@gmail.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
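Sketch of the error-path cleanup (illustrative; out_busy is treated here as the per-port busy bitmap implied by the callback path described above):

    err = usb_autopm_get_interface_async(port->serial->interface);
    if (err < 0) {
            /* no urb will be submitted and no completion will run,
             * so undo the busy marking here */
            clear_bit(i, &portdata->out_busy);
            return err;
    }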
-
Mikulas Patocka authored
commit 972754cf upstream. I had occasional screen corruption with the matrox framebuffer driver and I found out that the reason for the corruption is that the hardware blitter accesses the videoram while it is being written to. The matrox driver has a macro WaitTillIdle() that should wait until the blitter is idle, but it sometimes doesn't work. I added a dummy read mga_inl(M_STATUS) to WaitTillIdle() to fix the problem. The dummy read will flush the write buffer in the PCI chipset, and the next read of M_STATUS will return the hardware status. Since applying this patch, I had no screen corruption at all. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Maurizio Lombardi authored
commit b5b60778 upstream. The variable "size" is expressed as a number of blocks and not as a number of clusters; this could trigger a kernel panic when using ext4 with a cluster size different from the block size. Signed-off-by: Maurizio Lombardi <mlombard@redhat.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Jan Kara authored
commit eeece469 upstream. Tail of a page straddling inode size must be zeroed when being written out due to POSIX requirement that modifications of mmaped page beyond inode size must not be written to the file. ext4_bio_write_page() did this only for blocks fully beyond inode size but didn't properly zero blocks partially beyond inode size. Fix this. The problem has been uncovered by mmap_11-4 test in openposix test suite (part of LTP). Reported-by: Xiaoguang Wang <wangxg.fnst@cn.fujitsu.com> Fixes: 5a0dc736 Fixes: bd2d0210 CC: stable@vger.kernel.org Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
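A sketch of the zeroing in the writeback path (illustrative; len stands for the number of bytes of the page that lie within i_size):

    /* zero the tail of a page that straddles i_size before submitting it,
     * so stores done via mmap beyond EOF never reach the disk */
    if (len < PAGE_CACHE_SIZE)
            zero_user_segment(page, len, PAGE_CACHE_SIZE);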
-
Namjae Jeon authored
commit 1c8349a1 upstream. When we perform a data integrity sync we tag all the dirty pages with PAGECACHE_TAG_TOWRITE at the start of ext4_da_writepages. Later we check for this tag in write_cache_pages_da and create a struct mpage_da_data containing contiguously indexed pages tagged with this tag, and sync these pages with a call to mpage_da_map_and_submit. This process is done in a while loop until all the PAGECACHE_TAG_TOWRITE pages are synced. We also do journal start and stop in each iteration. journal_stop could initiate a journal commit, which would call ext4_writepage, which in turn will call ext4_bio_write_page even for delayed OR unwritten buffers. When ext4_bio_write_page is called for such buffers, even though it does not sync them it clears the PAGECACHE_TAG_TOWRITE of the corresponding page, and hence these pages are also not synced by the currently running data integrity sync. We will end up with dirty pages although the sync is completed. This could cause a potential data loss when the sync call is followed by a truncate_pagecache call, which is exactly the case in collapse_range. (It will cause generic/127 failure in xfstests) To avoid this issue, we can use set_page_writeback_keepwrite instead of set_page_writeback, which doesn't clear the TOWRITE tag. Signed-off-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Ashish Sangwan <a.sangwan@samsung.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Reviewed-by: Jan Kara <jack@suse.cz> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
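A sketch of the resulting choice in the page submission path (the keep_towrite flag name is illustrative for whatever the caller passes down):

    if (keep_towrite)
            set_page_writeback_keepwrite(page);     /* leave PAGECACHE_TAG_TOWRITE alone */
    else
            set_page_writeback(page);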
-
Christian Borntraeger authored
commit 993072ee upstream. The IRB might be 96 bytes if the extended-I/O-measurement facility is used. This feature is currently not used by Linux, but struct irb already has the emw defined. So let's make the irb in lowcore match the size of the internal data structure to be future proof. We also have to add a pad, to correctly align the paste. The bigger irb field also circumvents a bug in some QEMU versions that always write the emw field on test subchannel and therefore destroy the paste definitions of this CPU. Running under these QEMU version broke some timing functions in the VDSO and all users of these functions, e.g. some JREs. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Sebastian Ott <sebott@linux.vnet.ibm.com> Cc: Cornelia Huck <cornelia.huck@de.ibm.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Lai Jiangshan authored
commit 3afb69cb upstream. idr_replace() open-codes the logic to calculate the maximum valid ID given the height of the idr tree; unfortunately, the open-coded logic doesn't account for the fact that the top layer may have unused slots and over-shifts the limit to zero when the tree is at its maximum height. The following test code shows it fails to replace the value for id=((1<<27)+42):

    static void test5(void)
    {
            int id;
            DEFINE_IDR(test_idr);
    #define TEST5_START ((1<<27)+42) /* use the highest layer */

            printk(KERN_INFO "Start test5\n");
            id = idr_alloc(&test_idr, (void *)1, TEST5_START, 0, GFP_KERNEL);
            BUG_ON(id != TEST5_START);
            TEST_BUG_ON(idr_replace(&test_idr, (void *)2, TEST5_START) != (void *)1);
            idr_destroy(&test_idr);
            printk(KERN_INFO "End of test5\n");
    }

Fix the bug by using idr_max() which correctly takes into account the maximum allowed shift. sub_alloc() shares the same problem and may incorrectly fail with -EAGAIN; however, this bug doesn't affect correct operation because idr_get_empty_slot(), which already uses idr_max(), retries with the increased @id in such cases. [tj@kernel.org: Updated patch description.] Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com> Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Will Deacon authored
commit c1688707 upstream. Our compat PTRACE_POKEUSR implementation simply passes the user data to regset_copy_from_user after some simple range checking. Unfortunately, the data in question has already been copied to the kernel stack by this point, so the subsequent access_ok check fails and the ptrace request returns -EFAULT. This causes problems tracing fork() with older versions of strace. This patch briefly changes the fs to KERNEL_DS, so that the access_ok check passes even with a kernel address. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
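The usual shape of such a fix (a sketch; the regset arguments and kdata name are placeholders, not the exact arm64 code):

    mm_segment_t old_fs = get_fs();
    int ret;

    set_fs(KERNEL_DS);      /* the data is a kernel-stack copy, so let the
                             * access_ok() check inside the regset code pass */
    ret = copy_regset_from_user(task, view, setno, off, count, &kdata);
    set_fs(old_fs);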
-