- 04 Oct, 2015 40 commits
-
Matt Redfearn authored
The Ingenic UART is similar to a standard 16550, but hardware flow control requires setting a couple of additional, non-standard bits in the MCR. The non-standard "modem control enable" and "hardware flow control mode" bits are set when writing to the MCR register, based on whether the modem control interrupt is active. Additionally, the non-16550-compliant parts of the UART need to be masked from higher layers. Tested on an Ingenic JZ4780 on a MIPS Creator Ci20 board. Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
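For illustration, a minimal sketch of how such an MCR write hook might look, assuming hypothetical JZ_UART_MCR_MDCE / JZ_UART_MCR_FCM bit names and the standard 8250 register layout; the actual driver code may differ:

    #include <linux/bits.h>
    #include <linux/io.h>
    #include <linux/serial_core.h>
    #include <linux/serial_reg.h>

    /* Illustrative names and bit positions for the non-standard
     * "modem control enable" and "hardware flow control mode" bits. */
    #define JZ_UART_MCR_MDCE    BIT(7)    /* assumed bit position */
    #define JZ_UART_MCR_FCM     BIT(6)    /* assumed bit position */

    static void ingenic_uart_serial_out(struct uart_port *p, int offset, int value)
    {
            unsigned int ier;

            if (offset == UART_MCR) {
                    /* Only add the extra bits while the modem status
                     * interrupt is active, as described above. */
                    ier = readb(p->membase + (UART_IER << p->regshift));
                    if (ier & UART_IER_MSI)
                            value |= JZ_UART_MCR_MDCE | JZ_UART_MCR_FCM;
            }

            writeb(value, p->membase + (offset << p->regshift));
    }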
-
Javier Martinez Canillas authored
The driver has an I2C device ID table that is used to create the modaliases, and "sc16is7xx" is not a supported I2C ID anyway, so it's never used. Signed-off-by: Javier Martinez Canillas <javier@osg.samsung.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Axel Lin authored
The implementation of mux_console_device() is very similar to uart_console_device(). By setting the .data field in mux_console, we can convert it to use uart_console_device(). Signed-off-by: Axel Lin <axel.lin@ingics.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Fabio Estevam authored
Compare pointer-typed values to NULL rather than 0. The semantic patch that makes this change is available in scripts/coccinelle/null/badzero.cocci. Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
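The change is mechanical; a representative before/after using an illustrative field name (not the actual hunk):

    /* before: pointer compared against the integer literal 0 */
    if (sport->dma_chan_rx == 0)
            return;

    /* after: pointer-typed values compared to NULL */
    if (sport->dma_chan_rx == NULL)
            return;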
-
Robert Baldyga authored
So far, when an interrupt occurred in DMA mode, it was handled by terminating the DMA transfer and draining the data remaining in the RX FIFO. This worked well as long as the interrupt was caused by a timeout, but the same interrupt can also be caused by a special condition (e.g. 'break'), which requires special handling. In such a case the handling mechanism was the same: the DMA transaction was terminated and the FIFO was drained, but any special conditions were ignored. Because of this, in DMA mode there was no way to use, for example, Magic SysRq. This patch fixes the problem by using s3c24xx_serial_rx_drain_fifo(), which does the same thing (drains the RX FIFO) but additionally checks the UART status to detect special conditions (such as 'break'), instead of uart_rx_drain_fifo(). Thanks to this we have exactly the same UART status handling in both DMA and PIO mode. This change additionally simplifies the RX handling code, as we no longer need uart_rx_drain_fifo(), so we can remove it. Reported-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Robert Baldyga <r.baldyga@samsung.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Robert Baldyga authored
This patch introduces s3c24xx_serial_rx_drain_fifo(), which reads data from the RX FIFO and writes it to the tty buffer. It also checks for special conditions (such as 'break') and handles them. This function has been split out of s3c24xx_serial_rx_chars_pio() as it contains code that can also be used in DMA mode. Signed-off-by: Robert Baldyga <r.baldyga@samsung.com> Reviewed-by: Krzysztof Kozlowski <k.kozlowski@samsung.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Robert Baldyga authored
This label does nothing special and we don't need to have it anymore. Signed-off-by: Robert Baldyga <r.baldyga@samsung.com> Reviewed-by: Krzysztof Kozlowski <k.kozlowski@samsung.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Robert Baldyga authored
This parameter is not used anywhere, so we can get rid of it. Signed-off-by: Robert Baldyga <r.baldyga@samsung.com> Reviewed-by: Krzysztof Kozlowski <k.kozlowski@samsung.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Geert Uytterhoeven authored
Add support for obtaining DMA channel information from the device tree. This requires switching from the legacy sh_dmae_slave structures with hardcoded channel numbers and the corresponding filter function to: 1. dma_request_slave_channel_compat(), - On legacy platforms, dma_request_slave_channel_compat() uses the passed DMA channel numbers that originate from platform device data, - On DT-based platforms, dma_request_slave_channel_compat() will retrieve the information from DT. 2. and the generic dmaengine_slave_config() configuration method, which requires filling in DMA register ports and slave bus widths. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
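A rough sketch of the two steps, with placeholder filter, slave-id and register-offset names that are not the driver's actual identifiers:

    #include <linux/dmaengine.h>
    #include <linux/serial_core.h>

    static struct dma_chan *sci_request_rx_chan(struct uart_port *port,
                                                dma_filter_fn filter, void *slave_id)
    {
            dma_cap_mask_t mask;
            struct dma_slave_config cfg = { };
            struct dma_chan *chan;

            dma_cap_zero(mask);
            dma_cap_set(DMA_SLAVE, mask);

            /* DT platforms resolve the channel from the "dmas"/"dma-names"
             * properties; legacy platforms fall back to the filter function
             * and the channel number from platform data. */
            chan = dma_request_slave_channel_compat(mask, filter, slave_id,
                                                    port->dev, "rx");
            if (!chan)
                    return NULL;

            /* Generic slave configuration: data register address and bus width. */
            cfg.direction = DMA_DEV_TO_MEM;
            cfg.src_addr = port->mapbase + SCIx_RXD_REG;    /* assumed offset */
            cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
            if (dmaengine_slave_config(chan, &cfg)) {
                    dma_release_channel(chan);
                    return NULL;
            }
            return chan;
    }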
-
Muhammad Hamza Farooq authored
Occasionally, a DMA transaction completes _after_ the DMA engine is stopped. Verify that the transaction has not already finished before forcing the engine to stop and pushing the data. Signed-off-by: Muhammad Hamza Farooq <mfarooq@visteon.com> Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Muhammad Hamza Farooq authored
When DMA packet completion and timer expiry take place at the same time, do not terminate the DMA engine and submit new descriptors, as the DMA communication hasn't necessarily stopped at this point. Signed-off-by: Muhammad Hamza Farooq <mfarooq@visteon.com> Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Muhammad Hamza Farooq authored
dmaengine_submit() will not start the DMA operation; it merely adds it to the pending queue. If the queue is no longer running, it won't be restarted until dma_async_issue_pending() is called. Signed-off-by: Muhammad Hamza Farooq <mfarooq@visteon.com> [geert: Add more description] Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
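In dmaengine terms the fix pairs the two calls; a generic fragment (desc, chan, s and cookie come from the surrounding driver code):

    desc->callback = sci_dma_rx_complete;   /* completion handler (driver-specific) */
    desc->callback_param = s;

    cookie = dmaengine_submit(desc);        /* only queues the descriptor */
    if (dma_submit_error(cookie))
            return;

    dma_async_issue_pending(chan);          /* this actually (re)starts the queue */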
-
Muhammad Hamza Farooq authored
Since the DMA engine is not stopped every time rx_timer_fn() is called, the interrupts have to be redirected back to the CPU only when an incomplete DMA transaction is handled. Signed-off-by: Muhammad Hamza Farooq <mfarooq@visteon.com> Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Aleksandar Mitev authored
This prevents the DMA timer timeout from triggering after the port has been closed. Signed-off-by: Aleksandar Mitev <amitev@visteon.com> [geert: Move del_timer_sync() outside the spinlock to avoid a circular locking dependency between rx_timer_fn() and del_timer_sync()] Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
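The ordering matters because the timer handler takes the port lock itself; a hedged sketch of the teardown path (the function name and struct layout are illustrative):

    #include <linux/spinlock.h>
    #include <linux/timer.h>

    static void sci_dma_rx_teardown(struct uart_port *port, struct sci_port *s)
    {
            unsigned long flags;

            spin_lock_irqsave(&port->lock, flags);
            /* ... tear down DMA RX state under the port lock ... */
            spin_unlock_irqrestore(&port->lock, flags);

            /*
             * rx_timer_fn() takes port->lock itself, so del_timer_sync()
             * must run after the lock has been dropped to avoid a circular
             * locking dependency.
             */
            del_timer_sync(&s->rx_timer);
    }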
-
Geert Uytterhoeven authored
There's no need to call sci_start_rx() from sci_request_dma() when DMA setup fails, as sci_startup() will call sci_start_rx() anyway. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Geert Uytterhoeven authored
For DMA receive requests, the driver is only notified by DMA completion after the whole DMA request has been transferred. If less data is received, it will stay stuck until more data arrives. The driver handles this by setting up a timer handler from the receive interrupt, after reception of the first character. Unlike SCIFA and SCIFB, SCIF and HSCIF don't issue receive interrupts on reception of individual characters if a receive DMA request is in progress, so the timer is never set up. To fix receive DMA on SCIF and HSCIF, submit the receive DMA request from the receive interrupt handler instead. In some sense this is similar to the SCIFA/SCIFB behavior, where the RDRQE (Rx Data Transfer Request Enable) bit is also set from the receive interrupt handler. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Geert Uytterhoeven authored
The receive DMA workqueue function work_fn_rx() handles two things: 1. Reception of a full buffer on completion of a receive DMA request, 2. Reception of a partial buffer on receive DMA time-out. The workqueue is kicked by both the receive DMA completion handler, and by a timer to handle DMA time-out. As there are always two receive DMA requests active, it's possible that the receive DMA completion handler is called a second time before the workqueue function runs. As the time-out handler re-enables the receive interrupt, an interrupt may come in before time-out has been fully handled. Move part 1 into the receive DMA completion handler, and move part 2 into the receive DMA time-out handler, to fix these race conditions. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Geert Uytterhoeven authored
This allows us to: - Remove forward declarations of static functions, - Coalesce two sections protected by #ifdef CONFIG_SERIAL_SH_SCI_DMA, - Avoid shuffling functions around in the near future, - Avoid adding forward declarations in the near future. No functional changes. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Kuninori Morimoto authored
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Simon Horman <horms+renesas@verge.net.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Geert Uytterhoeven authored
On receive DMA time-out, avoid calling sci_dma_rx_push() if no data was transferred by the timed out DMA request. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Yoshihiro Shimoda authored
If CONFIG_SERIAL_SH_SCI_DMA is enabled, the driver doesn't enable TIE on SCIF or HSCIF. However, the driver may still call sci_tx_interrupt() from sci_er_interrupt(). After that, the driver cannot handle the interrupt, and then "irq 109: nobody cared" happens on the r8a7791/koelsch board. This patch fixes the issue. Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> [geert] Keep kicking tx when using PIO Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Geert Uytterhoeven authored
The error handler calls sci_rx_interrupt() to drain the receive FIFO if an error condition happens. However, if DMA is enabled on SCIFA or SCIFB, this will call disable_irq_nosync() twice. Due to this imbalance, the receive interrupt will never be re-enabled, and reception stops forever. To fix this, restrict draining the FIFO to PIO mode, and just call sci_receive_chars() directly. Inspired by a patch from Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>. Reported-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Yoshihiro Shimoda authored
This patch fixes an issue where this driver causes a NULL pointer dereference under the following conditions: - CONFIG_HIGHMEM and CONFIG_SERIAL_SH_SCI_DMA are enabled - the driver reaches sci_dma_rx_push(). The issue was caused by virt_to_page(buf) in sci_request_dma(), because the driver didn't check whether "buf" was valid. So, this patch uses the "buf" from dma_alloc_coherent() as-is, instead of converting it to a page. This patch also fixes a WARNING issue in sci_rx_dma_release(): WARNING: CPU: 0 PID: 1328 at lib/dma-debug.c:1125 check_unmap+0x444/0x848() rcar-dmac e6700000.dma-controller: DMA-API: device driver frees DMA memory with different CPU address [device address=0x000000006dd89000] [size=64 bytes] [cpu alloc address=0x000000016189c000] [cpu free address=0x0000000080000000] WARNING: CPU: 1 PID: 1 at drivers/base/dma-mapping.c:334 dma_common_free_remap+0x48/0x6c() trying to free invalid coherent area: (null) Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> [geert] Rebased [geert] Reworded [geert] Dropped .rx_chunk, as it's always identical to .rx_buf[0] Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Geert Uytterhoeven authored
There's no need to keep all buffer and DMA pointers on the stack. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Geert Uytterhoeven authored
Switch from using tty_buffer_request_room() and looping over tty_insert_flip_char() to tty_insert_flip_string(). Keep track of buffer overruns in the icount structure, like serial_core.c does. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
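A condensed sketch of the conversion, mirroring the overrun accounting done in serial_core.c (variable names are illustrative):

    /* before: reserve room, then copy one character at a time */
    room = tty_buffer_request_room(tport, count);
    for (i = 0; i < room; i++)
            tty_insert_flip_char(tport, buf[i], TTY_NORMAL);

    /* after: one bulk copy, with overrun accounting */
    copied = tty_insert_flip_string(tport, buf, count);
    if (copied != count)
            port->icount.buf_overrun++;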
-
Geert Uytterhoeven authored
Currently sci_dma_rx_push() has to find the active scatterlist itself, but in some cases the caller already knows. Hence let the caller pass the scatterlist, and introduce a helper to find the active DMA request while we're at it. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Geert Uytterhoeven authored
During serial port shutdown, the DMA receive worker function may still be called after the receive DMA cleanup function has been called. Fix this race condition between work_fn_rx() and sci_rx_dma_release() by acquiring the port's spinlock in sci_rx_dma_release(). This requires releasing the spinlock in work_fn_rx() before calling (any function that may call) sci_rx_dma_release(). Terminate all active receive DMA descriptors to release them, and to make sure no more completions come in. Do the same in sci_tx_dma_release() for symmetry, although the serial upper layer will no longer submit more data at that point. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Kazuya Mizuguchi authored
There is a problem when sci_dma_rx_complete() is processed before the cancellation of work_fn_rx() by rx_timer_fn() has completed. This patch adds locking to work_fn_rx(). Signed-off-by: Kazuya Mizuguchi <kazuya.mizuguchi.ks@renesas.com> Signed-off-by: Yoshihiro Kaneko <ykaneko0929@gmail.com> Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Geert Uytterhoeven authored
Resubmission of DMA descriptors is explicitly forbidden by the DMA engine API. Hence pass DMA_CTRL_ACK to dmaengine_prep_slave_sg(), and prepare a new DMA descriptor instead of reusing the old one. Remove sci_port.desc_rx[], as there's no longer a need to access the active descriptor. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
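The essential change is in how the receive descriptor is prepared; a fragment with placeholder variables:

    desc = dmaengine_prep_slave_sg(chan, sg, 1, DMA_DEV_TO_MEM,
                                   DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
    if (!desc)
            return; /* the real driver falls back to PIO here */

    /*
     * The descriptor is pre-acked, so the engine may recycle it after
     * completion; a fresh descriptor is prepared for every transfer
     * instead of resubmitting the old one.
     */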
-
Geert Uytterhoeven authored
Simplify the error handling in sci_submit_rx() by - Moving it to the end of the function, - Just calling dmaengine_terminate_all() instead of calling async_tx_ack() for all already submitted descriptors. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Geert Uytterhoeven authored
As dmaengine_prep_slave_sg() is called with the DMA_CTRL_ACK flag set for DMA transmit requests, there's no need to explicitly acknowledge DMA transmit requests in the DMA transmit completion callback. Hence remove the call to async_tx_ack(), and remove the now unused dma_async_tx_descriptor pointer in the sci_port structure. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Geert Uytterhoeven authored
Convert the SCI driver from the SHDMAE-specific partial DMA transfer handling to the generic dmaengine residual data framework. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
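With the generic framework, the number of bytes received by a timed-out transfer can be derived from the reported residue; roughly (chan, cookie and buf_len are placeholders):

    struct dma_tx_state state;
    enum dma_status status;
    unsigned int received = 0;

    status = dmaengine_tx_status(chan, cookie, &state);
    if (status == DMA_IN_PROGRESS || status == DMA_PAUSED)
            /* bytes already written to memory = total length - residue */
            received = buf_len - state.residue;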
-
Geert Uytterhoeven authored
Replace open-coded calls to dma_async_tx_descriptor.tx_submit() by calls to the dmaengine_submit() helper, and dma_cookie_t comparisons by calls to dma_submit_error(). Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
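A representative before/after fragment (s->cookie_tx is illustrative):

    /* before: open-coded submit and raw cookie comparison */
    s->cookie_tx = desc->tx_submit(desc);
    if (s->cookie_tx < 0) {
            /* handle failure */
    }

    /* after: dmaengine helpers */
    s->cookie_tx = dmaengine_submit(desc);
    if (dma_submit_error(s->cookie_tx)) {
            /* handle failure */
    }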
-
Geert Uytterhoeven authored
The mapped transmit buffer is never unmapped. This leaks quite some mappings, as the mapping is done in uart_ops.startup(), i.e. every time the device is opened. Unmap the buffer on device close. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Geert Uytterhoeven authored
Simplify the DMA transmit code by using dma_map_single() instead of constantly modifying the single-entry scatterlist to match what's currently being transmitted. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
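Roughly, the circular transmit buffer is mapped once at startup and, as in the previous entry's fix, unmapped on close (names are illustrative):

    /* startup: map the whole circular transmit buffer once */
    s->tx_dma_addr = dma_map_single(chan->device->dev,
                                    port->state->xmit.buf, UART_XMIT_SIZE,
                                    DMA_TO_DEVICE);
    if (dma_mapping_error(chan->device->dev, s->tx_dma_addr)) {
            /* fall back to PIO */
    }

    /* shutdown: undo the mapping so it isn't leaked on every open/close */
    dma_unmap_single(chan->device->dev, s->tx_dma_addr,
                     UART_XMIT_SIZE, DMA_TO_DEVICE);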
-
Geert Uytterhoeven authored
When comparing differently sized types, it's better to use min_t()/max_t() than adding casts. Also use "unsigned int" instead of "int", as that's the right type for the length of an SG entry. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Geert Uytterhoeven authored
To function correctly in the presence of an IOMMU, the DMA buffers must be managed using the DMA channel's device instead of the platform device's device. Make sure to free the DMA memory before releasing the channel, not after. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
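The key point is which struct device the buffers are tied to, and the teardown order; a sketch:

    /* allocate against the DMA channel's device, not the platform device,
     * so an IOMMU in front of the DMA controller is taken into account */
    buf = dma_alloc_coherent(chan->device->dev, len, &dma_addr, GFP_KERNEL);

    /* ... */

    /* tear down in the right order: free the memory first, then release
     * the channel whose device it was allocated against */
    dma_free_coherent(chan->device->dev, len, buf, dma_addr);
    dma_release_channel(chan);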
-
Geert Uytterhoeven authored
Let sci_request_dma() handle failures to initialize DMA itself. This way sci_tx_dma_release() and sci_rx_dma_release() don't have to consider partial initialization, and thus don't need to reset DMA addresses to DMA_ERROR_CODE, which is not 100% portable across architectures. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Geert Uytterhoeven authored
Reformat, grammar improvements, use "ms" instead of "msec". Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Geert Uytterhoeven authored
Make the life of the driver developer/debugger easier: - Add __func__ prefix to identical messages, - Add DMA directions to messages, - Add TX failure messages, - Always use "cookie %d" for DMA cookies, - "#%d" is reserved for the DMA cookie/descriptor index. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-