- 04 Dec, 2019 4 commits
-
-
Thierry Reding authored
The IOVA address for the cursor is the result of mapping the buffer object for the given display controller. Make sure to use the proper IOVA address as stored in the cursor's plane state. Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
All the display-related blocks on Tegra require contiguous memory. When going through the DMA API, there is no way of knowing at import time which device will end up using the buffer, so it's not known whether or not an IOMMU will be used to map the buffer. Move the check for non-contiguous buffers/mappings to the tegra_dc_pin() function, which is now the earliest point where it is known whether a DMA-BUF can be used by the given device or not. v2: add check for contiguous buffer/mapping in tegra_dc_pin() Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Thierry Reding <treding@nvidia.com>
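As a rough illustration of the check described above (the helper name and structure are assumptions for this sketch, not the driver's actual code): if the device has no IOMMU domain, any mapping with more than one chunk cannot be used.

```c
#include <linux/errno.h>
#include <linux/iommu.h>
#include <linux/scatterlist.h>

/* Sketch only: reject non-contiguous mappings for a device that is not
 * behind an IOMMU, which is the condition described above. */
static int check_mapping_is_usable(struct device *dev, struct sg_table *sgt)
{
	if (!iommu_get_domain_for_dev(dev) && sgt->nents > 1)
		return -EINVAL;

	return 0;
}
```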
-
Thierry Reding authored
Buffers that are imported from a DMA-BUF don't have pages allocated with them. At the same time an SG table for them can't be derived using the DMA API helpers because the necessary information doesn't exist. However, there's already an SG table that was created during import, so this can simply be duplicated. Signed-off-by: Thierry Reding <treding@nvidia.com>
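A minimal sketch of duplicating an existing SG table (illustrative helper name; error handling trimmed to the essentials):

```c
#include <linux/err.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

/* Sketch only: clone the page list of an existing SG table, e.g. the
 * one created when the DMA-BUF was imported. */
static struct sg_table *sgt_duplicate(struct sg_table *orig)
{
	struct scatterlist *src, *dst;
	struct sg_table *sgt;
	unsigned int i;

	sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
	if (!sgt)
		return ERR_PTR(-ENOMEM);

	if (sg_alloc_table(sgt, orig->orig_nents, GFP_KERNEL)) {
		kfree(sgt);
		return ERR_PTR(-ENOMEM);
	}

	dst = sgt->sgl;

	for_each_sg(orig->sgl, src, orig->orig_nents, i) {
		sg_set_page(dst, sg_page(src), src->length, src->offset);
		dst = sg_next(dst);
	}

	return sgt;
}
```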
-
Thierry Reding authored
The Tegra DRM never actually took that lock, so the driver was broken until generic locking was added in commit b962a120 ("drm/atomic: integrate modeset lock with private objects") by Rob Clark <robdclark@gmail.com>, dated Mon Oct 22 14:31:22 2018 +0200. Taking that lock is no longer necessary, so drop the check. v2: add rationale for why it is now safe to drop the check (Daniel) Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Thierry Reding <treding@nvidia.com>
-
- 01 Nov, 2019 2 commits
-
-
Thierry Reding authored
Currently configurations can be generated where IOMMU_SUPPORT is disabled but IOMMU_IOVA is built as a module and DRM_TEGRA is built-in. In such a case, the symbols guarded by IOMMU_IOVA will not be available when linking the Tegra DRM driver and cause a linking failure. Simplify this by unconditionally selecting IOMMU_IOVA, which makes sure that it will be forced to =y if DRM_TEGRA=y. Technically we can now get IOMMU_IOVA code built-in even if we don't use it (Tegra DRM only uses it when IOMMU_SUPPORT is also enabled), but such configurations are of a mostly academic nature. In all practical configurations we want IOMMU support anyway. Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
Currently configurations can be generated where IOMMU_SUPPORT is disabled but IOMMU_IOVA is built as a module and HOST1X is built-in. In such a case, the symbols guarded by IOMMU_IOVA will not be available when linking the host1x driver and cause a linking failure. Simplify this by unconditionally selecting IOMMU_IOVA, which makes sure that it will be forced to =y if HOST1X=y. Technically we can now get IOMMU_IOVA code built-in even if we don't use it (host1x only uses it when IOMMU_SUPPORT is also enabled), but such configurations are of a mostly academic nature. In all practical configurations we want IOMMU support anyway. Signed-off-by: Thierry Reding <treding@nvidia.com>
-
- 29 Oct, 2019 12 commits
-
-
Thierry Reding authored
If a client is already attached to an IOMMU domain that is not the shared domain, don't try to attach it again. This allows using the IOMMU-backed DMA API. Since the IOMMU-backed DMA API is now supported and there's no way to detach from it on 64-bit ARM, don't bother to detach from it on 32-bit ARM either. Signed-off-by: Thierry Reding <treding@nvidia.com>
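A hedged sketch of the decision this describes (the "shared_domain" parameter is an assumed name for illustration, not the driver's actual identifier): skip the explicit attach when the client already sits in some other domain, e.g. one managed by the IOMMU-backed DMA API.

```c
#include <linux/iommu.h>

/* Sketch only: a client that is already attached to a domain other than
 * the shared one is left alone, as described above. */
static bool needs_explicit_attach(struct device *dev,
				  struct iommu_domain *shared_domain)
{
	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);

	/* already attached elsewhere, typically the DMA API's domain */
	if (domain && domain != shared_domain)
		return false;

	return true;
}
```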
-
Thierry Reding authored
If a display controller is not attached to an explicit IOMMU domain, which usually means that it's connected to an IOMMU domain controlled by the DMA API, make sure to map the framebuffer to the display controller address space. This allows us to transparently handle setups where the display controller is attached to an IOMMU or setups where it isn't. It also allows the driver to work with a DMA API that is backed by an IOMMU. Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
Rename paddr -> iova and vaddr -> virt to make it clearer how these addresses are used. This is important for a subsequent patch that makes a distinction between the physical address (physical address of the system memory from the CPU's point of view) and the IOVA (physical address of the system memory from the device's point of view). Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
Having to provide allocator hooks to the Falcon library is somewhat cumbersome and it doesn't give the users of the library a lot of flexibility to deal with allocations. Instead, remove the notion of Falcon "operations" and let drivers deal with the memory allocations themselves. Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
If the Tegra DRM clients are backed by an IOMMU, push buffers are likely to be allocated beyond the 32-bit boundary if sufficient system memory is available. This is problematic on earlier generations of Tegra where host1x supports a maximum of 32 address bits for the GATHER opcode. More recent versions of Tegra (Tegra186 and later) have a wide variant of the GATHER opcode, which allows addressing up to 64 bits of memory. If host1x itself is behind an IOMMU as well this doesn't matter because the IOMMU's input address space is restricted to 32 bits on generations without support for wide GATHER opcodes. However, if host1x is not behind an IOMMU, it won't be able to process push buffers beyond the 32-bit boundary on Tegra generations that don't support wide GATHER opcodes. Restricting the DMA mask to 32 bits on these generations prevents buffers from being allocated beyond the 32-bit boundary. Signed-off-by: Thierry Reding <treding@nvidia.com>
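A minimal sketch of the resulting policy (the "wide_gather" flag is an assumed name used only for this illustration): restrict the mask to 32 bits unless the generation supports the wide GATHER opcode.

```c
#include <linux/dma-mapping.h>

/* Sketch only: pick the DMA mask based on whether the wide GATHER
 * opcode is available, as described above. */
static int set_host1x_dma_mask(struct device *dev, bool wide_gather)
{
	u64 mask = wide_gather ? DMA_BIT_MASK(64) : DMA_BIT_MASK(32);

	return dma_set_mask_and_coherent(dev, mask);
}
```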
-
Thierry Reding authored
If host1x_bo_pin() returns an SG table, create a DMA mapping for the buffer. For buffers that the host1x client has already mapped itself, host1x_bo_pin() returns NULL and the existing DMA address is used. Signed-off-by: Thierry Reding <treding@nvidia.com>
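A sketch of the pattern described above (not the actual driver code; the DMA direction is assumed for illustration): map the SG table when one is returned, otherwise keep using the DMA address the client already provided.

```c
#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/scatterlist.h>

/* Sketch only: create a DMA mapping for a freshly pinned buffer, or
 * leave *dma untouched if the client already mapped it. */
static int map_pinned_bo(struct device *dev, struct sg_table *sgt,
			 dma_addr_t *dma)
{
	int count;

	if (!sgt)
		return 0; /* already mapped by the client, *dma stays valid */

	count = dma_map_sg(dev, sgt->sgl, sgt->nents, DMA_TO_DEVICE);
	if (count == 0)
		return -ENOMEM;

	*dma = sg_dma_address(sgt->sgl);
	return 0;
}
```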
-
Thierry Reding authored
Currently when the gather buffers are copied, they are copied to a buffer that is allocated for the host1x client that wants to execute the command streams in the buffers. However, the gather buffers will be read by the host1x device, which causes SMMU faults if the DMA API is backed by an IOMMU. Fix this by allocating the gather buffer copy for the host1x device, which makes sure that it will be mapped into the host1x's IOVA space if the DMA API is backed by an IOMMU. Signed-off-by: Thierry Reding <treding@nvidia.com>
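The essence of the fix can be sketched as follows (illustrative helper; the key point is which device the allocation is made for): allocate the gather copy against the host1x device, so that with an IOMMU-backed DMA API the buffer ends up in host1x's IOVA space rather than the client's.

```c
#include <linux/dma-mapping.h>

/* Sketch only: allocate the gather-buffer copy for the device that will
 * actually read it (host1x), not for the client. */
static void *alloc_gather_copy(struct device *host1x_dev, size_t size,
			       dma_addr_t *dma)
{
	return dma_alloc_wc(host1x_dev, size, dma, GFP_KERNEL);
}
```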
-
Thierry Reding authored
Add direction flags to host1x relocations performed during job pinning. These flags indicate the kinds of accesses that hardware is allowed to perform on the relocated buffers. Signed-off-by: Thierry Reding <treding@nvidia.com>
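Illustratively (the flag and field names below are assumptions made for this sketch, not necessarily the ones added by the patch), each relocation carries the access directions the hardware is allowed to perform on the target buffer:

```c
struct host1x_bo;

/* Hypothetical flag names for illustration only. */
#define RELOC_READ	(1 << 0)
#define RELOC_WRITE	(1 << 1)

struct reloc_example {
	struct host1x_bo *target;
	unsigned long offset;
	unsigned long flags;	/* RELOC_READ and/or RELOC_WRITE */
};
```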
-
Thierry Reding authored
The debugfs files created for host1x are never removed, causing these files to be left dangling in debugfs. This results in a crash when any of these files are accessed after the host1x driver has been removed, as well as a failure to create the debugfs entries when they are added again on driver probe. Signed-off-by: Thierry Reding <treding@nvidia.com>
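A minimal sketch of the fix's shape (the directory name is made up for the example): whatever debugfs_create_dir() built at probe time must be torn down again on remove, otherwise the dangling entries crash on access and block re-probing.

```c
#include <linux/debugfs.h>

static struct dentry *example_debugfs;

/* Sketch only: create the debugfs directory on probe... */
static void example_debugfs_init(void)
{
	example_debugfs = debugfs_create_dir("tegra-host1x", NULL);
}

/* ...and remove it again on driver removal. */
static void example_debugfs_exit(void)
{
	debugfs_remove_recursive(example_debugfs);
	example_debugfs = NULL;
}
```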
-
Thierry Reding authored
The host1x_bo_pin() and host1x_bo_unpin() APIs are used to pin and unpin buffers during host1x job submission. Pinning currently returns the SG table and the DMA address (an IOVA if an IOMMU is used or a physical address if no IOMMU is used) of the buffer. The DMA address is only used for buffers that are relocated, whereas the host1x driver will map gather buffers into its own IOVA space so that they can be processed by the CDMA engine.

This approach has a couple of issues. On one hand it's not very useful to return a DMA address for the buffer if host1x doesn't need it. On the other hand, returning the SG table of the buffer is suboptimal because a single SG table cannot be shared across multiple mappings: the DMA address is stored within the SG table, and that address may differ from one device to another. Subsequent patches will move the host1x driver over to the DMA API, which doesn't work with a single shared SG table. Fix this by returning a new SG table each time a buffer is pinned. This allows the buffer to be referenced by multiple jobs for different engines.

Change the prototypes of host1x_bo_pin() and host1x_bo_unpin() to take a struct device *, specifying the device for which the buffer should be pinned. This is required in order to be able to properly construct the SG table. While at it, make host1x_bo_pin() return the SG table because that allows us to return an ERR_PTR()-encoded error code if we need to, or return NULL to signal that we don't need the SG table to be remapped and can simply use the DMA address as-is. At the same time, returning the DMA address is made optional because in the example of command buffers, host1x doesn't need to know the DMA address since it will have to create its own mapping anyway.

Signed-off-by: Thierry Reding <treding@nvidia.com>
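A rough sketch of the calling convention this describes. The prototypes below are reconstructed from the description above rather than copied from the kernel headers, so treat them as illustrative:

```c
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

struct host1x_bo;

/* Pinning is done for a specific device; it returns a freshly created
 * SG table, an ERR_PTR()-encoded error, or NULL when the existing DMA
 * address can be used as-is. Filling in *phys is optional. */
struct sg_table *host1x_bo_pin(struct device *dev, struct host1x_bo *bo,
			       dma_addr_t *phys);

/* Unpinning mirrors the pin call and releases the per-device SG table. */
void host1x_bo_unpin(struct device *dev, struct host1x_bo *bo,
		     struct sg_table *sgt);
```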
-
Thierry Reding authored
All the devices that make up the DRM device are now part of the same IOMMU group. This simplifies the handling of the IOMMU attachment and also avoids exhausting the number of IOMMUs available on early Tegra SoC generations. Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
The ->load() and ->unload() callbacks are a midlayer and should be avoided in modern drivers. Fix this by moving the code into the driver's ->probe() and ->remove() implementations, respectively. v2: kick out conflicting framebuffers before initializing fbdev v3: rebase onto drm/tegra/for-next Tested-by: Dmitry Osipenko <digetx@gmail.com> Signed-off-by: Thierry Reding <treding@nvidia.com>
-
- 28 Oct, 2019 22 commits
-
-
Thierry Reding authored
In order to support different modes (DP in addition to HDMI), split out the audio setup/teardown into callbacks. Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
The code to enable audio support is split into two parts: one that is generic to the SOR and another that is specific to whether the SOR is in HDMI mode or in DP mode. Split out the common part in preparation for reusing the code in DP mode. Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
When the SOR is disabled in DP mode as part of an unplug event, do not attempt to power the DP link down. Powering down the link requires the DPAUX to transmit AUX messages which only works if there's a connected sink. Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
The SOR0 on Tegra210 does, contrary to what was previously assumed, in fact support DisplayPort. The difference between SOR0 and SOR1 is that the latter supports audio and HDCP over DP, whereas the former doesn't. The code for eDP and DP is now almost identical and the differences can easily be parameterized based on the presence of a panel. There is no longer any need to duplicate the code. Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
The correct I/O pad needs to be powered up before DP can be used. Make sure the correct default is set for Tegra generations where the I/O pad cannot be derived from the SOR instance. Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
With the clocks modelled consistently across SoC generations, the clock setup for eDP, HDMI and DP can now be unified. Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
Reuse parameters from earlier generations to support DisplayPort on Tegra194. Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
The connector type detection code is duplicated in two places. Keeping both places in sync is an extra maintenance burden that can be avoided by comparing the connector type operations that are set upon the first detection. Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
So far the pad clock was only needed on the second SOR instance. The clock does exist for all SOR instances, though, so make sure it is always implemented. This prepares for further unification of the code in subsequent patches. Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
The device tree bindings for the Tegra210 SOR don't require the controller instance to be defined, since the instance can be derived from the compatible string. The index is never used on Tegra210, so we got away with it not getting set. However, subsequent patches will change that, so make sure the proper index is used. Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
It turns out that SOR1 is just another instance of the same block as the SOR0, so there is no need to distinguish them. Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
Add support for regular DisplayPort on Tegra210 and Tegra186. Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
The SOR found on Tegra SoCs does not support all the rates potentially advertised by eDP 1.4. Make sure that the rates that are not supported are filtered out. Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
Rework eDP code to correspond more closely to what's documented. This also improves the reliability of modesets. Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
This is necessary for the output abstraction to retrieve a list of valid modes from the EDID of a connected panel/monitor. This will be useful in conjunction with DisplayPort support that will be added in a subsequent patch, so that the driver can read EDID via the AUX channel. Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
Make use of the DP link training helpers to implement full and fast link training. While at it, refactor some of the code and remove various code sequences that are not necessary. Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
Add a helper that will perform link training as described in the DisplayPort specification. Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
Parse additional link rates from the DPCD if the sink supports eDP 1.4. Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
This helper chooses an appropriate configuration, according to the bitrate requirements of the video mode and the capabilities of the DisplayPort sink. Signed-off-by: Thierry Reding <treding@nvidia.com>
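The underlying arithmetic can be sketched independently of any particular helper API: a configuration fits if the mode's payload rate does not exceed the link's usable capacity. Assuming 8b/10b channel coding (8 data bits per 10-bit symbol), purely as an illustration:

```c
#include <linux/types.h>

/* Sketch only: does a given lane count and per-lane link rate (symbol
 * clock in kHz) carry the mode's payload? */
static bool link_config_fits(unsigned int pclk_khz, unsigned int bpp,
			     unsigned int lanes, unsigned int rate_khz)
{
	/* payload required by the mode, in kbit/s */
	unsigned long long required = (unsigned long long)pclk_khz * bpp;
	/* usable capacity: 8 data bits per symbol, per lane, in kbit/s */
	unsigned long long available = (unsigned long long)rate_khz * lanes * 8;

	return required <= available;
}
```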
-
Thierry Reding authored
If the sink is eDP and supports the alternate scrambler reset, enable it. Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
Make use of ANSI 8B/10B channel coding if the DisplayPort sink supports it. Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Thierry Reding authored
Store the AUX read interval from DPCD, so that it can be used to wait for the durations given in the specification during link training. Signed-off-by: Thierry Reding <treding@nvidia.com>
-