Commit 342443e9 authored by Stefano Stabellini, committed by Luis Henriques

xen/arm/arm64: introduce xen_arch_need_swiotlb

commit a4dba130 upstream.

Introduce an arch-specific function to find out whether a particular DMA
mapping operation needs to bounce through the swiotlb buffer.

On ARM and ARM64, if the page involved is a foreign page and the device
is not coherent, we need to bounce because at unmap time we cannot
execute any required cache maintenance operations (we don't know how to
find the pfn from the mfn).

No change of behaviour for x86.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
[ stefano: The commit needs to be slightly modified because
  is_device_dma_coherent is not available on kernels < 3.19, so I just
  removed the call, thus assuming that the device is not coherent on arm
  (slower but safe) ]
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
[ luis: backported to 3.16: used backport by stefano ]
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
parent 648d9a76
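
For reference, the upstream (3.19) ARM implementation also consults device
coherency. A sketch of that form, assuming the upstream body simply adds the
is_device_dma_coherent() test that this backport removes (this is an
illustration, not part of the diff below):

	bool xen_arch_need_swiotlb(struct device *dev,
				   unsigned long pfn,
				   unsigned long mfn)
	{
		/* Bounce only for foreign pages (pfn != mfn) on non-coherent
		 * devices; a coherent device needs no cache maintenance at
		 * unmap time, so it can use the foreign page directly. */
		return ((pfn != mfn) && !is_device_dma_coherent(dev));
	}

The backport cannot test coherency on kernels < 3.19, so it keeps only the
(pfn != mfn) check and always bounces foreign pages, as explained in the
backport note above.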
@@ -116,4 +116,8 @@ static inline bool set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 #define xen_remap(cookie, size) ioremap_cache((cookie), (size))
 #define xen_unmap(cookie) iounmap((cookie))
 
+bool xen_arch_need_swiotlb(struct device *dev,
+			   unsigned long pfn,
+			   unsigned long mfn);
+
 #endif /* _ASM_ARM_XEN_PAGE_H */
@@ -16,6 +16,13 @@
 #include <asm/xen/hypercall.h>
 #include <asm/xen/interface.h>
 
+bool xen_arch_need_swiotlb(struct device *dev,
+			   unsigned long pfn,
+			   unsigned long mfn)
+{
+	return (pfn != mfn);
+}
+
 int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
 				 unsigned int address_bits,
 				 dma_addr_t *dma_handle)
@@ -236,4 +236,11 @@ void make_lowmem_page_readwrite(void *vaddr);
 #define xen_remap(cookie, size) ioremap((cookie), (size));
 #define xen_unmap(cookie) iounmap((cookie))
 
+static inline bool xen_arch_need_swiotlb(struct device *dev,
+					 unsigned long pfn,
+					 unsigned long mfn)
+{
+	return false;
+}
+
 #endif /* _ASM_X86_XEN_PAGE_H */
@@ -397,7 +397,9 @@ dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 	 * buffering it.
 	 */
	if (dma_capable(dev, dev_addr, size) &&
-	    !range_straddles_page_boundary(phys, size) && !swiotlb_force) {
+	    !range_straddles_page_boundary(phys, size) &&
+	    !xen_arch_need_swiotlb(dev, PFN_DOWN(phys), PFN_DOWN(dev_addr)) &&
+	    !swiotlb_force) {
 		/* we are not interested in the dma_addr returned by
 		 * xen_dma_map_page, only in the potential cache flushes executed
 		 * by the function. */
@@ -555,6 +557,7 @@ xen_swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
 		dma_addr_t dev_addr = xen_phys_to_bus(paddr);
 
 		if (swiotlb_force ||
+		    xen_arch_need_swiotlb(hwdev, PFN_DOWN(paddr), PFN_DOWN(dev_addr)) ||
 		    !dma_capable(hwdev, dev_addr, sg->length) ||
 		    range_straddles_page_boundary(paddr, sg->length)) {
 			phys_addr_t map = swiotlb_tbl_map_single(hwdev,