
[ARM] dma: don't touch cache on dma_*_for_cpu()

As with the dma_unmap_* calls, we don't touch the cache when a DMA
buffer transitions from device to CPU ownership.  Presently, no
problems have been identified with speculative cache prefetching,
which is itself a new feature in later architectures.  We may have
to revisit the DMA API for these architectures later anyway.
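For context, a minimal sketch of where the ownership transitions fall in the streaming DMA API, and hence where cache maintenance does and does not happen after this change. This is not part of the patch; example_rx(), buf and len are illustrative names.

/*
 * Hypothetical driver fragment; not from this commit.
 */
#include <linux/dma-mapping.h>

static void example_rx(struct device *dev, void *buf, size_t len)
{
	dma_addr_t handle;

	/* CPU -> device ownership: the cache is cleaned/invalidated
	 * here so the device sees a coherent view of the buffer. */
	handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, handle))
		return;

	/* ... the device DMAs data into the buffer ... */

	/* device -> CPU ownership: with this patch, no cache
	 * maintenance is performed here; only a dmabounce copy (if
	 * the platform uses dmabounce) runs. */
	dma_sync_single_for_cpu(dev, handle, len, DMA_FROM_DEVICE);

	/* ... the CPU inspects the received data ... */

	/* CPU -> device ownership again: maintenance happens before
	 * handing the buffer back for further DMA. */
	dma_sync_single_for_device(dev, handle, len, DMA_FROM_DEVICE);

	/* ... more DMA, then tear down the mapping ... */
	dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE);
}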

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Russell King committed 16 years ago
Commit 309dbbabee
2 files changed, 3 insertions(+), 11 deletions(-)
  1. arch/arm/include/asm/dma-mapping.h (+1, -5)
  2. arch/arm/mm/dma-mapping.c (+2, -6)

arch/arm/include/asm/dma-mapping.h (+1, -5)

@@ -376,11 +376,7 @@ static inline void dma_sync_single_range_for_cpu(struct device *dev,
 {
 	BUG_ON(!valid_dma_direction(dir));
 
-	if (!dmabounce_sync_for_cpu(dev, handle, offset, size, dir))
-		return;
-
-	if (!arch_is_coherent())
-		dma_cache_maint(dma_to_virt(dev, handle) + offset, size, dir);
+	dmabounce_sync_for_cpu(dev, handle, offset, size, dir);
 }
 
 static inline void dma_sync_single_range_for_device(struct device *dev,

arch/arm/mm/dma-mapping.c (+2, -6)

@@ -585,12 +585,8 @@ void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
 	int i;
 
 	for_each_sg(sg, s, nents, i) {
-		if (!dmabounce_sync_for_cpu(dev, sg_dma_address(s), 0,
-					sg_dma_len(s), dir))
-			continue;
-
-		if (!arch_is_coherent())
-			dma_cache_maint(sg_virt(s), s->length, dir);
+		dmabounce_sync_for_cpu(dev, sg_dma_address(s), 0,
+					sg_dma_len(s), dir);
 	}
 }
 EXPORT_SYMBOL(dma_sync_sg_for_cpu);
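
For the scatter-gather path, a caller would typically pair the sync shown above with its for_device counterpart. A hedged sketch follows; the function and parameter names are illustrative, and the scatterlist is assumed to have been mapped earlier with dma_map_sg().

/*
 * Hypothetical completion handler; not from this commit.
 */
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static void example_sg_complete(struct device *dev,
				struct scatterlist *sg, int nents)
{
	/* device -> CPU: per this commit, each entry now goes through
	 * dmabounce_sync_for_cpu() only; dma_cache_maint() no longer
	 * runs on this path. */
	dma_sync_sg_for_cpu(dev, sg, nents, DMA_FROM_DEVICE);

	/* ... the CPU walks the scatterlist and consumes the data ... */

	/* CPU -> device: cache maintenance still happens in this
	 * direction before the device touches the buffers again. */
	dma_sync_sg_for_device(dev, sg, nents, DMA_FROM_DEVICE);
}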