
mtd: atmel_nand: use CPU I/O when buffer is in vmalloc(ed) region

The previous way of dealing with vmalloc(ed) regions by walking
through the pages does not actually work well. We just fall back
to CPU I/O when the buffer address is higher than `high_memory'.

Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Hong Xu <hong.xu@atmel.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Hong Xu, 14 years ago
Commit 80b4f81a49
1 file changed, 2 insertions(+), 16 deletions(-)

drivers/mtd/nand/atmel_nand.c

@@ -209,22 +209,8 @@ static int atmel_nand_dma_op(struct mtd_info *mtd, void *buf, int len,
 	int err = -EIO;
 	enum dma_data_direction dir = is_read ? DMA_FROM_DEVICE : DMA_TO_DEVICE;
 
-	if (buf >= high_memory) {
-		struct page *pg;
-
-		if (((size_t)buf & PAGE_MASK) !=
-		    ((size_t)(buf + len - 1) & PAGE_MASK)) {
-			dev_warn(host->dev, "Buffer not fit in one page\n");
-			goto err_buf;
-		}
-
-		pg = vmalloc_to_page(buf);
-		if (pg == 0) {
-			dev_err(host->dev, "Failed to vmalloc_to_page\n");
-			goto err_buf;
-		}
-		p = page_address(pg) + ((size_t)buf & ~PAGE_MASK);
-	}
+	if (buf >= high_memory)
+		goto err_buf;
 
 	dma_dev = host->dma_chan->device;
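
For context, the sketch below shows roughly how the caller-side fallback is expected to behave after this change: atmel_nand_dma_op() now bails out with -EIO for any buffer at or above high_memory, and the read path simply retries with CPU (PIO) copies. The names used here (atmel_read_buf, use_dma, bus_width_16, atmel_read_buf8/16) are assumptions based on the usual structure of atmel_nand.c in this era, not a verified copy of this tree.

/*
 * Illustrative sketch only; identifiers are assumptions, not taken
 * verbatim from this commit's tree.
 */
static void atmel_read_buf(struct mtd_info *mtd, u8 *buf, int len)
{
	struct nand_chip *chip = mtd->priv;
	struct atmel_nand_host *host = chip->priv;

	/*
	 * Try DMA first for large transfers; with this patch the DMA
	 * helper fails fast (returns -EIO) for vmalloc(ed) buffers.
	 */
	if (use_dma && len > mtd->oobsize &&
	    atmel_nand_dma_op(mtd, buf, len, 1) == 0)
		return;

	/* Fall back to CPU I/O (PIO) whenever the DMA path declines. */
	if (host->board->bus_width_16)
		atmel_read_buf16(mtd, buf, len);
	else
		atmel_read_buf8(mtd, buf, len);
}

The design point is that the fallback lives in the caller: the DMA helper only has to refuse buffers it cannot safely map, rather than trying to translate vmalloc addresses page by page.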