[PATCH] Generic ioremap_page_range: flush_cache_vmap

The existing implementation of ioremap_page_range(), which was taken
from i386, does this:

	flush_cache_all();
	/* modify page tables */
	flush_tlb_all();

I think this is a bit over-defensive, so this patch changes the generic
implementation to do:

	/* modify page tables */
	flush_cache_vmap(start, end);

instead, which is similar to what vmalloc() does. This should still
be correct because we never modify existing PTEs. According to
James Bottomley:

The problem the flush_tlb_all() is trying to solve is to avoid stale tlb
entries in the ioremap area.  We're just being conservative by flushing
on both map and unmap.  Technically what vmalloc/vfree does (only flush
the tlb on unmap) is just fine because it means that the only tlb
entries in the remap area must belong to in-use mappings.
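
For reference, here is a minimal sketch of the map/unmap flush discipline
described above. It is not the actual mm/vmalloc.c or lib/ioremap.c code;
the page-table steps are left as comments, while flush_cache_vmap(),
flush_cache_vunmap() and flush_tlb_kernel_range() are the real
architecture hooks:

	#include <asm/cacheflush.h>	/* flush_cache_vmap(), flush_cache_vunmap() */
	#include <asm/tlbflush.h>	/* flush_tlb_kernel_range() */

	static int map_kernel_area_sketch(unsigned long start, unsigned long end)
	{
		/* ... install brand-new PTEs for [start, end) here ... */

		/*
		 * No existing PTE was touched, so no TLB entry can be
		 * stale; flushing virtually-indexed caches is enough.
		 */
		flush_cache_vmap(start, end);
		return 0;
	}

	static void unmap_kernel_area_sketch(unsigned long start, unsigned long end)
	{
		flush_cache_vunmap(start, end);	/* caches first */

		/* ... clear the PTEs for [start, end) here ... */

		/*
		 * Live PTEs go away here, so this is where the TLB flush
		 * belongs: any TLB entry still covering the area refers
		 * to an in-use mapping.
		 */
		flush_tlb_kernel_range(start, end);
	}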

Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Andi Kleen <ak@suse.de>
Cc: <linux-m32r@ml.linux-m32r.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Kyle McMartin <kyle@parisc-linux.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Haavard Skinnemoen, 19 years ago
commit db71daabad

 lib/ioremap.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

--- a/lib/ioremap.c
+++ b/lib/ioremap.c
@@ -76,8 +76,6 @@ int ioremap_page_range(unsigned long addr,
 
 	BUG_ON(addr >= end);
 
-	flush_cache_all();
-
 	start = addr;
 	phys_addr -= addr;
 	pgd = pgd_offset_k(addr);
@@ -88,7 +86,7 @@ int ioremap_page_range(unsigned long addr,
 			break;
 	} while (pgd++, addr = next, addr != end);
 
-	flush_tlb_all();
+	flush_cache_vmap(start, end);
 
 	return err;
 }
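
For context, flush_cache_vmap() is a per-architecture hook: on
physically-indexed caches such as i386 it is typically a no-op, so the
generic path now avoids the full cache flush there, while architectures
with virtually-indexed caches commonly implement it as a cache flush.
A rough sketch of the two shapes (the config symbol is hypothetical;
see each architecture's asm/cacheflush.h for the real definition):

	#ifdef CONFIG_SKETCH_VIRT_INDEXED_CACHE		/* hypothetical symbol */
	/* Virtually-indexed caches may hold aliases of the new mapping. */
	#define flush_cache_vmap(start, end)	flush_cache_all()
	#else
	/* Physically-indexed caches (e.g. i386): nothing to flush on map. */
	#define flush_cache_vmap(start, end)	do { } while (0)
	#endif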