
[PATCH] mm: xip_unmap ZERO_PAGE fix

Small fix to the PageReserved patch: the mips ZERO_PAGE(address) depends on
address, so __xip_unmap is wrong to initialize page with that before address
is initialized; and in fact must re-evaluate it each iteration.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Hugh Dickins 19 years ago
parent
commit
67b02f119d
1 changed file with 2 additions and 1 deletion

+ 2 - 1
mm/filemap_xip.c

@@ -174,7 +174,7 @@ __xip_unmap (struct address_space * mapping,
 	unsigned long address;
 	pte_t *pte;
 	pte_t pteval;
-	struct page *page = ZERO_PAGE(address);
+	struct page *page;
 
 	spin_lock(&mapping->i_mmap_lock);
 	vma_prio_tree_foreach(vma, &iter, &mapping->i_mmap, pgoff, pgoff) {
@@ -182,6 +182,7 @@ __xip_unmap (struct address_space * mapping,
 		address = vma->vm_start +
 			((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
 		BUG_ON(address < vma->vm_start || address >= vma->vm_end);
+		page = ZERO_PAGE(address);
 		/*
 		 * We need the page_table_lock to protect us from page faults,
 		 * munmap, fork, etc...