[POWERPC] hugepage BUG fix

On Tue, 2006-08-15 at 08:22 -0700, Dave Hansen wrote:
> kernel BUG in cache_free_debugcheck at mm/slab.c:2748!

Alright, this one is only triggered when slab debugging is enabled.  The
slabs are assumed to be aligned on a HUGEPTE_TABLE_SIZE boundary.  The free
path makes use of this assumption and uses the lowest nibble of the object
pointer to pass around an index into an array of kmem_cache pointers.  With
slab debugging turned on, the slab itself is still aligned, but the "working"
object pointer handed back to the caller is not.  This breaks the assumption
above that the full low nibble of the pointer is available for
PGF_CACHENUM_MASK.
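
For reference, the pack/unpack pair works roughly like the sketch below
(simplified from the pgtable_free_cache()/pgtable_free() helpers in
include/asm-powerpc/pgalloc.h, written from memory with the BUG_ON sanity
checks left out; treat it as an illustration of the scheme rather than the
exact header contents):

/*
 * Simplified sketch, not the verbatim kernel code: the pgtable cache index
 * is stashed in the low bits of the object pointer when the page table is
 * queued for freeing, and recovered again when it is actually freed.
 */
typedef struct pgtable_free {
	unsigned long val;
} pgtable_free_t;

static inline pgtable_free_t pgtable_free_cache(void *p, int cachenum,
						unsigned long mask)
{
	/* clear the low 'mask' bits of p and OR in the cache index */
	return (pgtable_free_t){.val = ((unsigned long)p & ~mask) | cachenum};
}

static inline void pgtable_free(pgtable_free_t pgf)
{
	void *p = (void *)(pgf.val & ~PGF_CACHENUM_MASK);
	int cachenum = pgf.val & PGF_CACHENUM_MASK;

	kmem_cache_free(pgtable_cache[cachenum], p);
}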

The following patch reduces PGF_CACHENUM_MASK to cover only the two least
significant bits, which is enough to index the current 4 pgtable cache types,
and uses that constant (rather than HUGEPTE_TABLE_SIZE-1) when masking the
low bits of the huge pte pointer on the free path.
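
To see why two bits are safe where a wider mask is not, here is a small
userspace model of the arithmetic (the address, the 0x28 debug offset and the
0x1000 table size are made up for illustration; the point is that slab objects
are always at least word-aligned, so the two lowest pointer bits are reliably
zero even when slab debugging shifts the object within its slab):

#include <stdio.h>

/* Hypothetical model of the pack/unpack arithmetic, not kernel code. */
#define HUGEPTE_TABLE_SIZE	0x1000UL	/* made-up size for illustration */
#define OLD_MASK		(HUGEPTE_TABLE_SIZE - 1)
#define NEW_MASK		0x3UL		/* PGF_CACHENUM_MASK after the fix */

static unsigned long pack(unsigned long obj, unsigned long cachenum,
			  unsigned long mask)
{
	return (obj & ~mask) | cachenum;	/* stash index in the cleared low bits */
}

static unsigned long unpack(unsigned long val, unsigned long mask)
{
	return val & ~mask;			/* recover the object pointer */
}

int main(void)
{
	/* The slab page is aligned, but with slab debugging the object sits a
	 * few words past the start of the slab (offset 0x28 is hypothetical). */
	unsigned long obj = 0xc01f1028UL;
	unsigned long cachenum = 2;		/* some cache index < 4 */

	printf("old mask: obj %#lx -> freed %#lx (low bits lost)\n",
	       obj, unpack(pack(obj, cachenum, OLD_MASK), OLD_MASK));
	printf("new mask: obj %#lx -> freed %#lx (pointer intact)\n",
	       obj, unpack(pack(obj, cachenum, NEW_MASK), NEW_MASK));
	return 0;
}

With the wide mask the pointer handed back to kmem_cache_free() no longer
matches the object that was allocated, which is what the slab debug check at
mm/slab.c:2748 catches.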

Signed-off-by: Adam Litke <agl@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
commit c9169f8747
Author: Adam Litke <agl@us.ibm.com>
 arch/powerpc/mm/hugetlbpage.c | 2 +-
 include/asm-powerpc/pgalloc.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -153,7 +153,7 @@ static void free_hugepte_range(struct mmu_gather *tlb, hugepd_t *hpdp)
 	hpdp->pd = 0;
 	tlb->need_flush = 1;
 	pgtable_free_tlb(tlb, pgtable_free_cache(hugepte, HUGEPTE_CACHE_NUM,
-						 HUGEPTE_TABLE_SIZE-1));
+						 PGF_CACHENUM_MASK));
 }
 
 #ifdef CONFIG_PPC_64K_PAGES

diff --git a/include/asm-powerpc/pgalloc.h b/include/asm-powerpc/pgalloc.h
--- a/include/asm-powerpc/pgalloc.h
+++ b/include/asm-powerpc/pgalloc.h
@@ -117,7 +117,7 @@ static inline void pte_free(struct page *ptepage)
 	pte_free_kernel(page_address(ptepage));
 }
 
-#define PGF_CACHENUM_MASK	0xf
+#define PGF_CACHENUM_MASK	0x3
 
 typedef struct pgtable_free {
 	unsigned long val;