
mm/hugetlb.c: avoid bogus counter of surplus huge page

If we have to hand the newly allocated huge page back to the page allocator
for any reason, the counters that were changed beforehand must be restored.

This affects only s390 at present.

Signed-off-by: Hillf Danton <dhillf@gmail.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hillf Danton, 13 years ago
Commit ea5768c74b

1 changed file, 1 insertion(+), 1 deletion(-)

mm/hugetlb.c  +1 -1

@@ -800,7 +800,7 @@ static struct page *alloc_buddy_huge_page(struct hstate *h, int nid)
 
 	if (page && arch_prepare_hugepage(page)) {
 		__free_pages(page, huge_page_order(h));
-		return NULL;
+		page = NULL;
 	}
 
 	spin_lock(&hugetlb_lock);
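
For context (not part of the commit itself), the sketch below shows why the
early "return NULL" leaked a counter update. It is an abridged reconstruction
of the alloc_buddy_huge_page() flow of that era, not a verbatim quote of the
kernel source; the overcommit check, NUMA branching, and extra gfp flags are
omitted. The function bumps the counters speculatively before allocating, and
the locked section after the allocation rolls them back when page is NULL, so
falling through with page = NULL reuses that existing rollback path.

	/* Sketch only, abridged; not the actual kernel source. */
	static struct page *alloc_buddy_huge_page(struct hstate *h, int nid)
	{
		struct page *page;

		spin_lock(&hugetlb_lock);
		/* Counters are bumped speculatively, before the allocation. */
		h->nr_huge_pages++;
		h->surplus_huge_pages++;
		spin_unlock(&hugetlb_lock);

		page = alloc_pages(htlb_alloc_mask | __GFP_COMP,
				   huge_page_order(h));

		if (page && arch_prepare_hugepage(page)) {
			__free_pages(page, huge_page_order(h));
			page = NULL;	/* fall through so the rollback below runs */
		}

		spin_lock(&hugetlb_lock);
		if (page) {
			/* ... account the page on its node ... */
		} else {
			/* The rollback that a bare "return NULL" used to skip. */
			h->nr_huge_pages--;
			h->surplus_huge_pages--;
		}
		spin_unlock(&hugetlb_lock);

		return page;
	}

Only s390 has a non-trivial arch_prepare_hugepage() at this point, which is
why the commit message notes the bug is visible only there.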