s390/mm: fix deadlock in unmap_hugepage_range()

git commit cd2934a3 moved the flush_tlb_range() call in
__unmap_hugepage_range() inside the mm->page_table_lock, which
triggered a deadlock in the s390 TLB flushing code:
__tlb_flush_mm_cond() also tries to acquire mm->page_table_lock.
That lock is not needed there, because all callers already hold
mm->mmap_sem or mm->page_table_lock, so the lock/unlock pair can be
safely removed to fix the deadlock.

Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
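
The pattern behind the deadlock is a plain AA (self) deadlock: a caller
takes a non-recursive lock and then calls a helper that tries to take the
same lock again. The following is a minimal user-space sketch of that
pattern only, not kernel code; the pthread mutex and the names
page_table_lock and helper_flush are illustrative stand-ins for
mm->page_table_lock and __tlb_flush_mm_cond(). An error-checking mutex
reports EDEADLK instead of hanging, which is what a kernel spinlock would
do in this situation.

#define _GNU_SOURCE
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t page_table_lock;   /* stand-in for mm->page_table_lock */

/* stand-in for __tlb_flush_mm_cond(): tries to take the lock itself */
static void helper_flush(void)
{
	int rc = pthread_mutex_lock(&page_table_lock);
	if (rc == EDEADLK) {
		printf("self-deadlock: caller already holds the lock\n");
		return;
	}
	/* ... flush work would happen here ... */
	pthread_mutex_unlock(&page_table_lock);
}

int main(void)
{
	pthread_mutexattr_t attr;

	pthread_mutexattr_init(&attr);
	pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
	pthread_mutex_init(&page_table_lock, &attr);

	pthread_mutex_lock(&page_table_lock);  /* caller takes the lock ...        */
	helper_flush();                        /* ... helper tries to take it again */
	pthread_mutex_unlock(&page_table_lock);

	return 0;
}

With an ordinary (non-error-checking) mutex or a kernel spinlock, the
second acquisition simply spins or blocks forever, which is the hang the
patch removes by dropping the inner lock.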
Gerald Schaefer, 13 years ago
commit d5feaea364
1 changed file with 0 additions and 2 deletions:
    arch/s390/include/asm/tlbflush.h

diff --git a/arch/s390/include/asm/tlbflush.h b/arch/s390/include/asm/tlbflush.h
--- a/arch/s390/include/asm/tlbflush.h
+++ b/arch/s390/include/asm/tlbflush.h

@@ -90,12 +90,10 @@ static inline void __tlb_flush_mm(struct mm_struct * mm)
 
 static inline void __tlb_flush_mm_cond(struct mm_struct * mm)
 {
-	spin_lock(&mm->page_table_lock);
 	if (mm->context.flush_mm) {
 		__tlb_flush_mm(mm);
 		mm->context.flush_mm = 0;
 	}
-	spin_unlock(&mm->page_table_lock);
 }
 
 /*
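
For reference, after this patch __tlb_flush_mm_cond() reduces to the
unchanged context lines of the hunk above:

static inline void __tlb_flush_mm_cond(struct mm_struct * mm)
{
	if (mm->context.flush_mm) {
		__tlb_flush_mm(mm);
		mm->context.flush_mm = 0;
	}
}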