
[IA64] Fix another IA64 preemption problem

There's another problem shown up by Ingo's recent patch that makes
smp_processor_id() complain if it's called with preemption enabled:
local_finish_flush_tlb_mm() calls activate_context() in a situation
where the task could be rescheduled to another processor.  This patch
disables preemption around the call.

Signed-off-by: Peter Chubb <peterc@gelato.unsw.edu.au>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Peter Chubb, 20 years ago
commit a68db763af

+ 3 - 0
arch/ia64/kernel/smp.c

@@ -231,13 +231,16 @@ smp_flush_tlb_all (void)
 void
 smp_flush_tlb_mm (struct mm_struct *mm)
 {
+	preempt_disable();
 	/* this happens for the common case of a single-threaded fork():  */
 	if (likely(mm == current->active_mm && atomic_read(&mm->mm_users) == 1))
 	{
 		local_finish_flush_tlb_mm(mm);
+		preempt_enable();
 		return;
 	}
 
+	preempt_enable();
 	/*
 	 * We could optimize this further by using mm->cpu_vm_mask to track which CPUs
 	 * have been running in the address space.  It's not clear that this is worth the

+ 3 - 0
include/asm-ia64/mmu_context.h

@@ -132,6 +132,9 @@ reload_context (mm_context_t context)
 	ia64_srlz_i();			/* srlz.i implies srlz.d */
 }
 
+/*
+ * Must be called with preemption off
+ */
 static inline void
 activate_context (struct mm_struct *mm)
 {