
powerpc: Rearrange SLB preload code

With the new top-down layout it is likely that the pc and stack will be in the
same segment, because the pc is most likely in a library allocated via a
top-down mmap. Right now we bail out early if these segments match.

Rearrange the SLB preload code to first sanity check that none of the preload
addresses are kernel addresses, and then check each address against the others
for segment conflicts before preloading it.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
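
For readers unfamiliar with the helper, the sketch below shows what an ESID
comparison like esids_match() amounts to for ordinary 256MB segments
(SID_SHIFT of 28); this is a simplified illustration only -- the real helper in
arch/powerpc/mm/slb.c also handles 1T segments -- but it shows why two
addresses in the same segment need only one SLB preload:

	/* Sketch only (assumes 256MB segments): two effective addresses fall
	 * in the same SLB segment when their ESIDs -- the address bits above
	 * SID_SHIFT -- are equal. */
	#define SID_SHIFT	28
	#define GET_ESID(ea)	((ea) >> SID_SHIFT)

	static int esids_match_sketch(unsigned long addr1, unsigned long addr2)
	{
		return GET_ESID(addr1) == GET_ESID(addr2);
	}

With that in mind, the rearranged switch_slb() below preloads the stack and
unmapped_base entries only when their segments differ from the ones already
loaded.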
Anton Blanchard committed 16 years ago
Commit 5eb9bac040
1 changed file with 8 additions and 13 deletions:
    arch/powerpc/mm/slb.c

arch/powerpc/mm/slb.c: +8 −13

@@ -218,23 +218,18 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
 	else
 		unmapped_base = TASK_UNMAPPED_BASE_USER64;
 
-	if (is_kernel_addr(pc))
-		return;
-	slb_allocate(pc);
-
-	if (esids_match(pc,stack))
+	if (is_kernel_addr(pc) || is_kernel_addr(stack) ||
+	    is_kernel_addr(unmapped_base))
 		return;
 
-	if (is_kernel_addr(stack))
-		return;
-	slb_allocate(stack);
+	slb_allocate(pc);
 
-	if (esids_match(pc,unmapped_base) || esids_match(stack,unmapped_base))
-		return;
+	if (!esids_match(pc, stack))
+		slb_allocate(stack);
 
-	if (is_kernel_addr(unmapped_base))
-		return;
-	slb_allocate(unmapped_base);
+	if (!esids_match(pc, unmapped_base) &&
+	    !esids_match(stack, unmapped_base))
+		slb_allocate(unmapped_base);
 }
 
 static inline void patch_slb_encoding(unsigned int *insn_addr,