
x86: fence oostores on 64-bit

movnt* instructions are not strongly ordered with respect to other stores,
so if we are to assume stores are strongly ordered in the rest of the
64-bit code, we must fence these off (see similar examples in the 32-bit
code).

[ The AMD memory ordering document seems to say that nontemporal stores can
  also pass earlier regular stores, so maybe we need sfences _before_
  movnt* everywhere too? ]

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
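
For illustration only (not part of the commit), here is a minimal sketch of
the pattern the fix enforces, written as a standalone GNU assembler routine
with a hypothetical symbol name: a non-temporal store is followed by an
sfence before the routine returns, so callers that assume ordinary x86
store ordering cannot observe it out of order.

	/* nt_store_sketch: hypothetical helper, for illustration only.
	 * Writes the 64-bit value in %rsi to the address in %rdi with a
	 * non-temporal store, then drains the write-combining buffer so
	 * the store is ordered like a regular one before we return. */
	.text
	.globl nt_store_sketch
nt_store_sketch:
	movnti	%rsi, (%rdi)	/* weakly ordered non-temporal store */
	sfence			/* fence it off before returning */
	ret

An sfence (rather than a full mfence) is sufficient here because only store
ordering is at issue.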
Nick Piggin, 17 years ago
Commit df1bdc0667
1 file changed, 1 insertion(+), 0 deletions(-)

--- a/arch/x86/lib/copy_user_nocache_64.S
+++ b/arch/x86/lib/copy_user_nocache_64.S
@@ -117,6 +117,7 @@ ENTRY(__copy_user_nocache)
 	popq %rbx
 	CFI_ADJUST_CFA_OFFSET -8
 	CFI_RESTORE rbx
+	sfence
 	ret
 	CFI_RESTORE_STATE