perf bench: Also allow measuring alternative memcpy implementations

Intended to support the current selection of the preferred memcpy()
implementation, this patch adds the ability to also measure the two
alternative implementations, again by way of using some pre-processor
replacement.

On my Westmere system this confirms that the movsb-based variant is
worse than the movsq-based one (since the ERMS feature isn't there);
it also shows that, for the default as well as small sizes, the
unrolled variant outperforms the movsq one.

Cc: Ingo Molnar <mingo@elte.hu>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/4F16D728020000780006D732@nat28.tlf.novell.com
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Jan Beulich authored 13 years ago
Commit 800eb01484
2 changed files with 12 additions and 0 deletions:
  1. tools/perf/bench/mem-memcpy-x86-64-asm-def.h (+8 −0)
  2. tools/perf/bench/mem-memcpy-x86-64-asm.S (+4 −0)

+ 8 - 0
tools/perf/bench/mem-memcpy-x86-64-asm-def.h

@@ -2,3 +2,11 @@
 MEMCPY_FN(__memcpy,
 	"x86-64-unrolled",
 	"unrolled memcpy() in arch/x86/lib/memcpy_64.S")
+
+MEMCPY_FN(memcpy_c,
+	"x86-64-movsq",
+	"movsq-based memcpy() in arch/x86/lib/memcpy_64.S")
+
+MEMCPY_FN(memcpy_c_e,
+	"x86-64-movsb",
+	"movsb-based memcpy() in arch/x86/lib/memcpy_64.S")

+ 4 - 0
tools/perf/bench/mem-memcpy-x86-64-asm.S

@@ -1,2 +1,6 @@
 #define memcpy MEMCPY /* don't hide glibc's memcpy() */
+#define altinstr_replacement text
+#define globl p2align 4; .globl
+#define Lmemcpy_c globl memcpy_c; memcpy_c
+#define Lmemcpy_c_e globl memcpy_c_e; memcpy_c_e
 #include "../../../arch/x86/lib/memcpy_64.S"