
read_barrier_depends arch fixlets

read_barrier_depends has always been a no-op (not a compiler barrier) on all
architectures except SMP alpha. This brings UP alpha and frv into line with all
other architectures, and fixes incorrect documentation.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
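
For context, the case these macros exist for is a dependent load: a reader
loads a pointer and then loads through it. Only Alpha's SMP memory model can
let that second, dependent load observe stale data, which is why
smp_read_barrier_depends() does real work only there and can be an empty
statement everywhere else. A minimal sketch, assuming the kernel's usual
barrier macros; struct foo, global_ptr, publish() and consume() are
illustrative names, not taken from the patch:

	struct foo {
		int a;
	};

	static struct foo *global_ptr;

	/* Writer: initialise the structure, then publish the pointer. */
	void publish(struct foo *p)
	{
		p->a = 42;
		smp_wmb();			/* order the init before the publication */
		global_ptr = p;
	}

	/* Reader: the second load depends on the first (p, then p->a). */
	int consume(void)
	{
		struct foo *p = global_ptr;

		smp_read_barrier_depends();	/* a real barrier only on SMP Alpha */
		return p ? p->a : -1;
	}
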
Nick Piggin 17 years ago
parent
commit
73f10281ea
3 changed files with 13 additions and 3 deletions
  1. Documentation/memory-barriers.txt  +11 -1
  2. include/asm-alpha/barrier.h  +1 -1
  3. include/asm-frv/system.h  +1 -1

+ 11 - 1
Documentation/memory-barriers.txt

@@ -994,7 +994,17 @@ The Linux kernel has eight basic CPU memory barriers:
 	DATA DEPENDENCY	read_barrier_depends()	smp_read_barrier_depends()
 
 
-All CPU memory barriers unconditionally imply compiler barriers.
+All memory barriers except the data dependency barriers imply a compiler
+barrier. Data dependencies do not impose any additional compiler ordering.
+
+Aside: In the case of data dependencies, the compiler would be expected to
+issue the loads in the correct order (eg. `a[b]` would have to load the value
+of b before loading a[b]), however there is no guarantee in the C specification
+that the compiler may not speculate the value of b (eg. is equal to 1) and load
+a before b (eg. tmp = a[1]; if (b != 1) tmp = a[b]; ). There is also the
+problem of a compiler reloading b after having loaded a[b], thus having a newer
+copy of b than a[b]. A consensus has not yet been reached about these problems,
+however the ACCESS_ONCE macro is a good place to start looking.
 
 
 SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
 systems because it is assumed that a CPU will appear to be self-consistent,
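
To make the compiler-speculation worry above concrete: ACCESS_ONCE() (roughly
a cast of the lvalue to volatile) forces the compiler to emit exactly one load
of b and to use that single value for the dependent access, which closes both
the "guess that b == 1" and the "reload b after a[b]" holes described in the
added text. A hedged sketch, not part of the patch:

	extern int a[16];
	extern int b;

	int dependent_read(void)
	{
		/* Without ACCESS_ONCE() the compiler may speculate b's value
		 * (load a[1], then test b) or reload b after a[b], ending up
		 * with a newer b than the a[b] it already read. */
		int idx = ACCESS_ONCE(b);

		return a[idx];
	}
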

+ 1 - 1
include/asm-alpha/barrier.h

@@ -24,7 +24,7 @@ __asm__ __volatile__("mb": : :"memory")
 #define smp_mb()	barrier()
 #define smp_rmb()	barrier()
 #define smp_wmb()	barrier()
-#define smp_read_barrier_depends()	barrier()
+#define smp_read_barrier_depends()	do { } while (0)
 #endif
 
 #define set_mb(var, value) \
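
The new expansion deserves a note. In the kernel, barrier() is a GCC compiler
barrier (an empty asm with a "memory" clobber), so the old definition forbade
the compiler from caching or reordering memory accesses across the macro,
which is stronger than what the other architectures provide and than the
corrected documentation promises. do { } while (0) is the conventional
statement-shaped no-op: it emits no code and imposes no ordering, yet still
parses as a single semicolon-terminated statement. A small sketch of the
difference, not taken from the patch:

	/* The kernel's compiler barrier: */
	#define barrier()	__asm__ __volatile__("": : :"memory")

	/* A pure no-op that is still a well-formed single statement: */
	#define no_op()		do { } while (0)

	void example(int flag, int *p)
	{
		int x = *p;

		barrier();	/* the compiler must reload *p below */
		x += *p;

		if (flag)
			no_op();	/* safe without braces in if/else */
		else
			x++;

		(void)x;
	}
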

+ 1 - 1
include/asm-frv/system.h

@@ -179,7 +179,7 @@ do {							\
 #define mb()			asm volatile ("membar" : : :"memory")
 #define rmb()			asm volatile ("membar" : : :"memory")
 #define wmb()			asm volatile ("membar" : : :"memory")
-#define read_barrier_depends()	barrier()
+#define read_barrier_depends()	do { } while (0)
 
 
 #ifdef CONFIG_SMP
 #define smp_mb()			mb()
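
The frv hunk ends just where the CONFIG_SMP block begins, which is where the
documentation's remark about uniprocessor builds shows up in code. Only the
first line of that block is visible in the hunk above; the rest of the sketch
below is the usual pattern (the alpha hunk earlier shows the UP half of it),
assumed here for illustration:

	#ifdef CONFIG_SMP
	#define smp_mb()			mb()
	#define smp_rmb()			rmb()
	#define smp_wmb()			wmb()
	#define smp_read_barrier_depends()	read_barrier_depends()
	#else
	/* UP build: real barrier instructions are unnecessary, so the smp_*
	 * forms fall back to compiler barriers, and the data dependency
	 * variant becomes, after this patch, a plain no-op. */
	#define smp_mb()			barrier()
	#define smp_rmb()			barrier()
	#define smp_wmb()			barrier()
	#define smp_read_barrier_depends()	do { } while (0)
	#endif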