spinlocks.txt

Lesson 1: Spin locks

The most basic primitive for locking is the spinlock:

	static DEFINE_SPINLOCK(xxx_lock);

	unsigned long flags;

	spin_lock_irqsave(&xxx_lock, flags);
	... critical section here ..
	spin_unlock_irqrestore(&xxx_lock, flags);

The above is always safe. It will disable interrupts _locally_, but the
spinlock itself will guarantee the global lock, so it will guarantee that
there is only one thread-of-control within the region(s) protected by that
lock. This works well even under UP. The above sequence under UP is
essentially the same as doing

	unsigned long flags;

	save_flags(flags); cli();
	... critical section ...
	restore_flags(flags);

so the code does _not_ need to worry about UP vs SMP issues: the spinlocks
work correctly under both (and spinlocks are actually more efficient on
architectures that allow doing the "save_flags + cli" in one operation).

   NOTE! Implications of spin_locks for memory are further described in:

	Documentation/memory-barriers.txt
		(5) LOCK operations.
		(6) UNLOCK operations.

The above is usually pretty simple (you usually need and want only one
spinlock for most things - using more than one spinlock can make things a
lot more complex and even slower and is usually worth it only for
sequences that you _know_ need to be split up: avoid it at all cost if you
aren't sure).

HOWEVER, it _does_ mean that if you have some code that does

	cli();
	.. critical section ..
	sti();

and another sequence that does

	spin_lock_irqsave(&xxx_lock, flags);
	.. critical section ..
	spin_unlock_irqrestore(&xxx_lock, flags);

then they are NOT mutually exclusive, and the critical regions can happen
at the same time on two different CPU's. That's fine per se, but the
critical regions had better be critical for different things (ie they
can't stomp on each other).

The above is a problem mainly if you end up mixing code - for example the
routines in ll_rw_block() tend to use cli/sti to protect the atomicity of
their actions, and if a driver uses spinlocks instead then you should
think about issues like the above.

This is really the only really hard part about spinlocks: once you start
using spinlocks they tend to expand to areas you might not have noticed
before, because you have to make sure the spinlocks correctly protect the
shared data structures _everywhere_ they are used. The spinlocks are most
easily added to places that are completely independent of other code (for
example, internal driver data structures that nobody else ever touches).

   NOTE! The spin-lock is safe only when you _also_ use the lock itself
   to do locking across CPU's, which implies that EVERYTHING that
   touches a shared variable has to agree about the spinlock they want
   to use.
----

Lesson 2: reader-writer spinlocks.

If your data accesses have a very natural pattern where you mostly read
from the shared variables, the reader-writer lock (rw_lock) versions of
the spinlocks are sometimes useful. They allow multiple readers to be in
the same critical region at once, but if somebody wants to change the
variables it has to get an exclusive write lock.

   NOTE! reader-writer locks require more atomic memory operations than
   simple spinlocks. Unless the reader critical section is long, you
   are better off just using spinlocks.

The routines look the same as above:

	static DEFINE_RWLOCK(xxx_lock);

	unsigned long flags;

	read_lock_irqsave(&xxx_lock, flags);
	.. critical section that only reads the info ...
	read_unlock_irqrestore(&xxx_lock, flags);

	write_lock_irqsave(&xxx_lock, flags);
	.. read and write exclusive access to the info ...
	write_unlock_irqrestore(&xxx_lock, flags);

The above kind of lock may be useful for complex data structures like
linked lists, especially searching for entries without changing the list
itself. The read lock allows many concurrent readers. Anything that
_changes_ the list will have to get the write lock.

   NOTE! RCU is better for list traversal, but requires careful
   attention to design detail (see Documentation/RCU/listRCU.txt).

Also, you cannot "upgrade" a read-lock to a write-lock, so if you at _any_
time need to do any changes (even if you don't do it every time), you have
to get the write-lock at the very beginning.

   NOTE! We are working hard to remove reader-writer spinlocks in most
   cases, so please don't add a new one without consensus. (Instead, see
   Documentation/RCU/rcu.txt for complete information.)
----

Lesson 3: spinlocks revisited.

The single spin-lock primitives above are by no means the only ones. They
are the most safe ones, and the ones that work under all circumstances,
but partly _because_ they are safe they are also fairly slow. They are
much faster than a generic global cli/sti pair, but slower than they'd
need to be, because they do have to disable interrupts (which is just a
single instruction on x86, but it's an expensive one - and on other
architectures it can be worse).

If you have a case where you have to protect a data structure across
several CPU's and you want to use spinlocks you can potentially use
cheaper versions of the spinlocks. IFF you know that the spinlocks are
never used in interrupt handlers, you can use the non-irq versions:

	spin_lock(&lock);
	...
	spin_unlock(&lock);

(and the equivalent read-write versions too, of course). The spinlock will
guarantee the same kind of exclusive access, and it will be much faster.
This is useful if you know that the data in question is only ever
manipulated from a "process context", ie no interrupts involved.

The reason you mustn't use these versions if you have interrupts that
play with the spinlock is that you can get deadlocks:

	spin_lock(&lock);
	...
		<- interrupt comes in:
			spin_lock(&lock);

where an interrupt tries to lock an already locked variable. This is ok if
the other interrupt happens on another CPU, but it is _not_ ok if the
interrupt happens on the same CPU that already holds the lock, because the
lock will obviously never be released (because the interrupt is waiting
for the lock, and the lock-holder is interrupted by the interrupt and will
not continue until the interrupt has been processed).

(This is also the reason why the irq-versions of the spinlocks only need
to disable the _local_ interrupts - it's ok to use spinlocks in interrupts
on other CPU's, because an interrupt on another CPU doesn't interrupt the
CPU that holds the lock, so the lock-holder can continue and eventually
releases the lock).
Note that you can be clever with read-write locks and interrupts. For
example, if you know that the interrupt only ever gets a read-lock, then
you can use a non-irq version of read locks everywhere - because they
don't block on each other (and thus there is no dead-lock wrt interrupts).
But when you do the write-lock, you have to use the irq-safe version.

For an example of being clever with rw-locks, see the "waitqueue_lock"
handling in kernel/sched.c - nothing ever _changes_ a wait-queue from
within an interrupt, they only read the queue in order to know whom to
wake up. So read-locks are safe (which is good: they are very common
indeed), while write-locks need to protect themselves against interrupts.

		Linus
----

Reference information:

For dynamic initialization, use spin_lock_init() or rwlock_init() as
appropriate:

	spinlock_t xxx_lock;
	rwlock_t xxx_rw_lock;

	static int __init xxx_init(void)
	{
		spin_lock_init(&xxx_lock);
		rwlock_init(&xxx_rw_lock);
		...
	}

	module_init(xxx_init);

For static initialization, use DEFINE_SPINLOCK() / DEFINE_RWLOCK() or
__SPIN_LOCK_UNLOCKED() / __RW_LOCK_UNLOCKED() as appropriate.

SPIN_LOCK_UNLOCKED and RW_LOCK_UNLOCKED are deprecated. These interfere
with lockdep state tracking.

Most of the time, you can simply turn:

	static spinlock_t xxx_lock = SPIN_LOCK_UNLOCKED;

into:

	static DEFINE_SPINLOCK(xxx_lock);

Static structure member variables go from:

	struct foo bar = {
		.lock	= SPIN_LOCK_UNLOCKED,
	};

to:

	struct foo bar = {
		.lock	= __SPIN_LOCK_UNLOCKED(bar.lock),
	};

Declaration of static rw_locks undergoes a similar transformation.