@@ -134,7 +134,7 @@ config RT_MUTEX_TESTER
 	  This option enables a rt-mutex tester.
 
 config DEBUG_SPINLOCK
-	bool "Spinlock debugging"
+	bool "Spinlock and rw-lock debugging: basic checks"
 	depends on DEBUG_KERNEL
 	help
 	  Say Y here and build SMP to catch missing spinlock initialization
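
(Aside, not part of the patch: the "missing spinlock initialization" error class these
basic checks catch looks roughly like the hypothetical sketch below; the struct and
function names are invented for illustration.)

	#include <linux/spinlock.h>
	#include <linux/slab.h>

	struct foo {
		spinlock_t lock;
		int counter;
	};

	static struct foo *alloc_foo(void)
	{
		struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);

		if (!f)
			return NULL;
		/* BUG being illustrated: spin_lock_init(&f->lock) forgotten */
		return f;
	}

	static void inc_foo(struct foo *f)
	{
		/* CONFIG_DEBUG_SPINLOCK should flag the uninitialized lock here */
		spin_lock(&f->lock);
		f->counter++;
		spin_unlock(&f->lock);
	}
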
@@ -142,8 +142,102 @@ config DEBUG_SPINLOCK
 	  best used in conjunction with the NMI watchdog so that spinlock
 	  deadlocks are also debuggable.
 
+config DEBUG_MUTEXES
+	bool "Mutex debugging: basic checks"
+	depends on DEBUG_KERNEL
+	help
+	  This feature allows mutex semantics violations to be detected and
+	  reported.
+
+config DEBUG_RWSEMS
+	bool "RW-sem debugging: basic checks"
+	depends on DEBUG_KERNEL
+	help
+	  This feature allows read-write semaphore semantics violations to
+	  be detected and reported.
+
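
(Not part of the patch: a hypothetical sketch of a "mutex semantics violation" the basic
mutex checks are meant to report -- releasing a mutex the calling task never acquired.
The lock and function names are invented for illustration.)

	#include <linux/mutex.h>

	static DEFINE_MUTEX(my_lock);

	static void broken_unlock_path(void)
	{
		/*
		 * BUG being illustrated: the mutex is released without having
		 * been taken by this task.  With CONFIG_DEBUG_MUTEXES the debug
		 * variant of mutex_unlock() checks the recorded owner and
		 * complains rather than silently corrupting the lock state.
		 */
		mutex_unlock(&my_lock);
	}
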
+config DEBUG_LOCK_ALLOC
+	bool "Lock debugging: detect incorrect freeing of live locks"
+	depends on TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
+	select DEBUG_SPINLOCK
+	select DEBUG_MUTEXES
+	select DEBUG_RWSEMS
+	select LOCKDEP
+	help
+	  This feature will check whether any held lock (spinlock, rwlock,
+	  mutex or rwsem) is incorrectly freed by the kernel, via any of the
+	  memory-freeing routines (kfree(), kmem_cache_free(), free_pages(),
+	  vfree(), etc.), whether a live lock is incorrectly reinitialized via
+	  spin_lock_init()/mutex_init()/etc., or whether there is any lock
+	  held during task exit.
+
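
(Not part of the patch: a hypothetical sketch of the "freeing a live lock" pattern this
option detects -- kfree() of an object whose embedded spinlock is still held.  Names are
invented for illustration.)

	#include <linux/spinlock.h>
	#include <linux/slab.h>

	struct foo {
		spinlock_t lock;
		int data;
	};

	static void destroy_foo(struct foo *f)
	{
		spin_lock(&f->lock);
		/*
		 * BUG being illustrated: the object is freed while f->lock is
		 * still held and live.  With CONFIG_DEBUG_LOCK_ALLOC the
		 * memory-freeing routines check the freed range against the
		 * currently held locks and report this.
		 */
		kfree(f);
	}
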
+config PROVE_LOCKING
+	bool "Lock debugging: prove locking correctness"
+	depends on TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
+	select LOCKDEP
+	select DEBUG_SPINLOCK
+	select DEBUG_MUTEXES
+	select DEBUG_RWSEMS
+	select DEBUG_LOCK_ALLOC
+	default n
+	help
+	  This feature enables the kernel to prove that all locking
+	  that occurs in the kernel runtime is mathematically
+	  correct: that under no circumstance could an arbitrary (and
+	  not yet triggered) combination of observed locking
+	  sequences (on an arbitrary number of CPUs, running an
+	  arbitrary number of tasks and interrupt contexts) cause a
+	  deadlock.
+
+	  In short, this feature enables the kernel to report locking
+	  related deadlocks before they actually occur.
+
+	  The proof does not depend on how hard and complex a
+	  deadlock scenario would be to trigger: how many
+	  participant CPUs, tasks and irq-contexts would be needed
+	  for it to trigger. The proof also does not depend on
+	  timing: if a race and a resulting deadlock is possible
+	  theoretically (no matter how unlikely the race scenario
+	  is), it will be proven so and will immediately be
+	  reported by the kernel (once the event is observed that
+	  makes the deadlock theoretically possible).
+
+	  If a deadlock is impossible (i.e. the locking rules, as
+	  observed by the kernel, are mathematically correct), the
+	  kernel reports nothing.
+
+	  NOTE: this feature can also be enabled for rwlocks, mutexes
+	  and rwsems - in which case all dependencies between these
+	  different locking variants are observed and mapped too, and
+	  the proof of observed correctness is also maintained for an
+	  arbitrary combination of these separate locking variants.
+
+	  For more details, see Documentation/lockdep-design.txt.
+
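
(Not part of the patch: the classic AB-BA inversion below is the kind of scenario the
proof covers.  It only deadlocks if both paths run at the wrong moment, but once both
lock orders have been observed the inverted dependency is reported immediately, whether
or not the race ever happens.  The lock and function names are invented for illustration.)

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(lock_a);
	static DEFINE_SPINLOCK(lock_b);

	static void path_one(void)
	{
		spin_lock(&lock_a);
		spin_lock(&lock_b);	/* dependency A -> B gets recorded */
		spin_unlock(&lock_b);
		spin_unlock(&lock_a);
	}

	static void path_two(void)
	{
		spin_lock(&lock_b);
		spin_lock(&lock_a);	/* inverse order B -> A: reported as a
					 * possible circular dependency, before
					 * any actual deadlock has to happen */
		spin_unlock(&lock_a);
		spin_unlock(&lock_b);
	}
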
+config LOCKDEP
+	bool
+	depends on TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
+	select STACKTRACE
+	select FRAME_POINTER
+	select KALLSYMS
+	select KALLSYMS_ALL
+
+config DEBUG_LOCKDEP
+	bool "Lock dependency engine debugging"
+	depends on LOCKDEP
+	help
+	  If you say Y here, the lock dependency engine will do
+	  additional runtime checks to debug itself, at the price
+	  of more runtime overhead.
+
+config TRACE_IRQFLAGS
+	bool
+	default y
+	depends on TRACE_IRQFLAGS_SUPPORT
+	depends on PROVE_LOCKING
+
 config DEBUG_SPINLOCK_SLEEP
-	bool "Sleep-inside-spinlock checking"
+	bool "Spinlock debugging: sleep-inside-spinlock checking"
 	depends on DEBUG_KERNEL
 	help
 	  If you say Y here, various routines which may sleep will become very