#
# Architectures that offer a FUNCTION_TRACER implementation should
# select HAVE_FUNCTION_TRACER:
#

config NOP_TRACER
        bool

config HAVE_FUNCTION_TRACER
        bool

config HAVE_FUNCTION_TRACE_MCOUNT_TEST
        bool
        help
          This gets selected when the arch tests the function_trace_stop
          variable at the mcount call site. Otherwise, this variable
          is tested by the called function.

config HAVE_DYNAMIC_FTRACE
        bool

config HAVE_FTRACE_MCOUNT_RECORD
        bool

config TRACER_MAX_TRACE
        bool

config RING_BUFFER
        bool

config TRACING
        bool
        select DEBUG_FS
        select RING_BUFFER
        select STACKTRACE if STACKTRACE_SUPPORT
        select TRACEPOINTS
        select NOP_TRACER

menu "Tracers"

config FUNCTION_TRACER
        bool "Kernel Function Tracer"
        depends on HAVE_FUNCTION_TRACER
        depends on DEBUG_KERNEL
        select FRAME_POINTER
        select TRACING
        select CONTEXT_SWITCH_TRACER
        help
          Enable the kernel to trace every kernel function. This is done
          by using a compiler feature to insert a small, 5-byte No-Operation
          instruction at the beginning of every kernel function. This NOP
          sequence is then dynamically patched into a tracer call when
          tracing is enabled by the administrator. If tracing is disabled at
          runtime (the boot-up default), the overhead of the instructions is
          very small and not measurable even in micro-benchmarks.
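
# A minimal runtime sketch (shell, not Kconfig syntax), assuming debugfs is
# mounted at /debugfs as the help texts in this file do; the file names
# current_tracer/available_tracers/trace follow the usual ftrace debugfs
# layout, and "function" is assumed to be the name this tracer registers:
#
#       mount -t debugfs nodev /debugfs
#       cat /debugfs/tracing/available_tracers           # list built-in tracers
#       echo function > /debugfs/tracing/current_tracer  # start function tracing
#       cat /debugfs/tracing/trace                       # read the trace buffer
#       echo nop > /debugfs/tracing/current_tracer       # stop tracing again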

config IRQSOFF_TRACER
        bool "Interrupts-off Latency Tracer"
        default n
        depends on TRACE_IRQFLAGS_SUPPORT
        depends on GENERIC_TIME
        depends on DEBUG_KERNEL
        select TRACE_IRQFLAGS
        select TRACING
        select TRACER_MAX_TRACE
        help
          This option measures the time spent in irqs-off critical
          sections, with microsecond accuracy.

          The default measurement method is a maximum search, which is
          disabled by default and can be (re-)started at runtime via:

              echo 0 > /debugfs/tracing/tracing_max_latency

          (Note that kernel size and overhead increase with this option
          enabled. This option and the preempt-off timing option can be
          used together or separately.)

config PREEMPT_TRACER
        bool "Preemption-off Latency Tracer"
        default n
        depends on GENERIC_TIME
        depends on PREEMPT
        depends on DEBUG_KERNEL
        select TRACING
        select TRACER_MAX_TRACE
        help
          This option measures the time spent in preemption-off critical
          sections, with microsecond accuracy.

          The default measurement method is a maximum search, which is
          disabled by default and can be (re-)started at runtime via:

              echo 0 > /debugfs/tracing/tracing_max_latency

          (Note that kernel size and overhead increase with this option
          enabled. This option and the irqs-off timing option can be
          used together or separately.)
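
# A rough usage sketch for the two latency tracers above. The tracer names
# "irqsoff" and "preemptoff" are assumed registration names, not spelled out
# in this file; the tracing_max_latency path comes from the help texts:
#
#       echo irqsoff > /debugfs/tracing/current_tracer
#       echo 0 > /debugfs/tracing/tracing_max_latency   # reset the recorded maximum
#       # ... run the workload of interest ...
#       cat /debugfs/tracing/tracing_max_latency        # worst-case latency, in usecs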

config SYSPROF_TRACER
        bool "Sysprof Tracer"
        depends on X86
        select TRACING
        help
          This tracer provides the trace needed by the 'Sysprof' userspace
          tool.

config SCHED_TRACER
        bool "Scheduling Latency Tracer"
        depends on DEBUG_KERNEL
        select TRACING
        select CONTEXT_SWITCH_TRACER
        select TRACER_MAX_TRACE
        help
          This tracer tracks the latency of the highest priority task
          to be scheduled in, starting from the point it has woken up.
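
# Since this tracer selects TRACER_MAX_TRACE, it is expected to report its
# worst case through the same tracing_max_latency file as the latency tracers
# above; the registration name "wakeup" is an assumption:
#
#       echo wakeup > /debugfs/tracing/current_tracer
#       cat /debugfs/tracing/tracing_max_latency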

config CONTEXT_SWITCH_TRACER
        bool "Trace process context switches"
        depends on DEBUG_KERNEL
        select TRACING
        select MARKERS
        help
          This tracer hooks into the context switch path and records
          every task switch.

config BOOT_TRACER
        bool "Trace boot initcalls"
        depends on DEBUG_KERNEL
        select TRACING
        select CONTEXT_SWITCH_TRACER
        help
          This tracer helps developers optimize boot times: it records
          the timings of the initcalls and traces key events and the identity
          of tasks that can cause boot delays, such as context switches.

          Its aim is to be parsed by the scripts/bootgraph.pl tool to
          produce pretty graphics about boot inefficiencies, giving a visual
          representation of the delays during initcalls - but the raw
          /debugfs/tracing/trace text output is readable too.

          (Note that the tracing self-tests can't be enabled if this tracer is
          selected, because the self-tests are an initcall as well and that
          would invalidate the boot trace.)
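
# A hedged sketch of the intended post-boot workflow; only the raw trace file
# is named in the help text above, and the exact bootgraph.pl invocation and
# the input format it expects are assumptions:
#
#       cat /debugfs/tracing/trace                                 # human-readable output
#       cat /debugfs/tracing/trace | perl scripts/bootgraph.pl > boot.svg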

config STACK_TRACER
        bool "Trace max stack"
        depends on HAVE_FUNCTION_TRACER
        depends on DEBUG_KERNEL
        select FUNCTION_TRACER
        select STACKTRACE
        help
          This special tracer records the maximum stack footprint of the
          kernel and displays it in /debugfs/tracing/stack_trace.

          This tracer works by hooking into every function call that the
          kernel executes, and keeping a maximum stack depth value and
          stack-trace saved. Because this logic has to execute in every
          kernel function, all the time, this option can slow down the
          kernel measurably and is generally intended for kernel
          developers only.

          Say N if unsure.
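
# Reading back the recorded worst case uses the file named in the help text
# above (the output format is not described here):
#
#       cat /debugfs/tracing/stack_trace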

config DYNAMIC_FTRACE
        bool "enable/disable ftrace tracepoints dynamically"
        depends on FUNCTION_TRACER
        depends on HAVE_DYNAMIC_FTRACE
        depends on DEBUG_KERNEL
        default y
        help
          This option will modify all the calls to ftrace dynamically
          (patching them out of the binary image and replacing them
          with a No-Op instruction) as they are called. A table is
          created to dynamically enable them again.

          This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
          otherwise has native performance as long as no tracing is active.

          The changes to the code are done by a kernel thread that
          wakes up once a second and checks to see if any ftrace calls
          were made. If so, it runs stop_machine (stops all CPUs)
          and modifies the code to jump over the call to ftrace.
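
# With dynamic patching in place, individual functions can typically be picked
# for tracing. The filter file names below follow the usual ftrace debugfs
# layout and are assumptions, not something stated in this file:
#
#       cat /debugfs/tracing/available_filter_functions | head
#       echo 'schedule*' > /debugfs/tracing/set_ftrace_filter   # trace only matching functions
#       echo function > /debugfs/tracing/current_tracer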

config FTRACE_MCOUNT_RECORD
        def_bool y
        depends on DYNAMIC_FTRACE
        depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_SELFTEST
        bool

config FTRACE_STARTUP_TEST
        bool "Perform a startup test on ftrace"
        depends on TRACING && DEBUG_KERNEL && !BOOT_TRACER
        select FTRACE_SELFTEST
        help
          This option performs a series of startup tests on ftrace. On
          bootup, the tests verify that each tracer is functioning
          properly, and they are run on all the configured tracers of
          ftrace.

endmenu