# Kconfig
#
# Architectures that offer a FUNCTION_TRACER implementation should
# select HAVE_FUNCTION_TRACER:
#
config USER_STACKTRACE_SUPPORT
	bool

config NOP_TRACER
	bool

config HAVE_FUNCTION_TRACER
	bool

config HAVE_FUNCTION_RET_TRACER
	bool

config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	bool
	help
	  This gets selected when the arch tests the function_trace_stop
	  variable at the mcount call site. Otherwise, this variable
	  is tested by the called function.

config HAVE_DYNAMIC_FTRACE
	bool

config HAVE_FTRACE_MCOUNT_RECORD
	bool

config HAVE_HW_BRANCH_TRACER
	bool

config TRACER_MAX_TRACE
	bool

config RING_BUFFER
	bool

config TRACING
	bool
	select DEBUG_FS
	select RING_BUFFER
	select STACKTRACE if STACKTRACE_SUPPORT
	select TRACEPOINTS
	select NOP_TRACER

menu "Tracers"
config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select FRAME_POINTER
	select TRACING
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function; this NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If it is disabled at
	  runtime (the bootup default), the overhead of the instructions is
	  very small and not measurable even in micro-benchmarks.
config FUNCTION_RET_TRACER
	bool "Kernel Function return Tracer"
	depends on HAVE_FUNCTION_RET_TRACER
	depends on FUNCTION_TRACER
	help
	  Enable the kernel to trace a function at its return.
	  Its first purpose is to trace the duration of functions.
	  This is done by setting the current return address on the
	  thread info structure of the current task.
config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	default n
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on GENERIC_TIME
	depends on DEBUG_KERNEL
	select TRACE_IRQFLAGS
	select TRACING
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)
config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	default n
	depends on GENERIC_TIME
	depends on PREEMPT
	depends on DEBUG_KERNEL
	select TRACING
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)
config SYSPROF_TRACER
	bool "Sysprof Tracer"
	depends on X86
	select TRACING
	help
	  This tracer provides the trace needed by the 'Sysprof' userspace
	  tool.

config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	depends on DEBUG_KERNEL
	select TRACING
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.

config CONTEXT_SWITCH_TRACER
	bool "Trace process context switches"
	depends on DEBUG_KERNEL
	select TRACING
	select MARKERS
	help
	  This tracer gets called from the context switch and records
	  all switching of tasks.
config BOOT_TRACER
	bool "Trace boot initcalls"
	depends on DEBUG_KERNEL
	select TRACING
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer helps developers to optimize boot times: it records
	  the timings of the initcalls and traces key events and the identity
	  of tasks that can cause boot delays, such as context-switches.

	  Its aim is to be parsed by the scripts/bootgraph.pl tool to
	  produce pretty graphics about boot inefficiencies, giving a visual
	  representation of the delays during initcalls - but the raw
	  /debug/tracing/trace text output is readable too.

	  (Note that tracing self tests can't be enabled if this tracer is
	  selected, because the self-tests are an initcall as well and that
	  would invalidate the boot trace.)
config TRACE_BRANCH_PROFILING
	bool "Trace likely/unlikely profiler"
	depends on DEBUG_KERNEL
	select TRACING
	help
	  This tracer profiles all the likely and unlikely macros
	  in the kernel. It will display the results in:

	  /debugfs/tracing/profile_annotated_branch

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.

	  Say N if unsure.
config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	depends on TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  condition in the kernel is recorded, whether it was taken or not.
	  The results will be displayed in:

	  /debugfs/tracing/profile_branch

	  This configuration, when enabled, will impose a great overhead
	  on the system. It should only be enabled when the system
	  is to be analyzed.

	  Say N if unsure.
config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likely and unlikely branches are not being traced.

config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.

	  Say N if unsure.
config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select FUNCTION_TRACER
	select STACKTRACE
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in debugfs/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. Because this logic has to execute in every
	  kernel function, all the time, this option can slow down the
	  kernel measurably and is generally intended for kernel
	  developers only.

	  Say N if unsure.
config BTS_TRACER
	depends on HAVE_HW_BRANCH_TRACER
	bool "Trace branches"
	select TRACING
	help
	  This tracer records all branches on the system in a circular
	  buffer, giving access to the last N branches for each CPU.

config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	depends on DEBUG_KERNEL
	default y
	help
	  This option will modify all the calls to ftrace dynamically
	  (will patch them out of the binary image and replace them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger,
	  but otherwise has native performance as long as no tracing is
	  active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (stops all CPUs)
	  and modifies the code to jump over the call to ftrace.
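#
# Runtime usage sketch, assuming debugfs is mounted at /debugfs as in the
# help texts above: with DYNAMIC_FTRACE enabled, a tracer can be switched
# on and off without rebooting, e.g.:
#
#	echo function > /debugfs/tracing/current_tracer
#	echo nop > /debugfs/tracing/current_tracer
#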
config FTRACE_MCOUNT_RECORD
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_SELFTEST
	bool

config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on TRACING && DEBUG_KERNEL && !BOOT_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On bootup
	  a series of tests are made to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.

endmenu