
#
# Architectures that offer a FUNCTION_TRACER implementation should
# select HAVE_FUNCTION_TRACER:
#

config USER_STACKTRACE_SUPPORT
	bool

config NOP_TRACER
	bool

config HAVE_FUNCTION_TRACER
	bool

config HAVE_FUNCTION_RET_TRACER
	bool

config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	bool
	help
	  This gets selected when the arch tests the function_trace_stop
	  variable at the mcount call site. Otherwise, this variable
	  is tested by the called function.

config HAVE_DYNAMIC_FTRACE
	bool

config HAVE_FTRACE_MCOUNT_RECORD
	bool

config TRACER_MAX_TRACE
	bool

config RING_BUFFER
	bool

config TRACING
	bool
	select DEBUG_FS
	select RING_BUFFER
	select STACKTRACE if STACKTRACE_SUPPORT
	select TRACEPOINTS
	select NOP_TRACER

menu "Tracers"
config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select FRAME_POINTER
	select TRACING
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function; this NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If it's runtime disabled
	  (the bootup default), then the overhead of the instructions is very
	  small and not measurable even in micro-benchmarks.
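
# Usage sketch (comment only, not part of the build): with this option
# enabled and debugfs mounted, the tracer can be started at runtime.
# The /debugfs mount point matches the paths assumed by the help texts
# in this file; many systems mount debugfs at /sys/kernel/debug instead.
#
#	mount -t debugfs nodev /debugfs
#	echo function > /debugfs/tracing/current_tracer
#	cat /debugfs/tracing/trace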
config FUNCTION_RET_TRACER
	bool "Kernel Function return Tracer"
	depends on HAVE_FUNCTION_RET_TRACER
	depends on FUNCTION_TRACER
	help
	  Enable the kernel to trace a function at its return.
	  Its first purpose is to trace the duration of functions.
	  This is done by setting the current return address on the thread
	  info structure of the current task.
config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	default n
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on GENERIC_TIME
	depends on DEBUG_KERNEL
	select TRACE_IRQFLAGS
	select TRACING
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)
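
# Usage sketch (comment only; assumes debugfs at /debugfs, as in the
# help text above):
#
#	echo irqsoff > /debugfs/tracing/current_tracer
#	echo 0 > /debugfs/tracing/tracing_max_latency   # reset the maximum
#	cat /debugfs/tracing/tracing_max_latency        # worst latency seen, in usecs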
config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	default n
	depends on GENERIC_TIME
	depends on PREEMPT
	depends on DEBUG_KERNEL
	select TRACING
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)
config SYSPROF_TRACER
	bool "Sysprof Tracer"
	depends on X86
	select TRACING
	help
	  This tracer provides the trace needed by the 'Sysprof' userspace
	  tool.

config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	depends on DEBUG_KERNEL
	select TRACING
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.
config CONTEXT_SWITCH_TRACER
	bool "Trace process context switches"
	depends on DEBUG_KERNEL
	select TRACING
	select MARKERS
	help
	  This tracer gets called from the context switch and records
	  all switching of tasks.

config BOOT_TRACER
	bool "Trace boot initcalls"
	depends on DEBUG_KERNEL
	select TRACING
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer helps developers to optimize boot times: it records
	  the timings of the initcalls and traces key events and the identity
	  of tasks that can cause boot delays, such as context-switches.

	  Its aim is to be parsed by the scripts/bootgraph.pl tool to
	  produce pretty graphics about boot inefficiencies, giving a visual
	  representation of the delays during initcalls - but the raw
	  /debug/tracing/trace text output is readable too.

	  (Note that tracing self tests can't be enabled if this tracer is
	  selected, because the self-tests are an initcall as well and that
	  would invalidate the boot trace.)
config TRACE_BRANCH_PROFILING
	bool "Trace likely/unlikely profiler"
	depends on DEBUG_KERNEL
	select TRACING
	help
	  This tracer profiles all the likely and unlikely macros
	  in the kernel. It will display the results in:

	  /debugfs/tracing/profile_annotated_branch

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.

	  Say N if unsure.
config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	depends on TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  taken in the kernel is recorded, whether the branch was hit
	  or missed. The results will be displayed in:

	  /debugfs/tracing/profile_branch

	  This configuration, when enabled, will impose a great overhead
	  on the system. It should only be enabled when the system
	  is to be analyzed.

	  Say N if unsure.
config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likelys and unlikelys are not being traced.

config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.

	  Say N if unsure.
config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select FUNCTION_TRACER
	select STACKTRACE
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in debugfs/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. Because this logic has to execute in every
	  kernel function, all the time, this option can slow down the
	  kernel measurably and is generally intended for kernel
	  developers only.

	  Say N if unsure.
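
# Usage sketch (comment only; assumes debugfs at /debugfs):
#
#	cat /debugfs/tracing/stack_trace    # deepest stack usage seen so far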
config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	depends on DEBUG_KERNEL
	default y
	help
	  This option will modify all the calls to ftrace dynamically
	  (it will patch them out of the binary image and replace them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
	  otherwise has native performance as long as no tracing is active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (stops all CPUs)
	  and modifies the code to jump over the call to ftrace.
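
# Usage sketch (comment only; assumes debugfs at /debugfs): with
# DYNAMIC_FTRACE, the set of traced functions can be narrowed at
# runtime before enabling the tracer, e.g.:
#
#	echo 'sched*' > /debugfs/tracing/set_ftrace_filter
#	echo function > /debugfs/tracing/current_tracer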
config FTRACE_MCOUNT_RECORD
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_SELFTEST
	bool

config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on TRACING && DEBUG_KERNEL && !BOOT_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On bootup
	  a series of tests is run to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.

endmenu