
#
# Architectures that offer a FUNCTION_TRACER implementation should
#  select HAVE_FUNCTION_TRACER:
#
config NOP_TRACER
	bool

config HAVE_FUNCTION_TRACER
	bool

config HAVE_FUNCTION_RET_TRACER
	bool

config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	bool
	help
	  This gets selected when the arch tests the function_trace_stop
	  variable at the mcount call site. Otherwise, this variable
	  is tested by the called function.

config HAVE_DYNAMIC_FTRACE
	bool

config HAVE_FTRACE_MCOUNT_RECORD
	bool

config TRACER_MAX_TRACE
	bool

config RING_BUFFER
	bool

config TRACING
	bool
	select DEBUG_FS
	select RING_BUFFER
	select STACKTRACE if STACKTRACE_SUPPORT
	select TRACEPOINTS
	select NOP_TRACER

menu "Tracers"
config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select FRAME_POINTER
	select TRACING
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function; the NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If it is disabled at
	  runtime (the bootup default), the overhead of the instructions is
	  very small and not measurable even in micro-benchmarks.
config FUNCTION_RET_TRACER
	bool "Kernel Function return Tracer"
	depends on HAVE_FUNCTION_RET_TRACER
	depends on FUNCTION_TRACER
	help
	  Enable the kernel to trace a function at its return.
	  Its first purpose is to trace the duration of functions.
	  This is done by setting the current return address on the
	  thread info structure of the current task.
config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	default n
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on GENERIC_TIME
	depends on DEBUG_KERNEL
	select TRACE_IRQFLAGS
	select TRACING
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be (re-)started at runtime via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)
config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	default n
	depends on GENERIC_TIME
	depends on PREEMPT
	depends on DEBUG_KERNEL
	select TRACING
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be (re-)started at runtime via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)
config SYSPROF_TRACER
	bool "Sysprof Tracer"
	depends on X86
	select TRACING
	help
	  This tracer provides the trace needed by the 'Sysprof' userspace
	  tool.
config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	depends on DEBUG_KERNEL
	select TRACING
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.
config CONTEXT_SWITCH_TRACER
	bool "Trace process context switches"
	depends on DEBUG_KERNEL
	select TRACING
	select MARKERS
	help
	  This tracer gets called from the context switch and records
	  all switching of tasks.
config BOOT_TRACER
	bool "Trace boot initcalls"
	depends on DEBUG_KERNEL
	select TRACING
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer helps developers to optimize boot times: it records
	  the timings of the initcalls and traces key events and the identity
	  of tasks that can cause boot delays, such as context-switches.

	  Its aim is to be parsed by the scripts/bootgraph.pl tool to
	  produce pretty graphics about boot inefficiencies, giving a visual
	  representation of the delays during initcalls - but the raw
	  /debug/tracing/trace text output is readable too.

	  (Note that tracing self tests can't be enabled if this tracer is
	  selected, because the self-tests are an initcall as well and that
	  would invalidate the boot trace.)
config TRACE_BRANCH_PROFILING
	bool "Trace likely/unlikely profiler"
	depends on DEBUG_KERNEL
	select TRACING
	help
	  This tracer profiles all the likely and unlikely macros
	  in the kernel. It will display the results in:

	  /debugfs/tracing/profile_likely
	  /debugfs/tracing/profile_unlikely

	  Note: this will add a significant overhead, only turn this
	  on if you need to profile the system's use of these macros.

	  Say N if unsure.
config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likely and unlikely conditions are not being traced.
config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.

	  Say N if unsure.
config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select FUNCTION_TRACER
	select STACKTRACE
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in debugfs/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. Because this logic has to execute in every
	  kernel function, all the time, this option can slow down the
	  kernel measurably and is generally intended for kernel
	  developers only.

	  Say N if unsure.
config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	depends on DEBUG_KERNEL
	default y
	help
	  This option will modify all the calls to ftrace dynamically
	  (it patches them out of the binary image and replaces them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger,
	  but otherwise has native performance as long as no tracing
	  is active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (stops all CPUs)
	  and modifies the code to jump over the call to ftrace.
config FTRACE_MCOUNT_RECORD
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD
config FTRACE_SELFTEST
	bool

config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on TRACING && DEBUG_KERNEL && !BOOT_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On
	  bootup, a series of tests is run to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.

endmenu
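
# As a usage sketch for the tracers configured above (assuming debugfs is
# mounted at /debugfs, as the help texts in this file assume; on many
# systems it is mounted at /sys/kernel/debug instead), a typical runtime
# session looks like this:
#
#	# List the tracers that were compiled in via this menu
#	cat /debugfs/tracing/available_tracers
#
#	# Select the function tracer (requires CONFIG_FUNCTION_TRACER=y)
#	echo function > /debugfs/tracing/current_tracer
#
#	# Reset the max-latency watermark used by the irqsoff and
#	# preemptoff tracers (see IRQSOFF_TRACER / PREEMPT_TRACER above)
#	echo 0 > /debugfs/tracing/tracing_max_latency
#
#	# Read back the recorded trace
#	head /debugfs/tracing/trace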