#
# Architectures that offer a FUNCTION_TRACER implementation should
# select HAVE_FUNCTION_TRACER:
#
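#
# For illustration only (an assumption about a typical arch Kconfig, not part
# of this file): an architecture advertises its tracer support roughly like
# this from its own Kconfig, e.g. arch/x86/Kconfig:
#
#	config X86
#		...
#		select HAVE_FUNCTION_TRACER
#		select HAVE_DYNAMIC_FTRACE
#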
config USER_STACKTRACE_SUPPORT
	bool

config NOP_TRACER
	bool

config HAVE_FUNCTION_TRACER
	bool

config HAVE_FUNCTION_GRAPH_TRACER
	bool

config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	bool
	help
	  This gets selected when the arch tests the function_trace_stop
	  variable at the mcount call site. Otherwise, this variable
	  is tested by the called function.

config HAVE_DYNAMIC_FTRACE
	bool

config HAVE_FTRACE_MCOUNT_RECORD
	bool

config HAVE_HW_BRANCH_TRACER
	bool

config TRACER_MAX_TRACE
	bool

config RING_BUFFER
	bool

config TRACING
	bool
	select DEBUG_FS
	select RING_BUFFER
	select STACKTRACE if STACKTRACE_SUPPORT
	select TRACEPOINTS
	select NOP_TRACER
menu "Tracers"

config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select FRAME_POINTER
	select KALLSYMS
	select TRACING
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function; this NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If tracing is disabled at
	  runtime (the boot-up default), the overhead of these instructions
	  is very small and not measurable even in micro-benchmarks.
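
# Illustrative runtime usage, not taken from this file: a sketch assuming
# debugfs is mounted at /debugfs, as in the help texts below. "function" and
# "nop" are the tracer names registered by FUNCTION_TRACER and NOP_TRACER.
#
#	echo function > /debugfs/tracing/current_tracer
#	cat /debugfs/tracing/trace | head
#	echo nop > /debugfs/tracing/current_tracer
#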

config FUNCTION_GRAPH_TRACER
	bool "Kernel Function Graph Tracer"
	depends on HAVE_FUNCTION_GRAPH_TRACER
	depends on FUNCTION_TRACER
	default y
	help
	  Enable the kernel to trace a function at both its entry
	  and its return.
	  Its first purpose is to trace the duration of functions and
	  draw a call graph for each thread, with some information such
	  as the return value.
	  This is done by storing the current return address in a stack
	  of calls on the current task structure.

config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	default n
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on GENERIC_TIME
	depends on DEBUG_KERNEL
	select TRACE_IRQFLAGS
	select TRACING
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)
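
# Illustrative sketch (same /debugfs mount assumption): select the irqsoff
# tracer, reset the recorded maximum, then read back the worst-case latency.
#
#	echo irqsoff > /debugfs/tracing/current_tracer
#	echo 0 > /debugfs/tracing/tracing_max_latency
#	cat /debugfs/tracing/tracing_max_latency
#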

config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	default n
	depends on GENERIC_TIME
	depends on PREEMPT
	depends on DEBUG_KERNEL
	select TRACING
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)

config SYSPROF_TRACER
	bool "Sysprof Tracer"
	depends on X86
	select TRACING
	help
	  This tracer provides the trace needed by the 'Sysprof' userspace
	  tool.

config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	depends on DEBUG_KERNEL
	select TRACING
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.
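
# Illustrative sketch (the tracer name "wakeup" is assumed to be the one
# registered by this option in this kernel series; /debugfs as above):
#
#	echo wakeup > /debugfs/tracing/current_tracer
#	cat /debugfs/tracing/tracing_max_latency
#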

config CONTEXT_SWITCH_TRACER
	bool "Trace process context switches"
	depends on DEBUG_KERNEL
	select TRACING
	select MARKERS
	help
	  This tracer gets called from the context switch and records
	  all switching of tasks.

config BOOT_TRACER
	bool "Trace boot initcalls"
	depends on DEBUG_KERNEL
	select TRACING
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer helps developers to optimize boot times: it records
	  the timings of the initcalls and traces key events and the identity
	  of tasks that can cause boot delays, such as context switches.

	  Its aim is to be parsed by the scripts/bootgraph.pl tool to
	  produce pretty graphics about boot inefficiencies, giving a visual
	  representation of the delays during initcalls - but the raw
	  /debug/tracing/trace text output is readable too.

	  ( Note that tracing self-tests can't be enabled if this tracer is
	    selected, because the self-tests are an initcall as well and that
	    would invalidate the boot trace. )

config TRACE_BRANCH_PROFILING
	bool "Trace likely/unlikely profiler"
	depends on DEBUG_KERNEL
	select TRACING
	help
	  This tracer profiles all the likely and unlikely macros
	  in the kernel. It will display the results in:

	      /debugfs/tracing/profile_annotated_branch

	  Note: this adds a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.

	  Say N if unsure.

config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	depends on TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  taken in the kernel is recorded, whether it hit or missed.
	  The results will be displayed in:

	      /debugfs/tracing/profile_branch

	  This configuration, when enabled, imposes a great overhead
	  on the system. It should only be enabled when the system
	  is to be analyzed.

	  Say N if unsure.
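
# Illustrative sketch: the profiling results are plain-text tables that can be
# read directly from the files named in the help texts above.
#
#	cat /debugfs/tracing/profile_annotated_branch | head
#	cat /debugfs/tracing/profile_branch | head	# with PROFILE_ALL_BRANCHES
#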

config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likely and unlikely conditions are not being traced.

config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.

	  Say N if unsure.

config POWER_TRACER
	bool "Trace power consumption behavior"
	depends on DEBUG_KERNEL
	depends on X86
	select TRACING
	help
	  This tracer helps developers to analyze and optimize the kernel's
	  power management decisions, specifically the C-state and P-state
	  behavior.

config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select FUNCTION_TRACER
	select STACKTRACE
	select KALLSYMS
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in debugfs/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. If this is configured with DYNAMIC_FTRACE
	  then it will not have any overhead while the stack tracer
	  is disabled.

	  To enable the stack tracer on bootup, pass in 'stacktrace'
	  on the kernel command line.

	  The stack tracer can also be enabled or disabled via the
	  sysctl kernel.stack_tracer_enabled.

	  Say N if unsure.
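
# Illustrative sketch, using only the interfaces named in the help text above
# (same /debugfs mount assumption):
#
#	sysctl kernel.stack_tracer_enabled=1
#	cat /debugfs/tracing/stack_trace
#	sysctl kernel.stack_tracer_enabled=0
#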

config HW_BRANCH_TRACER
	depends on HAVE_HW_BRANCH_TRACER
	bool "Trace hw branches"
	select TRACING
	help
	  This tracer records all branches on the system in a circular
	  buffer, giving access to the last N branches for each CPU.

config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	depends on DEBUG_KERNEL
	default y
	help
	  This option modifies all the calls to ftrace dynamically
	  (it patches them out of the binary image and replaces them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger,
	  but otherwise has native performance as long as no tracing
	  is active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (stops all CPUs)
	  and modifies the code to jump over the call to ftrace.
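
# Illustrative sketch: with DYNAMIC_FTRACE the set of patched-in call sites
# can be narrowed at runtime via set_ftrace_filter (same /debugfs mount
# assumption; the function name is just an example):
#
#	echo 'schedule' > /debugfs/tracing/set_ftrace_filter
#	echo function > /debugfs/tracing/current_tracer
#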

config FTRACE_MCOUNT_RECORD
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_SELFTEST
	bool

config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on TRACING && DEBUG_KERNEL && !BOOT_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On bootup
	  a series of tests is run to verify that the tracers are
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.

config MMIOTRACE
	bool "Memory mapped IO tracing"
	depends on HAVE_MMIOTRACE_SUPPORT && DEBUG_KERNEL && PCI
	select TRACING
	help
	  Mmiotrace traces Memory Mapped I/O access and is meant for
	  debugging and reverse engineering. It is called from the ioremap
	  implementation and works via page faults. Tracing is disabled by
	  default and can be enabled at run-time.

	  See Documentation/tracers/mmiotrace.txt.
	  If you are not helping to develop drivers, say N.
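
# Illustrative sketch of a typical mmiotrace session (see the documentation
# file referenced above; /debugfs mount assumption as before):
#
#	echo mmiotrace > /debugfs/tracing/current_tracer
#	cat /debugfs/tracing/trace_pipe > mydump.txt &
#	# ... exercise the driver under test, then switch back:
#	echo nop > /debugfs/tracing/current_tracer
#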

config MMIOTRACE_TEST
	tristate "Test module for mmiotrace"
	depends on MMIOTRACE && m
	help
	  This is a dumb module for testing mmiotrace. It is very dangerous,
	  as it will write garbage to I/O memory starting at a given address.
	  However, it should be safe to use on e.g. an unused portion of VRAM.

	  Say N, unless you absolutely know what you are doing.

endmenu