#
# Architectures that offer a FUNCTION_TRACER implementation should
# select HAVE_FUNCTION_TRACER:
#

config USER_STACKTRACE_SUPPORT
	bool

config NOP_TRACER
	bool

config HAVE_FTRACE_NMI_ENTER
	bool

config HAVE_FUNCTION_TRACER
	bool

config HAVE_FUNCTION_GRAPH_TRACER
	bool

config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	bool
	help
	  This gets selected when the arch tests the function_trace_stop
	  variable at the mcount call site. Otherwise, this variable
	  is tested by the called function.

config HAVE_DYNAMIC_FTRACE
	bool

config HAVE_FTRACE_MCOUNT_RECORD
	bool

config HAVE_HW_BRANCH_TRACER
	bool

config HAVE_FTRACE_SYSCALLS
	bool

config TRACER_MAX_TRACE
	bool

config RING_BUFFER
	bool

config FTRACE_NMI_ENTER
	bool
	depends on HAVE_FTRACE_NMI_ENTER
	default y

config EVENT_TRACING
	bool

config TRACING
	bool
	select DEBUG_FS
	select RING_BUFFER
	select STACKTRACE if STACKTRACE_SUPPORT
	select TRACEPOINTS
	select NOP_TRACER
	select BINARY_PRINTF
	select EVENT_TRACING

#
# Minimum requirements an architecture has to meet for us to
# be able to offer generic tracing facilities:
#
config TRACING_SUPPORT
	bool
	# PPC32 has no irqflags tracing support, but it can use most of the
	# tracers anyway, they were tested to build and work. Note that new
	# exceptions to this list aren't welcome; better to implement the
	# irqflags tracing for your architecture.
	depends on TRACE_IRQFLAGS_SUPPORT || PPC32
	depends on STACKTRACE_SUPPORT
	default y

if TRACING_SUPPORT

menu "Tracers"

config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	select FRAME_POINTER
	select KALLSYMS
	select TRACING
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function; this NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If it's runtime disabled
	  (the bootup default), then the overhead of the instructions is very
	  small and not measurable even in micro-benchmarks.

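# Example usage (a sketch; paths assume debugfs is mounted at
# /sys/kernel/debug): with FUNCTION_TRACER built in, the tracer is driven
# through the tracing directory, e.g.:
#
#   echo function > /sys/kernel/debug/tracing/current_tracer
#   cat /sys/kernel/debug/tracing/trace
#   echo nop > /sys/kernel/debug/tracing/current_tracer
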
config FUNCTION_GRAPH_TRACER
	bool "Kernel Function Graph Tracer"
	depends on HAVE_FUNCTION_GRAPH_TRACER
	depends on FUNCTION_TRACER
	default y
	help
	  Enable the kernel to trace a function at both its entry
	  and its return.
	  Its first purpose is to trace the duration of functions and
	  draw a call graph for each thread with some information like
	  the return value. This is done by pushing the current return
	  address onto a stack of calls kept in the current task structure.

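# Example usage (a sketch; same debugfs mount point assumed as above):
# the graph tracer is selected through current_tracer just like the
# plain function tracer:
#
#   echo function_graph > /sys/kernel/debug/tracing/current_tracer
#   cat /sys/kernel/debug/tracing/trace
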
config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	default n
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on GENERIC_TIME
	select TRACE_IRQFLAGS
	select TRACING
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)

config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	default n
	depends on GENERIC_TIME
	depends on PREEMPT
	select TRACING
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)

config SYSPROF_TRACER
	bool "Sysprof Tracer"
	depends on X86
	select TRACING
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer provides the trace needed by the 'Sysprof' userspace
	  tool.

config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	select TRACING
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.

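# Example usage (a sketch; the scheduling latency tracer is assumed to be
# registered under the name "wakeup", with debugfs at /sys/kernel/debug):
# reset and then read the worst-seen wakeup latency:
#
#   echo wakeup > /sys/kernel/debug/tracing/current_tracer
#   echo 0 > /sys/kernel/debug/tracing/tracing_max_latency
#   cat /sys/kernel/debug/tracing/tracing_max_latency
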
config CONTEXT_SWITCH_TRACER
	bool "Trace process context switches"
	select TRACING
	select MARKERS
	help
	  This tracer gets called from the context switch and records
	  all switching of tasks.

config EVENT_TRACER
	bool "Trace various events in the kernel"
	select TRACING
	help
	  This tracer hooks into various trace points in the kernel,
	  allowing the user to pick and choose which trace points they
	  want to trace.

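# Example usage (a sketch; the available_events/set_event files and the
# sched_switch event below are assumed to exist on the running kernel):
#
#   cat /sys/kernel/debug/tracing/available_events
#   echo sched:sched_switch > /sys/kernel/debug/tracing/set_event
#   cat /sys/kernel/debug/tracing/trace
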
config FTRACE_SYSCALLS
	bool "Trace syscalls"
	depends on HAVE_FTRACE_SYSCALLS
	select TRACING
	select KALLSYMS
	help
	  Basic tracer to catch the syscall entry and exit events.

config BOOT_TRACER
	bool "Trace boot initcalls"
	select TRACING
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer helps developers to optimize boot times: it records
	  the timings of the initcalls and traces key events and the identity
	  of tasks that can cause boot delays, such as context-switches.

	  Its aim is to be parsed by the /scripts/bootgraph.pl tool to
	  produce pretty graphics about boot inefficiencies, giving a visual
	  representation of the delays during initcalls - but the raw
	  /debug/tracing/trace text output is readable too.

	  You must pass in ftrace=initcall to the kernel command line
	  to enable this on bootup.

config TRACE_BRANCH_PROFILING
	bool "Trace likely/unlikely profiler"
	select TRACING
	help
	  This tracer profiles all the likely and unlikely macros
	  in the kernel. It will display the results in:

	  /debugfs/tracing/profile_annotated_branch

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.

	  Say N if unsure.

config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	depends on TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  condition in the kernel is recorded, whether the branch was
	  taken or not. The results will be displayed in:

	  /debugfs/tracing/profile_branch

	  This configuration, when enabled, will impose a great overhead
	  on the system. This should only be enabled when the system
	  is to be analyzed.

	  Say N if unsure.

config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likelys and unlikelys are not being traced.

config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.

	  Say N if unsure.

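# Example usage (a sketch; the tracer is assumed to be registered under
# the name "branch", with debugfs at /sys/kernel/debug):
#
#   echo branch > /sys/kernel/debug/tracing/current_tracer
#   cat /sys/kernel/debug/tracing/trace
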
config POWER_TRACER
	bool "Trace power consumption behavior"
	depends on X86
	select TRACING
	help
	  This tracer helps developers to analyze and optimize the kernel's
	  power management decisions, specifically the C-state and P-state
	  behavior.

config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	select FUNCTION_TRACER
	select STACKTRACE
	select KALLSYMS
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in debugfs/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. If this is configured with DYNAMIC_FTRACE
	  then it will not have any overhead while the stack tracer
	  is disabled.

	  To enable the stack tracer on bootup, pass in 'stacktrace'
	  on the kernel command line.

	  The stack tracer can also be enabled or disabled via the
	  sysctl kernel.stack_tracer_enabled.

	  Say N if unsure.

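# Example usage (a sketch of the run-time controls mentioned above;
# debugfs assumed at /sys/kernel/debug):
#
#   sysctl kernel.stack_tracer_enabled=1
#   cat /sys/kernel/debug/tracing/stack_trace
#   sysctl kernel.stack_tracer_enabled=0
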
config HW_BRANCH_TRACER
	depends on HAVE_HW_BRANCH_TRACER
	bool "Trace hw branches"
	select TRACING
	help
	  This tracer records all branches on the system in a circular
	  buffer, giving access to the last N branches for each CPU.

config KMEMTRACE
	bool "Trace SLAB allocations"
	select TRACING
	help
	  kmemtrace provides tracing for slab allocator functions, such as
	  kmalloc, kfree, kmem_cache_alloc, kmem_cache_free, etc. Collected
	  data is then fed to the userspace application in order to analyse
	  allocation hotspots, internal fragmentation and so on, making it
	  possible to see how well an allocator performs, as well as debug
	  and profile kernel code.

	  This requires a userspace application to use. See
	  Documentation/vm/kmemtrace.txt for more information.

	  Saying Y will make the kernel somewhat larger and slower. However,
	  if you disable kmemtrace at run-time or boot-time, the performance
	  impact is minimal (depending on the arch the kernel is built for).

	  If unsure, say N.

config WORKQUEUE_TRACER
	bool "Trace workqueues"
	select TRACING
	help
	  The workqueue tracer provides some statistical information
	  about each CPU workqueue thread, such as the number of works
	  inserted and executed since its creation. It can help to
	  evaluate the amount of work each of them has to perform.
	  For example, it can help a developer decide whether to use a
	  per-CPU workqueue instead of a singlethreaded one.

config BLK_DEV_IO_TRACE
	bool "Support for tracing block io actions"
	depends on SYSFS
	depends on BLOCK
	select RELAY
	select DEBUG_FS
	select TRACEPOINTS
	select TRACING
	select STACKTRACE
	help
	  Say Y here if you want to be able to trace the block layer actions
	  on a given queue. Tracing allows you to see any traffic happening
	  on a block device queue. For more information (and the userspace
	  support tools needed), fetch the blktrace tools from:

	  git://git.kernel.dk/blktrace.git

	  Tracing also is possible using the ftrace interface, e.g.:

	    echo 1 > /sys/block/sda/sda1/trace/enable
	    echo blk > /sys/kernel/debug/tracing/current_tracer
	    cat /sys/kernel/debug/tracing/trace_pipe

	  If unsure, say N.

config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	default y
	help
	  This option will modify all the calls to ftrace dynamically
	  (it will patch them out of the binary image and replace them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
	  otherwise has native performance as long as no tracing is active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (stops all CPUs)
	  and modifies the code to jump over the call to ftrace.

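# Example usage (a sketch; set_ftrace_filter narrows which functions are
# traced, and the 'sched*' glob below is only an illustration):
#
#   echo 'sched*' > /sys/kernel/debug/tracing/set_ftrace_filter
#   echo function > /sys/kernel/debug/tracing/current_tracer
#   cat /sys/kernel/debug/tracing/trace
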
config FUNCTION_PROFILER
	bool "Kernel function profiler"
	depends on FUNCTION_TRACER
	default n
	help
	  This option enables the kernel function profiler. A file is created
	  in debugfs called function_profile_enabled which defaults to zero.
	  When a 1 is echoed into this file profiling begins, and when a
	  zero is entered, profiling stops. A file in the trace_stats
	  directory called functions shows the list of functions that
	  have been hit and their counters.

	  If in doubt, say N.

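# Example usage (a sketch of the file described above; debugfs assumed
# at /sys/kernel/debug):
#
#   echo 1 > /sys/kernel/debug/tracing/function_profile_enabled
#   # ... run the workload of interest ...
#   echo 0 > /sys/kernel/debug/tracing/function_profile_enabled
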
config FTRACE_MCOUNT_RECORD
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_SELFTEST
	bool

config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on TRACING
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On bootup
	  a series of tests are made to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.

config MMIOTRACE
	bool "Memory mapped IO tracing"
	depends on HAVE_MMIOTRACE_SUPPORT && PCI
	select TRACING
	help
	  Mmiotrace traces Memory Mapped I/O access and is meant for
	  debugging and reverse engineering. It is called from the ioremap
	  implementation and works via page faults. Tracing is disabled by
	  default and can be enabled at run-time.

	  See Documentation/tracers/mmiotrace.txt.
	  If you are not helping to develop drivers, say N.

config MMIOTRACE_TEST
	tristate "Test module for mmiotrace"
	depends on MMIOTRACE && m
	help
	  This is a dumb module for testing mmiotrace. It is very dangerous
	  as it will write garbage to IO memory starting at a given address.
	  However, it should be safe to use on e.g. an unused portion of VRAM.

	  Say N, unless you absolutely know what you are doing.

endmenu

endif # TRACING_SUPPORT