#
# Architectures that offer a FUNCTION_TRACER implementation should
#  select HAVE_FUNCTION_TRACER:
#
config USER_STACKTRACE_SUPPORT
	bool

config NOP_TRACER
	bool

config HAVE_FTRACE_NMI_ENTER
	bool

config HAVE_FUNCTION_TRACER
	bool

config HAVE_FUNCTION_GRAPH_TRACER
	bool

config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	bool
	help
	  This gets selected when the arch tests the function_trace_stop
	  variable at the mcount call site. Otherwise, this variable
	  is tested by the called function.

config HAVE_DYNAMIC_FTRACE
	bool

config HAVE_FTRACE_MCOUNT_RECORD
	bool

config HAVE_HW_BRANCH_TRACER
	bool

config HAVE_FTRACE_SYSCALLS
	bool

config TRACER_MAX_TRACE
	bool

config RING_BUFFER
	bool

config FTRACE_NMI_ENTER
	bool
	depends on HAVE_FTRACE_NMI_ENTER
	default y

config TRACING
	bool
	select DEBUG_FS
	select RING_BUFFER
	select STACKTRACE if STACKTRACE_SUPPORT
	select TRACEPOINTS
	select NOP_TRACER
	select BINARY_PRINTF

#
# Minimum requirements an architecture has to meet for us to
# be able to offer generic tracing facilities:
#
config TRACING_SUPPORT
	bool
	# PPC32 has no irqflags tracing support, but it can use most of the
	# tracers anyway, they were tested to build and work. Note that new
	# exceptions to this list aren't welcome; better to implement
	# irqflags tracing for your architecture.
	depends on TRACE_IRQFLAGS_SUPPORT || PPC32
	depends on STACKTRACE_SUPPORT
	default y

if TRACING_SUPPORT

menu "Tracers"

config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	select FRAME_POINTER
	select KALLSYMS
	select TRACING
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function; this NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If it is runtime disabled
	  (the bootup default), the overhead of the instructions is very
	  small and not measurable even in micro-benchmarks.
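
# Illustrative usage sketch (not part of the original help text): the function
# tracer is driven through the debugfs tracing directory. The paths below
# assume debugfs is mounted at /sys/kernel/debug; older help text in this
# file refers to the same files as /debugfs/tracing or /debug/tracing.
#
#   mount -t debugfs nodev /sys/kernel/debug
#   echo function > /sys/kernel/debug/tracing/current_tracer
#   cat /sys/kernel/debug/tracing/trace | head
#   echo nop > /sys/kernel/debug/tracing/current_tracer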

config FUNCTION_GRAPH_TRACER
	bool "Kernel Function Graph Tracer"
	depends on HAVE_FUNCTION_GRAPH_TRACER
	depends on FUNCTION_TRACER
	default y
	help
	  Enable the kernel to trace a function at both its return
	  and its entry.
	  Its primary purpose is to trace the duration of functions and
	  draw a call graph for each thread, with some information such as
	  the return value.
	  This is done by setting the current return address on the current
	  task structure into a stack of calls.
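
# Illustrative sketch (an addition, not upstream help text): the graph tracer
# is selected the same way as the plain function tracer, via current_tracer,
# assuming debugfs is mounted at /sys/kernel/debug:
#
#   echo function_graph > /sys/kernel/debug/tracing/current_tracer
#   cat /sys/kernel/debug/tracing/trace | head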

config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	default n
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on GENERIC_TIME
	select TRACE_IRQFLAGS
	select TRACING
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)
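
# Illustrative sketch of the maximum-latency workflow described above, using
# the /sys/kernel/debug mount point rather than the /debugfs spelling in the
# help text:
#
#   echo irqsoff > /sys/kernel/debug/tracing/current_tracer
#   echo 0 > /sys/kernel/debug/tracing/tracing_max_latency   # reset the recorded maximum
#   # ... run the workload of interest ...
#   cat /sys/kernel/debug/tracing/tracing_max_latency        # worst irqs-off time, in usecs
#   cat /sys/kernel/debug/tracing/trace                      # the trace of that worst section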

config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	default n
	depends on GENERIC_TIME
	depends on PREEMPT
	select TRACING
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)

config SYSPROF_TRACER
	bool "Sysprof Tracer"
	depends on X86
	select TRACING
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer provides the trace needed by the 'Sysprof' userspace
	  tool.

config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	select TRACING
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.

config CONTEXT_SWITCH_TRACER
	bool "Trace process context switches"
	select TRACING
	select MARKERS
	help
	  This tracer gets called from the context switch and records
	  all switching of tasks.

config EVENT_TRACER
	bool "Trace various events in the kernel"
	select TRACING
	help
	  This tracer hooks into various trace points in the kernel,
	  allowing the user to pick and choose which trace points they
	  want to trace.
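
# Illustrative sketch (an assumption about the runtime interface, which may
# vary between kernel versions): individual trace points are listed and
# enabled through the event files in the tracing debugfs directory:
#
#   cat /sys/kernel/debug/tracing/available_events | head
#   echo sched_switch > /sys/kernel/debug/tracing/set_event
#   cat /sys/kernel/debug/tracing/trace_pipe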

config FTRACE_SYSCALLS
	bool "Trace syscalls"
	depends on HAVE_FTRACE_SYSCALLS
	select TRACING
	select KALLSYMS
	help
	  Basic tracer to catch the syscall entry and exit events.

config BOOT_TRACER
	bool "Trace boot initcalls"
	select TRACING
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer helps developers to optimize boot times: it records
	  the timings of the initcalls and traces key events and the identity
	  of tasks that can cause boot delays, such as context-switches.

	  Its aim is to be parsed by the scripts/bootgraph.pl tool to
	  produce pretty graphics about boot inefficiencies, giving a visual
	  representation of the delays during initcalls - but the raw
	  /debug/tracing/trace text output is readable too.

	  You must pass in ftrace=initcall to the kernel command line
	  to enable this on bootup.
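
# Illustrative sketch (an addition; the exact bootgraph.pl invocation is not
# specified here, see the script itself for its expected input): boot with
# the parameter named above, then read the trace after boot:
#
#   # kernel command line: ... ftrace=initcall
#   cat /sys/kernel/debug/tracing/trace > boot.trace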

config TRACE_BRANCH_PROFILING
	bool "Trace likely/unlikely profiler"
	select TRACING
	help
	  This tracer profiles all the likely and unlikely macros
	  in the kernel. It will display the results in:

	  /debugfs/tracing/profile_annotated_branch

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.

	  Say N if unsure.

config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	depends on TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  taken in the kernel is recorded whether it hit or missed.
	  The results will be displayed in:

	  /debugfs/tracing/profile_branch

	  This configuration, when enabled, will impose a great overhead
	  on the system. This should only be enabled when the system
	  is to be analyzed.

	  Say N if unsure.

config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likelys and unlikelys are not being traced.

config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.

	  Say N if unsure.

config POWER_TRACER
	bool "Trace power consumption behavior"
	depends on X86
	select TRACING
	help
	  This tracer helps developers to analyze and optimize the kernel's
	  power management decisions, specifically the C-state and P-state
	  behavior.

config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	select FUNCTION_TRACER
	select STACKTRACE
	select KALLSYMS
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in debugfs/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. If this is configured with DYNAMIC_FTRACE
	  then it will not have any overhead while the stack tracer
	  is disabled.

	  To enable the stack tracer on bootup, pass in 'stacktrace'
	  on the kernel command line.

	  The stack tracer can also be enabled or disabled via the
	  sysctl kernel.stack_tracer_enabled

	  Say N if unsure.
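
# Illustrative sketch of the runtime controls named above (paths assume
# debugfs mounted at /sys/kernel/debug and the usual procfs sysctl layout):
#
#   echo 1 > /proc/sys/kernel/stack_tracer_enabled   # or: sysctl kernel.stack_tracer_enabled=1
#   cat /sys/kernel/debug/tracing/stack_trace         # deepest stack usage recorded so far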

config HW_BRANCH_TRACER
	depends on HAVE_HW_BRANCH_TRACER
	bool "Trace hw branches"
	select TRACING
	help
	  This tracer records all branches on the system in a circular
	  buffer, giving access to the last N branches for each cpu.

config KMEMTRACE
	bool "Trace SLAB allocations"
	select TRACING
	help
	  kmemtrace provides tracing for slab allocator functions, such as
	  kmalloc, kfree, kmem_cache_alloc, kmem_cache_free, etc. Collected
	  data is then fed to the userspace application in order to analyse
	  allocation hotspots, internal fragmentation and so on, making it
	  possible to see how well an allocator performs, as well as debug
	  and profile kernel code.

	  This requires a userspace application to use. See
	  Documentation/vm/kmemtrace.txt for more information.

	  Saying Y will make the kernel somewhat larger and slower. However,
	  if you disable kmemtrace at run-time or boot-time, the performance
	  impact is minimal (depending on the arch the kernel is built for).

	  If unsure, say N.

config WORKQUEUE_TRACER
	bool "Trace workqueues"
	select TRACING
	help
	  The workqueue tracer provides some statistical information
	  about each cpu workqueue thread, such as the number of works
	  inserted and executed since its creation. It can help
	  to evaluate the amount of work each of them has to perform.
	  For example, it can help a developer to decide whether to
	  choose a per-cpu workqueue instead of a singlethreaded one.

config BLK_DEV_IO_TRACE
	bool "Support for tracing block io actions"
	depends on SYSFS
	depends on BLOCK
	select RELAY
	select DEBUG_FS
	select TRACEPOINTS
	select TRACING
	select STACKTRACE
	help
	  Say Y here if you want to be able to trace the block layer actions
	  on a given queue. Tracing allows you to see any traffic happening
	  on a block device queue. For more information (and the userspace
	  support tools needed), fetch the blktrace tools from:

	  git://git.kernel.dk/blktrace.git

	  Tracing also is possible using the ftrace interface, e.g.:

	    echo 1 > /sys/block/sda/sda1/trace/enable
	    echo blk > /sys/kernel/debug/tracing/current_tracer
	    cat /sys/kernel/debug/tracing/trace_pipe

	  If unsure, say N.

config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	default y
	help
	  This option will modify all the calls to ftrace dynamically
	  (will patch them out of the binary image and replace them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
	  otherwise has native performance as long as no tracing is active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (stops all CPUs)
	  and modifies the code to jump over the call to ftrace.
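
# Illustrative sketch (an assumption about this kernel's debugfs layout;
# file names may vary between versions): with dynamic ftrace, the set of
# traced call sites can be narrowed at runtime through the filter files:
#
#   cat /sys/kernel/debug/tracing/available_filter_functions | head
#   echo 'schedule*' > /sys/kernel/debug/tracing/set_ftrace_filter
#   echo function > /sys/kernel/debug/tracing/current_tracer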

config FUNCTION_PROFILER
	bool "Kernel function profiler"
	depends on DYNAMIC_FTRACE
	default n
	help
	  This option enables the kernel function profiler. When the dynamic
	  function tracing is enabled, a counter is added into the function
	  records used by the dynamic function tracer. A file is created in
	  debugfs called function_profile_enabled which defaults to zero.
	  When a 1 is echoed into this file profiling begins, and when a
	  zero is entered, profiling stops. A file in the trace_stats
	  directory called functions shows the list of functions that
	  have been hit, along with their counters.

	  This takes up around 320K more memory.

	  If in doubt, say N.
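
# Illustrative sketch of the profiling flow described above (the trace_stats
# path follows the help text; the exact file layout may differ between
# kernel versions):
#
#   echo 1 > /sys/kernel/debug/tracing/function_profile_enabled
#   # ... run the workload of interest ...
#   echo 0 > /sys/kernel/debug/tracing/function_profile_enabled
#   cat /sys/kernel/debug/tracing/trace_stats/functions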

config FTRACE_MCOUNT_RECORD
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_SELFTEST
	bool

config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on TRACING
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On bootup
	  a series of tests are made to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.

config MMIOTRACE
	bool "Memory mapped IO tracing"
	depends on HAVE_MMIOTRACE_SUPPORT && PCI
	select TRACING
	help
	  Mmiotrace traces Memory Mapped I/O access and is meant for
	  debugging and reverse engineering. It is called from the ioremap
	  implementation and works via page faults. Tracing is disabled by
	  default and can be enabled at run-time.

	  See Documentation/tracers/mmiotrace.txt.
	  If you are not helping to develop drivers, say N.

config MMIOTRACE_TEST
	tristate "Test module for mmiotrace"
	depends on MMIOTRACE && m
	help
	  This is a dumb module for testing mmiotrace. It is very dangerous
	  as it will write garbage to IO memory starting at a given address.
	  However, it should be safe to use on e.g. unused portion of VRAM.

	  Say N, unless you absolutely know what you are doing.

endmenu

endif # TRACING_SUPPORT