
#
# Architectures that offer a FUNCTION_TRACER implementation should
# select HAVE_FUNCTION_TRACER:
#

config USER_STACKTRACE_SUPPORT
	bool
config NOP_TRACER
	bool

config HAVE_FTRACE_NMI_ENTER
	bool

config HAVE_FUNCTION_TRACER
	bool

config HAVE_FUNCTION_GRAPH_TRACER
	bool
config HAVE_FUNCTION_GRAPH_FP_TEST
	bool
	help
	  An arch may pass in a unique value (frame pointer) to both the
	  entering and exiting of a function. On exit, the value is compared
	  and if it does not match, then it will panic the kernel.
config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	bool
	help
	  This gets selected when the arch tests the function_trace_stop
	  variable at the mcount call site. Otherwise, this variable
	  is tested by the called function.
config HAVE_DYNAMIC_FTRACE
	bool

config HAVE_FTRACE_MCOUNT_RECORD
	bool

config HAVE_HW_BRANCH_TRACER
	bool

config HAVE_SYSCALL_TRACEPOINTS
	bool

config TRACER_MAX_TRACE
	bool

config RING_BUFFER
	bool

config FTRACE_NMI_ENTER
	bool
	depends on HAVE_FTRACE_NMI_ENTER
	default y
config EVENT_TRACING
	select CONTEXT_SWITCH_TRACER
	bool

config CONTEXT_SWITCH_TRACER
	bool
# All tracer options should select GENERIC_TRACER. Options that are enabled
# by all tracers (context switch and event tracer) select TRACING instead.
# This allows those options to appear when no other tracer is selected, but
# not to appear when something else selects them. We need the two options
# GENERIC_TRACER and TRACING to avoid circular dependencies while hiding
# the automatic options.
config TRACING
	bool
	select DEBUG_FS
	select RING_BUFFER
	select STACKTRACE if STACKTRACE_SUPPORT
	select TRACEPOINTS
	select NOP_TRACER
	select BINARY_PRINTF
	select EVENT_TRACING

config GENERIC_TRACER
	bool
	select TRACING
#
# Minimum requirements an architecture has to meet for us to
# be able to offer generic tracing facilities:
#
config TRACING_SUPPORT
	bool
	# PPC32 has no irqflags tracing support, but it can use most of the
	# tracers anyway, as they were tested to build and work. Note that new
	# exceptions to this list aren't welcome; better to implement
	# irqflags tracing for your architecture.
	depends on TRACE_IRQFLAGS_SUPPORT || PPC32
	depends on STACKTRACE_SUPPORT
	default y
if TRACING_SUPPORT

menuconfig FTRACE
	bool "Tracers"
	default y if DEBUG_KERNEL
	help
	  Enable the kernel tracing infrastructure.

if FTRACE
config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	select FRAME_POINTER
	select KALLSYMS
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function; this NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If it's runtime disabled
	  (the bootup default), then the overhead of the instructions is very
	  small and not measurable even in micro-benchmarks.
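# A run-time usage sketch for the function tracer (an illustrative
# addition, not original Kconfig text; paths assume debugfs is mounted
# at /sys/kernel/debug):
#
#	echo function > /sys/kernel/debug/tracing/current_tracer
#	cat /sys/kernel/debug/tracing/trace
#	echo nop > /sys/kernel/debug/tracing/current_tracer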
config FUNCTION_GRAPH_TRACER
	bool "Kernel Function Graph Tracer"
	depends on HAVE_FUNCTION_GRAPH_TRACER
	depends on FUNCTION_TRACER
	depends on !X86_32 || !CC_OPTIMIZE_FOR_SIZE
	default y
	help
	  Enable the kernel to trace a function at both its return
	  and its entry.
	  Its first purpose is to trace the duration of functions and
	  draw a call graph for each thread with some information like
	  the return value. This is done by setting the current return
	  address on the current task structure into a stack of calls.
config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	default n
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on GENERIC_TIME
	select TRACE_IRQFLAGS
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started
	  via:

	      echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)
config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	default n
	depends on GENERIC_TIME
	depends on PREEMPT
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started
	  via:

	      echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)
config SYSPROF_TRACER
	bool "Sysprof Tracer"
	depends on X86
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer provides the trace needed by the 'Sysprof' userspace
	  tool.
config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.
config ENABLE_DEFAULT_TRACERS
	bool "Trace process context switches and events"
	depends on !GENERIC_TRACER
	select TRACING
	help
	  This tracer hooks into various tracepoints in the kernel,
	  allowing the user to pick and choose which tracepoints they
	  want to trace. It also includes the sched_switch tracer plugin.
config FTRACE_SYSCALLS
	bool "Trace syscalls"
	depends on HAVE_SYSCALL_TRACEPOINTS
	select GENERIC_TRACER
	select KALLSYMS
	help
	  Basic tracer to catch the syscall entry and exit events.
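# A usage sketch (an illustrative addition, not original Kconfig text):
# with FTRACE_SYSCALLS enabled, syscall entry/exit events can be turned
# on through the event tracing files (debugfs assumed mounted at
# /sys/kernel/debug):
#
#	echo 1 > /sys/kernel/debug/tracing/events/syscalls/enable
#	cat /sys/kernel/debug/tracing/trace_pipe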
config BOOT_TRACER
	bool "Trace boot initcalls"
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer helps developers to optimize boot times: it records
	  the timings of the initcalls and traces key events and the identity
	  of tasks that can cause boot delays, such as context-switches.

	  Its aim is to be parsed by the scripts/bootgraph.pl tool to
	  produce pretty graphics about boot inefficiencies, giving a visual
	  representation of the delays during initcalls - but the raw
	  /debug/tracing/trace text output is readable too.

	  You must pass "initcall_debug" and "ftrace=initcall" on the kernel
	  command line to enable this on bootup.
config TRACE_BRANCH_PROFILING
	bool
	select GENERIC_TRACER

choice
	prompt "Branch Profiling"
	default BRANCH_PROFILE_NONE
	help
	  Branch profiling is a software profiler. It adds hooks into
	  the C conditionals to test which path a branch takes.

	  The likely/unlikely profiler only looks at the conditions that
	  are annotated with a likely or unlikely macro.

	  The "all branch" profiler will profile every if statement in the
	  kernel. This profiler will also enable the likely/unlikely
	  profiler.

	  Either of the above profilers adds a bit of overhead to the system.

	  If unsure, choose "No branch profiling".
config BRANCH_PROFILE_NONE
	bool "No branch profiling"
	help
	  No branch profiling. Branch profiling adds a bit of overhead.
	  Only enable it if you want to analyse the branching behavior.
	  Otherwise keep it disabled.
config PROFILE_ANNOTATED_BRANCHES
	bool "Trace likely/unlikely profiler"
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all the likely and unlikely macros
	  in the kernel. It will display the results in:

	      /sys/kernel/debug/tracing/profile_annotated_branch

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.
config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  taken in the kernel is recorded, whether it hit or missed.
	  The results will be displayed in:

	      /sys/kernel/debug/tracing/profile_branch

	  This option also enables the likely/unlikely profiler.

	  This configuration, when enabled, will impose a great overhead
	  on the system. This should only be enabled when the system
	  is to be analyzed.

endchoice
config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likely and unlikely conditions are not being traced.
config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.

	  Say N if unsure.
config POWER_TRACER
	bool "Trace power consumption behavior"
	depends on X86
	select GENERIC_TRACER
	help
	  This tracer helps developers to analyze and optimize the kernel's
	  power management decisions, specifically the C-state and P-state
	  behavior.
config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	select FUNCTION_TRACER
	select STACKTRACE
	select KALLSYMS
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in /sys/kernel/debug/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. If this is configured with DYNAMIC_FTRACE
	  then it will not have any overhead while the stack tracer
	  is disabled.

	  To enable the stack tracer on bootup, pass in 'stacktrace'
	  on the kernel command line.

	  The stack tracer can also be enabled or disabled via the
	  sysctl kernel.stack_tracer_enabled.

	  Say N if unsure.
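# A usage sketch for the stack tracer described above (an illustrative
# addition, not original Kconfig text; debugfs assumed mounted at
# /sys/kernel/debug):
#
#	sysctl kernel.stack_tracer_enabled=1
#	cat /sys/kernel/debug/tracing/stack_trace
#	sysctl kernel.stack_tracer_enabled=0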
config HW_BRANCH_TRACER
	depends on HAVE_HW_BRANCH_TRACER
	bool "Trace hw branches"
	select GENERIC_TRACER
	help
	  This tracer records all branches on the system in a circular
	  buffer, giving access to the last N branches for each cpu.
config KMEMTRACE
	bool "Trace SLAB allocations"
	select GENERIC_TRACER
	help
	  kmemtrace provides tracing for slab allocator functions, such as
	  kmalloc, kfree, kmem_cache_alloc, kmem_cache_free, etc. Collected
	  data is then fed to the userspace application in order to analyse
	  allocation hotspots, internal fragmentation and so on, making it
	  possible to see how well an allocator performs, as well as debug
	  and profile kernel code.

	  This requires a userspace application to use. See
	  Documentation/trace/kmemtrace.txt for more information.

	  Saying Y will make the kernel somewhat larger and slower. However,
	  if you disable kmemtrace at run-time or boot-time, the performance
	  impact is minimal (depending on the arch the kernel is built for).

	  If unsure, say N.
config WORKQUEUE_TRACER
	bool "Trace workqueues"
	select GENERIC_TRACER
	help
	  The workqueue tracer provides some statistical information
	  about each cpu workqueue thread, such as the number of
	  work items inserted and executed since its creation. It can help
	  to evaluate the amount of work each of them has to perform.
	  For example, it can help a developer to decide whether to
	  choose a per-cpu workqueue instead of a singlethreaded one.
config BLK_DEV_IO_TRACE
	bool "Support for tracing block io actions"
	depends on SYSFS
	depends on BLOCK
	select RELAY
	select DEBUG_FS
	select TRACEPOINTS
	select GENERIC_TRACER
	select STACKTRACE
	help
	  Say Y here if you want to be able to trace the block layer actions
	  on a given queue. Tracing allows you to see any traffic happening
	  on a block device queue. For more information (and the userspace
	  support tools needed), fetch the blktrace tools from:

	      git://git.kernel.dk/blktrace.git

	  Tracing is also possible using the ftrace interface, e.g.:

	      echo 1 > /sys/block/sda/sda1/trace/enable
	      echo blk > /sys/kernel/debug/tracing/current_tracer
	      cat /sys/kernel/debug/tracing/trace_pipe

	  If unsure, say N.
config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	default y
	help
	  This option will modify all the calls to ftrace dynamically
	  (it will patch them out of the binary image and replace them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
	  otherwise has native performance as long as no tracing is active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (stops all CPUs)
	  and modifies the code to jump over the call to ftrace.
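# With DYNAMIC_FTRACE, tracing can also be limited to chosen functions
# through the filter files (an illustrative sketch, not original Kconfig
# text; debugfs assumed mounted at /sys/kernel/debug):
#
#	echo 'schedule*' > /sys/kernel/debug/tracing/set_ftrace_filter
#	echo function > /sys/kernel/debug/tracing/current_tracer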
config FUNCTION_PROFILER
	bool "Kernel function profiler"
	depends on FUNCTION_TRACER
	default n
	help
	  This option enables the kernel function profiler. A file is created
	  in debugfs called function_profile_enabled, which defaults to zero.
	  When a 1 is echoed into this file, profiling begins; when a
	  zero is entered, profiling stops. A file in the trace_stats
	  directory called functions shows the list of functions that
	  have been hit and their counters.

	  If in doubt, say N.
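# A usage sketch for the function profiler (an illustrative addition,
# not original Kconfig text; debugfs assumed mounted at /sys/kernel/debug):
#
#	echo 1 > /sys/kernel/debug/tracing/function_profile_enabled
#	# ... run the workload to be profiled ...
#	echo 0 > /sys/kernel/debug/tracing/function_profile_enabled
#	# then read the per-function hit counters from the trace_stats files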
config FTRACE_MCOUNT_RECORD
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_SELFTEST
	bool
config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on GENERIC_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On bootup
	  a series of tests are made to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.
config MMIOTRACE
	bool "Memory mapped IO tracing"
	depends on HAVE_MMIOTRACE_SUPPORT && PCI
	select GENERIC_TRACER
	help
	  Mmiotrace traces Memory Mapped I/O access and is meant for
	  debugging and reverse engineering. It is called from the ioremap
	  implementation and works via page faults. Tracing is disabled by
	  default and can be enabled at run-time.

	  See Documentation/trace/mmiotrace.txt.
	  If you are not helping to develop drivers, say N.
config MMIOTRACE_TEST
	tristate "Test module for mmiotrace"
	depends on MMIOTRACE && m
	help
	  This is a dumb module for testing mmiotrace. It is very dangerous
	  as it will write garbage to IO memory starting at a given address.
	  However, it should be safe to use on, e.g., an unused portion of VRAM.

	  Say N, unless you absolutely know what you are doing.
config RING_BUFFER_BENCHMARK
	tristate "Ring buffer benchmark stress tester"
	depends on RING_BUFFER
	help
	  This option creates a test to stress the ring buffer and benchmark it.
	  It creates its own ring buffer such that it will not interfere with
	  any other users of the ring buffer (such as ftrace). It then creates
	  a producer and consumer that will run for 10 seconds and sleep for
	  10 seconds. Each interval it will print out the number of events
	  it recorded and give a rough estimate of how long each iteration took.

	  It does not disable interrupts or raise its priority, so it may be
	  affected by processes that are running.

	  If unsure, say N.
endif # FTRACE

endif # TRACING_SUPPORT