#
# Architectures that offer a FUNCTION_TRACER implementation should
# select HAVE_FUNCTION_TRACER:
#

config USER_STACKTRACE_SUPPORT
	bool

config NOP_TRACER
	bool

config HAVE_FTRACE_NMI_ENTER
	bool

config HAVE_FUNCTION_TRACER
	bool

config HAVE_FUNCTION_GRAPH_TRACER
	bool

config HAVE_FUNCTION_GRAPH_FP_TEST
	bool
	help
	  An arch may pass in a unique value (frame pointer) to both the
	  entering and exiting of a function. On exit, the value is compared
	  and if it does not match, then it will panic the kernel.

config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	bool
	help
	  This gets selected when the arch tests the function_trace_stop
	  variable at the mcount call site. Otherwise, this variable
	  is tested by the called function.

config HAVE_DYNAMIC_FTRACE
	bool

config HAVE_FTRACE_MCOUNT_RECORD
	bool

config HAVE_HW_BRANCH_TRACER
	bool

config HAVE_FTRACE_SYSCALLS
	bool

config TRACER_MAX_TRACE
	bool

config RING_BUFFER
	bool

config FTRACE_NMI_ENTER
	bool
	depends on HAVE_FTRACE_NMI_ENTER
	default y

config EVENT_TRACING
	select CONTEXT_SWITCH_TRACER
	bool

config CONTEXT_SWITCH_TRACER
	select MARKERS
	bool

# All tracer options should select GENERIC_TRACER. The options that are
# enabled by all tracers (context switch and event tracer) select TRACING
# instead. This allows those options to appear when no other tracer is
# selected, but not to appear when something else selects them. We need the
# two options GENERIC_TRACER and TRACING to avoid circular dependencies and
# to accomplish the hiding of the automatic options.

config TRACING
	bool
	select DEBUG_FS
	select RING_BUFFER
	select STACKTRACE if STACKTRACE_SUPPORT
	select TRACEPOINTS
	select NOP_TRACER
	select BINARY_PRINTF
	select EVENT_TRACING

config GENERIC_TRACER
	bool
	select TRACING

#
# Minimum requirements an architecture has to meet for us to
# be able to offer generic tracing facilities:
#
config TRACING_SUPPORT
	bool
	# PPC32 has no irqflags tracing support, but it can use most of the
	# tracers anyway, as they were tested to build and work. Note that new
	# exceptions to this list are not welcome; better to implement
	# irqflags tracing for your architecture.
	depends on TRACE_IRQFLAGS_SUPPORT || PPC32
	depends on STACKTRACE_SUPPORT
	default y

if TRACING_SUPPORT

menuconfig FTRACE
	bool "Tracers"
	default y if DEBUG_KERNEL
	help
	  Enable the kernel tracing infrastructure.

if FTRACE

config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	select FRAME_POINTER
	select KALLSYMS
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function. This NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If it's disabled at
	  runtime (the bootup default), then the overhead of the
	  instructions is very small and not measurable even in
	  micro-benchmarks.

config FUNCTION_GRAPH_TRACER
	bool "Kernel Function Graph Tracer"
	depends on HAVE_FUNCTION_GRAPH_TRACER
	depends on FUNCTION_TRACER
	depends on !X86_32 || !CC_OPTIMIZE_FOR_SIZE
	default y
	help
	  Enable the kernel to trace a function at both its entry
	  and its return.
	  Its first purpose is to trace the duration of functions and
	  to draw a call graph for each thread, with some information
	  such as the return value. This is done by saving the current
	  return address onto a stack of calls kept in the current
	  task structure.

config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	default n
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on GENERIC_TIME
	select TRACE_IRQFLAGS
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)

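# Usage sketch for the measurement above, assuming debugfs is mounted at
# /sys/kernel/debug and the tracer registers under the name "irqsoff":
#
#	echo irqsoff > /sys/kernel/debug/tracing/current_tracer
#	cat /sys/kernel/debug/tracing/tracing_max_latency
#
# Writing 0 to tracing_max_latency (as shown in the help text) resets
# the maximum and restarts the search.
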
config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	default n
	depends on GENERIC_TIME
	depends on PREEMPT
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)

config SYSPROF_TRACER
	bool "Sysprof Tracer"
	depends on X86
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer provides the trace needed by the 'Sysprof' userspace
	  tool.

config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.

config ENABLE_DEFAULT_TRACERS
	bool "Trace process context switches and events"
	depends on !GENERIC_TRACER
	select TRACING
	help
	  This tracer hooks into various trace points in the kernel,
	  allowing the user to pick and choose which trace points they
	  want to trace. It also includes the sched_switch tracer plugin.

config FTRACE_SYSCALLS
	bool "Trace syscalls"
	depends on HAVE_FTRACE_SYSCALLS
	select GENERIC_TRACER
	select KALLSYMS
	help
	  Basic tracer to catch the syscall entry and exit events.

config BOOT_TRACER
	bool "Trace boot initcalls"
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer helps developers to optimize boot times: it records
	  the timings of the initcalls and traces key events and the identity
	  of tasks that can cause boot delays, such as context-switches.

	  Its aim is to be parsed by the scripts/bootgraph.pl tool to
	  produce pretty graphics about boot inefficiencies, giving a visual
	  representation of the delays during initcalls - but the raw
	  /debug/tracing/trace text output is readable too.

	  You must pass in initcall_debug and ftrace=initcall to the kernel
	  command line to enable this on bootup.

config TRACE_BRANCH_PROFILING
	bool
	select GENERIC_TRACER

choice
	prompt "Branch Profiling"
	default BRANCH_PROFILE_NONE
	help
	  Branch profiling is a software profiler. It will add hooks
	  into the C conditionals to test which path a branch takes.

	  The likely/unlikely profiler only looks at the conditions that
	  are annotated with a likely or unlikely macro.

	  The "all branch" profiler will profile every if statement in the
	  kernel. This profiler will also enable the likely/unlikely
	  profiler.

	  Either of the above profilers adds a bit of overhead to the system.
	  If unsure, choose "No branch profiling".

config BRANCH_PROFILE_NONE
	bool "No branch profiling"
	help
	  No branch profiling. Branch profiling adds a bit of overhead.
	  Only enable it if you want to analyse the branching behavior.
	  Otherwise keep it disabled.

config PROFILE_ANNOTATED_BRANCHES
	bool "Trace likely/unlikely profiler"
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all the likely and unlikely macros
	  in the kernel. It will display the results in:

	      /sys/kernel/debug/tracing/profile_annotated_branch

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.

config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  taken in the kernel is recorded, whether it was a hit or a miss.
	  The results will be displayed in:

	      /sys/kernel/debug/tracing/profile_branch

	  This option also enables the likely/unlikely profiler.

	  This configuration, when enabled, will impose a great overhead
	  on the system. It should only be enabled when the system
	  is to be analyzed.

endchoice

config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likely and unlikely conditions are not being traced.

config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.

	  Say N if unsure.

config POWER_TRACER
	bool "Trace power consumption behavior"
	depends on X86
	select GENERIC_TRACER
	help
	  This tracer helps developers to analyze and optimize the kernel's
	  power management decisions, specifically the C-state and P-state
	  behavior.

config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	select FUNCTION_TRACER
	select STACKTRACE
	select KALLSYMS
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in /sys/kernel/debug/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. If this is configured with DYNAMIC_FTRACE
	  then it will not have any overhead while the stack tracer
	  is disabled.

	  To enable the stack tracer on bootup, pass in 'stacktrace'
	  on the kernel command line.

	  The stack tracer can also be enabled or disabled via the
	  sysctl kernel.stack_tracer_enabled.

	  Say N if unsure.

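# Usage sketch for the runtime switch mentioned in the help text above
# (assumes debugfs is mounted at /sys/kernel/debug):
#
#	sysctl kernel.stack_tracer_enabled=1
#	cat /sys/kernel/debug/tracing/stack_trace
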
config HW_BRANCH_TRACER
	depends on HAVE_HW_BRANCH_TRACER
	bool "Trace hw branches"
	select GENERIC_TRACER
	help
	  This tracer records all branches on the system in a circular
	  buffer, giving access to the last N branches for each CPU.

config KMEMTRACE
	bool "Trace SLAB allocations"
	select GENERIC_TRACER
	help
	  kmemtrace provides tracing for slab allocator functions, such as
	  kmalloc, kfree, kmem_cache_alloc, kmem_cache_free, etc. Collected
	  data is then fed to the userspace application in order to analyse
	  allocation hotspots, internal fragmentation and so on, making it
	  possible to see how well an allocator performs, as well as debug
	  and profile kernel code.

	  This requires a userspace application to use. See
	  Documentation/trace/kmemtrace.txt for more information.

	  Saying Y will make the kernel somewhat larger and slower. However,
	  if you disable kmemtrace at run-time or boot-time, the performance
	  impact is minimal (depending on the arch the kernel is built for).

	  If unsure, say N.

config WORKQUEUE_TRACER
	bool "Trace workqueues"
	select GENERIC_TRACER
	help
	  The workqueue tracer provides some statistical information
	  about each per-CPU workqueue thread, such as the number of
	  works inserted and executed since its creation. It can help
	  to evaluate the amount of work each of them has to perform.
	  For example, it can help a developer to decide whether to
	  choose a per-CPU workqueue instead of a singlethreaded one.

config BLK_DEV_IO_TRACE
	bool "Support for tracing block io actions"
	depends on SYSFS
	depends on BLOCK
	select RELAY
	select DEBUG_FS
	select TRACEPOINTS
	select GENERIC_TRACER
	select STACKTRACE
	help
	  Say Y here if you want to be able to trace the block layer actions
	  on a given queue. Tracing allows you to see any traffic happening
	  on a block device queue. For more information (and the userspace
	  support tools needed), fetch the blktrace tools from:

	      git://git.kernel.dk/blktrace.git

	  Tracing is also possible using the ftrace interface, e.g.:

	      echo 1 > /sys/block/sda/sda1/trace/enable
	      echo blk > /sys/kernel/debug/tracing/current_tracer
	      cat /sys/kernel/debug/tracing/trace_pipe

	  If unsure, say N.

config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	default y
	help
	  This option will modify all the calls to ftrace dynamically
	  (it will patch them out of the binary image and replace them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
	  otherwise has native performance as long as no tracing is active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (stops all CPUs)
	  and modifies the code to jump over the call to ftrace.

config FUNCTION_PROFILER
	bool "Kernel function profiler"
	depends on FUNCTION_TRACER
	default n
	help
	  This option enables the kernel function profiler. A file is created
	  in debugfs called function_profile_enabled which defaults to zero.
	  When a 1 is echoed into this file, profiling begins, and when a
	  zero is entered, profiling stops. A file in the trace_stats
	  directory called functions shows the list of functions that
	  have been hit and their counters.

	  If in doubt, say N.

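# Usage sketch for the interface described above (assumes debugfs is
# mounted at /sys/kernel/debug):
#
#	echo 1 > /sys/kernel/debug/tracing/function_profile_enabled
#
# then read the per-function hit counts back from the functions file
# in the trace_stats directory.
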
config FTRACE_MCOUNT_RECORD
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_SELFTEST
	bool

config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on GENERIC_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On
	  bootup, a series of tests are made to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.

config MMIOTRACE
	bool "Memory mapped IO tracing"
	depends on HAVE_MMIOTRACE_SUPPORT && PCI
	select GENERIC_TRACER
	help
	  Mmiotrace traces Memory Mapped I/O access and is meant for
	  debugging and reverse engineering. It is called from the ioremap
	  implementation and works via page faults. Tracing is disabled by
	  default and can be enabled at run-time.

	  See Documentation/trace/mmiotrace.txt.
	  If you are not helping to develop drivers, say N.

config MMIOTRACE_TEST
	tristate "Test module for mmiotrace"
	depends on MMIOTRACE && m
	help
	  This is a dumb module for testing mmiotrace. It is very dangerous
	  as it will write garbage to IO memory starting at a given address.
	  However, it should be safe to use on e.g. an unused portion of VRAM.

	  Say N, unless you absolutely know what you are doing.

config RING_BUFFER_BENCHMARK
	tristate "Ring buffer benchmark stress tester"
	depends on RING_BUFFER
	help
	  This option creates a test to stress the ring buffer and benchmark
	  it. It creates its own ring buffer such that it will not interfere
	  with any other users of the ring buffer (such as ftrace). It then
	  creates a producer and consumer that will run for 10 seconds and
	  sleep for 10 seconds. Each interval it will print out the number
	  of events it recorded and give a rough estimate of how long each
	  iteration took.

	  It does not disable interrupts or raise its priority, so it may be
	  affected by processes that are running.

	  If unsure, say N.

endif # FTRACE

endif # TRACING_SUPPORT