#
# Architectures that offer a FUNCTION_TRACER implementation should
#  select HAVE_FUNCTION_TRACER:
#
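
# Illustrative sketch (not part of this file): an architecture's own Kconfig
# would typically advertise its ftrace capabilities by selecting the HAVE_*
# symbols defined below. The MYARCH symbol name is hypothetical:
#
#	config MYARCH
#		bool
#		default y
#		select HAVE_FUNCTION_TRACER
#		select HAVE_DYNAMIC_FTRACE
#		select HAVE_FTRACE_MCOUNT_RECORD
#
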
config USER_STACKTRACE_SUPPORT
	bool

config NOP_TRACER
	bool

config HAVE_FTRACE_NMI_ENTER
	bool

config HAVE_FUNCTION_TRACER
	bool

config HAVE_FUNCTION_GRAPH_TRACER
	bool

config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	bool
	help
	  This gets selected when the arch tests the function_trace_stop
	  variable at the mcount call site. Otherwise, this variable
	  is tested by the called function.

config HAVE_DYNAMIC_FTRACE
	bool

config HAVE_FTRACE_MCOUNT_RECORD
	bool

config HAVE_HW_BRANCH_TRACER
	bool

config HAVE_FTRACE_SYSCALLS
	bool

config TRACER_MAX_TRACE
	bool

config RING_BUFFER
	bool

config FTRACE_NMI_ENTER
	bool
	depends on HAVE_FTRACE_NMI_ENTER
	default y

config EVENT_TRACING
	select CONTEXT_SWITCH_TRACER
	bool

config CONTEXT_SWITCH_TRACER
	select MARKERS
	bool
# All tracer options should select GENERIC_TRACER. The options that are
# enabled by all tracers (the context switch and event tracers) select
# TRACING instead. This allows those options to appear when no other tracer
# is selected, but they do not appear when something else selects them. We
# need the two options GENERIC_TRACER and TRACING to avoid circular
# dependencies while still hiding the automatic options.
config TRACING
	bool
	select DEBUG_FS
	select RING_BUFFER
	select STACKTRACE if STACKTRACE_SUPPORT
	select TRACEPOINTS
	select NOP_TRACER
	select BINARY_PRINTF
	select EVENT_TRACING

config GENERIC_TRACER
	bool
	select TRACING
#
# Minimum requirements an architecture has to meet for us to
# be able to offer generic tracing facilities:
#
config TRACING_SUPPORT
	bool
	# PPC32 has no irqflags tracing support, but it can use most of the
	# tracers anyway; they were tested to build and work. Note that new
	# exceptions to this list aren't welcome; better to implement
	# irqflags tracing for your architecture.
	depends on TRACE_IRQFLAGS_SUPPORT || PPC32
	depends on STACKTRACE_SUPPORT
	default y
if TRACING_SUPPORT

menuconfig FTRACE
	bool "Tracers"
	default y if DEBUG_KERNEL
	help
	  Enable the kernel tracing infrastructure.

if FTRACE
config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	select FRAME_POINTER
	select KALLSYMS
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function; this NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If it's runtime disabled
	  (the bootup default), then the overhead of the instructions is very
	  small and not measurable even in micro-benchmarks.
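
# Illustrative usage (a sketch; the debugfs mount point may differ on your
# system): with FUNCTION_TRACER enabled, the function tracer can typically be
# turned on and off at runtime through the tracing debugfs directory:
#
#	echo function > /sys/kernel/debug/tracing/current_tracer
#	cat /sys/kernel/debug/tracing/trace
#	echo nop > /sys/kernel/debug/tracing/current_tracer
#
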
config FUNCTION_GRAPH_TRACER
	bool "Kernel Function Graph Tracer"
	depends on HAVE_FUNCTION_GRAPH_TRACER
	depends on FUNCTION_TRACER
	default y
	help
	  Enable the kernel to trace a function at both its return
	  and its entry.
	  Its first purpose is to trace the duration of functions and
	  draw a call graph for each thread with some information like
	  the return value. This is done by setting the current return
	  address on the current task structure into a stack of calls.
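
# Illustrative usage (sketch): the graph tracer is selected the same way as
# the plain function tracer, via current_tracer:
#
#	echo function_graph > /sys/kernel/debug/tracing/current_tracer
#	head /sys/kernel/debug/tracing/trace
#
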
config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	default n
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on GENERIC_TIME
	select TRACE_IRQFLAGS
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)
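
# Illustrative usage (sketch; the debugfs mount point may differ): enable the
# irqsoff tracer, reset the recorded maximum, then read back the worst-case
# irqs-off latency seen so far:
#
#	echo irqsoff > /sys/kernel/debug/tracing/current_tracer
#	echo 0 > /sys/kernel/debug/tracing/tracing_max_latency
#	cat /sys/kernel/debug/tracing/tracing_max_latency
#
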
config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	default n
	depends on GENERIC_TIME
	depends on PREEMPT
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)
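
# Illustrative usage (sketch; tracer names are the customary ones and may vary
# by kernel version): the preemption-off tracer is typically exposed as
# "preemptoff", and when IRQSOFF_TRACER is also enabled a combined
# "preemptirqsoff" tracer is usually available too:
#
#	echo preemptoff > /sys/kernel/debug/tracing/current_tracer
#
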
config SYSPROF_TRACER
	bool "Sysprof Tracer"
	depends on X86
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer provides the trace needed by the 'Sysprof' userspace
	  tool.

config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.
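
# Illustrative usage (sketch; the tracer name "wakeup" is the usual one but
# may differ by kernel version): the scheduling latency tracer reports its
# maximum observed wakeup latency via tracing_max_latency:
#
#	echo wakeup > /sys/kernel/debug/tracing/current_tracer
#	echo 0 > /sys/kernel/debug/tracing/tracing_max_latency
#	cat /sys/kernel/debug/tracing/tracing_max_latency
#
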
config ENABLE_CONTEXT_SWITCH_TRACER
	bool "Trace process context switches"
	depends on !GENERIC_TRACER
	select TRACING
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer gets called from the context switch and records
	  all switching of tasks.

config ENABLE_EVENT_TRACING
	bool "Trace various events in the kernel"
	depends on !GENERIC_TRACER
	select TRACING
	help
	  This tracer hooks into various trace points in the kernel,
	  allowing the user to pick and choose which trace points they
	  want to trace.

	  Note, all tracers enable event tracing. This option is
	  only a convenience to enable event tracing when no other
	  tracers are selected.
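
# Illustrative usage (sketch, assuming the usual event-tracing debugfs layout;
# sched_switch is just an example event): individual events can normally be
# enabled under the events/ directory or via set_event, and read back through
# trace_pipe:
#
#	echo 1 > /sys/kernel/debug/tracing/events/sched/sched_switch/enable
#	echo sched:sched_switch > /sys/kernel/debug/tracing/set_event
#	cat /sys/kernel/debug/tracing/trace_pipe
#
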
config FTRACE_SYSCALLS
	bool "Trace syscalls"
	depends on HAVE_FTRACE_SYSCALLS
	select GENERIC_TRACER
	select KALLSYMS
	help
	  Basic tracer to catch the syscall entry and exit events.
config BOOT_TRACER
	bool "Trace boot initcalls"
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer helps developers optimize boot times: it records
	  the timings of the initcalls and traces key events and the identity
	  of tasks that can cause boot delays, such as context switches.

	  Its aim is to be parsed by the scripts/bootgraph.pl tool to
	  produce pretty graphics about boot inefficiencies, giving a visual
	  representation of the delays during initcalls - but the raw
	  /debug/tracing/trace text output is readable too.

	  You must pass ftrace=initcall on the kernel command line
	  to enable this on bootup.
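
# Illustrative usage (sketch): boot with "ftrace=initcall" appended to the
# kernel command line, then save the raw trace for post-processing with
# scripts/bootgraph.pl (its exact invocation is not reproduced here; see the
# script itself):
#
#	cat /sys/kernel/debug/tracing/trace > boot.trace
#
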
config TRACE_BRANCH_PROFILING
	bool
	select GENERIC_TRACER

choice
	prompt "Branch Profiling"
	default BRANCH_PROFILE_NONE
	help
	  The branch profiling is a software profiler. It will add hooks
	  into the C conditionals to test which path a branch takes.

	  The likely/unlikely profiler only looks at the conditions that
	  are annotated with a likely or unlikely macro.

	  The "all branch" profiler will profile every if statement in the
	  kernel. This profiler will also enable the likely/unlikely
	  profiler.

	  Either of the above profilers adds a bit of overhead to the system.

	  If unsure, choose "No branch profiling".
config BRANCH_PROFILE_NONE
	bool "No branch profiling"
	help
	  No branch profiling. Branch profiling adds a bit of overhead.
	  Only enable it if you want to analyse the branching behavior.
	  Otherwise keep it disabled.
config PROFILE_ANNOTATED_BRANCHES
	bool "Trace likely/unlikely profiler"
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all the likely and unlikely macros
	  in the kernel. It will display the results in:

	  /debugfs/tracing/profile_annotated_branch

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.
config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  taken in the kernel is recorded, whether it hit or missed.
	  The results will be displayed in:

	  /debugfs/tracing/profile_branch

	  This option also enables the likely/unlikely profiler.

	  This configuration, when enabled, will impose a great overhead
	  on the system. This should only be enabled when the system
	  is to be analyzed.

endchoice
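
# Illustrative usage (sketch, based on the help texts above; the debugfs mount
# point may differ on your system): the branch-profiling results are plain
# text files that can simply be read back:
#
#	cat /sys/kernel/debug/tracing/profile_annotated_branch
#	cat /sys/kernel/debug/tracing/profile_branch
#
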
config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likely and unlikely conditions are not being traced.
config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.

	  Say N if unsure.
config POWER_TRACER
	bool "Trace power consumption behavior"
	depends on X86
	select GENERIC_TRACER
	help
	  This tracer helps developers analyze and optimize the kernel's
	  power management decisions, specifically the C-state and P-state
	  behavior.
config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	select FUNCTION_TRACER
	select STACKTRACE
	select KALLSYMS
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in debugfs/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. If this is configured with DYNAMIC_FTRACE
	  then it will not have any overhead while the stack tracer
	  is disabled.

	  To enable the stack tracer on bootup, pass in 'stacktrace'
	  on the kernel command line.

	  The stack tracer can also be enabled or disabled via the
	  sysctl kernel.stack_tracer_enabled.

	  Say N if unsure.
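
# Illustrative usage (sketch, based on the help text above): toggle the stack
# tracer at runtime and read back the deepest stack recorded so far:
#
#	sysctl -w kernel.stack_tracer_enabled=1
#	cat /sys/kernel/debug/tracing/stack_trace
#
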
config HW_BRANCH_TRACER
	depends on HAVE_HW_BRANCH_TRACER
	bool "Trace hw branches"
	select GENERIC_TRACER
	help
	  This tracer records all branches on the system in a circular
	  buffer, giving access to the last N branches for each CPU.
config KMEMTRACE
	bool "Trace SLAB allocations"
	select GENERIC_TRACER
	help
	  kmemtrace provides tracing for slab allocator functions, such as
	  kmalloc, kfree, kmem_cache_alloc, kmem_cache_free, etc. Collected
	  data is then fed to the userspace application in order to analyse
	  allocation hotspots, internal fragmentation and so on, making it
	  possible to see how well an allocator performs, as well as debug
	  and profile kernel code.

	  This requires a userspace application to use. See
	  Documentation/trace/kmemtrace.txt for more information.

	  Saying Y will make the kernel somewhat larger and slower. However,
	  if you disable kmemtrace at run-time or boot-time, the performance
	  impact is minimal (depending on the arch the kernel is built for).

	  If unsure, say N.
config WORKQUEUE_TRACER
	bool "Trace workqueues"
	select GENERIC_TRACER
	help
	  The workqueue tracer provides some statistical information
	  about each CPU workqueue thread, such as the number of works
	  inserted and executed since its creation. It can help
	  to evaluate the amount of work each of them has to perform.
	  For example, it can help a developer decide whether to use
	  a per-CPU workqueue instead of a single-threaded one.
config BLK_DEV_IO_TRACE
	bool "Support for tracing block io actions"
	depends on SYSFS
	depends on BLOCK
	select RELAY
	select DEBUG_FS
	select TRACEPOINTS
	select GENERIC_TRACER
	select STACKTRACE
	help
	  Say Y here if you want to be able to trace the block layer actions
	  on a given queue. Tracing allows you to see any traffic happening
	  on a block device queue. For more information (and the userspace
	  support tools needed), fetch the blktrace tools from:

	  git://git.kernel.dk/blktrace.git

	  Tracing also is possible using the ftrace interface, e.g.:

	    echo 1 > /sys/block/sda/sda1/trace/enable
	    echo blk > /sys/kernel/debug/tracing/current_tracer
	    cat /sys/kernel/debug/tracing/trace_pipe

	  If unsure, say N.
config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	default y
	help
	  This option will modify all the calls to ftrace dynamically
	  (patching them out of the binary image and replacing them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
	  otherwise has native performance as long as no tracing is active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (stops all CPUs)
	  and modifies the code to jump over the call to ftrace.
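
# Illustrative usage (sketch; set_ftrace_filter is the customary dynamic-ftrace
# control file, and the path may differ): with DYNAMIC_FTRACE the set of traced
# functions can typically be narrowed at runtime, e.g. to scheduler functions:
#
#	echo 'sched*' > /sys/kernel/debug/tracing/set_ftrace_filter
#	echo function > /sys/kernel/debug/tracing/current_tracer
#
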
config FUNCTION_PROFILER
	bool "Kernel function profiler"
	depends on FUNCTION_TRACER
	default n
	help
	  This option enables the kernel function profiler. A file is created
	  in debugfs called function_profile_enabled which defaults to zero.
	  When a 1 is echoed into this file profiling begins, and when a
	  zero is entered, profiling stops. A file called functions in the
	  trace_stats directory shows the list of functions that have been
	  hit and their counters.

	  If in doubt, say N.
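
# Illustrative usage (sketch, based on the help text above; the exact name of
# the statistics directory may vary, hence the glob): start profiling and then
# inspect the collected per-function counters:
#
#	echo 1 > /sys/kernel/debug/tracing/function_profile_enabled
#	ls /sys/kernel/debug/tracing/trace_stat*/
#
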
config FTRACE_MCOUNT_RECORD
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_SELFTEST
	bool

config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on GENERIC_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On bootup
	  a series of tests are made to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.
config MMIOTRACE
	bool "Memory mapped IO tracing"
	depends on HAVE_MMIOTRACE_SUPPORT && PCI
	select GENERIC_TRACER
	help
	  Mmiotrace traces Memory Mapped I/O access and is meant for
	  debugging and reverse engineering. It is called from the ioremap
	  implementation and works via page faults. Tracing is disabled by
	  default and can be enabled at run-time.

	  See Documentation/trace/mmiotrace.txt.
	  If you are not helping to develop drivers, say N.
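
# Illustrative usage (sketch; see Documentation/trace/mmiotrace.txt for the
# authoritative steps): mmiotrace is usually enabled like any other tracer and
# its output captured from trace_pipe:
#
#	echo mmiotrace > /sys/kernel/debug/tracing/current_tracer
#	cat /sys/kernel/debug/tracing/trace_pipe > mmio.log &
#
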
config MMIOTRACE_TEST
	tristate "Test module for mmiotrace"
	depends on MMIOTRACE && m
	help
	  This is a dumb module for testing mmiotrace. It is very dangerous
	  as it will write garbage to IO memory starting at a given address.
	  However, it should be safe to use on e.g. an unused portion of VRAM.

	  Say N, unless you absolutely know what you are doing.
config RING_BUFFER_BENCHMARK
	tristate "Ring buffer benchmark stress tester"
	depends on RING_BUFFER
	help
	  This option creates a test to stress the ring buffer and benchmark it.
	  It creates its own ring buffer such that it will not interfere with
	  any other users of the ring buffer (such as ftrace). It then creates
	  a producer and consumer that will run for 10 seconds and sleep for
	  10 seconds. Each interval it will print out the number of events
	  it recorded and give a rough estimate of how long each iteration took.

	  It does not disable interrupts or raise its priority, so it may be
	  affected by processes that are running.

	  If unsure, say N.
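
# Illustrative usage (sketch; the module name is assumed to follow the source
# file name, ring_buffer_benchmark): build it as a module, load it, and watch
# the kernel log for its periodic reports:
#
#	modprobe ring_buffer_benchmark
#	dmesg | tail
#
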
endif # FTRACE

endif # TRACING_SUPPORT