
perf-record(1)
==============

NAME
----
perf-record - Run a command and record its profile into perf.data

SYNOPSIS
--------
[verse]
'perf record' [-e <EVENT> | --event=EVENT] [-l] [-a] <command>
'perf record' [-e <EVENT> | --event=EVENT] [-l] [-a] -- <command> [<options>]

DESCRIPTION
-----------
This command runs a command and gathers a performance counter profile
from it, into perf.data - without displaying anything.

This file can then be inspected later on, using 'perf report'.

OPTIONS
-------
<command>...::
        Any command you can specify in a shell.

-e::
--event=::
        Select the PMU event. Selection can be:

        - a symbolic event name (use 'perf list' to list all events)

        - a raw PMU event (eventsel+umask) in the form of rNNN where NNN is a
          hexadecimal event descriptor.

        - a hardware breakpoint event in the form of '\mem:addr[:access]'
          where addr is the address in memory you want to break in.
          Access is the memory access type (read, write, execute); it can
          be specified as follows: '\mem:addr[:[r][w][x]]'.
          If you want to profile read-write accesses at 0x1000, just set
          'mem:0x1000:rw'.
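        For example, one invocation per kind of event selector; the command
        './myprog' and the raw event code r1a8 are illustrative placeholders:

            perf record -e cycles -- ./myprog         # symbolic event name
            perf record -e r1a8 -- ./myprog           # raw PMU event descriptor
            perf record -e mem:0x1000:rw -- ./myprog  # break on read-write at 0x1000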
--filter=<filter>::
        Event filter.

-a::
--all-cpus::
        System-wide collection from all CPUs.

-l::
        Scale counter values.

-p::
--pid=::
        Record events on an existing process ID.

-t::
--tid=::
        Record events on an existing thread ID.

-r::
--realtime=::
        Collect data with this RT SCHED_FIFO priority.

-D::
--no-delay::
        Collect data without buffering.

-A::
--append::
        Append to the output file to do incremental profiling.

-f::
--force::
        Overwrite existing data file. (deprecated)

-c::
--count=::
        Event period to sample.

-o::
--output=::
        Output file name.

-i::
--no-inherit::
        Child tasks do not inherit counters.

-F::
--freq=::
        Profile at this frequency.
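        For example, to sample by frequency rather than by a fixed event
        period ('./myprog' is a placeholder):

            perf record -F 99 -- ./myprog      # about 99 samples per second
            perf record -c 100000 -- ./myprog  # one sample every 100000 events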
-m::
--mmap-pages=::
        Number of mmap data pages.

-g::
--call-graph::
        Do call-graph (stack chain/backtrace) recording.
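        For example, to record call graphs and then browse them
        ('./myprog' is a placeholder):

            perf record -g -- ./myprog
            perf report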
-q::
--quiet::
        Don't print any messages; useful for scripting.

-v::
--verbose::
        Be more verbose (show counter open errors, etc.).

-s::
--stat::
        Per thread counts.

-d::
--data::
        Sample addresses.

-T::
--timestamp::
        Sample timestamps. Use it with 'perf report -D' to see the timestamps,
        for instance.

-n::
--no-samples::
        Don't sample.

-R::
--raw-samples::
        Collect raw sample records from all opened counters (default for
        tracepoint counters).

-C::
--cpu::
        Collect samples only on the list of CPUs provided. Multiple CPUs can
        be provided as a comma-separated list with no space: 0,1. Ranges of
        CPUs are specified with -: 0-2. In per-thread mode with inheritance
        mode on (default), samples are captured only when the thread executes
        on the designated CPUs. Default is to monitor all CPUs.
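        For example, a system-wide five-second collection restricted to CPU 0
        and CPUs 2 through 3:

            perf record -C 0,2-3 -a sleep 5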
-N::
--no-buildid-cache::
        Do not update the buildid cache. This saves some overhead in
        situations where the information in the perf.data file (which
        includes buildids) is sufficient.

-G name,...::
--cgroup name,...::
        Monitor only in the container (cgroup) called "name". This option is
        available only in per-cpu mode. The cgroup filesystem must be
        mounted. All threads belonging to container "name" are monitored when
        they run on the monitored CPUs. Multiple cgroups can be provided.
        Each cgroup is applied to the corresponding event, i.e., the first
        cgroup to the first event, the second cgroup to the second event, and
        so on. It is possible to provide an empty cgroup (monitor all the
        time) using, e.g., -G foo,,bar. Cgroups must have corresponding
        events, i.e., they always refer to events defined earlier on the
        command line.
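        For example, assuming cgroups "foo" and "bar" exist, the following
        counts 'cycles' only in "foo", 'instructions' everywhere (the empty
        middle slot), and 'cache-misses' only in "bar":

            perf record -e cycles,instructions,cache-misses -G foo,,bar -a sleep 1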
SEE ALSO
--------
linkperf:perf-stat[1], linkperf:perf-list[1]