
Merge branches 'tracing/fastboot', 'tracing/ftrace' and 'tracing/urgent' into tracing/core

Ingo Molnar 16 years ago
Parent
Current commit
c91add5fa6
5 changed files with 55 additions and 44 deletions
  1. Documentation/ftrace.txt (+17 -29)
  2. kernel/trace/ftrace.c (+1 -2)
  3. kernel/trace/ring_buffer.c (+6 -0)
  4. kernel/trace/trace.c (+30 -13)
  5. kernel/trace/trace.h (+1 -0)

+ 17 - 29
Documentation/ftrace.txt

@@ -82,7 +82,7 @@ of ftrace. Here is a list of some of the key files:
 		tracer is not adding more data, they will display
 		the same information every time they are read.
 
-  iter_ctrl: This file lets the user control the amount of data
+  trace_options: This file lets the user control the amount of data
 		that is displayed in one of the above output
 		files.
 
@@ -94,10 +94,10 @@ of ftrace. Here is a list of some of the key files:
 		only be recorded if the latency is greater than
 		the value in this file. (in microseconds)
 
-  trace_entries: This sets or displays the number of bytes each CPU
+  buffer_size_kb: This sets or displays the number of kilobytes each CPU
 		buffer can hold. The tracer buffers are the same size
 		for each CPU. The displayed number is the size of the
-		 CPU buffer and not total size of all buffers. The
+		CPU buffer and not total size of all buffers. The
 		trace buffers are allocated in pages (blocks of memory
 		that the kernel uses for allocation, usually 4 KB in size).
 		If the last page allocated has room for more bytes
@@ -316,23 +316,23 @@ The above is mostly meaningful for kernel developers.
   The rest is the same as the 'trace' file.
 
 
-iter_ctrl
----------
+trace_options
+-------------
 
-The iter_ctrl file is used to control what gets printed in the trace
+The trace_options file is used to control what gets printed in the trace
 output. To see what is available, simply cat the file:
 
-  cat /debug/tracing/iter_ctrl
+  cat /debug/tracing/trace_options
   print-parent nosym-offset nosym-addr noverbose noraw nohex nobin \
  noblock nostacktrace nosched-tree
 
 To disable one of the options, echo in the option prepended with "no".
 
-  echo noprint-parent > /debug/tracing/iter_ctrl
+  echo noprint-parent > /debug/tracing/trace_options
 
 To enable an option, leave off the "no".
 
-  echo sym-offset > /debug/tracing/iter_ctrl
+  echo sym-offset > /debug/tracing/trace_options
 
 Here are the available options:
 
@@ -1299,41 +1299,29 @@ trace entries
 -------------
 
 Having too much or not enough data can be troublesome in diagnosing
-an issue in the kernel. The file trace_entries is used to modify
+an issue in the kernel. The file buffer_size_kb is used to modify
 the size of the internal trace buffers. The number listed
 is the number of entries that can be recorded per CPU. To know
 the full size, multiply the number of possible CPUS with the
 number of entries.
 
- # cat /debug/tracing/trace_entries
-65620
+ # cat /debug/tracing/buffer_size_kb
+1408 (units kilobytes)
 
 Note, to modify this, you must have tracing completely disabled. To do that,
 echo "nop" into the current_tracer. If the current_tracer is not set
 to "nop", an EINVAL error will be returned.
 
  # echo nop > /debug/tracing/current_tracer
- # echo 100000 > /debug/tracing/trace_entries
- # cat /debug/tracing/trace_entries
-100045
-
-
-Notice that we echoed in 100,000 but the size is 100,045. The entries
-are held in individual pages. It allocates the number of pages it takes
-to fulfill the request. If more entries may fit on the last page
-then they will be added.
-
- # echo 1 > /debug/tracing/trace_entries
- # cat /debug/tracing/trace_entries
-85
-
-This shows us that 85 entries can fit in a single page.
+ # echo 10000 > /debug/tracing/buffer_size_kb
+ # cat /debug/tracing/buffer_size_kb
+10000 (units kilobytes)
 
 The number of pages which will be allocated is limited to a percentage
 of available memory. Allocating too much will produce an error.
 
- # echo 1000000000000 > /debug/tracing/trace_entries
+ # echo 1000000000000 > /debug/tracing/buffer_size_kb
 -bash: echo: write error: Cannot allocate memory
- # cat /debug/tracing/trace_entries
+ # cat /debug/tracing/buffer_size_kb
 85
 

+ 1 - 2
kernel/trace/ftrace.c

@@ -179,8 +179,7 @@ static int __unregister_ftrace_function(struct ftrace_ops *ops)
 
 	if (ftrace_enabled) {
 		/* If we only have one func left, then call that directly */
-		if (ftrace_list == &ftrace_list_end ||
-		    ftrace_list->next == &ftrace_list_end)
+		if (ftrace_list->next == &ftrace_list_end)
 			ftrace_trace_function = ftrace_list->func;
 	}
 

+ 6 - 0
kernel/trace/ring_buffer.c

@@ -533,6 +533,12 @@ int ring_buffer_resize(struct ring_buffer *buffer, unsigned long size)
 	LIST_HEAD(pages);
 	int i, cpu;
 
+	/*
+	 * Always succeed at resizing a non-existent buffer:
+	 */
+	if (!buffer)
+		return size;
+
 	size = DIV_ROUND_UP(size, BUF_PAGE_SIZE);
 	size *= BUF_PAGE_SIZE;
 	buffer_size = buffer->pages * BUF_PAGE_SIZE;

+ 30 - 13
kernel/trace/trace.c

@@ -204,8 +204,9 @@ static DEFINE_MUTEX(trace_types_lock);
 /* trace_wait is a waitqueue for tasks blocked on trace_poll */
 static DECLARE_WAIT_QUEUE_HEAD(trace_wait);
 
-/* trace_flags holds iter_ctrl options */
-unsigned long trace_flags = TRACE_ITER_PRINT_PARENT | TRACE_ITER_PRINTK;
+/* trace_flags holds trace_options default values */
+unsigned long trace_flags = TRACE_ITER_PRINT_PARENT | TRACE_ITER_PRINTK |
+	TRACE_ITER_ANNOTATE;
 
 /**
  * trace_wake_up - wake up tasks waiting for trace input
@@ -261,6 +262,7 @@ static const char *trace_options[] = {
 #ifdef CONFIG_BRANCH_TRACER
 	"branch",
 #endif
+	"annotate",
 	NULL
 };
 
@@ -1113,6 +1115,7 @@ void tracing_stop_function_trace(void)
 
 enum trace_file_type {
 	TRACE_FILE_LAT_FMT	= 1,
+	TRACE_FILE_ANNOTATE	= 2,
 };
 
 static void trace_iterator_increment(struct trace_iterator *iter, int cpu)
@@ -1532,6 +1535,12 @@ static void test_cpu_buff_start(struct trace_iterator *iter)
 {
 	struct trace_seq *s = &iter->seq;
 
+	if (!(trace_flags & TRACE_ITER_ANNOTATE))
+		return;
+
+	if (!(iter->iter_flags & TRACE_FILE_ANNOTATE))
+		return;
+
 	if (cpu_isset(iter->cpu, iter->started))
 		return;
 
@@ -2132,6 +2141,11 @@ __tracing_open(struct inode *inode, struct file *file, int *ret)
 	iter->trace = current_trace;
 	iter->pos = -1;
 
+	/* Annotate start of buffers if we had overruns */
+	if (ring_buffer_overruns(iter->tr->buffer))
+		iter->iter_flags |= TRACE_FILE_ANNOTATE;
+
+
 	for_each_tracing_cpu(cpu) {
 
 		iter->buffer_iter[cpu] =
@@ -2411,7 +2425,7 @@ static struct file_operations tracing_cpumask_fops = {
 };
 
 static ssize_t
-tracing_iter_ctrl_read(struct file *filp, char __user *ubuf,
+tracing_trace_options_read(struct file *filp, char __user *ubuf,
 		       size_t cnt, loff_t *ppos)
 {
 	char *buf;
@@ -2448,7 +2462,7 @@ tracing_iter_ctrl_read(struct file *filp, char __user *ubuf,
 }
 
 static ssize_t
-tracing_iter_ctrl_write(struct file *filp, const char __user *ubuf,
+tracing_trace_options_write(struct file *filp, const char __user *ubuf,
 			size_t cnt, loff_t *ppos)
 {
 	char buf[64];
@@ -2493,8 +2507,8 @@ tracing_iter_ctrl_write(struct file *filp, const char __user *ubuf,
 
 static struct file_operations tracing_iter_fops = {
 	.open		= tracing_open_generic,
-	.read		= tracing_iter_ctrl_read,
-	.write		= tracing_iter_ctrl_write,
+	.read		= tracing_trace_options_read,
+	.write		= tracing_trace_options_write,
 };
 
 static const char readme_msg[] =
@@ -2508,9 +2522,9 @@ static const char readme_msg[] =
 	"# echo sched_switch > /debug/tracing/current_tracer\n"
 	"# cat /debug/tracing/current_tracer\n"
 	"sched_switch\n"
-	"# cat /debug/tracing/iter_ctrl\n"
+	"# cat /debug/tracing/trace_options\n"
 	"noprint-parent nosym-offset nosym-addr noverbose\n"
-	"# echo print-parent > /debug/tracing/iter_ctrl\n"
+	"# echo print-parent > /debug/tracing/trace_options\n"
 	"# echo 1 > /debug/tracing/tracing_enabled\n"
 	"# cat /debug/tracing/trace > /tmp/trace.txt\n"
 	"echo 0 > /debug/tracing/tracing_enabled\n"
@@ -2905,7 +2919,7 @@ tracing_entries_read(struct file *filp, char __user *ubuf,
 	char buf[64];
 	int r;
 
-	r = sprintf(buf, "%lu\n", tr->entries);
+	r = sprintf(buf, "%lu\n", tr->entries >> 10);
 	return simple_read_from_buffer(ubuf, cnt, ppos, buf, r);
 }
 
@@ -2945,6 +2959,9 @@ tracing_entries_write(struct file *filp, const char __user *ubuf,
 			atomic_inc(&max_tr.data[cpu]->disabled);
 	}
 
+	/* value is in KB */
+	val <<= 10;
+
 	if (val != global_trace.entries) {
 		ret = ring_buffer_resize(global_trace.buffer, val);
 		if (ret < 0) {
@@ -3145,10 +3162,10 @@ static __init int tracer_init_debugfs(void)
 	if (!entry)
 		pr_warning("Could not create debugfs 'tracing_enabled' entry\n");
 
-	entry = debugfs_create_file("iter_ctrl", 0644, d_tracer,
+	entry = debugfs_create_file("trace_options", 0644, d_tracer,
 				    NULL, &tracing_iter_fops);
 	if (!entry)
-		pr_warning("Could not create debugfs 'iter_ctrl' entry\n");
+		pr_warning("Could not create debugfs 'trace_options' entry\n");
 
 	entry = debugfs_create_file("tracing_cpumask", 0644, d_tracer,
 				    NULL, &tracing_cpumask_fops);
@@ -3198,11 +3215,11 @@ static __init int tracer_init_debugfs(void)
 		pr_warning("Could not create debugfs "
 			   "'trace_pipe' entry\n");
 
-	entry = debugfs_create_file("trace_entries", 0644, d_tracer,
+	entry = debugfs_create_file("buffer_size_kb", 0644, d_tracer,
 				    &global_trace, &tracing_entries_fops);
 	if (!entry)
 		pr_warning("Could not create debugfs "
-			   "'trace_entries' entry\n");
+			   "'buffer_size_kb' entry\n");
 
 	entry = debugfs_create_file("trace_marker", 0220, d_tracer,
 				    NULL, &tracing_mark_fops);

+ 1 - 0
kernel/trace/trace.h

@@ -473,6 +473,7 @@ enum trace_iterator_flags {
 #ifdef CONFIG_BRANCH_TRACER
 	TRACE_ITER_BRANCH		= 0x1000,
 #endif
+	TRACE_ITER_ANNOTATE		= 0x2000,
 };
 
 /*