
clockevents: Remove the per cpu tick skew

Historically, Linux has tried to make the regular timer tick on the
various CPUs not happen at the same time, to avoid contention on
xtime_lock.

Nowadays, with the tickless kernel, this contention no longer happens
since time keeping and updating are done differently. In addition,
this skew is actually hurting power consumption in a measurable way on
many-core systems.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
LKML-Reference: <20100727210210.58d3118c@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Arjan van de Ven, 15 years ago
Commit af5ab277de
1 changed file with 0 additions and 5 deletions:
  kernel/time/tick-sched.c (+0, -5)

kernel/time/tick-sched.c (+0, -5)

@@ -780,7 +780,6 @@ void tick_setup_sched_timer(void)
 {
 	struct tick_sched *ts = &__get_cpu_var(tick_cpu_sched);
 	ktime_t now = ktime_get();
-	u64 offset;
 
 	/*
 	 * Emulate tick processing via per-CPU hrtimers:
@@ -790,10 +789,6 @@ void tick_setup_sched_timer(void)
 
 	/* Get the next period (per cpu) */
 	hrtimer_set_expires(&ts->sched_timer, tick_init_jiffy_update());
-	offset = ktime_to_ns(tick_period) >> 1;
-	do_div(offset, num_possible_cpus());
-	offset *= smp_processor_id();
-	hrtimer_add_expires_ns(&ts->sched_timer, offset);
 
 	for (;;) {
 		hrtimer_forward(&ts->sched_timer, now, tick_period);