911b2898b3  sched: Optimize task_sched_runtime() (Peter Zijlstra, 11 years ago)
37dc6b50ce  sched: Remove unnecessary iteration over sched domains to update nr_busy_cpus (Preeti U Murthy, 11 years ago)
b8a216269e  sched: Move completion code from core.c to completion.c (Peter Zijlstra, 11 years ago)
b4145872f7  sched: Move wait code from core.c to wait.c (Peter Zijlstra, 11 years ago)
1ee14e6c8c  sched: Fix race on toggling cfs_bandwidth_used (Ben Segall, 11 years ago)
ac9ff7997b  sched: Remove extra put_online_cpus() inside sched_setaffinity() (Michael wang, 11 years ago)
6acce3ef84  sched: Remove get_online_cpus() usage (Peter Zijlstra, 11 years ago)
746023159c  sched: Fix race in migrate_swap_stop() (Peter Zijlstra, 11 years ago)
1e3646ffc6  mm: numa: Revert temporarily disabling of NUMA migration (Rik van Riel, 11 years ago)
930aa174fc  sched/numa: Remove the numa_balancing_scan_period_reset sysctl (Mel Gorman, 11 years ago)
0ec8aa00f2  sched/numa: Avoid migrating tasks that are placed on their preferred node (Peter Zijlstra, 11 years ago)
5e1576ed0e  sched/numa: Stay on the same node if CLONE_VM (Rik van Riel, 11 years ago)
8c8a743c50  sched/numa: Use {cpu, pid} to create task groups for shared faults (Peter Zijlstra, 11 years ago)
fb13c7ee0e  sched/numa: Use a system-wide search to find swap/migration candidates (Mel Gorman, 11 years ago)
ac66f54772  sched/numa: Introduce migrate_swap() (Peter Zijlstra, 11 years ago)
6fe6b2d6da  sched/numa: Do not migrate memory immediately after switching node (Rik van Riel, 11 years ago)
e6628d5b0a  sched/numa: Reschedule task on preferred NUMA node once selected (Mel Gorman, 11 years ago)
3a7053b322  sched/numa: Favour moving tasks towards the preferred node (Mel Gorman, 11 years ago)
745d61476d  sched/numa: Update NUMA hinting faults once per scan (Mel Gorman, 11 years ago)
688b7585d1  sched/numa: Select a preferred node with the most numa hinting faults (Mel Gorman, 11 years ago)
f809ca9a55  sched/numa: Track NUMA hinting faults on per-node basis (Mel Gorman, 11 years ago)
7e8d16b6cb  sched/numa: Initialise numa_next_scan properly (Mel Gorman, 11 years ago)
a233f1120c  sched: Prepare for per-cpu preempt_count (Peter Zijlstra, 11 years ago)
bdb4380658  sched: Extract the basic add/sub preempt_count modifiers (Peter Zijlstra, 12 years ago)
0102874755  sched: Create more preempt_count accessors (Peter Zijlstra, 12 years ago)
f27dde8dee  sched: Add NEED_RESCHED to the preempt_count (Peter Zijlstra, 12 years ago)
4a2b4b2227  sched: Introduce preempt_count accessor functions (Peter Zijlstra, 12 years ago)
b021fe3e25  sched, rcu: Make RCU use resched_cpu() (Peter Zijlstra, 11 years ago)
4314895165  sched: Micro-optimize by dropping unnecessary task_rq() calls (Michael S. Tsirkin, 11 years ago)
9bd721c55c  sched/balancing: Consider max cost of idle balance per sched domain (Jason Low, 11 years ago)