author     Gregory Haskins <ghaskins@novell.com>    2008-04-28 12:40:01 -0400
committer  Ingo Molnar <mingo@elte.hu>              2008-05-05 23:56:18 +0200
commit     104f64549c961a797ff5f7c59946a7caa335c5b0 (patch)
tree       d63d707ee5b9d1dbc8e5796e142ca584736f01b9 /kernel/sched_fair.c
parent     8ae121ac8666b0421aa20fd80d4597ec66fa54bc (diff)
sched: fix SCHED_FAIR wake-idle logic error
We currently use an optimization to skip the overhead of wake-idle processing if more than one task is assigned to a run-queue. The assumption is that the system must already be load-balanced, or we wouldn't be overloaded to begin with.

The problem is that we are looking at rq->nr_running, which may include RT tasks in addition to CFS tasks. Since the presence of RT tasks has no bearing on the balance status of CFS tasks, this throws the calculation off.

This patch changes the logic to consider only the number of CFS tasks when deciding whether to optimize away the wake-idle search.

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
CC: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
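For illustration, here is a minimal sketch of why the two counters disagree when RT tasks are queued. The field layout is simplified and assumed for the example; it is not the kernel's exact struct definitions from that era.

struct cfs_rq { unsigned long nr_running; };  /* CFS (SCHED_NORMAL/BATCH) tasks only */
struct rt_rq  { unsigned long nr_running; };  /* RT (SCHED_FIFO/RR) tasks only */

struct rq {
	unsigned long nr_running;   /* tasks of all scheduling classes combined */
	struct cfs_rq cfs;
	struct rt_rq  rt;
};

/*
 * With one CFS task and one RT task queued on a CPU:
 *   rq->nr_running     == 2  -> the old check skips the wake-idle search
 *   rq->cfs.nr_running == 1  -> the new check still looks for an idle sibling
 */

In that situation the CFS side of the runqueue is not actually overloaded, so skipping the idle-sibling search was the wrong call; checking cfs.nr_running restores the intended behaviour.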
Diffstat (limited to 'kernel/sched_fair.c')
-rw-r--r--  kernel/sched_fair.c | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index e8e5ad2614b..1d5f35b4636 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1009,7 +1009,7 @@ static int wake_idle(int cpu, struct task_struct *p)
* sibling runqueue info. This will avoid the checks and cache miss
* penalities associated with that.
*/
- if (idle_cpu(cpu) || cpu_rq(cpu)->nr_running > 1)
+ if (idle_cpu(cpu) || cpu_rq(cpu)->cfs.nr_running > 1)
return cpu;
for_each_domain(cpu, sd) {