path: root/kernel/sched/fair.c
Date        Commit message  (Author)
2016-11-16  sched/fair: Fix hierarchical order in rq->leaf_cfs_rq_list  (Vincent Guittot)
2016-11-16  sched/fair: Factorize attach/detach entity  (Vincent Guittot)
2016-11-16  sched/fair: Fix incorrect comment for capacity_margin  (Morten Rasmussen)
2016-11-16  sched/fair: Avoid pulling tasks from non-overloaded higher capacity groups  (Morten Rasmussen)
2016-11-16  sched/fair: Add per-CPU min capacity to sched_group_capacity  (Morten Rasmussen)
2016-11-16  sched/fair: Consider spare capacity in find_idlest_group()  (Morten Rasmussen)
2016-11-16  sched/fair: Compute task/cpu utilization at wake-up correctly  (Morten Rasmussen)
2016-11-11  Merge branch 'linus' into sched/core, to pick up fixes  (Ingo Molnar)
2016-10-27  sched/fair: Remove unused but set variable 'rq'  (Tobias Klauser)
2016-10-20  sched/fair: Kill the unused 'sched_shares_window_ns' tunable  (Matt Fleming)
2016-10-19  sched/fair: Fix incorrect task group ->load_avg  (Vincent Guittot)
2016-10-18  Merge branch 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/k...  (Linus Torvalds)
2016-10-15  Merge tag 'gcc-plugins-v4.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel...  (Linus Torvalds)
2016-10-11  sched/fair: Fix sched domains NULL dereference in select_idle_sibling()  (Wanpeng Li)
2016-10-10  latent_entropy: Mark functions with __latent_entropy  (Emese Revfy)
2016-10-03  Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/ker...  (Linus Torvalds)
2016-09-30  sched/fair: Fix min_vruntime tracking  (Peter Zijlstra)
2016-09-30  sched/debug: Add SCHED_WARN_ON()  (Peter Zijlstra)
2016-09-30  sched/core: Optimize SCHED_SMT  (Peter Zijlstra)
2016-09-30  sched/core: Rewrite and improve select_idle_siblings()  (Peter Zijlstra)
2016-09-30  sched/core: Replace sd_busy/nr_busy_cpus with sched_domain_shared  (Peter Zijlstra)
2016-09-30  sched/fair: Fix fixed point arithmetic width for shares and effective load  (Dietmar Eggemann)
2016-09-22  sched/fair: Fix SCHED_HRTICK bug leading to late preemption of tasks  (Srivatsa Vaddagiri)
2016-09-13  cpufreq / sched: SCHED_CPUFREQ_IOWAIT flag to indicate iowait condition  (Rafael J. Wysocki)
2016-09-10  Revert "sched/fair: Make update_min_vruntime() more readable"  (Peter Zijlstra)
2016-09-05  sched/debug: Remove several CONFIG_SCHEDSTATS guards  (Josh Poimboeuf)
2016-09-05  sched/debug: Clean up schedstat macros  (Josh Poimboeuf)
2016-09-05  sched/debug: Rename and move enqueue_sleeper()  (Josh Poimboeuf)
2016-09-05  sched/fair: Fix load_above_capacity fixed point arithmetic width  (Dietmar Eggemann)
2016-09-05  sched/fair: Make update_min_vruntime() more readable  (Byungchul Park)
2016-08-18  sched/fair: Let asymmetric CPU configurations balance at wake-up  (Morten Rasmussen)
2016-08-16  cpufreq / sched: Pass runqueue pointer to cpufreq_update_util()  (Rafael J. Wysocki)
2016-08-16  cpufreq / sched: Pass flags to cpufreq_update_util()  (Rafael J. Wysocki)
2016-08-10  sched/fair: Optimize find_idlest_cpu() when there is no choice  (Morten Rasmussen)
2016-08-10  sched/fair: Make the use of prev_cpu consistent in the wakeup path  (Morten Rasmussen)
2016-08-10  sched/fair: Improve PELT stuff some more  (Peter Zijlstra)
2016-08-10  sched/fair: Remove 'cpu_busy' parameter from update_next_balance()  (Leo Yan)
2016-08-10  sched/fair: Fix typo in sync_throttle()  (Xunlei Pang)
2016-06-27  sched/fair: Rework throttle_count sync  (Peter Zijlstra)
2016-06-27  sched/fair: Reorder cgroup creation code  (Peter Zijlstra)
2016-06-27  sched/fair: Apply more PELT fixes  (Peter Zijlstra)
2016-06-27  sched/fair: Fix PELT integrity for new tasks  (Peter Zijlstra)
2016-06-27  sched/cgroup: Fix cpu_cgroup_fork() handling  (Vincent Guittot)
2016-06-27  sched/fair: Fix PELT integrity for new groups  (Peter Zijlstra)
2016-06-27  sched/fair: Fix and optimize the fork() path  (Peter Zijlstra)
2016-06-27  Merge branch 'sched/urgent' into sched/core, to pick up fixes  (Ingo Molnar)
2016-06-27  sched/fair: Fix calc_cfs_shares() fixed point arithmetics width confusion  (Peter Zijlstra)
2016-06-27  sched/fair: Fix effective_load() to consistently use smoothed load  (Peter Zijlstra)
2016-06-24  sched/fair: Do not announce throttled next buddy in dequeue_task_fair()  (Konstantin Khlebnikov)
2016-06-24  sched/fair: Initialize throttle_count for new task-groups lazily  (Konstantin Khlebnikov)