author		Mel Gorman <mgorman@suse.de>	2013-10-07 11:29:13 +0100
committer	Ingo Molnar <mingo@kernel.org>	2013-10-09 12:40:42 +0200
commit		4591ce4f2d22dc9de7a6719161ce409b5fd1caac (patch)
tree		1ba60c69568eaa476960bd62e0f1ea2dc8b96a65 /kernel/sched
parent		06ea5e035b4e66cc77790457a89fc7e368060c4b (diff)
sched/numa: Do not trap hinting faults for shared libraries
NUMA hinting faults will not migrate a shared executable page mapped by
multiple processes, on the grounds that the data is probably in the CPU
cache already and the page may just bounce between tasks running on
multiple nodes. Even if the migration is avoided, there is still the
overhead of trapping the fault, updating the statistics, making
scheduler placement decisions based on the information, etc. If we are
never going to migrate the page, this is overhead for no gain and,
worse, a process may be placed on a sub-optimal node for shared
executable pages. This patch avoids trapping faults for shared
libraries entirely.

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-36-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
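For illustration, a minimal stand-alone sketch of the filter this patch
introduces: a file-backed mapping that is readable but not writable
(e.g. shared library text) is skipped, while read-write and anonymous
mappings keep receiving hinting faults. This is not kernel code; the
helper name is hypothetical, the !vma->vm_mm (vdso) half of the check is
omitted, and the VM_READ/VM_WRITE values are copied from the kernel's
flag definitions only so the example compiles on its own.

/*
 * Illustration only -- not part of the patch. Shows the condition under
 * which task_numa_work() now skips a VMA: file-backed and readable but
 * not writable.
 */
#include <stdio.h>

#define VM_READ		0x00000001UL
#define VM_WRITE	0x00000002UL

/* Mirrors the new check: file-backed, VM_READ set, VM_WRITE clear. */
static int skip_hinting_faults(unsigned long vm_flags, int has_file)
{
	return has_file &&
	       (vm_flags & (VM_READ | VM_WRITE)) == VM_READ;
}

int main(void)
{
	/* Shared library .text: read-only, file-backed -> skipped. */
	printf("lib text:   %d\n", skip_hinting_faults(VM_READ, 1));
	/* File mapped read-write -> still scanned for hinting faults. */
	printf("rw mapping: %d\n", skip_hinting_faults(VM_READ | VM_WRITE, 1));
	/* Anonymous memory -> still scanned for hinting faults. */
	printf("anon:       %d\n", skip_hinting_faults(VM_READ | VM_WRITE, 0));
	return 0;
}

Running this prints 1 only for the read-only, file-backed case, which is
the class of mappings the patch stops trapping faults for.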
Diffstat (limited to 'kernel/sched')
-rw-r--r--	kernel/sched/fair.c	10
1 file changed, 10 insertions(+), 0 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index de9b4d8eb853..fbc0c84a8a04 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1231,6 +1231,16 @@ void task_numa_work(struct callback_head *work)
 		if (!vma_migratable(vma) || !vma_policy_mof(p, vma))
 			continue;
 
+		/*
+		 * Shared library pages mapped by multiple processes are not
+		 * migrated as it is expected they are cache replicated. Avoid
+		 * hinting faults in read-only file-backed mappings or the vdso
+		 * as migrating the pages will be of marginal benefit.
+		 */
+		if (!vma->vm_mm ||
+		    (vma->vm_file && (vma->vm_flags & (VM_READ|VM_WRITE)) == (VM_READ)))
+			continue;
+
 		do {
 			start = max(start, vma->vm_start);
 			end = ALIGN(start + (pages << PAGE_SHIFT), HPAGE_SIZE);