Age | Commit message | Author
2016-11-16sched/fair: Consider spare capacity in find_idlest_group()Morten Rasmussen
In low-utilization scenarios, comparing relative loads in find_idlest_group() doesn't always lead to an optimal choice. Systems with groups containing different numbers of CPUs and/or CPUs of different compute capacity are significantly better off when considering spare capacity rather than relative load in those scenarios. In addition to the existing load-based search, an alternative spare-capacity-based candidate sched_group is found and selected instead if sufficient spare capacity exists. If not, the existing behaviour is preserved. Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: dietmar.eggemann@arm.com Cc: freedom.tan@mediatek.com Cc: keita.kobayashi.ym@renesas.com Cc: mgalbraith@suse.de Cc: sgurrappadi@nvidia.com Cc: vincent.guittot@linaro.org Cc: yuyang.du@intel.com Link: http://lkml.kernel.org/r/1476452472-24740-3-git-send-email-morten.rasmussen@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
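A minimal sketch of the selection idea described above (illustrative only, not the actual find_idlest_group() code; the struct and helper names are assumptions):

    /*
     * Track both the least-loaded group and the group with the most spare
     * capacity, and prefer the latter while it still has genuine headroom.
     */
    struct idlest_candidates {
            struct sched_group *least_loaded;
            unsigned long min_load;
            struct sched_group *most_spare_grp;
            long most_spare;
    };

    static struct sched_group *pick_candidate(struct idlest_candidates *c)
    {
            /* Spare-capacity candidate wins when usable capacity is left... */
            if (c->most_spare_grp && c->most_spare > 0)
                    return c->most_spare_grp;
            /* ...otherwise fall back to the classic load comparison. */
            return c->least_loaded;
    }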
2016-11-16sched/fair: Compute task/cpu utilization at wake-up correctlyMorten Rasmussen
At task wake-up load-tracking isn't updated until the task is enqueued. The task's own view of its utilization contribution may therefore not be aligned with its contribution to the cfs_rq load-tracking which may have been updated in the meantime. Basically, the task's own utilization hasn't yet accounted for the sleep decay, while the cfs_rq may have (partially). Estimating the cfs_rq utilization in case the task is migrated at wake-up as task_rq(p)->cfs.avg.util_avg - p->se.avg.util_avg is therefore incorrect as the two load-tracking signals aren't time synchronized (different last update). To solve this problem, this patch synchronizes the task utilization with its previous rq before the task utilization is used in the wake-up path. Currently the update/synchronization is done _after_ the task has been placed by select_task_rq_fair(). The synchronization is done without having to take the rq lock using the existing mechanism used in remove_entity_load_avg(). Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: dietmar.eggemann@arm.com Cc: freedom.tan@mediatek.com Cc: keita.kobayashi.ym@renesas.com Cc: mgalbraith@suse.de Cc: sgurrappadi@nvidia.com Cc: vincent.guittot@linaro.org Cc: yuyang.du@intel.com Link: http://lkml.kernel.org/r/1476452472-24740-2-git-send-email-morten.rasmussen@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-11-16sched/x86: Do not clear PREEMPT_NEED_RESCHED on preempt count resetMartin Schwidefsky
The per-cpu preempt count of x86 contains two values, the actual preempt count and the inverted PREEMPT_NEED_RESCHED bit. If a corrupted preempt count is detected the preempt_count_set() function is used to reset the preempt count. In case the inverted PREEMPT_NEED_RESCHED bit is zero at the time of the reset, the preemption indication is lost. Use raw_cpu_cmpxchg_4() to reset only the count part and leave the PREEMPT_NEED_RESCHED bit as it is. This improves the kernel's behavior when it runs into preempt count leaks and tries to fix them up. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1478523660-733-1-git-send-email-schwidefsky@de.ibm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
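A rough sketch of the cmpxchg-based reset described above (not the exact kernel code; the loop illustrates how only the count bits are replaced while the inverted PREEMPT_NEED_RESCHED bit is preserved):

    static void preempt_count_set_sketch(int pc)
    {
            int old, new;

            do {
                    old = raw_cpu_read_4(__preempt_count);
                    /* keep the NEED_RESCHED bit, replace only the count */
                    new = (old & PREEMPT_NEED_RESCHED) |
                          (pc & ~PREEMPT_NEED_RESCHED);
            } while (raw_cpu_cmpxchg_4(__preempt_count, old, new) != old);
    }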
2016-11-16sched/cpuacct: Avoid %lld seq_printf warningMartin Schwidefsky
For s390 kernel builds I keep getting this warning: kernel/sched/cpuacct.c: In function 'cpuacct_stats_show': kernel/sched/cpuacct.c:298:25: warning: format '%lld' expects argument of type 'long long int', but argument 4 has type 'clock_t {aka long int}' [-Wformat=] seq_printf(sf, "%s %lld\n", Silence the warning by adding an explicit cast. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20161111142749.6545-1-schwidefsky@de.ibm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
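The silencing cast amounts to something like the following (argument names are illustrative):

    seq_printf(sf, "%s %lld\n", cpuacct_stat_desc[stat],
               (long long)cputime64_to_clock_t(val));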
2016-11-16x86/msr: Cleanup/streamline MSR helpersBorislav Petkov
Make the MSR argument an unsigned int, both low and high u32, put "notrace" last in the function signature. Reflow function signatures for better readability and cleanup white space. Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-11-16Merge branch 'linus' into sched/core, to pick up fixesIngo Molnar
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-11-16locking/core, arch: Remove cpu_relax_lowlatency()Christian Borntraeger
As there are no users left, we can remove cpu_relax_lowlatency() implementations from every architecture. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Noam Camus <noamc@ezchip.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: linuxppc-dev@lists.ozlabs.org Cc: virtualization@lists.linux-foundation.org Cc: xen-devel@lists.xenproject.org Cc: <linux-arch@vger.kernel.org> Link: http://lkml.kernel.org/r/1477386195-32736-6-git-send-email-borntraeger@de.ibm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-11-16locking/core: Remove cpu_relax_lowlatency() usersChristian Borntraeger
With the s390 special case of a yielding cpu_relax() implementation gone, we can now remove all users of cpu_relax_lowlatency() and replace them with cpu_relax(). Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Noam Camus <noamc@ezchip.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: linuxppc-dev@lists.ozlabs.org Cc: virtualization@lists.linux-foundation.org Cc: xen-devel@lists.xenproject.org Link: http://lkml.kernel.org/r/1477386195-32736-5-git-send-email-borntraeger@de.ibm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-11-16locking/core, s390: Make cpu_relax() a barrier againChristian Borntraeger
stop_machine() seemed to be the only important place for yielding during cpu_relax(). This was fixed by using cpu_relax_yield(). Therefore, we can now redefine cpu_relax() to be a barrier instead on s390, making s390 identical to all other architectures. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Noam Camus <noamc@ezchip.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: linuxppc-dev@lists.ozlabs.org Cc: virtualization@lists.linux-foundation.org Cc: xen-devel@lists.xenproject.org Link: http://lkml.kernel.org/r/1477386195-32736-4-git-send-email-borntraeger@de.ibm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
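Per the description above, the s390 definition then collapses to the generic form (sketch):

    #define cpu_relax()     barrier()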
2016-11-16locking/core, stop_machine: Yield the CPU during stop machine()Christian Borntraeger
Some time ago the following commit: 57f2ffe14fd125c2 ("s390: remove diag 44 calls from cpu_relax()") ... stopped cpu_relax() on s390 yielding to the hypervisor. As it turns out this made stop_machine() run really slow on virtualized overcommitted systems. For example the kprobes test during bootup took several seconds instead of just running unnoticed with large guests. Therefore, yielding was reintroduced with commit: 4d92f50249eb ("s390: reintroduce diag 44 calls for cpu_relax()") ... but in fact the stop machine code seems to be the only place where this yielding was really necessary. This place is probably the most important one as it makes all but one guest CPU wait for one guest CPU. As we now have cpu_relax_yield(), we can use this in multi_cpu_stop(). For now let's only add it here. We can add it later in other places when necessary. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Noam Camus <noamc@ezchip.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: linuxppc-dev@lists.ozlabs.org Cc: virtualization@lists.linux-foundation.org Cc: xen-devel@lists.xenproject.org Link: http://lkml.kernel.org/r/1477386195-32736-3-git-send-email-borntraeger@de.ibm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-11-16locking/core: Introduce cpu_relax_yield()Christian Borntraeger
In spinning loops people often use barrier() or cpu_relax(). For most architectures cpu_relax() and barrier() are the same, but on some architectures cpu_relax() can add some latency. For example on power, sparc64 and arc, cpu_relax() can shift the CPU towards other hardware threads in an SMT environment. On s390, cpu_relax() does even more: it uses a hypercall to the hypervisor to give up the timeslice. In contrast to the SMT yielding this can result in larger latencies. In some places this latency is unwanted, so another variant "cpu_relax_lowlatency" was introduced. Before this is used in more and more places, let's invert the logic and provide a cpu_relax_yield() that can be called in places where yielding is more important than latency. By default this is the same as cpu_relax() on all architectures. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Noam Camus <noamc@ezchip.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: linuxppc-dev@lists.ozlabs.org Cc: virtualization@lists.linux-foundation.org Cc: xen-devel@lists.xenproject.org Link: http://lkml.kernel.org/r/1477386195-32736-2-git-send-email-borntraeger@de.ibm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
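On architectures without a special yield implementation the new helper simply maps to cpu_relax(); a sketch of that default (not a specific header file):

    #ifndef cpu_relax_yield
    #define cpu_relax_yield()       cpu_relax()
    #endif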
2016-11-16x86/mce/AMD: Reset Threshold Limit after logging errorYazen Ghannam
The error count field in MCA_MISC does not get reset by hardware when the threshold has been reached. Software is expected to reset it. Currently, the threshold limit only gets reset during init or when a user writes to sysfs. If the user is not monitoring threshold interrupts and resetting the limit then the user will only see 1 interrupt when the limit is first hit. So if, for example, the limit is set to 10 then only 1 interrupt will be recorded after 10 errors even if 100 errors have occurred. The user may then assume that only 10 errors have occurred. Signed-off-by: Yazen Ghannam <Yazen.Ghannam@amd.com> Cc: Aravind Gopalakrishnan <aravindksg.lkml@gmail.com> Cc: linux-edac <linux-edac@vger.kernel.org> Cc: x86-ml <x86@kernel.org> Link: http://lkml.kernel.org/r/1479244433-69267-1-git-send-email-Yazen.Ghannam@amd.com Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-11-16perf/x86/uncore: Fix crash by removing bogus event_list[] handling for SNB client uncore IMCKan Liang
Vince Weaver reported the following bug when KASAN is enabled: [ 205.748005] BUG: KASAN: slab-out-of-bounds in snb_uncore_imc_event_del+0x6c/0xa0 at addr ffff8800caa43768 [ 205.758324] Read of size 8 by task perf_fuzzer/6618 It's caused by accessing box->event_list. For client IMC, there are no generic counters. It defines its own fixed free running counters. So event_list and n_events are unused. They can be removed safely, which fixes the bug. ( There's still the separate question of how uninitialized state snuck into this data structure - but that's a separate fix. ) Reported-by: Vince Weaver <vincent.weaver@maine.edu> Tested-by: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Kan Liang <kan.liang@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Stephane Eranian <eranian@google.com> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: acme@kernel.org Cc: davej@codemonkey.org.uk Cc: dvyukov@google.com Cc: eranian@gmail.com Link: http://lkml.kernel.org/r/1479235210-29090-1-git-send-email-kan.liang@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-11-16x86/sysfb: Fix lfb_size calculationDavid Herrmann
The screen_info.lfb_size field is shifted by 16 bits *only* in case of VBE. This has historical reasons since VBE advertised it similarly. However, in case of EFI framebuffers, the size is no longer shifted. Fix the x86 simple-framebuffer setup code to use the correct size in the non-VBE case. While at it, avoid variable abbreviations and rename 'len' to 'length', and use the correct types matching the screen_info definition. Signed-off-by: David Herrmann <dh.herrmann@gmail.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Matt Fleming <matt.fleming@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Tom Gundersen <teg@jklm.no> Link: http://lkml.kernel.org/r/20161115120158.15388-3-dh.herrmann@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
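A sketch of the corrected size handling (the screen_info field names are real; the surrounding code is illustrative):

    u64 size = si->lfb_size;

    /* Only VBE reports lfb_size shifted right by 16 bits. */
    if (si->orig_video_isVGA == VIDEO_TYPE_VLFB)
            size <<= 16;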
2016-11-16x86/sysfb: Add support for 64bit EFI lfb_baseDavid Herrmann
The screen_info object was extended to support 64-bit lfb_base addresses in: ae2ee627dc87 ("efifb: Add support for 64-bit frame buffer addresses") However, the x86 simple-framebuffer setup code never made use of it. Fix it to properly assemble and verify the lfb_base before advertising simple-framebuffer devices. In particular, this means that if VIDEO_CAPABILITY_64BIT_BASE is set, the screen_info->ext_lfb_base field contains the upper 32 bits of the actual lfb_base. Make sure the address is not 0 (i.e., unset) and does not overflow the physical address type. Signed-off-by: David Herrmann <dh.herrmann@gmail.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Matt Fleming <matt.fleming@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Tom Gundersen <teg@jklm.no> Link: http://lkml.kernel.org/r/20161115120158.15388-2-dh.herrmann@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
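A hedged sketch of assembling and checking the 64-bit base as described (the screen_info fields and VIDEO_CAPABILITY_64BIT_BASE are real; the helper name and check are illustrative):

    static bool simplefb_base_ok(const struct screen_info *si, u64 size)
    {
            u64 base = si->lfb_base;

            if (si->capabilities & VIDEO_CAPABILITY_64BIT_BASE)
                    base |= (u64)si->ext_lfb_base << 32;

            /* reject an unset base or a base + size that wraps around */
            return base != 0 && base + size >= base;
    }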
2016-11-16x86/mcheck: Move CPU_DEAD to hotplug state machineSebastian Andrzej Siewior
This moves the last piece of the old hotplug notifier code in MCE to the new hotplug state machine. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Acked-by: Borislav Petkov <bp@alien8.de> Cc: Tony Luck <tony.luck@intel.com> Cc: rt@linutronix.de Cc: linux-edac@vger.kernel.org Link: http://lkml.kernel.org/r/20161110174447.11848-8-bigeasy@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-11-16x86/mcheck: Move CPU_ONLINE and CPU_DOWN_PREPARE to hotplug state machineSebastian Andrzej Siewior
The CPU_ONLINE and CPU_DOWN_PREPARE callbacks look fully symmetrical and can be moved to the hotplug state machine. On a failure during registration the teardown callback (mce_cpu_pre_down()) is invoked, so there should be no timer around and thus no need to keep the notifier installed (according to the comment, that was the reason the notifier was registered despite errors). Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Acked-by: Borislav Petkov <bp@alien8.de> Cc: Tony Luck <tony.luck@intel.com> Cc: rt@linutronix.de Cc: linux-edac@vger.kernel.org Link: http://lkml.kernel.org/r/20161110174447.11848-7-bigeasy@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
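With the conversion, registration follows the usual hotplug state machine pattern; a hedged sketch (the state constant and exact callback names are illustrative, not the committed code):

    ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "x86/mce:online",
                            mce_cpu_online, mce_cpu_pre_down);
    if (ret < 0)
            return ret;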
2016-11-16x86/mcheck: Reorganize the hotplug callbacksSebastian Andrzej Siewior
Initially I wanted to remove mcheck_cpu_init() from identify_cpu() and let it become an independent early hotplug callback. The main problem here was that the init on the boot CPU may happen too late (device_initcall_sync(mcheck_init_device)) and nobody wanted to risk receiving an MCE event at boot time leading to a shutdown (if the MCE feature is not yet enabled). Here is attempt two: the timing stays as-is but the ordering of the functions is changed: - mcheck_cpu_init() (which is run from identify_cpu()) will set up the timer struct but won't fire the timer. This is moved to CPU_ONLINE since its cleanup part is in CPU_DOWN_PREPARE. So if it is okay to stop the timer early in the shutdown phase, it should be okay to start it late in the bring-up phase. - CPU_DOWN_PREPARE disables the MCE feature flags for !INTEL CPUs in mce_disable_cpu(). If a failure occurs it would be re-enabled on all vendor CPUs (including Intel, where it was not disabled during shutdown). To keep this working I am moving it to CPU_ONLINE. smp_call_function_single() is dropped because the notifier nowadays runs on the target CPU. - CPU_ONLINE is invoking mce_device_create() + mce_threshold_create_device() but its cleanup part is in CPU_DEAD (mce_threshold_remove_device() and mce_device_remove()). In order to keep this symmetrical I am moving the cleanup from CPU_DEAD to CPU_DOWN_PREPARE. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Acked-by: Borislav Petkov <bp@alien8.de> Cc: Tony Luck <tony.luck@intel.com> Cc: rt@linutronix.de Cc: linux-edac@vger.kernel.org Link: http://lkml.kernel.org/r/20161110174447.11848-6-bigeasy@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-11-16x86/mcheck: Split threshold_cpu_callback into two callbacksSebastian Andrzej Siewior
The threshold_cpu_callback callback looks like a notifier and its arguments are almost the same. Split this out and have one ONLINE and one DEAD callback. This will come in handy later once the main code gets changed to use the callback mechanism. Also, handle the threshold_cpu_callback_online() return value so we don't continue if the function fails. Boris Petkov removed the callback pointer and replaced it with proper functions. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Acked-by: Borislav Petkov <bp@alien8.de> Cc: Tony Luck <tony.luck@intel.com> Cc: rt@linutronix.de Cc: linux-edac@vger.kernel.org Link: http://lkml.kernel.org/r/20161110174447.11848-5-bigeasy@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-11-16x86/mcheck: Be prepared for a rollback to the ONLINE stateSebastian Andrzej Siewior
If we try a CPU down and fail in the middle then we roll back to the online state. This means we would perform CPU_ONLINE / mce_device_create() without invoking CPU_DEAD / mce_device_remove() for the cleanup of what was allocated in CPU_ONLINE. Be prepared for this and don't allocate the struct if we have it already. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Acked-by: Borislav Petkov <bp@alien8.de> Cc: Tony Luck <tony.luck@intel.com> Cc: rt@linutronix.de Cc: linux-edac@vger.kernel.org Link: http://lkml.kernel.org/r/20161110174447.11848-4-bigeasy@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
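The guard boils down to making the online callback idempotent; a hedged sketch (names are illustrative, based on the description above):

    static int mce_device_online_sketch(unsigned int cpu)
    {
            /* A failed offline rolls back to ONLINE, so the device may already exist. */
            if (per_cpu(mce_device, cpu))
                    return 0;

            return mce_device_create(cpu);
    }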
2016-11-16x86/mcheck: Explicit cleanup on failure in mce_amdSebastian Andrzej Siewior
If the ONLINE callback fails, the driver does not do any cleanup right away; instead it waits until the DEAD stage to do it. Yes, it waits. Since we don't pass the error code back to the caller, no one knows. Do the cleanup right away so it does not look like a leak. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Acked-by: Borislav Petkov <bp@alien8.de> Cc: Tony Luck <tony.luck@intel.com> Cc: rt@linutronix.de Cc: linux-edac@vger.kernel.org Link: http://lkml.kernel.org/r/20161110174447.11848-3-bigeasy@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-11-16x86/mcheck: Move threshold_create_device()Sebastian Andrzej Siewior
Move the threshold_create_device() so it can use threshold_remove_device() without a forward declaration. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Acked-by: Borislav Petkov <bp@alien8.de> Cc: Tony Luck <tony.luck@intel.com> Cc: rt@linutronix.de Cc: linux-edac@vger.kernel.org Link: http://lkml.kernel.org/r/20161110174447.11848-2-bigeasy@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-11-16posix-timers: Make them configurableNicolas Pitre
Some embedded systems have no use for them. This removes about 25KB from the kernel binary size when configured out. Corresponding syscalls are routed to a stub logging the attempt to use those syscalls, which should be enough of a clue if they were disabled without proper consideration. They are: timer_create, timer_gettime, timer_getoverrun, timer_settime, timer_delete, clock_adjtime, setitimer, getitimer, alarm. The clock_settime, clock_gettime, clock_getres and clock_nanosleep syscalls are replaced by simple wrappers compatible with CLOCK_REALTIME, CLOCK_MONOTONIC and CLOCK_BOOTTIME only, which should cover the vast majority of use cases with very little code. Signed-off-by: Nicolas Pitre <nico@linaro.org> Acked-by: Richard Cochran <richardcochran@gmail.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: John Stultz <john.stultz@linaro.org> Reviewed-by: Josh Triplett <josh@joshtriplett.org> Cc: Paul Bolle <pebolle@tiscali.nl> Cc: linux-kbuild@vger.kernel.org Cc: netdev@vger.kernel.org Cc: Michal Marek <mmarek@suse.com> Cc: Edward Cree <ecree@solarflare.com> Link: http://lkml.kernel.org/r/1478841010-28605-7-git-send-email-nicolas.pitre@linaro.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
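The stub behaviour described above can be pictured as follows (a sketch only; the real stubs are wired up per syscall):

    static long posix_timer_syscall_stub(void)
    {
            pr_err_once("process %d (%s) attempted a POSIX timer syscall while CONFIG_POSIX_TIMERS is not set\n",
                        current->pid, current->comm);
            return -ENOSYS;
    }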
2016-11-16posix_cpu_timers: Move the add_device_randomness() call to a proper placeNicolas Pitre
There is no logical relation between add_device_randomness() and posix_cpu_timers_exit(). Let's move the former to where the latter is called. This way, when posix-cpu-timers.c is compiled out, there is no need to worry about losing a call to add_device_randomness(). Signed-off-by: Nicolas Pitre <nico@linaro.org> Acked-by: John Stultz <john.stultz@linaro.org> Cc: Paul Bolle <pebolle@tiscali.nl> Cc: linux-kbuild@vger.kernel.org Cc: netdev@vger.kernel.org Cc: Richard Cochran <richardcochran@gmail.com> Cc: Josh Triplett <josh@joshtriplett.org> Cc: Michal Marek <mmarek@suse.com> Cc: Edward Cree <ecree@solarflare.com> Link: http://lkml.kernel.org/r/1478841010-28605-6-git-send-email-nicolas.pitre@linaro.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-11-16timer: Move sys_alarm from timer.c to itimer.cNicolas Pitre
Move the only user of alarm_setitimer to itimer.c where it is defined. This allows for making alarm_setitimer static, and dropping it from the build when __ARCH_WANT_SYS_ALARM is not defined. Signed-off-by: Nicolas Pitre <nico@linaro.org> Acked-by: John Stultz <john.stultz@linaro.org> Cc: Paul Bolle <pebolle@tiscali.nl> Cc: linux-kbuild@vger.kernel.org Cc: netdev@vger.kernel.org Cc: Richard Cochran <richardcochran@gmail.com> Cc: Josh Triplett <josh@joshtriplett.org> Cc: Michal Marek <mmarek@suse.com> Cc: Edward Cree <ecree@solarflare.com> Link: http://lkml.kernel.org/r/1478841010-28605-5-git-send-email-nicolas.pitre@linaro.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-11-16ptp_clock: Allow for it to be optionalNicolas Pitre
In order to break the hard dependency between the PTP clock subsystem and ethernet drivers capable of being clock providers, this patch provides simple PTP stub functions to allow linkage of those drivers into the kernel even when the PTP subsystem is configured out. Drivers must be ready to accept NULL from ptp_clock_register() in that case. And to make it possible for PTP to be configured out, the select statement in those drivers' Kconfig menu entries is converted to the new "imply" statement. This way the PTP subsystem may have Kconfig dependencies of its own, such as POSIX_TIMERS, without having to make those ethernet drivers unavailable if POSIX timers are configured out. And when support for POSIX timers is selected again, the default config option for PTP clock support will automatically be adjusted accordingly. The pch_gbe driver is a bit special as it relies on extra code in drivers/ptp/ptp_pch.c. Therefore we let the make process descend into drivers/ptp/ even if PTP_1588_CLOCK is unselected. Signed-off-by: Nicolas Pitre <nico@linaro.org> Acked-by: Richard Cochran <richardcochran@gmail.com> Acked-by: Edward Cree <ecree@solarflare.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: John Stultz <john.stultz@linaro.org> Reviewed-by: Josh Triplett <josh@joshtriplett.org> Cc: Paul Bolle <pebolle@tiscali.nl> Cc: linux-kbuild@vger.kernel.org Cc: netdev@vger.kernel.org Cc: Michal Marek <mmarek@suse.com> Link: http://lkml.kernel.org/r/1478841010-28605-4-git-send-email-nicolas.pitre@linaro.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
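When the PTP subsystem is configured out, the stubs boil down to returning NULL, e.g. (sketch of one such static inline; the real set covers the whole PTP kernel API):

    static inline struct ptp_clock *
    ptp_clock_register(struct ptp_clock_info *info, struct device *parent)
    {
            return NULL;
    }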
2016-11-16Kconfig: Regenerate *.c_shipped files after previous changesNicolas Pitre
Signed-off-by: Nicolas Pitre <nico@linaro.org> Cc: Paul Bolle <pebolle@tiscali.nl> Cc: linux-kbuild@vger.kernel.org Cc: netdev@vger.kernel.org Cc: Richard Cochran <richardcochran@gmail.com> Cc: Josh Triplett <josh@joshtriplett.org> Cc: Michal Marek <mmarek@suse.com> Cc: John Stultz <john.stultz@linaro.org> Cc: Edward Cree <ecree@solarflare.com> Link: http://lkml.kernel.org/r/1478841010-28605-3-git-send-email-nicolas.pitre@linaro.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-11-16Kconfig: Introduce the "imply" keywordNicolas Pitre
The "imply" keyword is a weak version of "select" where the target config symbol can still be turned off, avoiding those pitfalls that come with the "select" keyword. This is useful e.g. with multiple drivers that want to indicate their ability to hook into a secondary subsystem while allowing the user to configure that subsystem out without also having to unset these drivers. Currently, the same effect can almost be achieved with: config DRIVER_A tristate config DRIVER_B tristate config DRIVER_C tristate config DRIVER_D tristate [...] config SUBSYSTEM_X tristate default DRIVER_A || DRIVER_B || DRIVER_C || DRIVER_D || [...] This is unwieldy to maintain especially with a large number of drivers. Furthermore, there is no easy way to restrict the choice for SUBSYSTEM_X to y or n, excluding m, when some drivers are built-in. The "select" keyword allows for excluding m, but it excludes n as well. Hence this "imply" keyword. The above becomes: config DRIVER_A tristate imply SUBSYSTEM_X config DRIVER_B tristate imply SUBSYSTEM_X [...] config SUBSYSTEM_X tristate This is much cleaner, and way more flexible than "select". SUBSYSTEM_X can still be configured out, and it can be set as a module when none of the drivers are configured in or all of them are modular. Signed-off-by: Nicolas Pitre <nico@linaro.org> Acked-by: Richard Cochran <richardcochran@gmail.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: John Stultz <john.stultz@linaro.org> Reviewed-by: Josh Triplett <josh@joshtriplett.org> Cc: Paul Bolle <pebolle@tiscali.nl> Cc: linux-kbuild@vger.kernel.org Cc: netdev@vger.kernel.org Cc: Michal Marek <mmarek@suse.com> Cc: Edward Cree <ecree@solarflare.com> Link: http://lkml.kernel.org/r/1478841010-28605-2-git-send-email-nicolas.pitre@linaro.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-11-16drm/i915: Assume non-DP++ port if dvo_port is HDMI and there's no AUX ch specified in the VBTVille Syrjälä
My heuristic for detecting type 1 DVI DP++ adaptors based on the VBT port information apparently didn't survive the reality of buggy VBTs. In this particular case we have a machine with a native HDMI port, but the VBT tells us it's a DP++ port based on its capabilities. The dvo_port information in VBT does claim that we're dealing with a HDMI port though, but we have other machines which do the same even when they actually have DP++ ports. So that piece of information alone isn't sufficient to tell the two apart. After staring at a bunch of VBTs from various machines, I have to conclude that the only other semi-reliable clue we can use is the presence of the AUX channel in the VBT. On this particular machine the AUX channel is specified as zero, whereas some of the other machines which listed the DP++ port as HDMI have a non-zero AUX channel. I've also seen VBTs which have dvo_port as DP but have a zero AUX channel. I believe those we need to treat as DP ports, so we'll limit the AUX channel check to just the cases where dvo_port is HDMI. If we encounter any more serious failures with this heuristic I think we'll have to throw it out entirely. But that could mean that there is a risk of type 1 DVI dongle users getting greeted by a black screen, so I'd rather not go there unless absolutely necessary. v2: Remove the duplicate PORT_A check (Daniel) Fix some typos in the commit message Cc: Daniel Otero <daniel.otero@outlook.com> Cc: stable@vger.kernel.org Tested-by: Daniel Otero <daniel.otero@outlook.com> Fixes: d61992565bd3 ("drm/i915: Determine DP++ type 1 DVI adaptor presence based on VBT") Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=97994 Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1478884464-14251-1-git-send-email-ville.syrjala@linux.intel.com Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> (cherry picked from commit 7a17995a3dc8613f778a9e2fd20e870f17789544) Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2016-11-15Merge branch 'alx-multiqueue-support'David S. Miller
Tobias Regnery says: ==================== alx: add multi queue support This patchset lays the groundwork for multi queue support in the alx driver and enables multi queue support for the tx path by default. The hardware supports up to 4 tx queues. Benefits are better utilization of multi-core CPUs and the use of MSI-X support by default, which splits the handling of rx / tx and misc other interrupts. The rx path is a little bit harder because apparently (based on the limited information from the downstream driver) the hardware supports up to 8 RSS queues but only has one hardware descriptor ring on the rx side. So the rx path will be part of another patchset. Tested on my AR8161 ethernet adapter with different tests: - there are no regressions observed during my daily usage - iperf tcp and udp tests show no performance regressions - netperf TCP_RR and UDP_RR show a slight performance increase of about 1-2% with this patchset applied This work is based on the downstream driver at github.com/qca/alx Changes in V2: - drop unneeded casts in alx_alloc_rx_ring (Patch 1) - add additional information about testing and benefit to the changelog ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2016-11-15alx: enable multiple tx queuesTobias Regnery
Enable multiple tx queues by default based on the number of online cpus. The hardware supports up to four tx queues. Based on the downstream driver at github.com/qca/alx Signed-off-by: Tobias Regnery <tobias.regnery@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-11-15alx: enable msi-x interrupts by defaultTobias Regnery
Remove the module parameter to enable msi-x support and enable msi-x interrupts unconditionally by default. This is a preparatory step to enable multi queue support by default, because this is only working with msi-x interrupts. Signed-off-by: Tobias Regnery <tobias.regnery@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-11-15alx: prepare tx path for multi queue supportTobias Regnery
This patch prepares the tx path to send data on multiple tx queues. It introduces per-queue register addresses and uses them in the alx_tx_queue structs. There are new helper functions for the queue mapping in the tx path. Based on the downstream driver at github.com/qca/alx Signed-off-by: Tobias Regnery <tobias.regnery@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
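A hedged sketch of the kind of queue-mapping helper this patch introduces (field and function names are assumptions based on the description):

    static struct alx_tx_queue *alx_tx_queue_mapping(struct alx_priv *alx,
                                                     struct sk_buff *skb)
    {
            unsigned int index = skb_get_queue_mapping(skb);

            if (index >= alx->num_txq)
                    index = index % alx->num_txq;

            return alx->qnapi[index]->txq;
    }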
2016-11-15alx: prepare resource allocation for multi queue supportTobias Regnery
Allocate, initialise and free alx_tx_queue structs based on the number of alx_napi structures. Also increase the size of the descriptor memory based on the number of tx queues in use. Based on the downstream driver at github.com/qca/alx Signed-off-by: Tobias Regnery <tobias.regnery@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-11-15alx: prepare interrupt functions for multiple queuesTobias Regnery
Extend the interrupt bringup code and the interrupt handler for msi-x interrupts in order to handle multiple queues. We must change the poll function because with multiple queues it is possible that an alx_napi structure has only a tx or only a rx queue pointer. Based on the downstream driver at github.com/qca/alx Signed-off-by: Tobias Regnery <tobias.regnery@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-11-15alx: switch to per queue data structuresTobias Regnery
Remove the tx and rx queue structures from the alx_priv structure and switch everything over to the queue pointers in the alx_napi structure. Based on the downstream driver at github.com/qca/alx Signed-off-by: Tobias Regnery <tobias.regnery@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-11-15alx: add ability to allocate and free alx_napi structuresTobias Regnery
Add new functions to allocate and free the alx_napi structures and use them in __alx_open and __alx_stop. We only allocate one of these structures for now, as the rest of the driver is not yet ready for multiple queues. We switch over the setup of the interrupt mask and the call to netif_napi_add to the new function because we must adjust these later on a per queue basis. Based on the downstream driver at github.com/qca/alx Signed-off-by: Tobias Regnery <tobias.regnery@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-11-15alx: extend data structures for multi queue supportTobias Regnery
Extend the driver data structures to be able to handle multiple queues. Based on the downstream driver at github.com/qca/alx Signed-off-by: Tobias Regnery <tobias.regnery@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-11-15alx: refactor descriptor allocationTobias Regnery
Split the allocation of descriptor memory and the buffer allocation into a tx and rx function. This is in preparation for multiple queues where we need to iterate over the new functions. While at it drop the unneeded casting on the rx side. Signed-off-by: Tobias Regnery <tobias.regnery@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-11-15rtnetlink: fix rtnl message size computation for XDPSabrina Dubroca
rtnl_xdp_size() only considers the size of the actual payload attribute, and misses the space taken by the attribute used for nesting (IFLA_XDP). Fixes: d1fdd9138682 ("rtnl: add option for setting link xdp prog") Signed-off-by: Sabrina Dubroca <sd@queasysnail.net> Reviewed-by: Brenden Blanco <bblanco@plumgrid.com> Signed-off-by: David S. Miller <davem@davemloft.net>
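The corrected accounting adds the nest header to the payload, roughly like this (nla_total_size() is the standard netlink helper; the breakdown is illustrative):

    static size_t rtnl_xdp_size_sketch(void)
    {
            return nla_total_size(0) +      /* nest: IFLA_XDP */
                   nla_total_size(1);       /* IFLA_XDP_ATTACHED (u8) */
    }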
2016-11-15rtnetlink: fix rtnl_vfinfo_sizeSabrina Dubroca
The size reported by rtnl_vfinfo_size doesn't match the space used by rtnl_fill_vfinfo. rtnl_vfinfo_size currently doesn't account for the nest attributes used by statistics (added in commit 3b766cd83232), nor for struct ifla_vf_tx_rate (since commit ed616689a3d9, which added ifla_vf_rate to the dump without removing ifla_vf_tx_rate, but replaced ifla_vf_tx_rate with ifla_vf_rate in the size computation). Fixes: 3b766cd83232 ("net/core: Add reading VF statistics through the PF netdevice") Fixes: ed616689a3d9 ("net-next:v4: Add support to configure SR-IOV VF minimum and maximum Tx rate through ip tool") Signed-off-by: Sabrina Dubroca <sd@queasysnail.net> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-11-15Merge branch 'dpaa_eth-next'David S. Miller
Madalin Bucur says: ==================== dpaa_eth: Add the QorIQ DPAA Ethernet driver This patch series adds the Ethernet driver for the Freescale QorIQ Data Path Acceleration Architecture (DPAA). This version includes changes following the feedback received on previous versions from Eric Dumazet, Bob Cochran, Joe Perches, Paul Bolle, Joakim Tjernlund, Scott Wood, David Miller - thank you. Together with the driver a managed version of alloc_percpu is provided that simplifies the release of per-CPU memory. The Freescale DPAA architecture consists of a series of hardware blocks that support the Ethernet connectivity. The Ethernet driver depends upon the following drivers that are currently in the Linux kernel: - Peripheral Access Memory Unit (PAMU) drivers/iommu/fsl_* - Frame Manager (FMan) added in v4.4 drivers/net/ethernet/freescale/fman - Queue Manager (QMan), Buffer Manager (BMan) added in v4.9-rc1 drivers/soc/fsl/qbman [ASCII diagram omitted: dpaa_eth interfaces mapping to FMan MACs - Ports, MACs (dtsec/tgec/memac), FMan drivers and FMan HW blocks] [ASCII diagram omitted: dpaa_eth relation to QMan and FMan - Rx default/error FQs, Tx confirmation FQs, Tx FQs, QMan/BMan drivers and FMan QMI/BMI] where the acronyms used above (and in the code) are: DPAA = Data Path Acceleration Architecture FMan = DPAA Frame Manager QMan = DPAA Queue Manager BMan = DPAA Buffers Manager QMI = QMan interface in FMan BMI = BMan interface in FMan FMan SP = FMan Storage Profiles MURAM = Multi-user RAM in FMan FQ = QMan Frame Queue Rx Dfl FQ = default reception FQ Rx Err FQ = Rx error frames FQ Tx Cnf FQ = Tx confirmation FQ Tx FQs = transmission frame queues dtsec = datapath three speed Ethernet controller (10/100/1000 Mbps) tgec = ten gigabit Ethernet controller (10 Gbps) memac = multirate Ethernet MAC (10/100/1000/10000) Changes from v7: - remove the debug option to use a common buffer pool for all the interfaces Changes from v6: - fixed an issue on an error path in dpaa_set_mac_address() - removed NDO operation definitions that were not needed - sorted the local variable declarations - cleaned up a few checkpatch checks - removed friendly network interface naming code Changes from v5: - adapt to the latest Q/BMan drivers API - use build_skb() on Rx path instead of buffer pool refill path - proper support for multiple buffer pools - align function, variable names, code cleanup - driver file structure cleanup Changes from v4: - addressed feedback from Scott Wood and Joe Perches - fixed spelling - fixed leak of uninitialized stack to userspace - fix prints - replace raw_cpu_ptr() with this_cpu_ptr() - remove _s from the end of structure names - remove underscores at start of functions, goto labels - remove likely in error paths - use container_of() instead of open casts - remove priv from the driver name - move return type on same line with function name - drop DPA_READ_SKB_PTR/DPA_WRITE_SKB_PTR Changes from v3: - removed bogus delay and comment in .ndo_stop implementation - addressed minor issues reported by David Miller Changes from v2: - removed debugfs, moved exports to ethtool statistics - removed congestion groups Kconfig params Changes from v1: - bpool level Kconfig options removed - print format using pr_fmt, cleaned up prints - __hot/__cold removed - gratuitous unlikely() removed - code style aligned, consistent spacing for declarations - comment formatting ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2016-11-15arch/powerpc: Enable dpaa_ethMadalin Bucur
Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-11-15arch/powerpc: Enable FSL_FMANMadalin Bucur
Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-11-15arch/powerpc: Enable FSL_PAMUMadalin Bucur
Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-11-15dpaa_eth: add trace pointsMadalin Bucur
Add trace points on the hot processing path. Signed-off-by: Ruxandra Ioana Radulescu <ruxandra.radulescu@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-11-15dpaa_eth: add sysfs exportsMadalin Bucur
Export Frame Queue and Buffer Pool IDs through sysfs. Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-11-15dpaa_eth: add ethtool statisticsMadalin Bucur
Add a series of counters to be exported through ethtool: - add detailed counters for reception errors; - add detailed counters for QMan enqueue reject events; - count the number of fragmented skbs received from the stack; - count all frames received on the Tx confirmation path; - add congestion group statistics; - count the number of interrupts for each CPU. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-11-15dpaa_eth: add ethtool functionalityMadalin Bucur
Add support for basic ethtool operations. Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-11-15dpaa_eth: add support for DPAA EthernetMadalin Bucur
This introduces the Freescale Data Path Acceleration Architecture (DPAA) Ethernet driver (dpaa_eth) that builds upon the DPAA QMan, BMan, PAMU and FMan drivers to deliver Ethernet connectivity on the Freescale DPAA QorIQ platforms. Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>