2015-06-19sched/deadline: Remove needless parameter in dl_runtime_exceeded()Zhiqiang Zhang
Since commit 269ad8015a6b ("sched/deadline: Avoid double-accounting in case of missed deadlines"), parameter 'rq' is no longer used, so remove it. Signed-off-by: Zhiqiang Zhang <zhangzhiqiang.zhang@huawei.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: <juri.lelli@gmail.com> Cc: <luca.abeni@unitn.it> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1434338120-43773-1-git-send-email-zhangzhiqiang.zhang@huawei.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
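A minimal sketch of the resulting function; the body is an assumption based on the changelog, not a verbatim copy of kernel/sched/deadline.c:

    static int dl_runtime_exceeded(struct sched_dl_entity *dl_se)
    {
            /* after 269ad8015a6b only the entity's remaining runtime matters,
             * so the 'rq' argument has nothing left to do */
            return (dl_se->runtime <= 0);
    }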
2015-06-19sched: Remove superfluous resetting of the p->dl_throttled flagWanpeng Li
Resetting the p->dl_throttled flag in rt_mutex_setprio() (for a task that is going to be boosted) is superfluous, as the natural place to do so is in replenish_dl_entity(). If the task was on the runqueue and it is boosted by a DL task, it will be enqueued back with ENQUEUE_REPLENISH flag set, which can guarantee that dl_throttled is reset in replenish_dl_entity(). This patch drops the resetting of throttled status in function rt_mutex_setprio(). Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Juri Lelli <juri.lelli@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1431496867-4194-6-git-send-email-wanpeng.li@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-06-19sched/deadline: Drop duplicate init_sched_dl_class() declarationWanpeng Li
There are two init_sched_dl_class() declarations, this patch drops the duplicate. Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Juri Lelli <juri.lelli@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1431496867-4194-5-git-send-email-wanpeng.li@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-06-19sched/deadline: Reduce rq lock contention by eliminating locking of non-feasible targetWanpeng Li
This patch adds a check that prevents futile attempts to move DL tasks to a CPU with active tasks of equal or earlier deadline. This mirrors the behavior of commit 80e3d87b2c55 ("sched/rt: Reduce rq lock contention by eliminating locking of non-feasible target") for the rt class. Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Juri Lelli <juri.lelli@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1431496867-4194-3-git-send-email-wanpeng.li@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
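A sketch of the kind of feasibility check described, in the style of the rt-class change it mirrors; variable and field names are assumptions taken from the changelog, not a verbatim hunk:

    /*
     * Only pick the target CPU if our deadline is earlier than its current
     * earliest deadline (or it runs no deadline tasks at all); otherwise
     * taking its rq lock is futile.
     */
    if (target != -1 &&
        (dl_time_before(p->dl.deadline,
                        cpu_rq(target)->dl.earliest_dl.curr) ||
         cpu_rq(target)->dl.dl_nr_running == 0))
            best_cpu = target;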
2015-06-19sched/deadline: Make init_sched_dl_class() __initWanpeng Li
It's a bootstrap function, make init_sched_dl_class() __init. Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Juri Lelli <juri.lelli@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1431496867-4194-2-git-send-email-wanpeng.li@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-06-19sched/deadline: Optimize pull_dl_task()Wanpeng Li
pull_dl_task() uses pick_next_earliest_dl_task() to select a migration candidate; this is sub-optimal since the next earliest task -- as per the regular runqueue -- might not be migratable at all. This could result in iterating the entire runqueue looking for a task. Instead iterate the pushable queue -- this queue only contains tasks that have at least 2 cpus set in their cpus_allowed mask. Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com> [ Improved the changelog. ] Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Juri Lelli <juri.lelli@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1431496867-4194-1-git-send-email-wanpeng.li@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
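Roughly what iterating the pushable queue looks like; this is a sketch, and the helper and field names follow the deadline scheduler's naming conventions but should be treated as assumptions:

    static struct task_struct *pick_earliest_pushable_dl_task(struct rq *rq, int cpu)
    {
            struct rb_node *next_node = rq->dl.pushable_dl_tasks_leftmost;
            struct task_struct *p;

            /* every task in this tree has >= 2 CPUs in its cpus_allowed mask */
            while (next_node) {
                    p = rb_entry(next_node, struct task_struct, pushable_dl_tasks);
                    if (pick_dl_task(rq, p, cpu))
                            return p;
                    next_node = rb_next(next_node);
            }

            return NULL;
    }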
2015-06-19sched/preempt: Add static_key() to preempt_notifiersPeter Zijlstra
Avoid touching the curr->preempt_notifier cacheline when not needed. Provides a small improvement on pipe-bench:

  taskset 01 perf stat --repeat 10 -- perf bench sched pipe

before:

 Performance counter stats for 'perf bench sched pipe' (10 runs):

      12385.016204   task-clock (msec)        #    1.001 CPUs utilized            ( +- 0.34% )
         2,000,023   context-switches         #    0.161 M/sec                    ( +- 0.00% )
                 0   cpu-migrations           #    0.000 K/sec
               175   page-faults              #    0.014 K/sec                    ( +- 0.26% )
    41,376,162,250   cycles                   #    3.341 GHz                      ( +- 0.11% )
    17,389,139,321   stalled-cycles-frontend  #   42.03% frontend cycles idle     ( +- 0.25% )
   <not supported>   stalled-cycles-backend
    68,788,588,003   instructions             #    1.66 insns per cycle
                                              #    0.25 stalled cycles per insn   ( +- 0.02% )
    13,449,387,620   branches                 # 1085.940 M/sec                    ( +- 0.02% )
        20,880,690   branch-misses            #    0.16% of all branches          ( +- 0.98% )

      12.372646094 seconds time elapsed                                           ( +- 0.34% )

after:

 Performance counter stats for 'perf bench sched pipe' (10 runs):

      12180.936528   task-clock (msec)        #    1.001 CPUs utilized            ( +- 0.33% )
         2,000,077   context-switches         #    0.164 M/sec                    ( +- 0.00% )
                 0   cpu-migrations           #    0.000 K/sec
               174   page-faults              #    0.014 K/sec                    ( +- 0.27% )
    40,691,545,577   cycles                   #    3.341 GHz                      ( +- 0.06% )
    16,446,333,371   stalled-cycles-frontend  #   40.42% frontend cycles idle     ( +- 0.18% )
   <not supported>   stalled-cycles-backend
    68,570,100,387   instructions             #    1.69 insns per cycle
                                              #    0.24 stalled cycles per insn   ( +- 0.01% )
    13,389,740,014   branches                 # 1099.237 M/sec                    ( +- 0.01% )
        20,175,440   branch-misses            #    0.15% of all branches          ( +- 0.52% )

      12.169253010 seconds time elapsed                                           ( +- 0.33% )

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
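The guard itself is small; a sketch of its shape follows, with symbol names taken as assumptions from the changelog:

    static struct static_key preempt_notifier_key = STATIC_KEY_INIT_FALSE;

    void preempt_notifier_register(struct preempt_notifier *notifier)
    {
            static_key_slow_inc(&preempt_notifier_key);
            hlist_add_head(&notifier->link, &current->preempt_notifiers);
    }

    static void fire_sched_in_preempt_notifiers(struct task_struct *curr)
    {
            /* patched out to a NOP while no notifiers are registered, so the
             * common context-switch path never touches the notifier hlist */
            if (static_key_false(&preempt_notifier_key))
                    __fire_sched_in_preempt_notifiers(curr);
    }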
2015-06-19sched/preempt: Fix preempt notifiers documentation about hlist_del() within unsafe iterationMathieu Desnoyers
preempt_notifier_unregister() documents: "This is safe to call from within a preemption notifier." However, both fire_sched_in_preempt_notifiers() and fire_sched_out_preempt_notifiers() are using hlist_for_each_entry(), which is not safe against entry removal during iteration. Inspection of the KVM code does not reveal any use of preempt_notifier_unregister() within the preempt notifiers. Therefore, fix the comment. Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1431881590-1456-1-git-send-email-mathieu.desnoyers@efficios.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
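For reference, the difference the comment is about, shown as an illustrative fragment only (the fire_* helpers keep the plain iterator because nobody actually unregisters from inside a notifier):

    /* plain variant: reads the next pointer *after* the callback runs,
     * so the callback must not hlist_del() the current entry */
    hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
            notifier->ops->sched_in(notifier, raw_smp_processor_id());

    /* _safe variant: caches the next pointer first, tolerating removal
     * of the current entry at the cost of an extra temporary */
    hlist_for_each_entry_safe(notifier, tmp, &curr->preempt_notifiers, link)
            notifier->ops->sched_in(notifier, raw_smp_processor_id());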
2015-06-19sched/stop_machine: Fix deadlock between multiple stop_two_cpus()Peter Zijlstra
Jiri reported a machine stuck in multi_cpu_stop() with migrate_swap_stop() as function and with the following src,dst cpu pairs: {11, 4} {13, 11} { 4, 13} 4 11 13 cpuM: queue(4 ,13) *Ma cpuN: queue(13,11) *N Na *M Mb cpuO: queue(11, 4) *O Oa *Nb *Ob Where *X denotes the cpu running the queueing of cpu-X and X[ab] denotes the first/second queued work. You'll observe the top of the workqueue for each cpu: 4,11,13 to be work from cpus: M, O, N resp. IOW. deadlock. Do away with the queueing trickery and introduce lg_double_lock() to lock both CPUs and fully serialize the stop_two_cpus() callers instead of the partial (and buggy) serialization we have now. Reported-by: Jiri Olsa <jolsa@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20150605153023.GH19282@twins.programming.kicks-ass.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
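A sketch of what lg_double_lock() amounts to (an approximation of the idea, not the exact upstream code): always acquire the two per-CPU locks in ascending CPU order, so concurrent stop_two_cpus() callers serialize against each other instead of deadlocking.

    void lg_double_lock(struct lglock *lg, int cpu1, int cpu2)
    {
            BUG_ON(cpu1 == cpu2);

            /* fixed lock order: lowest CPU number first */
            if (cpu2 < cpu1)
                    swap(cpu1, cpu2);

            preempt_disable();
            arch_spin_lock(per_cpu_ptr(lg->lock, cpu1));
            arch_spin_lock(per_cpu_ptr(lg->lock, cpu2));
    }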
2015-06-19sched/debug: Add sum_sleep_runtime to /proc/<pid>/schedSrikar Dronamraju
When CONFIG_SCHEDSTATS is enabled, /proc/<pid>/sched prints almost all sched statistics except sum_sleep_runtime. Since sum_sleep_runtime is useful information to collect, add it to /proc/<pid>/sched. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1433751041-11724-4-git-send-email-srikar@linux.vnet.ibm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
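The addition itself is essentially a one-liner in proc_sched_show_task(); a sketch follows, and whether the plain or the nanosecond-formatting print macro is used for this field is an assumption:

    #ifdef CONFIG_SCHEDSTATS
            PN(se.statistics.sum_sleep_runtime);    /* alongside the other sleep/wait statistics */
    #endif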
2015-06-19sched/debug: Replace vruntime with wait_sum in /proc/sched_debugSrikar Dronamraju
Within runnable tasks in /proc/sched_debug, vruntime is printed twice, once as tree-key and again as exec-runtime. Since exec-runtime isn't populated in !CONFIG_SCHEDSTATS, use this field to print wait_sum. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1433751041-11724-3-git-send-email-srikar@linux.vnet.ibm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-06-19sched/debug: Properly format runnable tasks in /proc/sched_debugSrikar Dronamraju
With !CONFIG_SCHEDSTATS, the runnable-tasks section of /proc/sched_debug prints more columns than required. Fix this by printing only the appropriate columns. While at it, print sum_exec_runtime, since this information is available even in the !CONFIG_SCHEDSTATS case. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1433751041-11724-2-git-send-email-srikar@linux.vnet.ibm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-06-19locking/lockdep: Remove hard coded array size dependencyGeorge Beshers
An apparent oversight left a hardcoded '4' in place when LOCKSTAT_POINTS was introduced. The contention_point[] and contending_point[] arrays in the structs lock_class and lock_class_stats need to be the same size for the loops in lock_stats() to be correct. This patch allows LOCKSTAT_POINTS to be changed without affecting the correctness of the code. Signed-off-by: George Beshers <gbeshers@sgi.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
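A sketch of the resulting structure (abridged): sizing both arrays with the same constant keeps the aggregation loops in lock_stats() in bounds no matter what LOCKSTAT_POINTS is set to.

    struct lock_class_stats {
            unsigned long   contention_point[LOCKSTAT_POINTS];      /* was a hard-coded [4] */
            unsigned long   contending_point[LOCKSTAT_POINTS];
            /* ... remaining counters unchanged ... */
    };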
2015-06-19locking/qrwlock: Don't contend with readers when setting _QW_WAITINGWaiman Long
The current cmpxchg() loop in setting the _QW_WAITING flag for writers in queue_write_lock_slowpath() will contend with incoming readers, causing possibly extra cmpxchg() operations that are wasteful. This patch changes the code to do a byte cmpxchg() to eliminate contention with new readers. A multithreaded microbenchmark running a 5M read_lock/write_lock loop on an 8-socket, 80-core Westmere-EX machine running a 4.0-based kernel with the qspinlock patch has the following execution times (in ms) with and without the patch:

With R:W ratio = 5:1

  Threads    w/o patch    with patch    % change
  -------    ---------    ----------    --------
     2           990          895         -9.6%
     3          2136         1912        -10.5%
     4          3166         2830        -10.6%
     5          3953         3629         -8.2%
     6          4628         4405         -4.8%
     7          5344         5197         -2.8%
     8          6065         6004         -1.0%
     9          6826         6811         -0.2%
    10          7599         7599          0.0%
    15          9757         9766         +0.1%
    20         13767        13817         +0.4%

With a small number of contending threads, this patch can improve locking performance by up to 10%. With more contending threads, however, the gain diminishes. Signed-off-by: Waiman Long <Waiman.Long@hp.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Douglas Hatch <doug.hatch@hp.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Scott J Norton <scott.norton@hp.com> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1433863153-30722-3-git-send-email-Waiman.Long@hp.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
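A sketch of the writer-side change; the struct layout shown is a simplified little-endian view and should be treated as an assumption:

    struct __qrwlock {
            union {
                    atomic_t cnts;
                    struct {
                            u8 wmode;       /* writer mode/wait flag */
                            u8 rcnts[3];    /* reader counts */
                    };
            };
            arch_spinlock_t lock;
    };

    /* in queue_write_lock_slowpath(): flag a pending writer by touching
     * only the writer byte, so readers bumping the reader count no longer
     * make the cmpxchg() fail */
    for (;;) {
            struct __qrwlock *l = (struct __qrwlock *)lock;

            if (!READ_ONCE(l->wmode) &&
                (cmpxchg(&l->wmode, 0, _QW_WAITING) == 0))
                    break;

            cpu_relax_lowlatency();
    }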
2015-06-19perf/x86: Honor the architectural performance monitoring versionPalik, Imre
Architectural performance monitoring, version 1, doesn't support fixed counters. Currently, even if a hypervisor advertises support for architectural performance monitoring version 1, perf may still try to use the fixed counters, as the constraints are set up based on the CPU model. This patch ensures that perf honors the architectural performance monitoring version returned by CPUID, and it only uses the fixed counters for version 2 and above. (Some of the ideas in this patch came from Peter Zijlstra.) Signed-off-by: Imre Palik <imrep@amazon.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Anthony Liguori <aliguori@amazon.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1433767609-1039-1-git-send-email-imrep.amz@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
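The guard is essentially the following (context abridged, reconstructed from the changelog rather than the exact hunk):

    /*
     * Architectural perfmon v1 has no fixed counters; only trust the
     * CPUID-reported fixed-counter information from v2 onwards.
     */
    if (version > 1)
            x86_pmu.num_counters_fixed =
                    max((int)edx.split.num_counters_fixed, 3);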
2015-06-19perf/x86/intel: Fix PMI handling for Intel PTAlexander Shishkin
Intel PT is a separate PMU and it is not using any of the x86_pmu code paths, which means in particular that the active_events counter remains intact when new PT events are created. However, PT uses the generic x86_pmu PMI handler for its PMI handling needs. The problem here is that the latter checks active_events and in case of it being zero, exits without calling the actual x86_pmu.handle_nmi(), which results in unknown NMI errors and massive data loss for PT. The effect is not visible if there are other perf events in the system at the same time that keep active_events counter non-zero, for instance if the NMI watchdog is running, so one needs to disable it to reproduce the problem. At the same time, the active_events counter besides doing what the name suggests also implicitly serves as a PMC hardware and DS area reference counter. This patch adds a separate reference counter for the PMC hardware, leaving active_events for actually counting the events and makes sure it also counts PT and BTS events. Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: acme@infradead.org Cc: adrian.hunter@intel.com Link: http://lkml.kernel.org/r/87k2v92t0s.fsf@ashishki-desk.ger.corp.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-06-19perf/x86/intel/bts: Fix DS area sharing with x86_pmu eventsAlexander Shishkin
Currently, the intel_bts driver relies on the DS area allocated by the x86_pmu code in its event_init() path, which is a bug: creating a BTS event while no x86_pmu events are present results in a NULL pointer dereference. The same DS area is also used by PEBS sampling, which makes it quite a bit trickier to have a separate one for intel_bts' purposes. This patch makes intel_bts driver use the same DS allocation and reference counting code as x86_pmu to make sure it is always present when either intel_bts or x86_pmu need it. Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: acme@infradead.org Cc: adrian.hunter@intel.com Link: http://lkml.kernel.org/r/1434024837-9916-2-git-send-email-alexander.shishkin@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-06-19perf/x86: Add more Broadwell model numbersAndi Kleen
This patch adds additional model numbers for Broadwell to perf: support for Broadwell with Iris Pro (Intel Core i7-57xxC) and for the Broadwell Server Xeon. Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1434055942-28253-1-git-send-email-andi@firstfloor.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-06-19perf: Fix ring_buffer_attach() RCU sync, againOleg Nesterov
While looking for other users of get_state/cond_sync, I found ring_buffer_attach(), and it looks obviously buggy. Don't we need to ensure that we have "synchronize" _between_ list_del() and list_add()? IOW, suppose that ring_buffer_attach() is preempted right after get_state_synchronize_rcu() and the grace period completes before spin_lock(). In this case cond_synchronize_rcu() does nothing and we reuse ->rb_entry without waiting for a grace period in between. It also moves the ->rcu_pending check under "if (rb)", to make it more readable. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: dave@stgolabs.net Cc: der.herr@hofr.at Cc: josh@joshtriplett.org Cc: tj@kernel.org Fixes: b69cf53640da ("perf: Fix a race between ring_buffer_detach() and ring_buffer_attach()") Link: http://lkml.kernel.org/r/20150530200425.GA15748@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
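The get_state/cond_sync pattern being referred to, as a generic sketch with placeholder names (not the perf code itself): the cookie has to cover the window between unpublishing and republishing the entry.

    unsigned long cookie;

    list_del_rcu(&entry->node);             /* unpublish from the old list */
    cookie = get_state_synchronize_rcu();

    /* ... other work ... */

    /* wait for a grace period only if one hasn't already elapsed */
    cond_synchronize_rcu(cookie);
    list_add_rcu(&entry->node, &new_head);  /* now safe to reuse entry->node */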
2015-06-19MAINTAINERS: clarify drivers/crypto/nx/ file ownershipDan Streetman
Update the "IBM Power in-Nest Crypto Acceleration" and "IBM Power 842 compression accelerator" sections to specify the correct files. The "IBM Power in-Nest Crypto Acceleration" was originally the only NX driver, and so its section listed all drivers/crypto/nx/ files, but now there is also the 842 driver which has its own section. This lists explicitly what files are owned by the Crypto driver and which files are owned by the 842 compression driver. Signed-off-by: Dan Streetman <ddstreet@ieee.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-06-19crypto: nx - add LE support to pSeries platform driverDan Streetman
Add support to the nx-842-pseries.c driver for running in little endian mode. The pSeries platform NX 842 driver currently only works as big endian. This adds cpu_to_be*() and be*_to_cpu() in the appropriate places to work in LE mode also. Signed-off-by: Dan Streetman <ddstreet@ieee.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
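The typical shape of such conversions, with placeholder struct and field names rather than the driver's actual ones: values crossing the hypervisor/hardware boundary stay big-endian, while everything held in the driver stays CPU-endian.

    /* device/hypervisor -> CPU on the way in */
    u32 out_len = be32_to_cpu(hw_desc->out_len);

    /* CPU -> device/hypervisor on the way out */
    hw_desc->in_addr = cpu_to_be64(dma_addr);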
2015-06-19crypto: caam - Set last bit on src SG listHerbert Xu
The new aead_edesc_alloc left out the bit indicating the last entry on the source SG list. This patch fixes it. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-06-19crypto: caam - Reintroduce DESC_MAX_USED_BYTESHerbert Xu
I incorrectly removed DESC_MAX_USED_BYTES when enlarging the size of the shared descriptor buffers, thus making it four times larger than what is necessary. This patch restores the division by four calculation. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-06-19crypto: aead - Fix aead_instance struct sizeHerbert Xu
The struct aead_instance is meant to extend struct crypto_instance by incorporating the extra members of struct aead_alg. However, the current layout which is copied from shash/ahash does not specify the struct fully. In particular only aead_alg is present. For shash/ahash this works because users there add extra headroom to sizeof(struct crypto_instance) when allocating the instance. Unfortunately for aead, this bit was lost when the new aead_instance was added. Rather than fixing it like shash/ahash, this patch simply expands struct aead_instance to contain what is supposed to be there, i.e., adding struct crypto_instance. In order to not break existing AEAD users, this is done through an anonymous union. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
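A sketch of the resulting layout (abridged): the anonymous union gives aead_instance real crypto_instance storage while existing users keep addressing 'alg' unchanged.

    struct aead_instance {
            union {
                    struct {
                            char head[offsetof(struct aead_alg, base)];
                            struct crypto_instance base;
                    } s;
                    struct aead_alg alg;
            };
    };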
2015-06-19crypto: api - Add CRYPTO_MINALIGN_ATTR to struct crypto_algHerbert Xu
The struct crypto_alg is embedded into various type-specific structs such as aead_alg. This is then used as part of instances such as struct aead_instance. It is also embedded into the generic struct crypto_instance. In order to ensure that struct aead_instance can be converted to struct crypto_instance when necessary, we need to ensure that crypto_alg is aligned properly. This patch adds an alignment attribute to struct crypto_alg to ensure this. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-06-18Merge branch 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linuxLinus Torvalds
Pull i2c documentation fix from Wolfram Sang: "Here is a small documentation fix for I2C. We already had a user who unsuccessfully tried to get the new slave framework running with the currently broken example. So, before this happens again, I'd like to have this how-to-use section fixed for 4.1 already. So that no more hacking time is wasted."

* 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux:
  i2c: slave: fix the example how to instantiate from userspace
2015-06-18revert "cpumask: don't perform while loop in cpumask_next_and()"Andrew Morton
Revert commit 534b483a86e6 ("cpumask: don't perform while loop in cpumask_next_and()"). This was a minor optimization, but it puts a `struct cpumask' on the stack, which consumes too much stack space. Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Reported-by: Peter Zijlstra <peterz@infradead.org> Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Cc: Tejun Heo <tj@kernel.org> Cc: "David S. Miller" <davem@davemloft.net> Cc: Amir Vadai <amirv@mellanox.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-06-19Merge tag 'drm-intel-fixes-2015-06-18' of git://anongit.freedesktop.org/drm-intel into drm-fixesDave Airlie
one fix, one revert

* tag 'drm-intel-fixes-2015-06-18' of git://anongit.freedesktop.org/drm-intel:
  Revert "drm/i915: Don't skip request retirement if the active list is empty"
  drm/i915: Always reset vma->ggtt_view.pages cache on unbinding
2015-06-19Merge branch 'drm-fixes-4.1' of git://people.freedesktop.org/~deathsimple/linux into drm-fixesDave Airlie
two radeon fixes: one MST fix and one query addition, destined for stable and to fix a regression

* 'drm-fixes-4.1' of git://people.freedesktop.org/~deathsimple/linux:
  drm/radeon: don't probe MST on hw we don't support it on
  drm/radeon: Add RADEON_INFO_VA_UNMAP_WORKING query
2015-06-18Merge branches 'pci/host-xgene' and 'pci/hotplug' into nextBjorn Helgaas
* pci/host-xgene:
  PCI: xgene: Allow config access to Root Port even when link is down
  PCI: xgene: Disable Configuration Request Retry Status for v1 silicon

* pci/hotplug:
  PCI: pciehp: Inline the "handle event" functions into the ISR
  PCI: pciehp: Rename queue_interrupt_event() to pciehp_queue_interrupt_event()
  PCI: pciehp: Make queue_interrupt_event() void
  PCI: pciehp: Clean up debug logging
2015-06-19hrtimer: Allow hrtimer::function() to free the timerPeter Zijlstra
Currently an hrtimer callback function cannot free its own timer because __run_hrtimer() still needs to clear HRTIMER_STATE_CALLBACK after it. Freeing the timer would result in a clear use-after-free. Solve this by using a scheme similar to regular timers; track the current running timer in hrtimer_clock_base::running. Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: ktkhai@parallels.com Cc: rostedt@goodmis.org Cc: juri.lelli@gmail.com Cc: pang.xunlei@linaro.org Cc: wanpeng.li@linux.intel.com Cc: Al Viro <viro@ZenIV.linux.org.uk> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul McKenney <paulmck@linux.vnet.ibm.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: umgwanakikbuti@gmail.com Link: http://lkml.kernel.org/r/20150611124743.471563047@infradead.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-06-19seqcount: Introduce raw_write_seqcount_barrier()Peter Zijlstra
Introduce raw_write_seqcount_barrier(), a new construct that can be used to provide write barrier semantics in seqcount read loops instead of the usual consistency guarantee. raw_write_seqcount_barrier() is equivalent to: raw_write_seqcount_begin(); raw_write_seqcount_end(); But avoids issuing two back-to-back smp_wmb() instructions. This construct works because the read side will 'stall' when observing odd values. This means that -- referring to the example in the comment below -- even though there is no (matching) read barrier between the loads of X and Y, we cannot observe !x && !y, because: - if we observe Y == false we must observe the first sequence increment, which makes us loop, until - we observe !(seq & 1) -- the second sequence increment -- at which time we must also observe T == true. Suggested-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: umgwanakikbuti@gmail.com Cc: ktkhai@parallels.com Cc: rostedt@goodmis.org Cc: juri.lelli@gmail.com Cc: pang.xunlei@linaro.org Cc: oleg@redhat.com Cc: wanpeng.li@linux.intel.com Cc: Al Viro <viro@ZenIV.linux.org.uk> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Link: http://lkml.kernel.org/r/20150617122924.GP3644@twins.programming.kicks-ass.net Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
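The equivalence spelled out as code; this is a sketch, with include/linux/seqlock.h being the authoritative version together with its memory-ordering comment:

    static inline void raw_write_seqcount_barrier(seqcount_t *s)
    {
            s->sequence++;
            smp_wmb();
            s->sequence++;
    }

    /*
     * versus the begin/end pair it stands in for:
     *   raw_write_seqcount_begin(s);   s->sequence++; smp_wmb();
     *   raw_write_seqcount_end(s);     smp_wmb();     s->sequence++;
     * i.e. the same net effect with one smp_wmb() instead of two.
     */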
2015-06-19seqcount: Rename write_seqcount_barrier()Peter Zijlstra
I'll shortly be introducing another seqcount primitive that's useful to provide ordering semantics and would like to use the write_seqcount_barrier() name for that. Seeing how there's only one user of the current primitive, let's rename it to invalidate, as that appears to be what it's doing. While there, employ lockdep_assert_held() instead of assert_spin_locked() to not generate debug code for regular kernels. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: ktkhai@parallels.com Cc: rostedt@goodmis.org Cc: juri.lelli@gmail.com Cc: pang.xunlei@linaro.org Cc: Oleg Nesterov <oleg@redhat.com> Cc: wanpeng.li@linux.intel.com Cc: Paul McKenney <paulmck@linux.vnet.ibm.com> Cc: Al Viro <viro@ZenIV.linux.org.uk> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: umgwanakikbuti@gmail.com Link: http://lkml.kernel.org/r/20150611124743.279926217@infradead.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
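The renamed primitive ends up roughly as follows (a sketch; the lockdep_assert_held() change mentioned above presumably lives at its single caller, not here):

    static inline void write_seqcount_invalidate(seqcount_t *s)
    {
            /* bump the sequence by 2: readers in flight retry, but the
             * count never looks "in progress" (odd) to new readers */
            smp_wmb();
            s->sequence += 2;
    }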
2015-06-19hrtimer: Fix hrtimer_is_queued() holePeter Zijlstra
A queued hrtimer that gets restarted (hrtimer_start*() while hrtimer_is_queued()) will briefly appear as unqueued/inactive, even though the timer has always been active, we just moved it. Close this hole by preserving timer->state in hrtimer_start_range_ns()'s remove_hrtimer() call. Reported-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: ktkhai@parallels.com Cc: rostedt@goodmis.org Cc: juri.lelli@gmail.com Cc: pang.xunlei@linaro.org Cc: wanpeng.li@linux.intel.com Cc: umgwanakikbuti@gmail.com Link: http://lkml.kernel.org/r/20150611124743.175989138@infradead.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-06-19hrtimer: Remove HRTIMER_STATE_MIGRATEOleg Nesterov
I do not understand HRTIMER_STATE_MIGRATE. Unless I am totally confused it looks buggy and simply unneeded. migrate_hrtimer_list() sets it to keep hrtimer_active() == T, but this is not enough: this can fool, say, hrtimer_is_queued() in dequeue_signal(). Can't migrate_hrtimer_list() simply use HRTIMER_STATE_ENQUEUED? This fixes the race and we can kill STATE_MIGRATE. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: ktkhai@parallels.com Cc: rostedt@goodmis.org Cc: juri.lelli@gmail.com Cc: pang.xunlei@linaro.org Cc: wanpeng.li@linux.intel.com Cc: umgwanakikbuti@gmail.com Link: http://lkml.kernel.org/r/20150611124743.072387650@infradead.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-06-18PCI: pciehp: Inline the "handle event" functions into the ISRBjorn Helgaas
The pciehp_handle_*() functions (pciehp_handle_attention_button(), etc.) only contain a line or two of useful code, so it's clumsy to put them in separate functions. All they do is add an event to a work queue, and it's clearer to see that directly in the ISR. Inline them directly into pcie_isr(). No functional change. Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Rajat Jain <rajatja@google.com> Acked-by: Yinghai Lu <yinghai@kernel.org>
2015-06-18PCI: pciehp: Rename queue_interrupt_event() to pciehp_queue_interrupt_event()Bjorn Helgaas
Rename queue_interrupt_event() to pciehp_queue_interrupt_event() so we can make it extern and call it from pcie_isr(). No functional change. Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Rajat Jain <rajatja@google.com> Acked-by: Yinghai Lu <yinghai@kernel.org>
2015-06-18PCI: pciehp: Make queue_interrupt_event() voidBjorn Helgaas
Nobody looks at the return value from queue_interrupt_event(), so errors were silently ignored. Convert it to a "void" function and note the error in the dmesg log. No functional change except the new message. Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Rajat Jain <rajatja@google.com> Acked-by: Yinghai Lu <yinghai@kernel.org>
2015-06-18PCI: xgene: Allow config access to Root Port even when link is downDuc Dang
Previously, when a Root Port's link was down, we didn't allow config access to the Root Port, which meant that if the Root Port led to an empty slot, "lspci" didn't even show the Root Port. Allow config access to Root Port even when link is down. [bhelgaas: changelog, fold in unused var fix] Suggested-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Duc Dang <dhdang@apm.com> Signed-off-by: Tanmay Inamdar <tinamdar@apm.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
2015-06-18drm/radeon: don't probe MST on hw we don't support it onDave Airlie
If you set radeon.mst=1 on a GPU without MST hardware and then plug in some MST hardware, it will oops instead of falling back. So check that we have at least DCE5 before proceeding. Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Christian König <christian.koenig@amd.com>
2015-06-18drm/radeon: Add RADEON_INFO_VA_UNMAP_WORKING queryMichel Dänzer
This tells userspace that it's safe to use the RADEON_VA_UNMAP operation of the DRM_RADEON_GEM_VA ioctl. Cc: stable@vger.kernel.org (NOTE: Backporting this commit requires at least backports of commits 26d4d129b6042197b4cbc8341c0618f99231af2f, 48afbd70ac7b6aa62e8d452091023941d8085f8a and c29c0876ec05d51a93508a39b90b92c29ba6423d as well, otherwise using RADEON_VA_UNMAP runs into trouble) Signed-off-by: Michel Dänzer <michel.daenzer@amd.com> Signed-off-by: Christian König <christian.koenig@amd.com>
2015-06-18PCI: xgene: Disable Configuration Request Retry Status for v1 siliconDuc Dang
When a CPU reads the Vendor and Device ID of a non-existent device, the controller should fabricate return data of 0xFFFFFFFF. Configuration Request Retry Status (CRS) is not applicable in this case because the device doesn't exist at all. The X-Gene v1 PCIe controller has a bug in the CRS logic such that when CRS is enabled, it fabricates return data of 0xFFFF0001 for this case, which means "the device exists but is not ready." That causes the PCI core to retry the read until it times out after 60 seconds. Disable CRS capability advertisement by clearing the CRS Software Visibility bit in the Root Capabilities Register. [bhelgaas: changelog and comment] Tested-by: Ian Campbell <ian.campbell@citrix.com> Tested-by: Marcin Juszkiewicz <mjuszkiewicz@redhat.com> Signed-off-by: Duc Dang <dhdang@apm.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Tanmay Inamdar <tinamdar@apm.com>
2015-06-18irqchip: atmel-aic5: Add sama5d2 supportNicolas Ferre
Add sama5d2 support to irq-atmel-aic5. Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com> Cc: Boris BREZILLON <boris.brezillon@free-electrons.com> Cc: Alexandre Belloni <alexandre.belloni@free-electrons.com> Cc: Ludovic Desroches <ludovic.desroches@atmel.com> Cc: Jason Cooper <jason@lakedaemon.net> Cc: <linux-arm-kernel@lists.infradead.org> Link: http://lkml.kernel.org/r/1434632855-27272-1-git-send-email-nicolas.ferre@atmel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-06-18selftest: Timers: Avoid signal deadlock in leap-a-dayJohn Stultz
In 0c4a5fc95b1df (Add leap-second timer edge testing to leap-a-day.c), we added a timer to the test which checks to make sure timers near the leapsecond edge behave correctly. However, the output generated from the timer uses ctime_r, which isn't async-signal safe, and should that signal land while the main test is using ctime_r to print its output, it's possible for the test to deadlock on glibc internal locks. Thus this patch reworks the output to avoid using ctime_r in the signal handler. Signed-off-by: John Stultz <john.stultz@linaro.org> Cc: Prarit Bhargava <prarit@redhat.com> Cc: Daniel Bristot de Oliveira <bristot@redhat.com> Cc: Richard Cochran <richardcochran@gmail.com> Cc: Jan Kara <jack@suse.cz> Cc: Jiri Bohac <jbohac@suse.cz> Cc: Shuah Khan <shuahkh@osg.samsung.com> Cc: Ingo Molnar <mingo@kernel.org> Link: http://lkml.kernel.org/r/1434565003-3386-1-git-send-email-john.stultz@linaro.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
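A generic illustration of the approach (not the selftest's exact code): keep the signal handler down to async-signal-safe calls such as write(2), and do any ctime_r()-style formatting outside of signal context.

    #include <signal.h>
    #include <unistd.h>

    static void sigalrm_handler(int signo)
    {
            static const char msg[] = "timer fired near the leap-second edge\n";

            (void)signo;
            /* write() is async-signal-safe; ctime_r()/printf() are not */
            (void)write(STDOUT_FILENO, msg, sizeof(msg) - 1);
    }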
2015-06-18irq: spear-shirq: Fix race in installing chained IRQ handlerRussell King
Fix a race where a pending interrupt could be received and the handler called before the handler's data has been setup, by converting to irq_set_chained_handler_and_data(). Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Alexandre Courbot <gnurou@gmail.com> Cc: Hans Ulli Kroll <ulli.kroll@googlemail.com> Cc: Jason Cooper <jason@lakedaemon.net> Cc: Lee Jones <lee.jones@linaro.org> Cc: Linus Walleij <linus.walleij@linaro.org> Cc: Thierry Reding <thierry.reding@gmail.com> Cc: linux-arm-kernel@lists.infradead.org Link: http://lkml.kernel.org/r/E1Z4z0X-0002T1-6U@rmk-PC.arm.linux.org.uk Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-06-18irq: irq-keystone: Fix race in installing chained IRQ handlerRussell King
Fix a race where a pending interrupt could be received and the handler called before the handler's data has been setup, by converting to irq_set_chained_handler_and_data(). Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Alexandre Courbot <gnurou@gmail.com> Cc: Hans Ulli Kroll <ulli.kroll@googlemail.com> Cc: Jason Cooper <jason@lakedaemon.net> Cc: Lee Jones <lee.jones@linaro.org> Cc: Linus Walleij <linus.walleij@linaro.org> Cc: Thierry Reding <thierry.reding@gmail.com> Cc: linux-arm-kernel@lists.infradead.org Link: http://lkml.kernel.org/r/E1Z4z0S-0002Ss-1V@rmk-PC.arm.linux.org.uk Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-06-18gpio: gpio-tegra: Fix race in installing chained IRQ handlerRussell King
Fix a race where a pending interrupt could be received and the handler called before the handler's data has been setup, by converting to irq_set_chained_handler_and_data(). Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Alexandre Courbot <gnurou@gmail.com> Cc: Hans Ulli Kroll <ulli.kroll@googlemail.com> Cc: Jason Cooper <jason@lakedaemon.net> Cc: Lee Jones <lee.jones@linaro.org> Cc: Linus Walleij <linus.walleij@linaro.org> Cc: Thierry Reding <thierry.reding@gmail.com> Cc: linux-arm-kernel@lists.infradead.org Link: http://lkml.kernel.org/r/E1Z4z0M-0002Sl-Ti@rmk-PC.arm.linux.org.uk Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-06-18gpio: gpio-mxs: Fix race in installing chained IRQ handlerRussell King
Fix a race where a pending interrupt could be received and the handler called before the handler's data has been setup, by converting to irq_set_chained_handler_and_data(). Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Alexandre Courbot <gnurou@gmail.com> Cc: Hans Ulli Kroll <ulli.kroll@googlemail.com> Cc: Jason Cooper <jason@lakedaemon.net> Cc: Lee Jones <lee.jones@linaro.org> Cc: Linus Walleij <linus.walleij@linaro.org> Cc: Thierry Reding <thierry.reding@gmail.com> Cc: linux-arm-kernel@lists.infradead.org Link: http://lkml.kernel.org/r/E1Z4z0H-0002Sf-P9@rmk-PC.arm.linux.org.uk Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-06-18gpio: gpio-mxc: Fix race in installing chained IRQ handlerRussell King
Fix a race where a pending interrupt could be received and the handler called before the handler's data has been setup, by converting to irq_set_chained_handler_and_data(). Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Alexandre Courbot <gnurou@gmail.com> Cc: Hans Ulli Kroll <ulli.kroll@googlemail.com> Cc: Jason Cooper <jason@lakedaemon.net> Cc: Lee Jones <lee.jones@linaro.org> Cc: Linus Walleij <linus.walleij@linaro.org> Cc: Thierry Reding <thierry.reding@gmail.com> Cc: linux-arm-kernel@lists.infradead.org Link: http://lkml.kernel.org/r/E1Z4z0C-0002SX-Lj@rmk-PC.arm.linux.org.uk Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-06-18ARM: gemini: Fix race in installing GPIO chained IRQ handlerRussell King
The gemini code was installing its chained interrupt handler (which enables the interrupt) before it was setting its data, which is bad if the IRQ was previously pending. Avoid this problem by converting it to irq_set_chained_handler_and_data(). Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Alexandre Courbot <gnurou@gmail.com> Cc: Hans Ulli Kroll <ulli.kroll@googlemail.com> Cc: Jason Cooper <jason@lakedaemon.net> Cc: Lee Jones <lee.jones@linaro.org> Cc: Linus Walleij <linus.walleij@linaro.org> Cc: Thierry Reding <thierry.reding@gmail.com> Cc: linux-arm-kernel@lists.infradead.org Link: http://lkml.kernel.org/r/E1Z4z07-0002SO-Gv@rmk-PC.arm.linux.org.uk Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
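The pattern all of these "Fix race in installing chained IRQ handler" patches converge on, shown with driver-specific names omitted (illustrative only):

    /* racy: the chained handler is live (and the IRQ can fire) before
     * its handler data has been installed */
    irq_set_chained_handler(parent_irq, bank_irq_handler);
    irq_set_handler_data(parent_irq, bank);

    /* fixed: install handler and data together */
    irq_set_chained_handler_and_data(parent_irq, bank_irq_handler, bank);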