2014-02-27  x86: Use clflushopt in clflush_cache_range (Ross Zwisler)
If clflushopt is available on the system, use it instead of clflush in clflush_cache_range. Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com> Link: http://lkml.kernel.org/r/1393441612-19729-3-git-send-email-ross.zwisler@linux.intel.com Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
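As a rough sketch (not the literal patch), the range flush can look like the following once a clflushopt() helper is available (see the next entry); the boot_cpu_data-based stride and the fencing mirror the existing clflush_cache_range() pattern and are assumptions for illustration:

    void clflush_cache_range(void *vaddr, unsigned int size)
    {
            void *vend = vaddr + size - 1;

            mb();                           /* order earlier stores before the flushes */
            for (; vaddr < vend; vaddr += boot_cpu_data.x86_clflush_size)
                    clflushopt(vaddr);      /* patched back to clflush on older CPUs */
            clflushopt(vend);               /* cover the (possibly partial) last line */
            mb();                           /* make the flushes globally visible */
    }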
2014-02-27  x86: Add support for the clflushopt instruction (Ross Zwisler)
Add support for the new clflushopt instruction. This instruction was announced in the document "Intel Architecture Instruction Set Extensions Programming Reference" with Ref # 319433-018. http://download-software.intel.com/sites/default/files/managed/50/1a/319433-018.pdf [ hpa: changed the feature flag to simply X86_FEATURE_CLFLUSHOPT - if that is what we want to report in /proc/cpuinfo anyway... ] Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com> Link: http://lkml.kernel.org/r/1393441612-19729-2-git-send-email-ross.zwisler@linux.intel.com Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
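A rough sketch of how such a helper can be wired up through the alternatives mechanism; clflushopt is the 0x66-prefixed encoding of clflush, but the exact asm strings below are an assumption rather than the exact hunk:

    /* clflushopt is a 0x66-prefixed clflush; patch it in when the CPU advertises it */
    static inline void clflushopt(volatile void *__p)
    {
            alternative_io(".byte 0x3e; clflush %P0",   /* ds-prefixed clflush, same length */
                           ".byte 0x66; clflush %P0",   /* clflushopt */
                           X86_FEATURE_CLFLUSHOPT,
                           "+m" (*(volatile char __force *)__p));
    }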
2014-02-27  cpuset: fix a race condition in __cpuset_node_allowed_softwall() (Li Zefan)
It's not safe to access task's cpuset after releasing task_lock(). Holding callback_mutex won't help. Cc: <stable@vger.kernel.org> Signed-off-by: Li Zefan <lizefan@huawei.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2014-02-27  cpuset: fix a locking issue in cpuset_migrate_mm() (Li Zefan)
I can trigger a lockdep warning:

  # mount -t cgroup -o cpuset xxx /cgroup
  # mkdir /cgroup/cpuset
  # mkdir /cgroup/tmp
  # echo 0 > /cgroup/tmp/cpuset.cpus
  # echo 0 > /cgroup/tmp/cpuset.mems
  # echo 1 > /cgroup/tmp/cpuset.memory_migrate
  # echo $$ > /cgroup/tmp/tasks
  # echo 1 > /cgroup/tmp/cpuset.mems

  ===============================
  [ INFO: suspicious RCU usage. ]
  3.14.0-rc1-0.1-default+ #32 Not tainted
  -------------------------------
  include/linux/cgroup.h:682 suspicious rcu_dereference_check() usage!
  ...
  [<ffffffff81582174>] dump_stack+0x72/0x86
  [<ffffffff810b8f01>] lockdep_rcu_suspicious+0x101/0x140
  [<ffffffff81105ba1>] cpuset_migrate_mm+0xb1/0xe0
  ...

We used to hold cgroup_mutex when calling cpuset_migrate_mm(), but now we hold cpuset_mutex, which causes task_css() to complain. This is not a false positive but a real issue: holding cpuset_mutex won't prevent a task from migrating to another cpuset, and it won't prevent the original task->cgroup from being destroyed during this change. Fixes: 5d21cc2db040 (cpuset: replace cgroup_mutex locking with cpuset internal locking) Cc: <stable@vger.kernel.org> # 3.9+ Signed-off-by: Li Zefan <lizefan@huawei.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2014-02-27  arm64: Use swiotlb late initialisation (Catalin Marinas)
Since arm64 does not support ISA, there is no need for early swiotlb initialisation. This patch switches the DMA mapping code to swiotlb_late_init_with_default_size(). A side effect of this is that GFP_DMA is used for the swiotlb buffer and devices with a 32-bit coherent mask are correctly supported. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
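Roughly, the late initialisation amounts to an initcall like the sketch below; the function name and buffer size are assumptions for illustration (swiotlb_late_init_with_default_size() allocates the bounce buffer through the page allocator, which on arm64 means GFP_DMA memory):

    /* sketch: set up swiotlb from an initcall instead of the early init path */
    static int __init arm64_swiotlb_setup(void)
    {
            return swiotlb_late_init_with_default_size(64 * SZ_1M);  /* 64MB bounce buffer */
    }
    arch_initcall(arm64_swiotlb_setup);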
2014-02-27  genirq: Include missing header file in irqdomain.c (Rashika Kheria)
Include the appropriate header file include/linux/of_irq.h in kernel/irq/irqdomain.c, because it contains the prototype of a function defined in kernel/irq/irqdomain.c. This eliminates the following warning in kernel/irq/irqdomain.c:

  kernel/irq/irqdomain.c:468:14: warning: no previous prototype for ‘irq_create_of_mapping’ [-Wmissing-prototypes]

Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Link: http://lkml.kernel.org/r/eb89aebea7ff1a46122918ac389ebecf8248be9a.1393493276.git.rashika.kheria@gmail.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-02-27  arm64: Replace ZONE_DMA32 with ZONE_DMA (Catalin Marinas)
On arm64 we do not have two DMA zones, so it does not make sense to implement ZONE_DMA32. This patch replaces ZONE_DMA32 with ZONE_DMA, the latter covering the 32-bit DMA address space so that GFP_DMA allocations are honoured. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-02-27  Merge branch 'liblockdep-fixes' of https://github.com/sashalevin/liblockdep into core/urgent (Ingo Molnar)
Pull tools/lib/lockdep/ fixes from Sasha Levin. Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-02-27  Merge branch 'core/urgent' into core/locking (Ingo Molnar)
It's not really a regression fix, so move it to the v3.15 queue. Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-02-27  Merge tag 'perf-urgent-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent (Ingo Molnar)
Pull perf/urgent fixes from Arnaldo Carvalho de Melo:

 * Fix annotation on stdio/GTK+ interfaces (Namhyung Kim)
 * Fix file descriptor leaking while searching DSOs for suitable symtab (Namhyung Kim)

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-02-27  Merge tag 'asoc-v3.14-rc4-2' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound into for-linus (Takashi Iwai)
ASoC: Updates for v3.14

A few more driver specific bug fixes, all driver specific things that only affect users of those devices.
2014-02-27  Merge tag 'perf-core-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core (Ingo Molnar)
Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:

 * Add support for the new DWARF unwinder library in elfutils (Jiri Olsa)
 * Fix build race in the generation of bison files (Jiri Olsa)
 * Further streamline the feature detection display, trimming it a bit to show just the libraries detected; using VF=1 gets a more verbose output, showing the less interesting feature checks as well (Jiri Olsa)
 * Check compatible symtab type before loading dso (Namhyung Kim)
 * Check return value of filename__read_debuglink() (Stephane Eranian)

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-02-27  perf: Optimize group_sched_in() (Peter Zijlstra)
Use the ctx pmu instead of the event pmu. When a group leader is a software event but the group contains hardware events, the entire group is on the hardware PMU. Using the hardware PMU for the transaction makes the most sense since that's the most expensive one to program (and software PMUs generally don't have TXN support anyway). Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-sctoo9t2f3nn2c9g568928q3@git.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
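The gist of the change is a one-liner in group_sched_in(), sketched here for illustration:

    /* in group_sched_in(): run the scheduling transaction on the context's pmu */
    struct pmu *pmu = ctx->pmu;     /* was: group_event->pmu (could be a software pmu) */

    pmu->start_txn(pmu);
    /* ... event_sched_in() for the leader and siblings, then commit_txn()/cancel_txn() ... */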
2014-02-27  perf/x86: Add a few more comments (Peter Zijlstra)
Add a few comments on the ->add(), ->del() and ->*_txn() implementation. Requested-by: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/n/tip-he3819318c245j7t5e1e22tr@git.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-02-27  perf: Remove redundant PMU assignment (Mark Rutland)
Currently perf_branch_stack_sched_in iterates over the set of pmus, checks that each pmu has a flush_branch_stack callback, then overwrites the pmu before calling the callback. This is either redundant or broken. In systems with a single hw pmu, pmu == cpuctx->ctx.pmu, and thus the assignment is redundant. In systems with multiple hw pmus (i.e. multiple pmus with task_ctx_nr == perf_hw_context) the pmus share the same perf_cpu_context. Thus the assignment can cause one of the pmus to flush its branch stack repeatedly rather than causing each of the pmus to flush their branch stacks. Worse still, if only some pmus have the callback the assignment can result in a branch to NULL. This patch removes the redundant assignment. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/r/1392054264-23570-3-git-send-email-mark.rutland@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-02-27  perf: Fix prototype of find_pmu_context() (Mark Rutland)
For some reason find_pmu_context() is defined as returning void * rather than a __percpu struct perf_cpu_context *. As all the requisite types are defined in advance there's no reason to keep it that way. This patch modifies the prototype of find_pmu_context() to return a __percpu struct perf_cpu_context *. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Link: http://lkml.kernel.org/r/1392054264-23570-2-git-send-email-mark.rutland@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
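In other words, a sketch of the prototype change (not necessarily the exact hunk):

    /* before */
    static void *find_pmu_context(int ctxn);
    /* after */
    static struct perf_cpu_context __percpu *find_pmu_context(int ctxn);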
2014-02-27  Merge branch 'perf/urgent' into perf/core (Ingo Molnar)
Merge the latest fixes before queueing up new changes. Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-02-27  trace: Replace hardcoding of 19 with MAX_NICE (Dongsheng Yang)
Use MAX_NICE instead of the value 19 for ring_buffer_benchmark. Signed-off-by: Dongsheng Yang <yangds.fnst@cn.fujitsu.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Acked-by: Steven Rostedt <rostedt@goodmis.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> Link: http://lkml.kernel.org/r/1393251121-25534-1-git-send-email-yangds.fnst@cn.fujitsu.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-02-27  sched: Guarantee task priority in pick_next_task() (Peter Zijlstra)
Michael spotted that the idle_balance() push down created a task priority problem. Previously, when we called idle_balance() before pick_next_task() it wasn't a problem when -- because of the rq->lock droppage -- an rt/dl task slipped in. Similarly for pre_schedule(), rt pre-schedule could have a dl task slip in. But by pulling it into the pick_next_task() loop, we'll not try a higher task priority again. Cure this by creating a re-start condition in pick_next_task(); and triggering this from pick_next_task_{rt,fair}(). It also fixes a live-lock where we get stuck in pick_next_task_fair() due to idle_balance() seeing !0 nr_running but there not actually being any fair tasks about. Reported-by: Michael Wang <wangyun@linux.vnet.ibm.com> Fixes: 38033c37faab ("sched: Push down pre_schedule() and idle_balance()") Tested-by: Sasha Levin <sasha.levin@oracle.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: Juri Lelli <juri.lelli@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Link: http://lkml.kernel.org/r/20140224121218.GR15586@twins.programming.kicks-ass.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
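A sketch of the restart condition inside pick_next_task(); the surrounding declarations are omitted and the details are assumed for illustration:

    /* if a class dropped rq->lock and a higher-priority task may have appeared,
     * it returns RETRY_TASK and we start over from the highest class */
    again:
            for_each_class(class) {
                    p = class->pick_next_task(rq, prev);
                    if (p) {
                            if (unlikely(p == RETRY_TASK))
                                    goto again;
                            return p;
                    }
            }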
2014-02-27  sched/idle: Remove stale old file (Peter Zijlstra)
Commit cf37b6b48428d ("sched/idle: Move cpu/idle.c to sched/idle.c") said to simply move a file; somehow it got mangled and created an old version of the file and forgot to remove the old file. Fix this fail; add the lost change and remove the now identical old file. Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: rjw@rjwysocki.net Cc: nicolas.pitre@linaro.org Cc: preeti@linux.vnet.ibm.com Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Link: http://lkml.kernel.org/r/20140224172207.GC9987@twins.programming.kicks-ass.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-02-27  sched: Put rq's sched_avg under CONFIG_FAIR_GROUP_SCHED (Dietmar Eggemann)
The struct sched_avg of struct rq is only used in case group scheduling is enabled inside __update_tg_runnable_avg() to update per-cpu representation of a task group. I.e. that there is no need to maintain the runnable avg of a rq in the !CONFIG_FAIR_GROUP_SCHED case. This patch guards struct sched_avg of struct rq and update_rq_runnable_avg() with CONFIG_FAIR_GROUP_SCHED. There is an extra empty definition for update_rq_runnable_avg() necessary for the !CONFIG_FAIR_GROUP_SCHED && CONFIG_SMP case. The function print_cfs_group_stats() which prints out struct sched_avg of struct rq is already guarded with CONFIG_FAIR_GROUP_SCHED. Reviewed-by: Ben Segall <bsegall@google.com> Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/530DCDC5.1060406@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
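Schematically, the guarding looks roughly like this (a sketch; the helper body is abbreviated and assumed from the existing fair.c code):

    #ifdef CONFIG_FAIR_GROUP_SCHED
    static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
    {
            __update_entity_runnable_avg(rq_clock_task(rq), &rq->avg, runnable);
            __update_tg_runnable_avg(&rq->avg, &rq->cfs);
    }
    #elif defined(CONFIG_SMP)
    /* extra empty definition for the !CONFIG_FAIR_GROUP_SCHED && CONFIG_SMP case */
    static inline void update_rq_runnable_avg(struct rq *rq, int runnable) { }
    #endif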
2014-02-27  perf: Fix hotplug splat (Peter Zijlstra)
Drew Richardson reported that he could make the kernel go *boom* when hotplugging while having perf events active. It turned out that when you have a group event, the code in __perf_event_exit_context() fails to remove the group siblings from the context. We then proceed with destroying and freeing the event, and when you re-plug the CPU and try and add another event to that CPU, things go *boom* because you've still got dead entries there. Reported-by: Drew Richardson <drew.richardson@arm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: Will Deacon <will.deacon@arm.com> Cc: <stable@vger.kernel.org> Link: http://lkml.kernel.org/n/tip-k6v5wundvusvcseqj1si0oz0@git.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-02-27  perf/x86: Fix event scheduling (Peter Zijlstra)
Vince "Super Tester" Weaver reported a new round of syscall fuzzing (Trinity) failures, with perf WARN_ON()s triggering. He also provided traces of the failures. This is I think the relevant bit:

  > pec_1076_warn-2804 [000] d... 147.926153: x86_pmu_disable: x86_pmu_disable
  > pec_1076_warn-2804 [000] d... 147.926153: x86_pmu_state: Events: {
  > pec_1076_warn-2804 [000] d... 147.926156: x86_pmu_state: 0: state: .R config: ffffffffffffffff ( (null))
  > pec_1076_warn-2804 [000] d... 147.926158: x86_pmu_state: 33: state: AR config: 0 (ffff88011ac99800)
  > pec_1076_warn-2804 [000] d... 147.926159: x86_pmu_state: }
  > pec_1076_warn-2804 [000] d... 147.926160: x86_pmu_state: n_events: 1, n_added: 0, n_txn: 1
  > pec_1076_warn-2804 [000] d... 147.926161: x86_pmu_state: Assignment: {
  > pec_1076_warn-2804 [000] d... 147.926162: x86_pmu_state: 0->33 tag: 1 config: 0 (ffff88011ac99800)
  > pec_1076_warn-2804 [000] d... 147.926163: x86_pmu_state: }
  > pec_1076_warn-2804 [000] d... 147.926166: collect_events: Adding event: 1 (ffff880119ec8800)

So we add the insn:p event (fd[23]). At this point we should have:

  n_events = 2, n_added = 1, n_txn = 1

  > pec_1076_warn-2804 [000] d... 147.926170: collect_events: Adding event: 0 (ffff8800c9e01800)
  > pec_1076_warn-2804 [000] d... 147.926172: collect_events: Adding event: 4 (ffff8800cbab2c00)

We try and add the {BP,cycles,br_insn} group (fd[3], fd[4], fd[15]). These events are 0:cycles and 4:br_insn; the BP event isn't x86_pmu so that's not visible.

  group_sched_in()
    pmu->start_txn() /* nop - BP pmu */
    event_sched_in()
      event->pmu->add()

So here we should end up with:

  0: n_events = 3, n_added = 2, n_txn = 2
  4: n_events = 4, n_added = 3, n_txn = 3

But seeing the below state on x86_pmu_enable(), those adds must have failed, because the 0 and 4 events aren't there anymore. Looking at group_sched_in(), since the BP is the leader, its event_sched_in() must have succeeded, for otherwise we would not have seen the sibling adds. But since neither 0 nor 4 are in the below state, their event_sched_in() must have failed; but I don't see why, the complete state: 0,0,1:p,4 fits perfectly fine on a core2. However, since we try and schedule 4 it means the 0 event must have succeeded! Therefore the 4 event must have failed; its failure will have put group_sched_in() into the fail path, which will call:

  event_sched_out()
    event->pmu->del()

on 0 and the BP event. Now x86_pmu_del() will reduce n_events, but it will not reduce n_added, giving what we see below:

  n_events = 2, n_added = 2, n_txn = 2

  > pec_1076_warn-2804 [000] d... 147.926177: x86_pmu_enable: x86_pmu_enable
  > pec_1076_warn-2804 [000] d... 147.926177: x86_pmu_state: Events: {
  > pec_1076_warn-2804 [000] d... 147.926179: x86_pmu_state: 0: state: .R config: ffffffffffffffff ( (null))
  > pec_1076_warn-2804 [000] d... 147.926181: x86_pmu_state: 33: state: AR config: 0 (ffff88011ac99800)
  > pec_1076_warn-2804 [000] d... 147.926182: x86_pmu_state: }
  > pec_1076_warn-2804 [000] d... 147.926184: x86_pmu_state: n_events: 2, n_added: 2, n_txn: 2
  > pec_1076_warn-2804 [000] d... 147.926184: x86_pmu_state: Assignment: {
  > pec_1076_warn-2804 [000] d... 147.926186: x86_pmu_state: 0->33 tag: 1 config: 0 (ffff88011ac99800)
  > pec_1076_warn-2804 [000] d... 147.926188: x86_pmu_state: 1->0 tag: 1 config: 1 (ffff880119ec8800)
  > pec_1076_warn-2804 [000] d... 147.926188: x86_pmu_state: }
  > pec_1076_warn-2804 [000] d... 147.926190: x86_pmu_enable: S0: hwc->idx: 33, hwc->last_cpu: 0, hwc->last_tag: 1 hwc->state: 0

So the problem is that x86_pmu_del(), when called from a group_sched_in() that fails (for whatever reason), and without x86_pmu TXN support (because the leader is !x86_pmu), will corrupt the n_added state. Reported-and-Tested-by: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Stephane Eranian <eranian@google.com> Cc: Dave Jones <davej@redhat.com> Cc: <stable@vger.kernel.org> Link: http://lkml.kernel.org/r/20140221150312.GF3104@twins.programming.kicks-ass.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-02-27  sched/deadline: Prevent rt_time growth to infinity (Juri Lelli)
Kirill Tkhai noted:

  Since deadline tasks share rt bandwidth, we must care about the bandwidth timer being set. Otherwise rt_time may grow up to infinity in update_curr_dl(), if there are no other available RT tasks on top level bandwidth.

RT tasks were in fact throttled right after they got enqueued, and never executed again (rt_time never again went below rt_runtime). Peter then proposed to accrue DL execution on rt_time only when the rt timer is active, and proposed a patch (this patch is a slight modification of that) to implement that behavior. While this solves Kirill's problem, it has a drawback. Indeed, Kirill noted again:

  It looks like we may get into a situation when all CPU time is shared between RT and DL tasks:

    rt_runtime = n
    rt_period  = 2n

    | RT working, DL sleeping  | DL working, RT sleeping      |
    -----------------------------------------------------------
    | (1) duration = n         | (2) duration = n             | (repeat)
    |--------------------------|------------------------------|
    | (rt_bw timer is running) | (rt_bw timer is not running) |

  No time for fair tasks at all.

While this can happen during the first period, if the rq is always backlogged, RT tasks won't have the opportunity to execute anymore: rt_time reached rt_runtime during (1); suppose after (2) RT is enqueued back, it gets throttled since the rt timer didn't fire, and replenishment is from now on eaten up by DL tasks that accrue their execution on rt_time (while the rt timer is active - we have an RT task waiting for replenishment). FAIR tasks are not touched after this first period. Ok, this is not ideal, and the situation is even worse! What is described above (the nice case) practically never happens in reality, where your rt timer is not aligned to task periods, tasks are in general not periodic, etc. Long story short, you always risk overloading your system.

This patch is based on Peter's idea, but exploits an additional fact: if you don't have RT tasks enqueued, it makes little sense to continue incrementing rt_time once you reached the upper limit (DL tasks have their own mechanism for throttling). This cures both problems:

 - no matter how many DL instances in the past, you'll have an rt_time slightly above rt_runtime when an RT task is enqueued, and from that point on (after the first replenishment), the task will normally execute;

 - you can still eat up all bandwidth during the first period, but not anymore after that; remember that DL execution will increment rt_time till the upper limit is reached.

The situation is still not perfect! But we have a simple solution for now, that limits how much you can jeopardize your system, as we keep working towards the right answer: RT groups scheduled using deadline servers. Reported-by: Kirill Tkhai <tkhai@yandex.ru> Signed-off-by: Juri Lelli <juri.lelli@gmail.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: Steven Rostedt <rostedt@goodmis.org> Link: http://lkml.kernel.org/r/20140225151515.617714e2f2cd6c558531ba61@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
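The accounting condition boils down to a helper roughly like the one sketched below (name and placement are assumptions for illustration); update_curr_dl() would then only add its delta_exec to rt_time when this returns true:

    /* sketch: charge DL runtime to rt_time only while it can still matter */
    static bool sched_rt_bandwidth_account(struct rt_rq *rt_rq)
    {
            struct rt_bandwidth *rt_b = sched_rt_bandwidth(rt_rq);

            /* either the rt period timer is active (an RT task is waiting for
             * replenishment), or we are still below the rt_runtime limit */
            return hrtimer_active(&rt_b->rt_period_timer) ||
                   rt_rq->rt_time < rt_b->rt_runtime;
    }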
2014-02-27  sched/deadline: Switch CPU's presence test order (Juri Lelli)
Commit 82b9580 ("sched/deadline: Test for CPU's presence explicitly") changed how we check if a CPU returned by cpudeadline machinery is valid. But, we don't want to call cpu_present() if best_cpu is equal to -1. So, switch the order of tests inside WARN_ON(). Signed-off-by: Juri Lelli <juri.lelli@gmail.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: boris.ostrovsky@oracle.com Cc: konrad.wilk@oracle.com Cc: rostedt@goodmis.org Link: http://lkml.kernel.org/r/1393238832-9100-1-git-send-email-juri.lelli@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
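I.e. the check becomes (a one-line sketch of the reordered test):

    /* test best_cpu first so cpu_present() is never asked about -1 */
    WARN_ON(best_cpu != -1 && !cpu_present(best_cpu));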
2014-02-27  sched/deadline: Cleanup RT leftovers from {inc/dec}_dl_migration (Kirill Tkhai)
In deadline class we do not have group scheduling. So, let's remove unnecessary X = X; equations. Signed-off-by: Kirill Tkhai <ktkhai@parallels.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: Juri Lelli <juri.lelli@gmail.com> Link: http://lkml.kernel.org/r/1393343543.4089.5.camel@tkhai Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-02-27  sched: Fix double normalization of vruntime (George McCollister)
dequeue_entity() is called when p->on_rq is set and it sets se->on_rq = 0, which appears to guarantee that the !se->on_rq condition is met. If the task has done set_current_state(TASK_INTERRUPTIBLE) without schedule() the second condition will be met and vruntime will be incorrectly adjusted twice. In certain cases this can result in the task's vruntime never increasing past the vruntime of other tasks on the CFS run queue, starving them of CPU time. This patch changes switched_from_fair() to use !p->on_rq instead of !se->on_rq. I'm able to cause a task with a priority of 120 to starve all other tasks with the same priority on an ARM platform running 3.2.51-rt72 PREEMPT RT by writing one character at a time to a serial tty (16550 UART) in a tight loop. I'm also able to verify that making this change corrects the problem on that platform and kernel version. Signed-off-by: George McCollister <george.mccollister@gmail.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: stable@vger.kernel.org Link: http://lkml.kernel.org/r/1392767811-28916-1-git-send-email-george.mccollister@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
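A sketch of the corrected check in switched_from_fair(); the surrounding normalization code is abbreviated and assumed for illustration:

    static void switched_from_fair(struct rq *rq, struct task_struct *p)
    {
            struct sched_entity *se = &p->se;
            struct cfs_rq *cfs_rq = cfs_rq_of(se);

            /*
             * se->on_rq is already 0 after dequeue_entity(), so test p->on_rq:
             * only a task that is truly off the runqueue (and sleeping) still
             * has a non-normalized vruntime.
             */
            if (!p->on_rq && p->state != TASK_RUNNING) {
                    place_entity(cfs_rq, se, 0);
                    se->vruntime -= cfs_rq->min_vruntime;
            }
    }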
2014-02-27  Merge remote-tracking branch 'asoc/fix/wm8958' into asoc-linus (Mark Brown)
2014-02-27  Merge remote-tracking branches 'asoc/fix/da732x' and 'asoc/fix/sta32x' into asoc-linus (Mark Brown)
2014-02-27  Merge tag 'asoc-v3.14-rc4' into asoc-linus (Mark Brown)
ASoC: Fixes for v3.14

A somewhat large set of fixes here due to the identification of some systematic problems with hard to use APIs in the subsystem. Takashi did a lot of work to address the enumeration API which uncovered a number of off by one bugs caused by confusing APIs while Charles addressed issues in the locking around DAPM.

# gpg: Signature made Sun 23 Feb 2014 13:29:34 KST using RSA key ID 7EA229BD
# gpg: Good signature from "Mark Brown <broonie@sirena.org.uk>"
# gpg:                 aka "Mark Brown <broonie@debian.org>"
# gpg:                 aka "Mark Brown <broonie@kernel.org>"
# gpg:                 aka "Mark Brown <broonie@tardis.ed.ac.uk>"
# gpg:                 aka "Mark Brown <broonie@linaro.org>"
# gpg:                 aka "Mark Brown <Mark.Brown@linaro.org>"
2014-02-27  Merge tag 'asoc-v3.14-rc3' into asoc-linus (Mark Brown)
ASoC: Fixes for v3.14

A few fixes, all driver specific ones. The DaVinci ones aren't as clear as they should be from the subject lines on the commits, but they fix issues which will prevent correct operation in some use cases, and they only affect that particular driver, so are reasonably safe.

# gpg: Signature made Wed 19 Feb 2014 13:23:13 KST using RSA key ID 7EA229BD
# gpg: Good signature from "Mark Brown <broonie@sirena.org.uk>"
# gpg:                 aka "Mark Brown <broonie@debian.org>"
# gpg:                 aka "Mark Brown <broonie@kernel.org>"
# gpg:                 aka "Mark Brown <broonie@tardis.ed.ac.uk>"
# gpg:                 aka "Mark Brown <broonie@linaro.org>"
# gpg:                 aka "Mark Brown <Mark.Brown@linaro.org>"
2014-02-27  Revert "sched/wait: Suppress Sparse 'variable shadowing' warning" (Ingo Molnar)
This reverts commit 980f88e414418bf65569a3b62b08b07e6fc2f4c6. This warning is actually useful, don't suppress it. We actually rely on the shadowing for ___wait_cond_timeout(). We further used the __ret variable in __wait_event_timeout()'s cmd argument: __ret = schedule_timeout(__ret). That now explicitly uses the wrong __ret. Reported-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Requested-by: Andrew Morton <akpm@linux-foundation.org> Requested-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/n/tip-Q5blhuqqzwgVwvjf1gszrdol@git.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-02-27  genirq: Remove racy waitqueue_active check (Chuansheng Liu)
We hit one rare case below: T1 is calling disable_irq(), but hangs at synchronize_irq() forever. The corresponding irq thread is in sleeping state, and all CPUs are in idle state. After analysis, we found there is one possible scenario which causes T1 to wait there forever:

          CPU0                              CPU1
   synchronize_irq()
    wait_event()
     spin_lock()
                                     atomic_dec_and_test(&threads_active)
     insert the __wait into queue
     spin_unlock()
                                     if (waitqueue_active)
     atomic_read(&threads_active)
                                     wake_up()

Here, after inserting the __wait into the queue on CPU0, and before testing whether the queue is empty on CPU1, there is no barrier. The insertion may not be visible to CPU1 immediately, although CPU0 has updated the queue list. The same applies to CPU0's atomic_read() of threads_active. So we'd need an smp_mb() before waitqueue_active(), but removing the waitqueue_active() check solves it as well, and it makes things simple and clear. Signed-off-by: Chuansheng Liu <chuansheng.liu@intel.com> Cc: Xiaoming Wang <xiaoming.wang@intel.com> Link: http://lkml.kernel.org/r/1393212590-32543-1-git-send-email-chuansheng.liu@intel.com Cc: stable@vger.kernel.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
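With the check removed, the wakeup side reduces to something like the sketch below; wake_up() takes the waitqueue lock itself, so waking an empty queue is safe and no extra barrier is needed:

    static void wake_threads_waitq(struct irq_desc *desc)
    {
            if (atomic_dec_and_test(&desc->threads_active))
                    wake_up(&desc->wait_for_threads);   /* unconditional: no waitqueue_active() */
    }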
2014-02-27  iwlwifi: fix TX status for aggregated packets (Johannes Berg)
Only the first packet is currently handled correctly, but then all others are assumed to have failed which is problematic. Fix this, marking them all successful instead (since if they're not then the firmware will have transmitted them as single frames.) This fixes the lost packet reporting. Also do a tiny variable scoping cleanup. Cc: <stable@vger.kernel.org> Signed-off-by: Johannes Berg <johannes.berg@intel.com> [Add the dvm part] Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
2014-02-27  ASoC: sta32x: Fix wrong enum for limiter2 release rate (Takashi Iwai)
There is a typo in the Limiter2 Release Rate control: the wrong enum, the one for Limiter1, is assigned. It must point to Limiter2. Spotted by a compile warning:

  In file included from sound/soc/codecs/sta32x.c:34:0:
  sound/soc/codecs/sta32x.c:223:29: warning: ‘sta32x_limiter2_release_rate_enum’ defined but not used [-Wunused-variable]
   static SOC_ENUM_SINGLE_DECL(sta32x_limiter2_release_rate_enum,
                               ^
  include/sound/soc.h:275:18: note: in definition of macro ‘SOC_ENUM_DOUBLE_DECL’
    struct soc_enum name = SOC_ENUM_DOUBLE(xreg, xshift_l, xshift_r, \
                    ^
  sound/soc/codecs/sta32x.c:223:8: note: in expansion of macro ‘SOC_ENUM_SINGLE_DECL’
   static SOC_ENUM_SINGLE_DECL(sta32x_limiter2_release_rate_enum,
          ^

Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Mark Brown <broonie@linaro.org> Cc: <stable@vger.kernel.org>
2014-02-27  iwlwifi: mvm: change of listen interval from 70 to 10 (Max Stepanov)
Some APs reject a STA association request if the listen interval value exceeds a threshold of 10. Thus, for example, Cisco APs may deny STA associations, returning status code 12 (Association denied due to reason outside the scope of the 802.11 standard) in the association response frame. Fix the issue by changing the default IWL_CONN_MAX_LISTEN_INTERVAL value from 70 to 10. Cc: <stable@vger.kernel.org> [3.10+] Signed-off-by: Max Stepanov <Max.Stepanov@intel.com> Reviewed-by: Alexander Bondar <alexander.bondar@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
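I.e. (the macro name and values are taken from the message above; the hunk itself is a sketch):

    /* default listen interval advertised in the association request */
    #define IWL_CONN_MAX_LISTEN_INTERVAL    10      /* was 70 */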
2014-02-27  Merge tag 'asoc-v3.14-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound into for-linus (Takashi Iwai)
ASoC: Fixes for v3.14

A somewhat large set of fixes here due to the identification of some systematic problems with hard to use APIs in the subsystem. Takashi did a lot of work to address the enumeration API which uncovered a number of off by one bugs caused by confusing APIs while Charles addressed issues in the locking around DAPM.
2014-02-27  MAINTAINERS: update drm git tree entry (Alex Deucher)
Fix Dave's git tree. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
2014-02-27  MAINTAINERS: add entry for drm radeon driver (Alex Deucher)
Add an entry for radeon. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
2014-02-27  spi-topcliff-pch: Fix probing when DMA mode is used (Alexander Stein)
If a SPI transfer is issued while the SPI master is being registered (due to SPI device probing), the DMA buffers are not allocated yet. This fixes the following oops:

  pch_spi 0000:02:0c.1: enabling device (0000 -> 0002)
  pch_spi 0000:02:0c.1: master is unqueued, this is deprecated
  BUG: unable to handle kernel NULL pointer dereference at (null)
  IP: [<c125aa05>] pch_spi_handle_dma+0x15c/0x6f4
  [...]

Signed-off-by: Alexander Stein <alexander.stein@systec-electronic.com> Signed-off-by: Mark Brown <broonie@linaro.org>
2014-02-26  bonding: disallow enslaving a bond to itself (Jiri Bohac)
Enslaving a bond to itself leads to an endless loop and hangs the kernel. Signed-off-by: Jiri Bohac <jbohac@suse.cz> Tested-by: Ding Tianhong <dingtianhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
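The guard amounts to an early check in bond_enslave(); the exact error message and return value below are assumptions for illustration:

    /* sketch: refuse to enslave the bond device to itself */
    if (bond_dev == slave_dev) {
            pr_err("%s: cannot enslave bond to itself.\n", bond_dev->name);
            return -EPERM;
    }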
2014-02-26  tools/liblockdep: Use realpath for srctree and objtree (Wang Nan)
If BUILD_SRC or CURDIR contains a trailing '/', the file names passed to gcc will contain '//'. These names end up in the .o files' debuginfo and then confuse debugedit: https://bugzilla.redhat.com/show_bug.cgi?id=304121 This patch uses the realpath command to make sure potential trailing '/'s are removed. Signed-off-by: Wang Nan <wangnan0@huawei.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Geng Hui <hui.geng@huawei.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
2014-02-26  tools/liblockdep: Add a stub for new rcu_is_watching (Sasha Levin)
Stub out rcu_is_watching(), prevents build error with the updated tree. Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
2014-02-26  tools/liblockdep: Mark runtests.sh as executable (Sasha Levin)
runtests.sh is used to run the sanity tests for liblockdep and should be set +x. Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
2014-02-26  tools/liblockdep: Add include directory to allow tests to compile (Ira W. Snyder)
All of the programs in the tests directory require the liblockdep/mutex.h header in order to compile. Add the include directory to the compiler options so that the tests can be built with the provided Makefile. Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
2014-02-26  tools/liblockdep: Fix include of asm/hash.h (Ira W. Snyder)
Commit 71ae8aac ("lib: introduce arch optimized hash library") added an include to <linux/hash.h> for setting up an architecture specific fast hash. This patch mirrors the fix used for perf, titled "tools: perf: util: fix include for non x86 architectures". Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
2014-02-26  tools/liblockdep: Fix initialization code path (Ira W. Snyder)
This makes initialization actually happen. Without it, initialization is always skipped due to an incorrect conditional statement. Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
2014-02-26  clk:at91: Fix memory leak in of_at91_clk_master_setup() (Masanari Iida)
cppcheck detected the following error:

  [clk-master.c:245]: (error) Memory leak: characteristics

The original code forgot to free 'characteristics' when irq_of_parse_and_map() failed. Signed-off-by: Masanari Iida <standby24x7@gmail.com> Acked-by: Boris BREZILLON <b.brezillon@overkiz.com> Acked-by: Alexandre Belloni <alexandre.belloni@free-electrons.com> Signed-off-by: Mike Turquette <mturquette@linaro.org>
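The fix boils down to freeing the buffer on that error path, roughly as sketched below (the label name is an assumption):

    irq = irq_of_parse_and_map(np, 0);
    if (!irq)
            goto out_free_characteristics;  /* previously returned here, leaking the buffer */

    /* ... */

    out_free_characteristics:
            kfree(characteristics);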
2014-02-26  usb: ehci: fix deadlock when threadirqs option is used (Stanislaw Gruszka)
ehci_irq() and ehci_hrtimer_func() can deadlock on ehci->lock when threadirqs option is used. To prevent the deadlock use spin_lock_irqsave() in ehci_irq(). This change can be reverted when hrtimer callbacks become threaded. Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com> Cc: stable <stable@vger.kernel.org> Acked-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
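A sketch of the change; the handler body is abbreviated and assumed for illustration:

    static irqreturn_t ehci_irq(struct usb_hcd *hcd)
    {
            struct ehci_hcd *ehci = hcd_to_ehci(hcd);
            unsigned long flags;

            /*
             * With threadirqs the handler runs in a thread with interrupts
             * enabled, so the ehci hrtimer callback could otherwise preempt it
             * and deadlock on ehci->lock; irqsave closes that window.
             */
            spin_lock_irqsave(&ehci->lock, flags);
            /* ... existing interrupt handling ... */
            spin_unlock_irqrestore(&ehci->lock, flags);

            return IRQ_HANDLED;
    }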
2014-02-26  USB: ftdi_sio: add Cressi Leonardo PID (Joerg Dorchain)
Hello, the following patch adds an entry for the PID of a Cressi Leonardo diving computer interface to kernel 3.13.0. It is detected as FT232RL. Works with subsurface. Signed-off-by: Joerg Dorchain <joerg@dorchain.net> Cc: stable <stable@vger.kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>