Age | Commit message | Author |
|
If both ACPI and DT platform descriptions are available, and the
kernel was configured at build time to support both flavours, the
default policy is to prefer DT over ACPI, and preferring ACPI over
DT while still allowing DT as a fallback is not possible.
Since some enterprise features (such as RAS) depend on ACPI, it may
be desirable for, e.g., distro installers to prefer ACPI boot but
fall back to DT rather than failing completely if no ACPI tables are
available.
So introduce the 'acpi=on' kernel command line parameter for arm64,
which signifies that ACPI should be used if available, and DT should
only be used as a fallback.
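For illustration only, such a switch is typically wired up through an
early_param() handler; the sketch below assumes the pre-existing 'off' and
'force' values on arm64 and uses flag names that are assumptions, not the
actual code:
  /* Illustrative sketch of an "acpi=" early_param handler; the
   * param_acpi_* flag names are assumptions. */
  static bool param_acpi_off;
  static bool param_acpi_on;
  static bool param_acpi_force;

  static int __init parse_acpi(char *arg)
  {
      if (!arg)
          return -EINVAL;

      if (strcmp(arg, "off") == 0)            /* disable ACPI entirely */
          param_acpi_off = true;
      else if (strcmp(arg, "on") == 0)        /* prefer ACPI, fall back to DT */
          param_acpi_on = true;
      else if (strcmp(arg, "force") == 0)     /* require ACPI */
          param_acpi_force = true;
      else
          return -EINVAL;

      return 0;
  }
  early_param("acpi", parse_acpi);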
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
When booting a relocatable kernel image, there is no practical reason
to refuse an image whose load address is not exactly TEXT_OFFSET bytes
above a 2 MB aligned base address, as long as the physical and virtual
misalignment with respect to the swapper block size are equal, and are
both aligned to THREAD_SIZE.
Since the virtual misalignment is under our control when we first enter
the kernel proper, we can simply choose its value to be equal to the
physical misalignment.
So treat the misalignment of the physical load address as the initial
KASLR offset, and fix up the remaining code to deal with that.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
For historical reasons, the kernel Image must be loaded into physical
memory at a 512 KB offset above a 2 MB aligned base address. The region
between the base address and the start of the kernel Image has no
significance to the kernel itself, but it is currently mapped explicitly
into the early kernel VMA range for all translation granules.
In some cases (i.e., 4 KB granule), this is unavoidable, due to the 2 MB
granularity of the early kernel mappings. However, in other cases, e.g.,
when running with larger page sizes, or in the future, with more granular
KASLR, there is no reason to map it explicitly like we do currently.
So update the logic so that the region is mapped only if that happens as
a side effect of rounding the start address of the kernel to swapper block
size, and leave it unmapped otherwise.
Since the symbol kernel_img_size now simply resolves to the memory
footprint of the kernel Image, we can drop its definition from image.h
and opencode its calculation.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
When building a relocatable kernel, we currently rely on the fact that
early 64-bit literal loads need to be deferred to after the relocation
has been performed only if they involve symbol references, and not if
they involve assembly-time constants. While this is not an unreasonable
assumption to make, it is better to switch to movk/movz sequences, since
these are guaranteed to be resolved at link time, simply because there are
no dynamic relocation types to describe them.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Implement a macro mov_q that can be used to move an immediate constant
into a 64-bit register, using between 2 and 4 movz/movk instructions
(depending on the operand).
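Purely as a C model of what the emitted movz/movk sequence computes (not the
macro itself): movz seeds one 16-bit halfword of the register and each movk
patches in another, so any 64-bit constant takes at most four instructions.
  /* C model of an emitted movz/movk sequence; hw[0..3] are the 16-bit
   * halfwords of the constant, lowest first. Illustrative only. */
  static unsigned long long build_imm(const unsigned short hw[4])
  {
      unsigned long long v;

      v  = (unsigned long long)hw[3] << 48;  /* movz xN, #hw3, lsl #48 */
      v |= (unsigned long long)hw[2] << 32;  /* movk xN, #hw2, lsl #32 */
      v |= (unsigned long long)hw[1] << 16;  /* movk xN, #hw1, lsl #16 */
      v |= hw[0];                            /* movk xN, #hw0          */
      return v;
  }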
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Refactor the relocation processing so that the code executes from the
ID map while accessing the relocation tables via the virtual mapping.
This way, we can use literals containing virtual addresses as before,
instead of having to use convoluted absolute expressions.
For symmetry with the secondary code path, the relocation code and the
subsequent jump to the virtual entry point are implemented in a function
called __primary_switch(), and __mmap_switched() is renamed to
__primary_switched(). Also, the call sequence in stext() is aligned with
the one in secondary_startup(), by replacing the awkward 'adr_l lr' and
'b cpu_setup' sequence with a simple branch and link.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
We can simply use a relocated 64-bit literal to store the address of
__secondary_switched(), and the relocation code will ensure that it
holds the correct value at secondary entry time, as long as we make sure
that the literal is not dereferenced until after we have enabled the MMU.
So jump via a small __secondary_switch() function covered by the ID map
that performs the literal load and branch-to-register.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
This unexports some symbols from head.S that are only used locally.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Currently the ARM CPUs DT binding allows different enable-method values for
PSCI-based systems. On 64-bit ARM this property is required and must be
"psci", while on 32-bit ARM systems this property is optional and must be
"arm,psci" if present.
However, "arm,psci" has always been the compatible string for the PSCI
node, and was never intended to be the enable-method. So this is a bug
in the binding and not a deliberate attempt at specifying 32-bit
differently.
This is problematic if a 32-bit OS is run on a 64-bit system that has
"psci" as its enable-method rather than the expected "arm,psci".
So let's unify the value into "psci" and remove support for "arm,psci"
before it finds any users.
Reported-by: Soby Mathew <Soby.Mathew@arm.com>
Cc: Rob Herring <robh+dt@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
|
|
Certain Intel Sunrisepoint PCH variants report zero chip selects in the SPI
capabilities register even though they have one per port. The detection in
pxa2xx_spi_probe() sets master->num_chipselect to 0, leading to -EINVAL from
spi_register_master(), where the chip select count is validated.
Fix this by not using the SPI capabilities register on Sunrisepoint. These
chips don't have more than one chip select, so use the default value of 1
instead of detection.
Fixes: 8b136baa5892 ("spi: pxa2xx: Detect number of enabled Intel LPSS SPI chip select signals")
Signed-off-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Cc: stable@vger.kernel.org
|
|
After removing all uses of the range operations in a recent patch,
we get a warning about the symbol not being referenced anywhere:
drivers/regulator/rk808-regulator.c:306:29: 'rk808_reg_ops_ranges' defined but not used
This removes the now-unused structure along with the
rk808_set_suspend_voltage_range function that is only referenced from
rk808_reg_ops_ranges.
Fixes: afcd666d9db0 ("regulator: rk808: remove linear range definitions with a single range")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
The recent bug report suggests that BCLK setup for i915 HSW/BDW needs
to be updated at each HDMI hotplug, not only at initialization and
resume. That is, we need to update HSW_EM4 and HSW_EM5 registers at
ELD notification, too. Otherwise the HDMI audio may be out of sync
and played at the wrong pitch.
However, the HDA codec driver has no access to the controller
registers, and currently the code managing these registers is in
hda_intel.c, i.e. local to the controller driver. For allowing the
explicit BCLK update from the codec driver, as in this patch, the
former haswell_set_bclk() in hda_intel.c is moved to hdac_i915.c and
exposed as snd_hdac_i915_set_bclk(). This is called from both the HDA
controller driver and intel_pin_eld_notify() in HDMI codec driver.
Along with this change, snd_hdac_get_display_clk() gets dropped as
it's no longer used.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=91410
Cc: <stable@vger.kernel.org> # v4.5+
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
Fixes audio output on a ThinkPad X260 when using the Lenovo CES 2013
docking station series (basic, pro, ultra).
Signed-off-by: Conrad Kostecki <ck+linuxkernel@bl4ckb0x.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
When multiple skb are TX-completed in a row, we might incorrectly keep
a timestamp of a prior skb and cause extra work.
Fixes: ec693d47010e8 ("net/mlx4_en: Add HW timestamping (TS) support")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Reviewed-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: Ivan Babrou <ivan@cloudflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
blk_queue_split() marks a bio unmergeable, which makes sense for a normal
bio. But when the bio is dispatched to an underlying disk, the conditions
blk_queue_split() checked no longer apply, so the bio could become mergeable
again. In the reported bug, this causes a severe performance drop for trim
against raid0:
https://bugzilla.kernel.org/show_bug.cgi?id=117051
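The gist of the fix, as a sketch against the 4.6-era bio layout (bi_rw
rather than the later bi_opf):
  /* When handing the bio to the underlying device, the reasons
   * blk_queue_split() had for forbidding merges no longer apply,
   * so drop the hint again before resubmitting. */
  bio->bi_rw &= ~REQ_NOMERGE;
  generic_make_request(bio);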
Reported-and-tested-by: Park Ju Hyung <qkrwngud825@gmail.com>
Fixes: 6ac45aeb6bca ("block: avoid to merge splitted bio")
Cc: stable@vger.kernel.org (v4.3+)
Cc: Ming Lei <ming.lei@canonical.com>
Cc: Neil Brown <neilb@suse.de>
Reviewed-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Shaohua Li <shli@fb.com>
|
|
ktime_get() can have a non-negligible overhead; use local_clock()
instead.
In order to test the difference between ktime_get() and local_clock(),
a quick hack was added to trigger, via debugfs, 10000 calls to
ktime_get() and local_clock() and to measure the elapsed time.
Then the average, minimum and maximum values are computed for each call.
From userspace, the test above was called 100 times every 2 seconds.
So, ktime_get() and local_clock() have been called 1000000 times in
total.
The results are:
ktime_get():
============
* average: 101 ns (stddev: 27.4)
* maximum: 38313 ns
* minimum: 65 ns
local_clock():
==============
* average: 60 ns (stddev: 9.8)
* maximum: 13487 ns
* minimum: 46 ns
The local_clock() is faster and more stable.
Even if it is a drop in the ocean, replacing ktime_get() with
local_clock() saves 80 ns at idle time (entry + exit). And in some
circumstances, especially when several CPUs are racing for clock
access, we save tens of microseconds.
The idle duration resulting from a diff is converted from nanoseconds to
microseconds. This could be done with an integer division (div 1000), which
is an expensive operation, or with a 10-bit shift (div 1024), which is fast
but imprecise.
The following table gives some results at the limits.
------------------------------------------
| nsec | div(1000) | div(1024) |
------------------------------------------
| 1e3 | 1 usec | 976 nsec |
------------------------------------------
| 1e6 | 1000 usec | 976 usec |
------------------------------------------
| 1e9 | 1000000 usec | 976562 usec |
------------------------------------------
There is a linear deviation of 2.34%. This loss of precision is acceptable
because the resulting diff is only used for statistics, which in turn are
processed to estimate the duration of the next idle period and ultimately
feed the idle state selection. The selection criterion compares the
predicted duration against large intervals, represented by the idle states'
target residencies.
Dividing by 2^10 is good enough because the error relative to a true
division by 1000 is lost in the other approximations made when computing
the next idle duration.
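For reference, the conversion boils down to a single shift (the helper name
below is illustrative):
  /* Convert an idle duration from ns to us with a shift: dividing by
   * 1024 instead of 1000 under-reports by about 2.34%. */
  static unsigned long long ns_to_us_approx(unsigned long long diff_ns)
  {
      return diff_ns >> 10;    /* ~ diff_ns / 1000 */
  }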
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
[ rjw: Subject ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
|
|
http://git.linaro.org/people/daniel.lezcano/linux into tmp
Pull ARM cpuidle changes for v4.7 from Daniel Lezcano.
* 'cpuidle/4.7' of http://git.linaro.org/people/daniel.lezcano/linux:
drivers: firmware: psci: use const and __initconst for psci_cpuidle_ops
soc: qcom: spm: Use const and __initconst for qcom_cpuidle_ops
ARM: cpuidle: constify return value of arm_cpuidle_get_ops()
ARM: cpuidle: add const qualifier to cpuidle_ops member in structures
|
|
clk_get() on a disabled clock node will return -EPROBE_DEFER, which can
cause drivers to be deferred forever if such clocks are referenced in
their devices' clocks properties.
Update the external scif clock node so that it is no longer disabled,
preventing this.
Reported-by: Jürg Billeter <j@bitron.ch>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
[simon: fix for v4.6 extracted from a larger patch targeted at v4.7]
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
|
|
The accumulated period for the dummy entry should also be 0. Otherwise, the
total overhead can be overcounted, as in the example below:
$ perf record -e '{LLC-load-misses,cpu/instructions/}' --call-graph=lbr ./tchain
$ perf report --stdio
# To display the perf.data header info, please use --header/--header-only options.
#
# Total Lost Samples: 0
#
# Samples: 21K of event 'anon group { LLC-load-misses, cpu/instructions/ }'
# Event count (approx.): 16313667937
#
# Children Self Command Shared Object Symbol
# ................ ................ ........... ................ ............................
#
4769.98% 0.01% 0.00% 0.01% tchain_edit [kernel.vmlinux] [k] update_fast_timekeeper
4356.18% 0.01% 0.00% 0.01% tchain_edit [kernel.vmlinux] [k] trigger_load_balance
3181.12% 0.01% 0.00% 0.01% tchain_edit [kernel.vmlinux] [k] irq_work_tick
1592.37% 0.00% 0.00% 0.00% tchain_edit [kernel.vmlinux] [k] cpu_needs_another_gp
Signed-off-by: Kan Liang <kan.liang@intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1461565689-5862-1-git-send-email-kan.liang@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
The check for the maximum code is off by one; the current comparison lets a
code equal to INTEL_PT_ERR_MAX through, causing strlcpy() to perform an
out-of-bounds read of the intel_pt_err_msgs array.
Fix this with a >= comparison.
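In other words, the guard needs to reject the maximum value too; a sketch
(not the exact function):
  if (code >= INTEL_PT_ERR_MAX)   /* was: code > INTEL_PT_ERR_MAX */
      return -EINVAL;
  strlcpy(buf, intel_pt_err_msgs[code], buflen);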
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1461524203-10224-1-git-send-email-colin.king@canonical.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Given that the 'val' parameter is ignored for FUTEX_LOCK_PI, get rid of
the bogus deadlock detection flag in the wrapper code and avoid the
extra argument, making it resemble its unlock counterpart. And if
nothing else, we already only pass 0 anyway.
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Cc: Davidlohr Bueso <dbueso@suse.de>
Link: http://lkml.kernel.org/r/1461208447-29328-1-git-send-email-dave@stgolabs.net
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
The current assert is checking an assignment, which is always true.
Instead, the assert should check whether scale is equal to 0.122.
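Schematically (the assert macro here is only for illustration):
  assert(scale = 0.122);    /* assigns 0.122, so it is always true */
  assert(scale == 0.122);   /* actually compares against the value */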
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1461419154-16918-1-git-send-email-colin.king@canonical.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
When the config TDP level is not nominal (level = 0), the MSR values for
reading level 1 and level 2 ratios contain power in low 14 bits and actual
ratio bits are at bits [23:16]. The current processing for level 1 and
level 2 is wrong, as no shift is done to get the actual ratio.
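Schematically, the fix is to shift and mask after the MSR read; a sketch
assuming the level 1/2 MSRs are read with rdmsrl(), with illustrative
variable handling:
  /* The ratio field of the CONFIG_TDP level 1/2 MSRs sits in bits
   * 23:16; the low 14 bits carry a power value instead. */
  rdmsrl(MSR_CONFIG_TDP_LEVEL_1, tdp_ratio);
  tdp_ratio >>= 16;
  tdp_ratio &= 0xff;    /* keep only the ratio */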
Fixes: 6a35fc2d6c22 ("cpufreq: intel_pstate: get P1 from TAR when available")
Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Cc: 4.4+ <stable@vger.kernel.org> # 4.4+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
If using IRQF_TRIGGER_FALLING, then there is a race here: if the reset
completes before we enable the IRQ, then CHG is already low and touch
will be broken.
This has been seen on Chromebook Pixel 2.
A workaround is to reconfigure T18 COMMSCONFIG to enable the RETRIGEN bit
using mxt-app:
mxt-app -W -T18 44
mxt-app --backup
Tested-by: Tom Rini <trini@konsulko.com>
Signed-off-by: Nick Dyer <nick.dyer@itdev.co.uk>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
|
|
Incorrect decoding of the touch coordinate message causes a wrong touch
ID. The touch ID for dual touch must be 0 or 1.
According to the actual Neonode nine-byte touch coordinate coding,
the state is transported in the lower nibble and the touch ID in
the upper nibble of payload byte five.
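A sketch of the corrected decoding (buffer and variable names are
illustrative):
  /* Byte five of the nine-byte touch frame: state in the low nibble,
   * touch ID in the high nibble. */
  state = payload[5] & 0x0f;
  id    = (payload[5] & 0xf0) >> 4;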
Signed-off-by: Knut Wohlrab <Knut.Wohlrab@de.bosch.com>
Signed-off-by: Oleksij Rempel <linux@rempel-privat.de>
Signed-off-by: Dirk Behme <dirk.behme@de.bosch.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
|
|
Signed-off-by: Eric Engestrom <eric.engestrom@imgtec.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1461577678-29517-1-git-send-email-eric.engestrom@imgtec.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Commit c6f39257c952 ("mfd: twl6040: Use regmap for register cache")
removed the private cache for the vibra control registers and replaced the
accesses within twl6040_get_vibralr_status() with regmap calls. This is OK
as long as twl6040_get_vibralr_status() uses already-cached values or is
not called from interrupt context. But we call it in vibra_play() to check
that the vibrator is not configured for audio mode.
The result is a "BUG: scheduling while atomic" if the first use of the
twl6040 is a vibra effect, because the first fetch is by reading the
twl6040 registers through (blocking) i2c and not from the cache.
As soon as the regmap has cached the status, further calls are fine.
The solution is to move the check to the work() function, which runs in a
context that can block.
The original code returned -EBUSY, but the return value of ->play()
functions is ignored anyway. Hence, we do not lose functionality by not
returning an error and instead just reporting the issue at INFO log level.
Tested-on: Pyra (omap5) prototype
Signed-off-by: H. Nikolaus Schaller <hns@goldelico.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
|
|
Fix the perf_clean target to follow the same logic as the perf target.
This fixes the following make invocation:
$ cd <kernelsrc> && make tools/perf_clean
Reported-by: TJ <linux@iam.tj>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=116411
Link: http://lkml.kernel.org/r/1461615438-27894-2-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Turn current clean output:
$ make clean
rm -f arch/x86/include/generated/asm/syscalls_64.c
CLEAN libbpf
CLEAN libapi
into:
$ make clean
CLEAN x86
CLEAN libapi
CLEAN libbpf
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: TJ <linux@iam.tj>
Link: http://lkml.kernel.org/r/1461615438-27894-1-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
ip6_route_output looks into different fields in the passed flowi6 structure,
yet cxgbi passes garbage in nearly all those fields. Zero the structure out
first.
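The essence of the fix, as a sketch (only the fields the route lookup
actually needs are filled in):
  struct flowi6 fl6;

  memset(&fl6, 0, sizeof(fl6));   /* no stack garbage in unused fields */
  fl6.daddr = *daddr;             /* set only what the lookup needs */
  dst = ip6_route_output(&init_net, NULL, &fl6);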
Fixes: fc8d0590d9142 ("libcxgbi: Add ipv6 api to driver")
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
While trying to use --call-graph lbr in 'perf trace', where we are only
interested in the userspace part of the callchain, I found that
'perf evlist' was not decoding the branch_sample_type field; fix it.
Before:
# perf record --call-graph lbr usleep 1
# perf evlist -v
cycles:ppp: size: 112, { sample_period, sample_freq }: 4000,
sample_type: IP|TID|TIME|CALLCHAIN|CPU|PERIOD|BRANCH_STACK,
disabled: 1, inherit: 1, mmap: 1, comm: 1, freq: 1, task: 1,
precise_ip: 3, sample_id_all: 1, exclude_guest: 1, mmap2: 1,
comm_exec: 1, branch_sample_type: 51201
^^^^^^^^^^^^^^^^^^^^^^^^^
After:
# perf evlist -v
cycles:ppp: size: 112, { sample_period, sample_freq }: 4000,
sample_type: IP|TID|TIME|CALLCHAIN|CPU|PERIOD|BRANCH_STACK,
disabled: 1, inherit: 1, mmap: 1, comm: 1, freq: 1, task: 1,
precise_ip: 3, sample_id_all: 1, exclude_guest: 1, mmap2: 1,
comm_exec: 1, branch_sample_type: USER|CALL_STACK|NO_FLAGS|NO_CYCLES
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-hozai7974u0ulgx13k96fcaw@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
If charge moving is used, memcg performs relabeling of the affected
pages from its ->attach callback, which is called with
cgroup_threadgroup_rwsem held and thus can't create new kthreads. This is
fragile as various operations may depend on workqueues making forward
progress which relies on the ability to create new kthreads.
There's no reason to perform charge moving from ->attach which is deep
in the task migration path. Move it to ->post_attach which is called
after the actual migration is finished and cgroup_threadgroup_rwsem is
dropped.
* move_charge_struct->mm is added and ->can_attach is now responsible
for pinning and recording the target mm. mem_cgroup_clear_mc() is
updated accordingly. This also simplifies mem_cgroup_move_task().
* mem_cgroup_move_task() is now called from ->post_attach instead of
->attach.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@kernel.org>
Debugged-and-tested-by: Petr Mladek <pmladek@suse.com>
Reported-by: Cyril Hrubis <chrubis@suse.cz>
Reported-by: Johannes Weiner <hannes@cmpxchg.org>
Fixes: 1ed1328792ff ("sched, cgroup: replace signal_struct->group_rwsem with a global percpu_rwsem")
Cc: <stable@vger.kernel.org> # 4.4+
|
|
cgroup_subsys->post_attach callback
Since e93ad19d0564 ("cpuset: make mm migration asynchronous"), cpuset
kicks off asynchronous NUMA node migration if necessary during task
migration and flushes it from cpuset_post_attach_flush() which is
called at the end of __cgroup_procs_write(). This is to avoid
performing migration with cgroup_threadgroup_rwsem write-locked which
can lead to deadlock through dependency on kworker creation.
memcg has a similar issue with charge moving, so let's convert it to
an official callback rather than the current one-off cpuset specific
function. This patch adds cgroup_subsys->post_attach callback and
makes cpuset register cpuset_post_attach_flush() as its ->post_attach.
The conversion is mostly one-to-one except that the new callback is
called under cgroup_mutex. This is to guarantee that no other
migration operations are started before ->post_attach callbacks are
finished. cgroup_mutex is one of the outermost mutexes in the system
and has never been, and shouldn't be, a problem. We could add specialized
synchronization around __cgroup_procs_write(), but I don't think
there's any noticeable benefit.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Li Zefan <lizefan@huawei.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: <stable@vger.kernel.org> # 4.4+ prerequisite for the next patch
|
|
This reverts the following three commits:
70af921db6f8835f4b11c65731116560adb00c14
799977d9aafbf0ca0b9c39b04cbfb16db71302c9
f1705ec197e705b79ea40fe7a2cc5acfa1d3bfac
The feature was ill conceived, has terrible semantics, and has added
nothing but regressions to the already fragile ipv6 stack.
Fixes: f1705ec197e7 ("net: ipv6: Make address flushing on ifdown optional")
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Starting the kernel client with cephx disabled and then enabling cephx
and restarting userspace daemons can result in a crash:
[262671.478162] BUG: unable to handle kernel paging request at ffffebe000000000
[262671.531460] IP: [<ffffffff811cd04a>] kfree+0x5a/0x130
[262671.584334] PGD 0
[262671.635847] Oops: 0000 [#1] SMP
[262672.055841] CPU: 22 PID: 2961272 Comm: kworker/22:2 Not tainted 4.2.0-34-generic #39~14.04.1-Ubuntu
[262672.162338] Hardware name: Dell Inc. PowerEdge R720/068CDY, BIOS 2.4.3 07/09/2014
[262672.268937] Workqueue: ceph-msgr con_work [libceph]
[262672.322290] task: ffff88081c2d0dc0 ti: ffff880149ae8000 task.ti: ffff880149ae8000
[262672.428330] RIP: 0010:[<ffffffff811cd04a>] [<ffffffff811cd04a>] kfree+0x5a/0x130
[262672.535880] RSP: 0018:ffff880149aeba58 EFLAGS: 00010286
[262672.589486] RAX: 000001e000000000 RBX: 0000000000000012 RCX: ffff8807e7461018
[262672.695980] RDX: 000077ff80000000 RSI: ffff88081af2be04 RDI: 0000000000000012
[262672.803668] RBP: ffff880149aeba78 R08: 0000000000000000 R09: 0000000000000000
[262672.912299] R10: ffffebe000000000 R11: ffff880819a60e78 R12: ffff8800aec8df40
[262673.021769] R13: ffffffffc035f70f R14: ffff8807e5b138e0 R15: ffff880da9785840
[262673.131722] FS: 0000000000000000(0000) GS:ffff88081fac0000(0000) knlGS:0000000000000000
[262673.245377] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[262673.303281] CR2: ffffebe000000000 CR3: 0000000001c0d000 CR4: 00000000001406e0
[262673.417556] Stack:
[262673.472943] ffff880149aeba88 ffff88081af2be04 ffff8800aec8df40 ffff88081af2be04
[262673.583767] ffff880149aeba98 ffffffffc035f70f ffff880149aebac8 ffff8800aec8df00
[262673.694546] ffff880149aebac8 ffffffffc035c89e ffff8807e5b138e0 ffff8805b047f800
[262673.805230] Call Trace:
[262673.859116] [<ffffffffc035f70f>] ceph_x_destroy_authorizer+0x1f/0x50 [libceph]
[262673.968705] [<ffffffffc035c89e>] ceph_auth_destroy_authorizer+0x3e/0x60 [libceph]
[262674.078852] [<ffffffffc0352805>] put_osd+0x45/0x80 [libceph]
[262674.134249] [<ffffffffc035290e>] remove_osd+0xae/0x140 [libceph]
[262674.189124] [<ffffffffc0352aa3>] __reset_osd+0x103/0x150 [libceph]
[262674.243749] [<ffffffffc0354703>] kick_requests+0x223/0x460 [libceph]
[262674.297485] [<ffffffffc03559e2>] ceph_osdc_handle_map+0x282/0x5e0 [libceph]
[262674.350813] [<ffffffffc035022e>] dispatch+0x4e/0x720 [libceph]
[262674.403312] [<ffffffffc034bd91>] try_read+0x3d1/0x1090 [libceph]
[262674.454712] [<ffffffff810ab7c2>] ? dequeue_entity+0x152/0x690
[262674.505096] [<ffffffffc034cb1b>] con_work+0xcb/0x1300 [libceph]
[262674.555104] [<ffffffff8108fb3e>] process_one_work+0x14e/0x3d0
[262674.604072] [<ffffffff810901ea>] worker_thread+0x11a/0x470
[262674.652187] [<ffffffff810900d0>] ? rescuer_thread+0x310/0x310
[262674.699022] [<ffffffff810957a2>] kthread+0xd2/0xf0
[262674.744494] [<ffffffff810956d0>] ? kthread_create_on_node+0x1c0/0x1c0
[262674.789543] [<ffffffff817bd81f>] ret_from_fork+0x3f/0x70
[262674.834094] [<ffffffff810956d0>] ? kthread_create_on_node+0x1c0/0x1c0
What happens is the following:
(1) new MON session is established
(2) old "none" ac is destroyed
(3) new "cephx" ac is constructed
...
(4) old OSD session (w/ "none" authorizer) is put
ceph_auth_destroy_authorizer(ac, osd->o_auth.authorizer)
osd->o_auth.authorizer in the "none" case is just a bare pointer into
ac, which contains a single static copy for all services. By the time
we get to (4), the "none" ac, freed in (2), is long gone. On top of that,
a new vtable installed in (3) points us at ceph_x_destroy_authorizer(),
so we end up trying to destroy a "none" authorizer with a "cephx"
destructor operating on invalid memory!
To fix this, decouple authorizer destruction from ac and do away with
a single static "none" authorizer by making a copy for each OSD or MDS
session. Authorizers themselves are independent of ac and so there is
no reason for destroy_authorizer() to be an ac op. Make it an op on
the authorizer itself by turning ceph_authorizer into a real struct.
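Conceptually, the result looks something like this (a sketch, not the exact
libceph structures):
  /* The authorizer carries its own destructor, so teardown no longer
   * goes through a possibly-replaced auth client. */
  struct ceph_authorizer {
      void (*destroy)(struct ceph_authorizer *);
      /* implementation-specific fields follow */
  };

  void ceph_auth_destroy_authorizer(struct ceph_authorizer *a)
  {
      a->destroy(a);
  }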
Fixes: http://tracker.ceph.com/issues/15447
Reported-by: Alan Zhang <alan.zhang@linux.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Sage Weil <sage@redhat.com>
|
|
To make the code more compact and centralized, this patch adds a
unified function, regulator_ops_is_valid(), so we can easily add
some extra checking code later.
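A minimal sketch of what such a helper could look like, under the assumption
that it simply tests the constraints' valid_ops_mask:
  /* One central place to decide whether an operation is permitted for
   * a regulator; later patches can extend the checks here. */
  static inline int regulator_ops_is_valid(struct regulator_dev *rdev, int ops)
  {
      if (!rdev->constraints)
          return 0;

      return rdev->constraints->valid_ops_mask & ops;
  }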
Signed-off-by: WEN Pingbo <pingbo.wen@linaro.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
The driver was using only linear ranges. Now that linear range definitions
with a single range are being removed, we have to add an ops struct for the
remaining ranges and adjust all other ops functions accordingly.
Signed-off-by: Wadim Egorov <w.egorov@phytec.de>
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
Commit 52cbae0127ad ("toshiba_acpi: Change default Hotkey enabling value")
changed the hotkey enabling value to the same value Windows uses. However,
it turns out that this value tells the EC that the driver will now take
care of hardware events such as the physical RFKill switch or the pointing
device toggle button.
This patch reverts that commit by changing the default hotkey enabling
value to 0x09, which enables hotkey events only, making the hardware
buttons work again.
Fixes bugs 113331 and 114941.
Signed-off-by: Azael Avalos <coproscefalo@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Darren Hart <dvhart@linux.intel.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto fixes from Herbert Xu:
"This fixes a couple of regressions in the talitos driver that were
introduced back in 4.3.
The first bug causes a crash when the driver's AEAD functionality is
used while the second bug prevents its AEAD feature from working once
you get past the first bug"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: talitos - fix AEAD tcrypt tests
crypto: talitos - fix crash in talitos_cra_init()
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap into fixes
Enable dm814x and dra62x clock driver. This branch has a dependency
to the clk-ti branch from the Linux clk tree for the ADPLL clock driver.
Otherwise things won't keep booting properly when we flip over to use
the clock driver instead of fixed clocks set up by the bootloader.
* tag 'omap-for-v4.6/dt-ti81xx-signed' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap:
ARM: dts: Add clocks for dm814x ADPLL
|
|
To check deeply nested page fault callchains.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-wuji34xx003kr88nmqt6jkgf@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-shj0fazntmskhjild5i6x73l@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
This fixes a bug caused by an uninitialized callchain cursor. The crash
first appeared in:
6f736735e30f ("perf evsel: Require that callchains be resolved before
calling fprintf_{sym,callchain}")
The callchain cursor is a struct that contains pointers, that when
uninitialized will cause unpredictable behavior (usually a crash)
when trying to append to the callchain.
The existing implementation has the following issues:
1. The callchain cursor used is not initialized, resulting in
unpredictable behavior when used.
2. The cursor is declared on the stack. Even if it is properly initialized,
the implementation will leak memory when the function returns,
since all the references to the callchain_nodes allocated by
callchain_cursor_append will be lost when the cursor goes out of
scope.
3. Storing the cursor on the stack is inefficient. Even if memory is
properly freed when it goes out of scope, a performance penalty
will be incurred due to reallocation of callchain nodes.
callchain_cursor_append is designed to avoid these reallocations
when an existing cursor is reused.
This patch fixes the crash by replacing cursor_callchain with a reference
to the global callchain_cursor which also resolves all 3 issues mentioned
above.
How to reproduce the crash:
$ perf record --call-graph=dwarf stress -t 1 -c 1
$ perf script > /dev/null
Segfault
Signed-off-by: Chris Phlipot <cphlipot0@gmail.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Fixes: 6f736735e30f ("perf evsel: Require that callchains be resolved before calling fprintf_{sym,callchain}")
Link: http://lkml.kernel.org/r/1461119531-2529-1-git-send-email-cphlipot0@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Forgot about page faults, a software event, when adding support for callchains,
fix it:
# trace --no-syscalls --pf maj --call dwarf
0.000 ( 0.000 ms): Xorg/2068 majfault [sfbSegment1+0x0] => /usr/lib64/xorg/modules/drivers/intel_drv.so@0x11b490 (x.)
sfbSegment1+0x0 (/usr/lib64/xorg/modules/drivers/intel_drv.so)
fbPolySegment32+0x361 (/usr/lib64/xorg/modules/drivers/intel_drv.so)
sna_poly_segment+0x743 (/usr/lib64/xorg/modules/drivers/intel_drv.so)
damagePolySegment+0x77 (/usr/libexec/Xorg)
ProcPolySegment+0xe7 (/usr/libexec/Xorg)
Dispatch+0x25f (/usr/libexec/Xorg)
dix_main+0x3c3 (/usr/libexec/Xorg)
__libc_start_main+0xf0 (/usr/lib64/libc-2.22.so)
_start+0x29 (/usr/libexec/Xorg)
0.257 ( 0.000 ms): Xorg/2068 majfault [miZeroClipLine+0x0] => /usr/libexec/Xorg@0x18e830 (x.)
miZeroClipLine+0x0 (/usr/libexec/Xorg)
_fbSegment+0x2c0 (/usr/lib64/xorg/modules/drivers/intel_drv.so)
sfbSegment1+0x67 (/usr/lib64/xorg/modules/drivers/intel_drv.so)
fbPolySegment32+0x361 (/usr/lib64/xorg/modules/drivers/intel_drv.so)
sna_poly_segment+0x743 (/usr/lib64/xorg/modules/drivers/intel_drv.so)
damagePolySegment+0x77 (/usr/libexec/Xorg)
ProcPolySegment+0xe7 (/usr/libexec/Xorg)
Dispatch+0x25f (/usr/libexec/Xorg)
dix_main+0x3c3 (/usr/libexec/Xorg)
__libc_start_main+0xf0 (/usr/lib64/libc-2.22.so)
_start+0x29 (/usr/libexec/Xorg)
^C#
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-8h6ssirw5z15qyhy2lwd6f89@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Prep work for next patches, where we'll need access to the created
evsels, to possibly configure callchains.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-2pcgsgnkgellhlcao4aub8tu@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
write_buildid() increments 'name_len' with intention to take into
account trailing zero byte. However, 'name_len' was already incremented
in machine__write_buildid_table() before. So this leads to an
out-of-bounds read in do_write():
$ ./perf record sleep 0
[ perf record: Woken up 1 times to write data ]
=================================================================
==15899==ERROR: AddressSanitizer: global-buffer-overflow on address 0x00000099fc92 at pc 0x7f1aa9c7eab5 bp 0x7fff940f84d0 sp 0x7fff940f7c78
READ of size 19 at 0x00000099fc92 thread T0
#0 0x7f1aa9c7eab4 (/usr/lib/gcc/x86_64-pc-linux-gnu/5.3.0/libasan.so.2+0x44ab4)
#1 0x649c5b in do_write util/header.c:67
#2 0x649c5b in write_padded util/header.c:82
#3 0x57e8bc in write_buildid util/build-id.c:239
#4 0x57e8bc in machine__write_buildid_table util/build-id.c:278
...
0x00000099fc92 is located 0 bytes to the right of global variable '*.LC99' defined in 'util/symbol.c' (0x99fc80) of size 18
'*.LC99' is ascii string '[kernel.kallsyms]'
...
Shadow bytes around the buggy address:
0x00008012bf80: f9 f9 f9 f9 00 00 00 00 00 00 03 f9 f9 f9 f9 f9
=>0x00008012bf90: 00 00[02]f9 f9 f9 f9 f9 00 00 00 00 00 05 f9 f9
0x00008012bfa0: f9 f9 f9 f9 00 03 f9 f9 f9 f9 f9 f9 00 00 00 00
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1461053847-5633-1-git-send-email-aryabinin@virtuozzo.com
[ Remove the off-by one at the origin, to keep len(s) == strlen(s) assumption ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Remove the final printk. All preceding output is already properly
newline-terminated and the printk isn't even KERN_CONT to begin with,
so it only adds one empty line to the log.
Signed-off-by: Michal Pecio <michal.pecio@gmail.com>
Signed-off-by: Shaohua Li <shli@fb.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/mfleming/efi into efi/urgent
Pull EFI fix from Matt Fleming:
* Avoid out-of-bounds access in the efivars code when performing
string matching on converted EFI variable names (Laszlo Ersek)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|