|
GPIO LEDs are quite likely to keep working even after the kernel has
panicked, so they are ideal to blink in this situation.
This commit adds support for the new "panic-indicator"
firmware property, allowing a given LED to be marked to blink on
a kernel panic.
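For illustration only (not the literal diff of this patch), detecting the
property with the generic firmware-property helpers could look roughly like
this; "fwnode" and "led_cdev" stand in for the driver's own data, and
LED_PANIC_INDICATOR is the flag introduced by the follow-up commit:

  /* Hedged sketch: detect the property and mark the LED accordingly. */
  if (fwnode_property_present(fwnode, "panic-indicator"))
          led_cdev->flags |= LED_PANIC_INDICATOR;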
Signed-off-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
Reviewed-by: Matthias Brugger <mbrugger@suse.com>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Jacek Anaszewski <j.anaszewski@samsung.com>
|
|
This commit adds a new led_cdev flag, LED_PANIC_INDICATOR, which
allows a specific LED to be marked for switching to the "panic"
trigger on a kernel panic.
This is useful because it lets the user assign a regular trigger
to a given LED and still have that LED blink on a kernel panic.
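Schematically, the switching works along these lines (the real code lives in
the panic LED trigger; "panic_led_trigger" below is an illustrative name):

  /* On panic, switch every LED marked with the flag to the "panic"
   * trigger, regardless of the trigger the user had assigned. */
  if (led_cdev->flags & LED_PANIC_INDICATOR)
          led_trigger_set(led_cdev, panic_led_trigger);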
Signed-off-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
Reviewed-by: Matthias Brugger <mbrugger@suse.com>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Jacek Anaszewski <j.anaszewski@samsung.com>
|
|
The dma_alloc_coherent() function returns a virtual address which can
be used for coherent access to the underlying memory. On some
architectures, like arm64, undefined behavior results if this memory is
also accessed via virtual mappings that are not coherent. Because of
their undefined nature, operations like virt_to_page() return garbage
when passed virtual addresses obtained from dma_alloc_coherent(). Any
subsequent mappings via vmap() of the garbage page values are unusable
and result in bad things like bus errors (synchronous aborts in ARM64
speak).
The mlx4 driver contains code that does the equivalent of
vmap(virt_to_page(dma_alloc_coherent(...))), which results in an oops when
the device is opened.
Prevent the Ethernet driver from running this problematic code by forcing it
to allocate contiguous memory. The InfiniBand driver first tries to allocate
contiguous memory, and only in case of failure falls back to working with
fragmented memory.
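Schematically, the problematic pattern looks like this (a sketch, not the
literal mlx4 code):

  void *cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
  struct page *page = virt_to_page(cpu_addr);      /* undefined for coherent memory */
  void *va = vmap(&page, 1, VM_MAP, PAGE_KERNEL);  /* garbage page -> bus error */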
Signed-off-by: Haggai Abramovsky <hagaya@mellanox.com>
Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Reported-by: David Daney <david.daney@cavium.com>
Tested-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Commit bf6993276f74 ("jbd2: Use tracepoints for history file") removed the
members j_history, j_history_max and j_history_cur from struct handle_s, but
their descriptions were left lingering. Remove them.
Signed-off-by: Luis de Bethencourt <luisbg@osg.samsung.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Jan Kara <jack@suse.cz>
|
|
Updates from Boris Brezillon:
This pull request contains the following infrastructure changes:
* introduction of the ECC algorithm concept to complement the existing ECC mode
* replacement of the nand_ecclayout infrastructure by something more
future-proof.
* addition of an mtd-activity led trigger to replace the nand-activity
one
And a bunch of specific NAND driver improvements/fixes. Here are the
changes that are worth mentioning:
* rework of the OMAP GPMC and NAND drivers
* prepare the sunxi NAND driver to receive DMA support
* handle bitflips in erased pages on GPMI revisions that do not support
this in hardware.
* tag 'nand/for-4.7' of github.com:linux-nand/linux: (152 commits)
mtd: brcmnand: respect ECC algorithm set by NAND subsystem
gpmi-nand: Handle ECC Errors in erased pages
Documentation: devicetree: deprecate "soft_bch" nand-ecc-mode value
mtd: nand: add support for "nand-ecc-algo" DT property
mtd: mtd: drop NAND_ECC_SOFT_BCH enum value
mtd: drop support for NAND_ECC_SOFT_BCH as "soft_bch" mapping
mtd: nand: read ECC algorithm from the new field
mtd: nand: fsmc: validate ECC setup by checking algorithm directly
mtd: nand: set ECC algorithm to Hamming on fallback
staging: mt29f_spinand: set ECC algorithm explicitly
CRIS v32: nand: set ECC algorithm explicitly
mtd: nand: atmel: set ECC algorithm explicitly
mtd: nand: davinci: set ECC algorithm explicitly
mtd: nand: bf5xx: set ECC algorithm explicitly
mtd: nand: omap2: Fix high memory dma prefetch transfer
mtd: nand: omap2: Start dma request before enabling prefetch
mtd: nandsim: add __init attribute
mtd: nand: move of_get_nand_xxx() helpers into nand_base.c
mtd: nand: sh_flctl: rely on generic DT parsing done in nand_scan_ident()
mtd: nand: mxc: rely on generic DT parsing done in nand_scan_ident()
...
|
|
After the THP refcounting change, obtaining a compound page from
get_user_pages() no longer allows us to assume the entire compound page
is immediately mappable from a secondary MMU.
A secondary MMU doesn't want to call get_user_pages() more than once for
each compound page, in order to know whether it can map the whole compound
page. So a secondary MMU needs to know from a single get_user_pages()
invocation when it can immediately map the entire compound page, to avoid
a flood of unnecessary secondary MMU faults and spurious
atomic_inc()/atomic_dec() (pages don't have to be pinned by MMU notifier
users).
Ideally, instead of the page->_mapcount < 1 check, get_user_pages()
should return the granularity of the "page" mapping in the "mm" passed
to get_user_pages(). However, it is a non-trivial change to pass the "pmd"
status belonging to the "mm" walked by get_user_pages() up the stack (up
to the caller of get_user_pages()). So the fix just checks whether there is
no individual pte mapping on the page returned by get_user_pages(), in
which case the caller can assume that the whole compound page is mapped in
the current "mm" (by a pmd_trans_huge() pmd). In that case the entire
compound page is safe to map into the secondary MMU without additional
get_user_pages() calls on the surrounding tail/head pages. In addition
to being faster, not having to run other get_user_pages() calls also
reduces the memory footprint of the secondary MMU fault in case the pmd
split happened as a result of memory pressure.
Without this fix, after a MADV_DONTNEED (as invoked by QEMU during
postcopy live migration or ballooning) or after generic swapping (with a
failure in split_huge_page() that would only result in pmd splitting and
not a physical page split), KVM would map the whole compound page into
the shadow pagetables, even though regular faults or userfaults (like
UFFDIO_COPY) may map regular pages into the primary MMU as a result of the
pte faults, leading to guest mode and userland mode going out of
sync and not working on the same memory at all times.
Any other secondary MMU notifier manager (KVM is just one of the many
MMU notifier users) will need the same information if it doesn't want to
run a flood of get_user_pages_fast() calls and it can support multiple
granularities in its secondary MMU mappings, so I think it is justified to
expose this to more than just KVM.
The other option would be to move transparent_hugepage_adjust() to
mm/huge_memory.c, but that function currently has all kinds of KVM data
structures in it, so it's definitely not a cut-and-paste job, and I couldn't
do a cleaner fix than this one for 4.6.
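A sketch of the check a secondary MMU user ends up relying on (the helper
name here is illustrative, not necessarily the one merged):

  static inline bool thp_mapped_as_whole(struct page *page)
  {
          /* no individual pte mapping of this subpage means the compound
           * page is mapped by a single pmd_trans_huge() pmd in this mm */
          return PageTransCompound(page) &&
                 atomic_read(&page->_mapcount) < 0;
  }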
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: "Li, Liang Z" <liang.z.li@intel.com>
Cc: Amit Shah <amit.shah@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Fix problems in the uapi definitions reported by Gabriel Laskar (see
https://lkml.org/lkml/2016/4/5/205 for details):
- move public header file rio_mport_cdev.h to include/uapi/linux directory
- change types in data structures passed as IOCTL parameters
- improve parameter checking in some IOCTL service routines
Signed-off-by: Alexandre Bounine <alexandre.bounine@idt.com>
Reported-by: Gabriel Laskar <gabriel@lse.epita.fr>
Tested-by: Barry Wood <barry.wood@idt.com>
Cc: Gabriel Laskar <gabriel@lse.epita.fr>
Cc: Matt Porter <mporter@kernel.crashing.org>
Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
Cc: Andre van Herk <andre.van.herk@prodrive-technologies.com>
Cc: Barry Wood <barry.wood@idt.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Cgroup2 currently doesn't have a per-cgroup swappiness setting. We
might want to add one later - that's a different discussion - but until
we do, the cgroups should always follow the system setting. Otherwise
it will be unchangeably set to whatever the ancestor inherited from the
system setting at the time of cgroup creation.
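A sketch of the resulting behaviour (close to, but not necessarily verbatim,
the memcg helper):

  static int effective_swappiness(struct mem_cgroup *memcg)
  {
          /* on the cgroup2 (default) hierarchy, always follow the
           * global sysctl instead of the inherited per-cgroup value */
          if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
                  return vm_swappiness;
          return memcg->swappiness;
  }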
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Vladimir Davydov <vdavydov@virtuozzo.com>
Cc: <stable@vger.kernel.org> [4.5]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs into next
|
|
This value should not be part of nand_ecc_modes_t, as it specifies an
algorithm, not a mode. We have successfully introduced the new "algo" field,
which is now respected.
Signed-off-by: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
|
|
Now that all drivers go through nand_set_flash_node() to parse the generic
NAND properties, we can move all of_get_nand_xxx() helpers into
nand_base.c, make them static and remove of_mtd.c and of_mtd.h.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
|
|
Now that all MTD drivers have moved to the mtd_ooblayout_ops model we can
safely remove the struct nand_ecclayout definition, and all the remaining
places where it was still used.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
|
|
Now that all NAND drivers have switched to mtd_ooblayout_ops, we can kill
the ecc->layout field.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
|
|
Implementing the mtd_ooblayout_ops interface is the new way of exposing
ECC/OOB layout to MTD users. Modify the onenand drivers to switch to this
approach.
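For reference, an mtd_ooblayout_ops implementation is a pair of section-based
callbacks instead of a static nand_ecclayout table; a minimal sketch (region
numbers made up for illustration):

  static int foo_ooblayout_ecc(struct mtd_info *mtd, int section,
                               struct mtd_oob_region *oobregion)
  {
          if (section)
                  return -ERANGE;
          oobregion->offset = 2;          /* made-up layout */
          oobregion->length = 6;
          return 0;
  }

  static int foo_ooblayout_free(struct mtd_info *mtd, int section,
                                struct mtd_oob_region *oobregion)
  {
          if (section)
                  return -ERANGE;
          oobregion->offset = 8;
          oobregion->length = mtd->oobsize - 8;
          return 0;
  }

  static const struct mtd_ooblayout_ops foo_ooblayout_ops = {
          .ecc  = foo_ooblayout_ecc,
          .free = foo_ooblayout_free,
  };

  /* drivers then call mtd_set_ooblayout(mtd, &foo_ooblayout_ops); */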
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
|
|
Now that mtd_ooblayout_ecc() returns the ECC byte position using the
OOB free method, we can get rid of the fsmc_nand_eccplace struct.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
|
|
Implementing the mtd_ooblayout_ops interface is the new way of exposing
ECC/OOB layout to MTD users.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
|
|
Commit 326e1dbb57 ("block: remove management of bi_remaining when
restoring original bi_end_io") made bio_inc_remaining() private to bio.c
because the only use-case that made sense was confined to the
bio_chain() interface.
Since that time DM thinp went on to use bio_chain() in its relatively
complex implementation of async discard support. That implementation,
even when converted over to use the new async __blkdev_issue_discard()
interface, depends on deferred completion of the original discard bio --
which is most appropriately implemented using bio_inc_remaining().
DM thinp foolishly duplicated bio_inc_remaining(), local to dm-thin.c as
__bio_inc_remaining(), so re-exporting bio_inc_remaining() allows us to
put an end to that foolishness.
All said, bio_inc_remaining() should really only be used in conjunction
with bio_chain(). It isn't intended for generic bio reference counting.
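Usage sketch of the intended pairing (illustrative, not the dm-thin code):

  bio_inc_remaining(parent_bio);      /* defer completion of the parent */
  bio_chain(child_bio, parent_bio);
  generic_make_request(child_bio);
  /* ... later, when the deferred discard work finishes: */
  bio_endio(parent_bio);              /* drops the extra hold */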
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Acked-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
|
|
On mx6ul the General Purpose Register 1 (GPR1) contains the following
bits for configuring the direction of the SAI MCLKs:
SAI1_MCLK_DIR, SAI2_MCLK_DIR, SAI3_MCLK_DIR
Introduce the "fsl,sai-mclk-direction-output" optional property to allow
configuring the SAI_MCLK outputs.
Tested on an imx6ul-evk board.
Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com>
Acked-by: Nicolin Chen <nicoleotsuka@gmail.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
Commit:
26657848502b7847 ("perf/core: Verify we have a single perf_hw_context PMU")
forcefully prevents multiple PMUs from sharing perf_hw_context, as this
generally doesn't make sense. It is a common bug for uncore PMUs to
use perf_hw_context rather than perf_invalid_context, which this detects.
However, systems exist with heterogeneous CPUs (and hence heterogeneous
HW PMUs), for which sharing perf_hw_context is necessary, and possible
in some limited cases.
To make this work we have to perform some gymnastics, as we did in these
commits:
66eb579e66ecfea5 ("perf: allow for PMU-specific event filtering")
c904e32a69b7c779 ("arm: perf: filter unschedulable events")
To allow those systems to work, we must allow PMUs for heterogeneous
CPUs to share perf_hw_context, though we must still disallow sharing
otherwise to detect the common misuse of perf_hw_context.
This patch adds a new PERF_PMU_CAP_HETEROGENEOUS_CPUS for this, updates
the core logic to account for this, and makes use of it in the arm_pmu
code that is used for systems with heterogeneous CPUs. Comments are
added to make the rationale clear and hopefully avoid accidental abuse.
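A heterogeneous CPU PMU driver opts in along these lines (illustrative;
arm_pmu does the equivalent in its init path):

  /* declare that this PMU only covers a subset of the CPUs */
  cpu_pmu->pmu.capabilities |= PERF_PMU_CAP_HETEROGENEOUS_CPUS;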
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/20160426103346.GA20836@leverpostej
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Many instruction tracing PMUs out there support address range-based
filtering, which would, for example, generate trace data only for a
given range of instruction addresses, which is useful for tracing
individual functions, modules or libraries. Other PMUs may also
utilize this functionality to allow filtering in or filtering out
code at certain address ranges.
This patch introduces the interface for userspace to specify these
filters and for the PMU drivers to apply these filters to hardware
configuration.
The user interface is an ASCII string that is passed via an ioctl()
and specifies (in the form of an ASCII string) address ranges within
certain object files or within the kernel. There is no special treatment
for kernel modules yet, but it might be a worthy pursuit.
The PMU driver interface basically adds two extra callbacks to the
PMU driver structure, one of which validates the filter configuration
proposed by the user against what the hardware is actually capable of
doing and the other one translates hardware-independent filter
configuration into something that can be programmed into the
hardware.
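From the user side it ends up looking roughly like this (the exact filter
string syntax is defined by the rest of the series; the one below is only an
example):

  /* trace only a 0x2000-byte range starting at offset 0x1000 of libfoo */
  ioctl(perf_fd, PERF_EVENT_IOC_SET_FILTER,
        "filter 0x1000/0x2000@/usr/lib/libfoo.so");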
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: vince@deater.net
Link: http://lkml.kernel.org/r/1461771888-10409-6-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
All the atomic operations have their arguments the wrong way around;
make atomic_fetch_or() consistent and flip them.
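I.e., after this change the call sites use the same argument order as the
other atomics (value first, atomic_t pointer second); a sketch:

  old = atomic_fetch_or(mask, &v);    /* was: atomic_fetch_or(&v, mask) */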
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
These sched metrics have become complex enough that it is worth describing
them in detail at their definition.
Signed-off-by: Yuyang Du <yuyang.du@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
[ Fixed the text to improve its spelling and typography. ]
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bsegall@google.com
Cc: dietmar.eggemann@arm.com
Cc: lizefan@huawei.com
Cc: morten.rasmussen@arm.com
Cc: pjt@google.com
Cc: umgwanakikbuti@gmail.com
Cc: vincent.guittot@linaro.org
Link: http://lkml.kernel.org/r/1459829551-21625-4-git-send-email-yuyang.du@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Integer metrics need fixed point arithmetic. In sched/fair, a few
metrics, e.g., weight, load, load_avg, util_avg, freq, and capacity,
may have different fixed point ranges, which makes their update and
usage error-prone.
In order to avoid errors relating to the fixed point range, we
define a basic fixed point range, and then formalize all metrics to
be based on that basic range.
The basic range is 1024 or (1 << 10). Further, one can recursively
apply the basic range to obtain a larger range.
As pointed out by Ben Segall, weight (visible to the user, e.g., NICE-0 has
1024) and load (e.g., NICE_0_LOAD) have independent ranges, but they
must be well calibrated.
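In code the basic range boils down to something like the following (the macro
names mirror the ones introduced for this purpose; treat the snippet as a
sketch):

  #define SCHED_FIXEDPOINT_SHIFT  10
  #define SCHED_FIXEDPOINT_SCALE  (1L << SCHED_FIXEDPOINT_SHIFT)  /* 1024 */

  /* recursively applying the basic range yields a larger one, e.g. 1 << 20 */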
Signed-off-by: Yuyang Du <yuyang.du@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bsegall@google.com
Cc: dietmar.eggemann@arm.com
Cc: lizefan@huawei.com
Cc: morten.rasmussen@arm.com
Cc: pjt@google.com
Cc: umgwanakikbuti@gmail.com
Cc: vincent.guittot@linaro.org
Link: http://lkml.kernel.org/r/1459829551-21625-2-git-send-email-yuyang.du@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
The problem with the existing lock pinning is that each pin is of value 1;
this means you can simply unpin if you know it's pinned, without having any
extra information.
This scheme generates a random (16 bit) cookie for each pin and
requires this same cookie to unpin. This means you have to keep the
cookie in context.
No objsize difference for !LOCKDEP kernels.
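Usage sketch (illustrative; rq->lock is just an example of a pinned lock):

  struct pin_cookie cookie;

  cookie = lockdep_pin_lock(&rq->lock);    /* returns a random 16-bit cookie */
  /* ... section during which rq->lock must not be dropped ... */
  lockdep_unpin_lock(&rq->lock, cookie);   /* must hand the same cookie back */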
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
applying new changes
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
The functions dev_pm_opp_of_{cpumask_,}remove_table() remove/free all the
static OPP entries associated with the device and/or all CPUs (in the
cpumask case) that were created from DT.
However, OPP entries populated by reading from the firmware, or through some
other method, using dev_pm_opp_add() are marked dynamic and can't be removed
using the above functions.
This patch adds non-DT/OF versions, dev_pm_opp_{cpumask_,}remove_table(), to
support the above mentioned use case.
This is in preparation for making use of them in scpi-cpufreq.c.
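Usage sketch for the non-DT case (values are made up):

  dev_pm_opp_add(cpu_dev, 1000000000, 1100000);   /* 1 GHz at 1.1 V */
  /* ... */
  dev_pm_opp_cpumask_remove_table(cpumask);       /* instead of the *_of_* variant */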
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
The new use of dev_pm_opp_set_sharing_cpus resulted in a harmless compiler
warning with CONFIG_CPUMASK_OFFSTACK=y:
drivers/cpufreq/mvebu-cpufreq.c: In function 'armada_xp_pmsu_cpufreq_init':
include/linux/cpumask.h:550:25: error: passing argument 2 of 'dev_pm_opp_set_sharing_cpus' discards 'const' qualifier from pointer target type [-Werror=discarded-qualifiers]
The problem here is that cpumask_var_t gets passed by reference, but
by declaring a 'const cpumask_var_t' argument, only the pointer is
constant, not the actual mask. This is harmless because the function
does not actually modify the mask.
This patch changes the function prototypes for all of the related functions
to pass a 'struct cpumask *' instead of 'cpumask_var_t', matching what
most other such functions do in the kernel. This lets us mark all the
other similar functions as taking a 'const' mask where possible,
and it avoids the warning without any change in object code.
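The underlying typedef makes the problem easy to see (sketch):

  /* with CONFIG_CPUMASK_OFFSTACK=y: */
  typedef struct cpumask *cpumask_var_t;

  void f(const cpumask_var_t mask);    /* == struct cpumask *const mask   */
  void g(const struct cpumask *mask);  /* the pointed-to mask is const    */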
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 947bd567f7a5 (mvebu: Use dev_pm_opp_set_sharing_cpus() to mark OPP tables as shared)
Acked-by: Pavel Machek <pavel@ucw.cz>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Removing the SCI penalize function as the penalty is now calculated on the
fly.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
The _OSI handling code has grown large and it's time to move it into one file.
This patch collects all _OSI handling code into one single file, so that only
the following functions are used externally:
early_acpi_osi_init(): used by DMI detections;
acpi_osi_init(): used to initialize OSI command line settings and install the
Linux specific _OSI handler;
acpi_osi_setup(): the API that should be used by external quirks;
acpi_osi_is_win8(): the API used by external drivers to determine whether the
BIOS supports Win8.
There is no need for CONFIG_DMI guards, as the dmi_check_system() stub lets
everything DMI-related be stripped out when DMI is disabled.
No functional changes.
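External quirks keep using the same entry point, e.g. (illustrative):

  acpi_osi_setup("!Windows 2012");   /* tell ACPICA not to claim Win8 support */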
Tested-by: Lukas Wunner <lukas@wunner.de>
Tested-by: Chen Yu <yu.c.chen@intel.com>
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
This patch performs necessary cleanups before moving OSI support to
another file.
1. Change printk into pr_xxx
2. Do not initialize values to 0
3. Do not append additional "return" at the end of the function
4. Remove useless comments that may easily break line-wrapping rules
After fixing the coding style issues, rename functions to make them look like
acpi_osi_xxx().
No functional changes.
Tested-by: Lukas Wunner <lukas@wunner.de>
Tested-by: Chen Yu <yu.c.chen@intel.com>
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
This patch changes "int/unsigned int" to "bool" to simplify the code.
Tested-by: Lukas Wunner <lukas@wunner.de>
Tested-by: Chen Yu <yu.c.chen@intel.com>
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
The following commit always reports a positive value when Apple hardware
queries _OSI("Darwin"):
Commit: 7bc5a2bad0b8d9d1ac9f7b8b33150e4ddf197334
Subject: ACPI: Support _OSI("Darwin") correctly
However, since this implementation makes the judgement at runtime, it breaks
acpi_osi=!Darwin and cannot return "unsupported" for _OSI("WinXXX") queries
invoked before _OSI("Darwin").
This patch fixes the issues by reverting the wrong support and implementing
the default behavior of _OSI("Darwin")/_OSI("WinXXX") on Apple hardware via
DMI matching.
Fixes: 7bc5a2bad0b8 (ACPI: Support _OSI("Darwin") correctly)
Link: https://bugzilla.kernel.org/show_bug.cgi?id=92111
Reported-and-tested-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Since we will need the backlight_device_get_by_type() API, use it instead of
the backlight_device_registered() API wherever necessary and remove the
backlight_device_registered() API.
Signed-off-by: Aaron Lu <aaron.lu@intel.com>
Acked-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
It is useful in some cases to get a pointer to the backlight device and use
it to set the backlight (the following patch will make use of this), so add
the two APIs and export them.
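A sketch of the two new APIs in use (illustrative):

  struct backlight_device *bd;

  bd = backlight_device_get_by_type(BACKLIGHT_RAW);
  if (bd)
          backlight_device_set_brightness(bd, bd->props.max_brightness);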
Signed-off-by: Aaron Lu <aaron.lu@intel.com>
Acked-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
tcp_snd_una_update() and tcp_rcv_nxt_update() call
u64_stats_update_begin() either from process context or BH handler.
This triggers a lockdep splat on 32bit & SMP builds.
We could add a u64_stats_update_begin_bh() variant, but this would slow down
32bit builds with useless local_bh_disable()/local_bh_enable() pairs, since
we own the socket lock at this point.
Instead, add a sock_owned_by_me() helper to get proper lockdep support even
on 64bit builds, plus new u64_stats_update_begin_raw() and
u64_stats_update_end_raw() methods.
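A sketch of the resulting pattern in the TCP hot paths (close to, but not
necessarily verbatim, the fix):

  sock_owned_by_me(sk);                    /* lockdep: we hold the socket lock */
  u64_stats_update_begin_raw(&tp->syncp);  /* no local_bh_disable() needed     */
  tp->bytes_acked += delta;
  u64_stats_update_end_raw(&tp->syncp);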
Fixes: c10d9310edf5 ("tcp: do not assume TCP code is non preemptible")
Reported-by: Fabio Estevam <festevam@gmail.com>
Diagnosed-by: Francois Romieu <romieu@fr.zoreil.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Tested-by: Fabio Estevam <fabio.estevam@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
With an i2c topology like the following

   I2C -----+----------+------ MUX -----+------ BAT1
            |          |        ^       |
         EEPROM       GPIO -----'       '------ BAT2
there is a locking problem with the GPIO controller since it is a client
on the same i2c bus that it muxes. Transfers to the mux clients (e.g. BAT1)
will lock the whole i2c bus prior to attempting to switch the mux to the
correct i2c segment. In the above case, the GPIO device is an I/O expander
with an i2c interface, and since the GPIO subsystem knows nothing (and
rightfully so) about the lockless needs of the i2c mux code, this results
in a deadlock when the GPIO driver issues i2c transfers to modify the
mux.
Observe that while the i2c bus must be locked during the actual MUX update in
order to avoid random garbage on the slave side, it does not strictly need to
stay locked over the whole sequence of a full select-transfer-deselect mux
client operation. The mux itself needs to be
locked, so transfers to clients behind the mux are serialized, and the mux
needs to be stable during all i2c traffic (otherwise individual mux slave
segments might see garbage, or worse).
Introduce this new locking concept as "mux-locked" muxes, and call the
pre-existing mux locking scheme "parent-locked".
Modify the i2c mux locking so that "mux-locked" muxes lock only
the muxes on the parent adapter, instead of the whole i2c bus, when there is
a transfer to the slave side of the mux. This lock serializes transfers to
the slave side of the muxes on the parent adapter.
Add code to i2c-mux-gpio and i2c-mux-pinctrl that checks if all involved
gpio/pinctrl devices have a parent that is an i2c adapter in the same
adapter tree that is muxed, and requests a "mux-locked mux" if that is the
case.
Modify the select-transfer-deselect code for "mux-locked" muxes so
that each of the select-transfer-deselect ops locks the mux parent
adapter individually.
Signed-off-by: Peter Rosin <peda@axentia.se>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
|
|
Add i2c_lock_bus() and i2c_unlock_bus(), which call the new lock_bus and
unlock_bus ops in the adapter. These funcs/ops take an additional flags
argument that indicates for what purpose the adapter is locked.
There are two flags, I2C_LOCK_ROOT_ADAPTER and I2C_LOCK_SEGMENT, but they
are both implemented the same. For now. Locking the root adapter means
that the whole bus is locked, locking the segment means that only the
current bus segment is locked (i.e. i2c traffic on the parent side of
a mux is still allowed even if the child side of the mux is locked).
Also support a trylock_bus op (but no function to call it, as it is not
expected to be needed outside of the i2c core).
Implement i2c_lock_adapter/i2c_unlock_adapter in terms of the new locking
scheme (i.e. lock with the I2C_LOCK_ROOT_ADAPTER flag).
Locking the root adapter and locking the segment is the same thing for
all root adapters (e.g. in the normal case of a simple topology with no
i2c muxes). The two locking variants are also the same for traditional
muxes (aka parent-locked muxes). These muxes traverse the tree, locking
each level as they go until they reach the root. This patch is preparatory
for a later patch in the series introducing mux-locked muxes, which behave
differently depending on the requested locking. Since all current users
are using i2c_lock_adapter, which is a wrapper for I2C_LOCK_ROOT_ADAPTER,
we only need to annotate the calls that will not need to lock the root
adapter for mux-locked muxes, i.e. the instances that need to use
I2C_LOCK_SEGMENT instead of i2c_lock_adapter/I2C_LOCK_ROOT_ADAPTER. Those
instances are in the i2c_transfer and i2c_smbus_xfer functions, so that
mux-locked muxes can single out normal i2c accesses to its slave side
and adjust the locking for those accesses.
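A sketch of the segment-level locking in the transfer path (illustrative):

  i2c_lock_bus(adap, I2C_LOCK_SEGMENT);    /* same as root lock on non-muxed buses */
  ret = __i2c_transfer(adap, msgs, num);
  i2c_unlock_bus(adap, I2C_LOCK_SEGMENT);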
Signed-off-by: Peter Rosin <peda@axentia.se>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
|
|
Previous patches removed all direct accesses to dev->trans_start, so change
the netif_trans_update() helper to update trans_start of netdev queue 0
instead, and then remove trans_start from struct net_device.
AFAICS a lot of the netif_trans_update() invocations are now useless because
they occur in ndo_start_xmit and the driver doesn't set LLTX
(i.e. the stack already took care of the update).
As I can't test any of them it seems better to just leave them alone.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
trans_start exists twice:
- as member of net_device (legacy)
- as member of netdev_queue
In order to get rid of the legacy case, add a netif_trans_update() helper
(this patch), then convert spots that do
dev->trans_start = jiffies
to use this helper (next patch).
This would then allow us to change the helper so that it updates the
trans_start of netdev queue 0 instead.
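Roughly, the helper looks like this (shown here in the queue-0 form it grows
into once the legacy field is gone; treat it as a sketch):

  static inline void netif_trans_update(struct net_device *dev)
  {
          struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);

          if (txq->trans_start != jiffies)
                  txq->trans_start = jiffies;
  }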
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Update the relevant flow steering device structs and commands to
support vport.
Update the flow steering core API to receive a vport number.
Add ingress and egress ACL flow table name spaces.
Add ACL flow table support:
* ACL (Access Control List) flow table is a table that contains
only allow/drop steering rules.
* We have two types of ACL flow tables - ingress and egress.
* ACLs handle traffic sent from/to the E-Switch FDB table; ingress refers to
traffic sent from a vport to the E-Switch and egress refers to traffic sent
from the E-Switch to a vport.
* Ingress ACL flow table allow/drop rules are checked against traffic sent
from the VF.
* Egress ACL flow table allow/drop rules are checked against traffic sent to
the VF.
Signed-off-by: Mohamad Haj Yahia <mohamad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/peter.chen/usb into usb-next
Hi Greg, below are changes for chipidea and OTG FSM; no major changes.
Some are documentation updates, some are tiny fixes. Thanks.
|
|
Here's a set of patches that changes how certificates/keys are determined
to be trusted. That's currently a two-step process:
(1) Up until recently, when an X.509 certificate was parsed - no matter
the source - it was judged against the keys in .system_keyring,
assuming those keys to be trusted if they have KEY_FLAG_TRUSTED set
upon them.
This has just been changed such that any key in the .ima_mok keyring,
if configured, may also be used to judge the trustworthiness of a new
certificate, whether or not the .ima_mok keyring is meant to be
consulted for whatever process is being undertaken.
If a certificate is determined to be trustworthy, KEY_FLAG_TRUSTED
will be set upon a key it is loaded into (if it is loaded into one),
no matter what the key is going to be loaded for.
(2) If an X.509 certificate is loaded into a key, then that key - if
KEY_FLAG_TRUSTED gets set upon it - can be linked into any keyring
with KEY_FLAG_TRUSTED_ONLY set upon it. This was meant to be the
system keyring only, but has been extended to various IMA keyrings.
A user can at will link any key marked KEY_FLAG_TRUSTED into any
keyring marked KEY_FLAG_TRUSTED_ONLY if the relevant permissions masks
permit it.
These patches change that:
(1) Trust becomes a matter of consulting the ring of trusted keys supplied
when the trust is evaluated only.
(2) Every keyring can be supplied with its own manager function to
restrict what may be added to that keyring. This is called whenever a
key is to be linked into the keyring to guard against a key being
created in one keyring and then linked across.
This function is supplied with the keyring and the key type and
payload[*] of the key being linked in for use in its evaluation. It
is permitted to use other data also, such as the contents of other
keyrings such as the system keyrings.
[*] The type and payload are supplied instead of a key because as an
optimisation this function may be called whilst creating a key and
so may reject the proposed key between preparse and allocation.
(3) A default manager function is provided that permits keys to be
restricted to only asymmetric keys that are vouched for by the
contents of the system keyring.
A second manager function is provided that just rejects with EPERM.
(4) A key allocation flag, KEY_ALLOC_BYPASS_RESTRICTION, is made available
so that the kernel can initialise keyrings with keys that form the
root of the trust relationship.
(5) KEY_FLAG_TRUSTED and KEY_FLAG_TRUSTED_ONLY are removed, along with
key_preparsed_payload::trusted.
This change also makes it possible in future for userspace to create a private
set of trusted keys and then to have it sealed by setting a manager function
where the private set is wholly independent of the kernel's trust
relationships.
Further changes in the set involve extracting certain IMA special keyrings
and making them generally global:
(*) .system_keyring is renamed to .builtin_trusted_keys and remains read
only. It carries only keys built in to the kernel. It may be where
UEFI keys should be loaded - though that could better be the new
secondary keyring (see below) or a separate UEFI keyring.
(*) An optional secondary system keyring (called .secondary_trusted_keys)
is added to replace the IMA MOK keyring.
(*) Keys can be added to the secondary keyring by root if the keys can
be vouched for by either ring of system keys.
(*) Module signing and kexec only use .builtin_trusted_keys and do not use
the new secondary keyring.
(*) Config option SYSTEM_TRUSTED_KEYS now depends on ASYMMETRIC_KEY_TYPE as
that's the only type currently permitted on the system keyrings.
(*) A new config option, IMA_KEYRINGS_PERMIT_SIGNED_BY_BUILTIN_OR_SECONDARY,
is provided to allow keys to be added to IMA keyrings, subject to the
restriction that such keys are validly signed by a key already in the
system keyrings.
If this option is enabled, but secondary keyrings aren't, additions to
the IMA keyrings will be restricted to signatures verifiable by keys in
the builtin system keyring only.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
This is similar to the support for creating triggers via configfs.
Devices will be hosted under:
* /config/iio/devices
We allow users to register "device types" under:
* /config/iio/devices/<device_types>/
Signed-off-by: Daniel Baluta <daniel.baluta@intel.com>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
|
|
RCar Gen2 and later implementations of TMIO/SDHI have their own set of
features and additions. FAST_CLK_CHG is just one of them and I see a few
others being added soon. Some may work on older chipsets but this needs
to be tested case by case. Instead of adding a bunch of flags for each
feature, add a global RCar2+ one for now. We can still break out
features if the need arises.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
If a signal stack is set up with SS_AUTODISARM, then the kernel
inherently avoids incorrectly resetting the signal stack if signals
recurse: the signal stack will be reset on the first signal
delivery. This means that we don't need to check the stack pointer
when delivering signals if SS_AUTODISARM is set.
This will make segmented x86 programs more robust: currently there's
a hole that could be triggered if ESP/RSP appears to point to the
signal stack but actually doesn't due to a nonzero SS base.
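For reference, userspace opts in like this (illustrative):

  stack_t ss = {
          .ss_sp    = altstack,
          .ss_size  = SIGSTKSZ,
          .ss_flags = SS_AUTODISARM,  /* altstack is disarmed while the handler runs */
  };
  sigaltstack(&ss, NULL);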
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Aleksa Sarai <cyphar@cyphar.com>
Cc: Amanieu d'Antras <amanieu@gmail.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Heinrich Schuchardt <xypron.glpk@gmx.de>
Cc: Jason Low <jason.low2@hp.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Moore <pmoore@redhat.com>
Cc: Pavel Emelyanov <xemul@parallels.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Shuah Khan <shuahkh@osg.samsung.com>
Cc: Stas Sergeev <stsp@list.ru>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vladimir Davydov <vdavydov@parallels.com>
Cc: linux-api@vger.kernel.org
Link: http://lkml.kernel.org/r/c46bee4654ca9e68c498462fd11746e2bd0d98c8.1462296606.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Conflicts:
net/ipv4/ip_gre.c
Minor conflicts between tunnel bug fixes in net and
ipv6 tunnel cleanups in net-next.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Pull networking fixes from David Miller:
"Some straggler bug fixes:
1) Batman-adv DAT must consider VLAN IDs when choosing candidate
nodes, from Antonio Quartulli.
2) Fix botched reference counting of vlan objects and neigh nodes in
batman-adv, from Sven Eckelmann.
3) netem can crash when it sees GSO packets, the fix is to segment
then upon ->enqueue. Fix from Neil Horman with help from Eric
Dumazet.
4) Fix VXLAN dependencies in mlx5 driver Kconfig, from Matthew
Finlay.
5) Handle VXLAN ops outside of rcu lock, via a workqueue, in mlx5,
since it can sleep. Fix also from Matthew Finlay.
6) Check mdiobus_scan() return values properly in pxa168_eth and macb
drivers. From Sergei Shtylyov.
7) If the netdevice doesn't support checksumming, disable
segmentation. From Alexander Duyck.
8) Fix races between RDS tcp accept and sending, from Sowmini
Varadhan.
9) In macb driver, probe MDIO bus before we register the netdev,
otherwise we can try to open the device before it is really ready
for that. Fix from Florian Fainelli.
10) Netlink attribute size for ILA "tunnels" not calculated properly,
fix from Nicolas Dichtel"
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net:
ipv6/ila: fix nlsize calculation for lwtunnel
net: macb: Probe MDIO bus before registering netdev
RDS: TCP: Synchronize accept() and connect() paths on t_conn_lock.
RDS:TCP: Synchronize rds_tcp_accept_one with rds_send_xmit when resetting t_sock
vxlan: Add checksum check to the features check function
net: Disable segmentation if checksumming is not supported
net: mvneta: Remove superfluous SMP function call
macb: fix mdiobus_scan() error check
pxa168_eth: fix mdiobus_scan() error check
net/mlx5e: Use workqueue for vxlan ops
net/mlx5e: Implement a mlx5e workqueue
net/mlx5: Kconfig: Fix MLX5_EN/VXLAN build issue
net/mlx5: Unmap only the relevant IO memory mapping
netem: Segment GSO packets on enqueue
batman-adv: Fix reference counting of hardif_neigh_node object for neigh_node
batman-adv: Fix reference counting of vlan object for tt_local_entry
batman-adv: B.A.T.M.A.N V - make sure iface is reactivated upon NETDEV_UP event
batman-adv: fix DAT candidate selection (must use vid)
|
|
Export information about the bus stored in the FPGA's header to userspace via
sysfs, instead of hiding it in pr_debug()s from everyone.
Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Andreas Werner <andreas.werner@men.de>
Tested-by: Andreas Werner <andreas.werner@men.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|