git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent
Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:
User visible changes:
- Mark events as (x86 only) in help output for 'perf kvm stat live' (Alexander Yarygin)
- Provide a better explanation when mmap fails in 'trace' (Arnaldo Carvalho de Melo)
- Add --buildid-dir option to set cache directory, i.e. use:
$ perf --buildid-dir /path/to/dir tool --tool-options
(Jiri Olsa)
- Fix memcpy/memset 'perf bench' output (Rabin Vincent)
- Fix 'perf test' attr tests size values to cope with machine state on
interrupt ABI changes (Jiri Olsa)
- Fixup callchain type parameter handling error message (Kan Liang)
Infrastructure changes and cleanups:
- calloc/xcalloc: Fix argument order (Arjun Sreedharan)
- Move filename__read_int from tools/perf/ to tools/lib, add sysctl__read_int
there and use it in place of ad-hoc copies (Arnaldo Carvalho de Melo)
- Use single strcmp call instead of two (Jiri Olsa)
- Remove extra debugdir variables in 'perf buildid-cache' (Jiri Olsa)
- Fix -a segfault related to kcore handling in 'perf buildid-cache' (Jiri Olsa)
- Move cpumode resolve code to add_callchain_ip (Kan Liang)
- Merge memset into memcpy 'perf bench' (Rabin Vincent)
- Change print format from %lu to %PRIu64 in the hists browser (Tom Huynh)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Use kmem_cache_free() to free a buffer allocated with kmem_cache_alloc().
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
The 24x7 counters are continuously running and are not updated on an
interrupt, so we record the event counts when stopping the event or
deleting it.
But to "read" a single 24x7 counter, we allocate a page and pass it
into the hypervisor (the HV returns the page full of counters, from which
we extract the specific counter for this event).
We allocate the page using GFP_USER and, when deleting the event, we end up
with the following warning because we are blocking in interrupt context:
[ 698.641709] BUG: scheduling while atomic: swapper/0/0/0x10010000
We could use GFP_ATOMIC but that could result in failures. Pre-allocate
a buffer instead so we don't have to allocate in interrupt context. Further,
as Michael Ellerman suggested, use a per-CPU buffer so we only need to
allocate once per CPU.
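A minimal sketch of the pre-allocation pattern described above (the buffer
name, size and helper function are illustrative, not the actual hv-24x7 code):

#include <linux/percpu.h>
#include <linux/string.h>

#define H24x7_BUF_SIZE	4096

/* one request buffer per CPU, set aside up front so the event
 * stop/delete path never has to allocate in interrupt context */
static DEFINE_PER_CPU(char, hv_24x7_reqb[H24x7_BUF_SIZE]) __aligned(H24x7_BUF_SIZE);

static void read_one_24x7_counter(void)
{
	/* get_cpu_var() disables preemption, so the buffer is ours */
	void *buf = &get_cpu_var(hv_24x7_reqb);

	memset(buf, 0, H24x7_BUF_SIZE);
	/* ... build the request, call the hypervisor, extract the counter ... */
	put_cpu_var(hv_24x7_reqb);
}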
Cc: stable@vger.kernel.org
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
The parameters for prepare_ftrace_return() used by the function graph
tracer were swapped to simplify the code on x86_64. But the i386 function
graph trampoline also calls this function, and it did not have its
parameters swapped.
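A hedged sketch of the mismatch (prototypes simplified; the exact argument
names differ in the tree):

/* the ordering introduced for x86_64 by commit 6a06bdbf7f9c: ip first */
extern void prepare_ftrace_return(unsigned long ip, unsigned long *parent,
				  unsigned long frame_pointer);

/*
 * The i386 ftrace_graph_caller trampoline, however, still set up its
 * arguments for the old (parent, ip, frame_pointer) ordering, so ip and
 * parent arrived swapped until the trampoline was updated to match.
 */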
Link: http://lkml.kernel.org/r/20141210231732.GA24163@wfg-t540p.sh.intel.com
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Tested-by: Fengguang Wu <fengguang.wu@intel.com>
Fixes: 6a06bdbf7f9c "ftrace/fgraph/x86: Have prepare_ftrace_return() take ip as first parameter"
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup
Pull cgroup update from Tejun Heo:
"cpuset got simplified a bit. cgroup core got a fix on unified
hierarchy and grew some effective css related interfaces which will be
used for blkio support for writeback IO traffic which is currently
being worked on"
* 'for-3.19' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup:
cgroup: implement cgroup_get_e_css()
cgroup: add cgroup_subsys->css_e_css_changed()
cgroup: add cgroup_subsys->css_released()
cgroup: fix the async css offline wait logic in cgroup_subtree_control_write()
cgroup: restructure child_subsys_mask handling in cgroup_subtree_control_write()
cgroup: separate out cgroup_calc_child_subsys_mask() from cgroup_refresh_child_subsys_mask()
cpuset: lock vs unlock typo
cpuset: simplify cpuset_node_allowed API
cpuset: convert callback_mutex to a spinlock
|
|
This change enables the sunvdc driver to reconnect and recover if a vds
service domain is disconnected or bounced.
By default, it will wait indefinitely for the service domain to become
available again, but will honor a non-zero vdc-timeout md property if one
is set. If the timeout is reached, any in-progress I/Os are completed
with -EIO.
Signed-off-by: Dwight Engen <dwight.engen@oracle.com>
Reviewed-by: Chris Hyser <chris.hyser@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: Dwight Engen <dwight.engen@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Both sunvdc and sunvnet implemented distinct functionality for incrementing
and decrementing dring indexes. Create common functions for use by both,
based on the sunvnet versions, which were chosen since they still work
correctly when a non-power-of-two ring size is used.
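A sketch of the difference (helper names are illustrative, not necessarily
the ones added by the patch):

/* sunvnet-style: correct for any ring size */
static inline u32 vio_dring_next(u32 idx, u32 ring_size)
{
	return (idx + 1 == ring_size) ? 0 : idx + 1;
}

static inline u32 vio_dring_prev(u32 idx, u32 ring_size)
{
	return (idx == 0) ? ring_size - 1 : idx - 1;
}

/* the mask-based alternative only works when ring_size is a power of two */
static inline u32 dring_next_pow2(u32 idx, u32 ring_size)
{
	return (idx + 1) & (ring_size - 1);
}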
Signed-off-by: Dwight Engen <dwight.engen@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata
Pull libata changes from Tejun Heo:
"The only interesting piece is the support for shingled drives. The
changes in libata layer are minimal. All it does is identifying the
new class of device and report upwards accordingly"
* 'for-3.19' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata:
libata: Remove FIXME comment in atapi_request_sense()
sata_rcar: Document deprecated "renesas,rcar-sata"
sata_rcar: Add clocks to sata_rcar bindings
ahci_sunxi: Make AHCI_HFLAG_NO_PMP flag configurable with a module option
libata-scsi: Update SATL for ZAC drives
libata: Implement ATA_DEV_ZAC
libsas: use ata_dev_classify()
|
|
Free resources allocated during port/disk probing so that the module may be
successfully reloaded after unloading.
Signed-off-by: Dwight Engen <dwight.engen@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
It is being filled in using std in leon_cross_call.
Signed-off-by: Andreas Larsson <andreas@gaisler.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Pull workqueue update from Tejun Heo:
"Work items which may be involved in memory reclaim path may be
executed by the rescuer under memory pressure. When a rescuer gets
activated, it processes whatever is on the pending list and then goes
back to sleep until the manager kicks it again, which involves a 100ms
delay.
This is problematic for self-requeueing work items or the ones running
on ordered workqueues as there always is only one work item on the
pending list when the rescuer kicks in. The execution of that work
item produces more to execute but the rescuer won't see them until
after the said 100ms has passed, so such workqueues would only execute
one work item every 100ms under prolonged memory pressure, which BTW
may be being prolonged due to the slow execution.
Neil wrote up a patch which fixes this issue by keeping the rescuer
working as long as the target workqueue is busy but doesn't have
enough workers"
* 'for-3.19' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq:
workqueue: allow rescuer thread to do more work.
workqueue: invert the order between pool->lock and wq_mayday_lock
workqueue: cosmetic update in rescuer_thread()
|
|
Add ephy parameters for rtl8168g.
Also rename the common rtl8168g function from "rtl_hw_start_8168g_1" to
"rtl_hw_start_8168g", and use "rtl_hw_start_8168g_1" for setting the
rtl8168g hardware parameters.
The following explains what the hardware parameter changes are for.
rtl8168g may misjudge the PCIe signal quality and show the error bit
in PCI configuration space when in PCIe low-power mode.
The following ephy parameters address this issue:
{ 0x00, 0x0000, 0x0008 }
{ 0x0c, 0x37d0, 0x0820 }
{ 0x1e, 0x0000, 0x0001 }
rtl8168g may also return from PCIe L0s low-power mode to L0 too slowly.
The following ephy parameter addresses this issue:
{ 0x19, 0x8000, 0x0000 }
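The triples above appear to be (register offset, bits to clear, bits to set);
in the driver they would typically live in an ephy_info table roughly like
this (a sketch, not the exact patch hunk):

static const struct ephy_info e_info_8168g[] = {
	/* offset, mask (cleared), bits (set) */
	{ 0x00, 0x0000, 0x0008 },	/* PCIe signal-quality / error-bit fix */
	{ 0x0c, 0x37d0, 0x0820 },
	{ 0x1e, 0x0000, 0x0001 },
	{ 0x19, 0x8000, 0x0000 },	/* faster L0s -> L0 exit */
};

/* applied once during chip start, e.g.
 * rtl_ephy_init(tp, e_info_8168g, ARRAY_SIZE(e_info_8168g)); */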
Signed-off-by: Chunhao Lin <hau@realtek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu
Pull percpu updates from Tejun Heo:
"Nothing interesting. A patch to convert the remaining __get_cpu_var()
users, another to fix non-critical off-by-one in an assertion and a
cosmetic conversion to lockless_dereference() in percpu-ref.
The back-merge from mainline is to receive lockless_dereference()"
* 'for-3.19' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu:
percpu: Replace smp_read_barrier_depends() with lockless_dereference()
percpu: Convert remaining __get_cpu_var uses in 3.18-rcX
percpu: off by one in BUG_ON()
|
|
For ports of the switch that we define as "fixed PHYs" such as MoCA, we
would have our Port 7 special handling that would allow us to assert the
link status indication.
For other ports, such as RGMII_1 connected to a cable modem, we
would rely on whatever the bootloader has left configured, which is a
bad assumption to make; we really need to force the link status
indication here.
Fixes: 246d7f773c13 ("net: dsa: add Broadcom SF2 switch driver")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Alexander Duyck says:
====================
arch: Add lightweight memory barriers for coherent memory access
These patches introduce two new primitives for synchronizing cache coherent
memory writes and reads. These two new primitives are:
dma_rmb()
dma_wmb()
The first patch cleans up some unnecessary overhead related to the
definition of read_barrier_depends, smp_read_barrier_depends, and comments
related to the barrier.
The second patch adds the primitives for the applicable architectures and
asm-generic.
The third patch adds the barriers to r8169 which turns out to be a good
example of where the new barriers might be useful as they have full
rmb()/wmb() barriers ordering accesses to the descriptors and the DescOwn
bit.
The fourth patch adds support for dma_rmb() to the Intel fm10k, igb,
and ixgbe drivers. Testing with the ixgbe driver has shown a processing
time reduction of at least 7ns per 64B frame on a Core i7-4930K.
This patch series is essentially the v7 for:
v4-7: Add lightweight memory barriers for coherent memory access
v3: Add lightweight memory barriers fast_rmb() and fast_wmb()
v2: Introduce load_acquire() and store_release()
v1: Introduce read_acquire()
The key changes in this patch series versus the earlier patches are:
v7 resubmit:
- Added Acked-by: Ben Herrenschmidt from v5 to dma_rmb/wmb patch
- No code changes from previous set, still applies cleanly and builds.
v7:
- Dropped test/debug patch that was accidentally slipped in
v6:
- Replaced "memory based device I/O" with "consistent memory" in
docs
- Added reference to DMA-API.txt to explain consistent memory
v5:
- Renamed barriers dma_rmb and dma_wmb
- Undid smp_wmb changes in x86 and PowerPC
- Defined smp_rmb as __lwsync for SMP case on PowerPC
v4:
- Renamed barriers coherent_rmb and coherent_wmb
- Added smp_lwsync for use in smp_load_acquire/smp_store_release
v3:
- Moved away from acquire()/store() and instead focused on barriers
- Added cleanup of read_barrier_depends
- Added change in r8169 to fix cur_tx/DescOwn ordering
- Simplified changes to just replacing/moving barriers in r8169
- Added update to documentation with code example
v2:
- Renamed read_acquire() to be consistent with smp_load_acquire()
- Changed barrier used to be consistent with smp_load_acquire()
- Updated PowerPC code to use __lwsync based on IBM article
- Added store_release() as this is a viable use case for drivers
- Added r8169 patch which is able to fully use primitives
- Added fm10k/igb/ixgbe patch which is able to test performance
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip
Pull xen features and fixes from David Vrabel:
- Fully support non-coherent devices on ARM by introducing the
mechanisms to request the hypervisor to perform the required cache
maintenance operations.
- A number of pciback bug fixes and cleanups. Notably a deadlock fix
if a PCI device was manually unbound and a fix for incorrectly
restoring state after a function reset.
- In x86 PVHVM guests, use the APIC for interrupts if this has been
virtualized by the hardware. This reduces the number of interrupt-
related VM exits on such hardware.
* tag 'stable/for-linus-3.19-rc0-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip: (26 commits)
Revert "swiotlb-xen: pass dev_addr to swiotlb_tbl_unmap_single"
xen/pci: Use APIC directly when APIC virtualization hardware is available
xen/pci: Defer initialization of MSI ops on HVM guests
xen-pciback: drop SR-IOV VFs when PF driver unloads
xen/pciback: Restore configuration space when detaching from a guest.
PCI: Expose pci_load_saved_state for public consumption.
xen/pciback: Remove tons of dereferences
xen/pciback: Print out the domain owning the device.
xen/pciback: Include the domain id if removing the device whilst still in use
driver core: Provide an wrapper around the mutex to do lockdep warnings
xen/pciback: Don't deadlock when unbinding.
swiotlb-xen: pass dev_addr to swiotlb_tbl_unmap_single
swiotlb-xen: call xen_dma_sync_single_for_device when appropriate
swiotlb-xen: remove BUG_ON in xen_bus_to_phys
swiotlb-xen: pass dev_addr to xen_dma_unmap_page and xen_dma_sync_single_for_cpu
xen/arm: introduce GNTTABOP_cache_flush
xen/arm/arm64: introduce xen_arch_need_swiotlb
xen/arm/arm64: merge xen/mm32.c into xen/mm.c
xen/arm: use hypercall to flush caches in map_page
xen: add a dma_addr_t dev_addr argument to xen_dma_map_page
...
|
|
This change makes it so that dma_rmb is used when reading the Rx
descriptor. The advantage of dma_rmb is that it allows for a much
lower cost barrier on x86, powerpc, arm, and arm64 architectures than a
traditional memory barrier when dealing with reads that only have to
synchronize to coherent memory.
In addition, I have updated the code so that it checks whether any bits
have been set instead of just the DD bit. Since the DD bit is always set
as part of a descriptor write-back, we only need to check for a non-zero
value at that memory location rather than for any specific bit. This
makes the code itself much cleaner and gives the compiler more room to
optimize.
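A hedged sketch of the resulting Rx clean-up pattern, loosely following igb
(descriptor and field names may differ per driver):

union e1000_adv_rx_desc *rx_desc = IGB_RX_DESC(rx_ring, i);

/* a write-back always sets at least one bit, so a zero status/error
 * word means this descriptor has not been written back yet */
if (!rx_desc->wb.upper.status_error)
	break;

/* order the status read against the reads of the remaining descriptor
 * fields; cheaper than rmb() since only coherent DMA memory is involved */
dma_rmb();

/* ... length, VLAN and checksum fields can now be read safely ... */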
Cc: Matthew Vick <matthew.vick@intel.com>
Cc: Don Skidmore <donald.c.skidmore@intel.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The r8169 driver uses a pair of wmb() calls when setting up the descriptor rings.
The first is to synchronize the descriptor data with the descriptor status,
and the second is to synchronize the descriptor status with the use of the
MMIO doorbell to notify the device that descriptors are ready. This can
come at a heavy price on some systems, and is not really necessary on
systems such as x86 as a simple barrier() would suffice to order store/store
accesses. As such we can replace the first memory barrier with
dma_wmb() to reduce the cost for these accesses.
In addition, the r8169 uses an rmb() to prevent compiler optimization in the
cleanup paths; however, by moving the barrier down a few lines and replacing
it with a dma_rmb() we should be able to use it to guarantee
descriptor accesses do not occur until the device has updated the DescOwn
bit from its end.
One last change I made is to move the update of cur_tx in the xmit path to
after the wmb. This way we can guarantee the device and all CPUs should
see the DescOwn update before they see the cur_tx value update.
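A sketch of the resulting ordering in the xmit path (loosely following the
r8169 descriptor layout; not the exact patch hunk):

/* fill in the descriptor fields other than the ownership word first */
txd->addr  = cpu_to_le64(mapping);
txd->opts2 = cpu_to_le32(opts2);

/* make the descriptor contents visible before handing it to the NIC */
dma_wmb();
txd->opts1 = cpu_to_le32(opts1 | DescOwn);

/* update cur_tx only after DescOwn, then order everything before the
 * doorbell MMIO write with a full wmb() */
tp->cur_tx += frags + 1;
wmb();
writeb(NPQ, tp->mmio_addr + TxPoll);	/* kick the NIC */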
Cc: Realtek linux nic maintainers <nic_swsd@realtek.com>
Cc: Francois Romieu <romieu@fr.zoreil.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
There are a number of situations where the mandatory barriers rmb() and
wmb() are used to order memory/memory operations in the device drivers
and those barriers are much heavier than they actually need to be. For
example in the case of PowerPC wmb() calls the heavy-weight sync
instruction when for coherent memory operations all that is really needed
is an lwsync or eieio instruction.
This commit adds a coherent only version of the mandatory memory barriers
rmb() and wmb(). In most cases this should result in the barrier being the
same as the SMP barriers for the SMP case, however in some cases we use a
barrier that is somewhere in between rmb() and smp_rmb(). For example on
ARM the rmb barriers break down as follows:
Barrier    Call      Explanation
---------  --------  --------------------------------------
rmb()      dsb()     data synchronization barrier - system
dma_rmb()  dmb(osh)  data memory barrier - outer shareable
smp_rmb()  dmb(ish)  data memory barrier - inner shareable
These new barriers are not as safe as the standard rmb() and wmb().
Specifically they do not guarantee ordering between coherent and incoherent
memories. The primary use case for these would be to enforce ordering of
reads and writes when accessing coherent memory that is shared between the
CPU and a device.
It may also be noted that there is no dma_mb(). Most architectures don't
provide a good mechanism for performing a coherent only full barrier without
resorting to the same mechanism used in mb(). As such there isn't much to
be gained in trying to define such a function.
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Cc: Michael Ellerman <michael@ellerman.id.au>
Cc: Michael Neuling <mikey@neuling.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: David Miller <davem@davemloft.net>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch is meant to cleanup the handling of read_barrier_depends and
smp_read_barrier_depends. In multiple spots in the kernel headers
read_barrier_depends is defined as "do {} while (0)"; however, we then go
into the SMP vs non-SMP sections, have the SMP version reference
read_barrier_depends, and have the non-SMP version define it as yet another
empty do/while.
With this commit I went through and cleaned out the duplicate definitions
and reduced the number of definitions down to 2 per header. In addition I
moved the 50 line comments for the macro from the x86 and mips headers that
defined it as an empty do/while to those that were actually defining the
macro, alpha and blackfin.
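The consolidated shape is roughly the following (a sketch of the pattern,
not the exact header text):

/* one architecture definition; a no-op everywhere except e.g. alpha */
#define read_barrier_depends()		do { } while (0)

/* and one SMP wrapper that reuses it instead of a second empty copy */
#ifdef CONFIG_SMP
#define smp_read_barrier_depends()	read_barrier_depends()
#else
#define smp_read_barrier_depends()	do { } while (0)
#endif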
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
If we need to force detach a context (e.g. due to EEH or simply force
unbinding the driver) we should prevent the userspace contexts from
being able to access the Problem State Area MMIO region further, which
they may have mapped with mmap().
This patch unmaps any mapped MMIO regions when detaching a userspace
context.
Cc: stable@vger.kernel.org
Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
In the event that something goes wrong in the hardware and it is unable
to complete a process element command, we would end up polling forever,
effectively making the associated process unkillable.
This patch adds a timeout to the process element command code path, so
that we will give up if the hardware does not respond in a reasonable
time.
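A hedged sketch of the bounded poll (the helper, timeout value and error
code are illustrative):

unsigned long timeout = jiffies + msecs_to_jiffies(CXL_PE_CMD_TIMEOUT_MS);

while (!pe_cmd_complete(afu)) {		/* hypothetical completion check */
	if (time_after_eq(jiffies, timeout)) {
		pr_warn("cxl: process element command timed out\n");
		return -EBUSY;
	}
	cpu_relax();
}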
Cc: stable@vger.kernel.org
Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
We had a known sleep while atomic bug if a CXL device was forcefully
unbound while it was in use. This could occur as a result of EEH, or
manually induced with something like this while the device was in use:
echo 0000:01:00.0 > /sys/bus/pci/drivers/cxl-pci/unbind
The issue was that in this code path we iterated over each context and
forcefully detached it with the contexts_lock spin lock held, however
the detach also needed to take the spu_mutex, and call schedule.
This patch changes the contexts_lock to a mutex so that we are not in
atomic context while doing the detach, thereby avoiding the sleep while
atomic.
Also delete the related TODO comment, which suggested an alternate
solution which turned out to not be workable.
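A sketch of the before/after shape, assuming the AFU keeps its contexts in
an idr protected by contexts_lock as the text implies (names illustrative):

/* before: sleeping with a spinlock held */
spin_lock(&afu->contexts_lock);
idr_for_each_entry(&afu->contexts_idr, ctx, id)
	__detach_context(ctx);		/* takes a mutex / may schedule() -> BUG */
spin_unlock(&afu->contexts_lock);

/* after: contexts_lock is a mutex, so the detach may sleep safely */
mutex_lock(&afu->contexts_lock);
idr_for_each_entry(&afu->contexts_idr, ctx, id)
	__detach_context(ctx);
mutex_unlock(&afu->contexts_lock);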
Cc: stable@vger.kernel.org
Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
Florian Fainelli says:
====================
net: dsa: two small bug fixes
Here are two small fixes for the DSA slave interface creation code:
- first patch fixes a null pointer de-reference with an invalid PHY
device pointer while calling phy_connect_direct()
- second patch propagates the dsa_slave_phy_setup() error code down to
its caller: dsa_slave_create()
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In case we cannot attach to our slave netdevice PHY, error out and
propagate that error up to the caller: dsa_slave_create().
Fixes: 0d8bcdd383b8 ("net: dsa: allow for more complex PHY setups")
Signed-off-by: Andrey Volkov <andrey.volkov@nexvision.fr>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In case there is no PHY at the designated address on the internal
switch, we would basically de-reference a null pointer here:
dsa_slave_phy_setup(...)
{
p->phy = ds->slave_mii_bus->phy_map[p->port];
phy_connect_direct(slave_dev, p->phy, dsa_slave_adjust_link,
^------
This can be triggered when the platform configuration (platform_data or
Device Tree) indicates there should be a PHY device at this address, but
the HW is non-responsive, such that we cannot attach a PHY device at
this specific location.
Fix this by checking the phy_map[] pointer prior to calling
phy_connect_direct().
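A sketch of the check described above (the error code and the trailing
arguments are assumptions, not the exact patch):

p->phy = ds->slave_mii_bus->phy_map[p->port];
if (!p->phy)
	return -ENODEV;

return phy_connect_direct(slave_dev, p->phy, dsa_slave_adjust_link,
			  p->phy_interface);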
CC: Andrew Lunn <andrew@lunn.ch>
Fixes: b31f65fb4383 ("net: dsa: slave: Fix autoneg for phys on switch MDIO bus")
Reported-by: Brian Norris <computersforpeace@gmail.com>
Signed-off-by: Andrey Volkov <andrey.volkov@nexvision.fr>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Pull MIPS updates from Ralf Baechle:
"This is an unusually large pull request for MIPS - in parts because
lots of patches missed the 3.18 deadline but primarily because some
folks opened the flood gates.
- Replace the MIPS-specific phys_t with the generic phys_addr_t.
- Improvements to the backtrace code used by oprofile.
- Better backtraces on SMP systems.
- Cleanups for the Octeon platform code.
- Cleanups and fixes for the Loongson platform code.
- Cleanups and fixes to the firmware library.
- Switch ATH79 platform to use the firmware library.
- Grand overhaul of the SEAD3 and Malta interrupt code.
- Move the GIC interrupt code to drivers/irqchip
- Lots of GIC cleanups and updates to the GIC code to use modern IRQ
infrastructures and features of the kernel.
- OF documentation updates for the GIC bindings
- Move GIC clocksource driver to drivers/clocksource
- Merge GIC clocksource driver with clockevent driver.
- Further updates to bring the GIC clocksource driver up to date.
- R3000 TLB code cleanups
- Improvements to the Loongson 3 platform code.
- Convert pr_warning to pr_warn.
- Merge a bunch of small lantiq and ralink fixes that have been
staged/lingering inside the openwrt tree for a while.
- Update archhelp for IP22/IP32
- Fix a number of issues for Loongson 1B.
- New clocksource and clockevent driver for Loongson 1B.
- Further work on clk handling for Loongson 1B.
- Platform work for Broadcom BMIPS.
- Error handling cleanups for TurboChannel.
- Fixes and optimization to the microMIPS support.
- Option to disable the FTLB.
- Dump more relevant information on machine check exception
- Change binfmt to allow arch to examine PT_*PROC headers
- Support for new style FPU register model in O32
- VDSO randomization.
- BCM47xx cleanups
- BCM47xx reimplement the way the kernel accesses NVRAM information.
- Random cleanups
- Add support for ATH25 platforms
- Remove pointless locking code in some PCI platforms.
- Some improvements to EVA support
- Minor Alchemy cleanup"
* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (185 commits)
MIPS: Add MFHC0 and MTHC0 instructions to uasm.
MIPS: Cosmetic cleanups of page table headers.
MIPS: Add CP0 macros for extended EntryLo registers
MIPS: Remove now unused definition of phys_t.
MIPS: Replace use of phys_t with phys_addr_t.
MIPS: Replace MIPS-specific 64BIT_PHYS_ADDR with generic PHYS_ADDR_T_64BIT
PCMCIA: Alchemy Don't select 64BIT_PHYS_ADDR in Kconfig.
MIPS: lib: memset: Clean up some MIPS{EL,EB} ifdefery
MIPS: iomap: Use __mem_{read,write}{b,w,l} for MMIO
MIPS: <asm/types.h> fix indentation.
MAINTAINERS: Add entry for BMIPS multiplatform kernel
MIPS: Enable VDSO randomization
MIPS: Remove a temporary hack for debugging cache flushes in SMTC configuration
MIPS: Remove declaration of obsolete arch_init_clk_ops()
MIPS: atomic.h: Reformat to fit in 79 columns
MIPS: Apply `.insn' to fixup labels throughout
MIPS: Fix microMIPS LL/SC immediate offsets
MIPS: Kconfig: Only allow 32-bit microMIPS builds
MIPS: signal.c: Fix an invalid cast in ISA mode bit handling
MIPS: mm: Only build one microassembler that is suitable
...
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/mpe/linux
Pull powerpc updates from Michael Ellerman:
"Some nice cleanups like removing bootmem, and removal of
__get_cpu_var().
There is one patch to mm/gup.c. This is the generic GUP
implementation, but is only used by us and arm(64). We have an ack
from Steve Capper, and although we didn't get an ack from Andrew he
told us to take the patch through the powerpc tree.
There's one cxl patch. This is in drivers/misc, but Greg said he was
happy for us to manage fixes for it.
There is an infrastructure patch to support an IPMI driver for OPAL.
There is also an RTC driver for OPAL. We weren't able to get any
response from the RTC maintainer, Alessandro Zummo, so in the end we
just merged the driver.
The usual batch of Freescale updates from Scott"
* tag 'powerpc-3.19-1' of git://git.kernel.org/pub/scm/linux/kernel/git/mpe/linux: (101 commits)
powerpc/powernv: Return to cpu offline loop when finished in KVM guest
powerpc/book3s: Fix partial invalidation of TLBs in MCE code.
powerpc/mm: don't do tlbie for updatepp request with NO HPTE fault
powerpc/xmon: Cleanup the breakpoint flags
powerpc/xmon: Enable HW instruction breakpoint on POWER8
powerpc/mm/thp: Use tlbiel if possible
powerpc/mm/thp: Remove code duplication
powerpc/mm/hugetlb: Sanity check gigantic hugepage count
powerpc/oprofile: Disable pagefaults during user stack read
powerpc/mm: Check for matching hpte without taking hpte lock
powerpc: Drop useless warning in eeh_init()
powerpc/powernv: Cleanup unused MCE definitions/declarations.
powerpc/eeh: Dump PHB diag-data early
powerpc/eeh: Recover EEH error on ownership change for BCM5719
powerpc/eeh: Set EEH_PE_RESET on PE reset
powerpc/eeh: Refactor eeh_reset_pe()
powerpc: Remove more traces of bootmem
powerpc/pseries: Initialise nvram_pstore_info's buf_lock
cxl: Name interrupts in /proc/interrupt
cxl: Return error to PSL if IRQ demultiplexing fails & print clearer warning
...
|
|
git://anongit.freedesktop.org/drm-intel into drm-next
Here's a batch of i915 fixes for 3.19.
* tag 'drm-intel-next-fixes-2014-12-11' of git://anongit.freedesktop.org/drm-intel:
drm/i915: save/restore GMBUS freq across suspend/resume on gen4
drm/i915: Remove '& 0xffff' from the mask given to WA_REG()
drm/i915: Invert the mask and val arguments in wa_add() and WA_REG()
drm/i915/bdw: Fix the write setting up the WIZ hashing mode
drm/i915: Don't complain about stolen conflicts on gen3
drm/i915: resume MST after reading back hw state
drm/i915: Handle inaccurate time conversion issues
drm/i915: compute wait_ioctl timeout correctly
drm/i915: don't always do full mode sets when infoframes are enabled
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
Pull s390 updates from Martin Schwidefsky:
"The most notable change for this pull request is the ftrace rework
from Heiko. It brings a small performance improvement and the ground
work to support a new gcc option to replace the mcount blocks with a
single nop.
Two new s390 specific system calls are added to emulate user space
mmio for PCI, an artifact of how PCI memory is accessed.
Two patches for memory management with changes to common code.
For KVM mm_forbids_zeropage is added which disables the empty zero
page for an mm that is used by a KVM process. And an optimization,
pmdp_get_and_clear_full is added analog to ptep_get_and_clear_full.
Some micro optimization for the cmpxchg and the spinlock code.
And as usual bug fixes and cleanups"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (46 commits)
s390/cputime: fix 31-bit compile
s390/scm_block: make the number of reqs per HW req configurable
s390/scm_block: handle multiple requests in one HW request
s390/scm_block: allocate aidaw pages only when necessary
s390/scm_block: use mempool to manage aidaw requests
s390/eadm: change timeout value
s390/mm: fix memory leak of ptlock in pmd_free_tlb
s390: use local symbol names in entry[64].S
s390/ptrace: always include vector registers in core files
s390/simd: clear vector register pointer on fork/clone
s390: translate cputime magic constants to macros
s390/idle: convert open coded idle time seqcount
s390/idle: add missing irq off lockdep annotation
s390/debug: avoid function call for debug_sprintf_*
s390/kprobes: fix instruction copy for out of line execution
s390: remove diag 44 calls from cpu_relax()
s390/dasd: retry partition detection
s390/dasd: fix list corruption for sleep_on requests
s390/dasd: fix infinite term I/O loop
s390/dasd: remove unused code
...
|
|
A security fix broke the way the unprivileged remount tests were
using user namespaces. Tweak the way user namespaces are
being used so the test works again.
Cc: stable@vger.kernel.org
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
|
|
It is important that all maps are less than PAGE_SIZE
or else setting the last byte of the buffer to '0'
could write off the end of the allocated storage.
Correct the misleading comment.
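A sketch of the invariant the (corrected) comment documents, with
illustrative names rather than the exact user_namespace.c code:

char *kbuf;

/* the map must be strictly smaller than PAGE_SIZE ... */
if (count >= PAGE_SIZE)
	return -EINVAL;

kbuf = kmalloc(PAGE_SIZE, GFP_KERNEL);
if (!kbuf)
	return -ENOMEM;
if (copy_from_user(kbuf, buf, count)) {
	kfree(kbuf);
	return -EFAULT;
}
/* ... so that terminating it here cannot write past the allocation */
kbuf[count] = '\0';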
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
|
|
Now that setgroups can be disabled and not re-enabled, setting gid_map
without privilege can now be enabled when setgroups is disabled.
This restores most of the functionality that was lost when unprivileged
setting of gid_map was removed. Applications that use this functionality
will need to check to see if they use setgroups or init_groups, and if they
don't they can be fixed by simply disabling setgroups before writing to
gid_map.
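A hedged userspace sketch of that fix-up, to be run in a child created with
clone(CLONE_NEWUSER); the mapped IDs are just an example:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void write_file(const char *path, const char *buf)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0 || write(fd, buf, strlen(buf)) < 0)
		perror(path);
	if (fd >= 0)
		close(fd);
}

static void map_ids(void)
{
	write_file("/proc/self/setgroups", "deny");	/* must happen before gid_map */
	write_file("/proc/self/gid_map", "0 1000 1");	/* now allowed without privilege */
	write_file("/proc/self/uid_map", "0 1000 1");
}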
Cc: stable@vger.kernel.org
Reviewed-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
|
|
- Expose the knob to user space through a proc file /proc/<pid>/setgroups
A value of "deny" means the setgroups system call is disabled in the
current process's user namespace and can not be enabled in the
future in this user namespace.
A value of "allow" means the setgroups system call is enabled.
- Descendant user namespaces inherit the value of setgroups from
their parents.
- A proc file is used (instead of a sysctl) as sysctls currently do
not allow checking the permissions at open time.
- Writing to the proc file is restricted to before the gid_map
for the user namespace is set.
This ensures that disabling setgroups at a user namespace
level will never remove the ability to call setgroups
from a process that already has that ability.
A process may opt in to the setgroups disable for itself by
creating, entering and configuring a user namespace or by calling
setns on an existing user namespace with setgroups disabled.
Processes without privileges already can not call setgroups so this
is a noop. Processes with privilege become processes without
privilege when entering a user namespace and as with any other path
to dropping privilege they would not have the ability to call
setgroups. So this remains within the bounds of what is possible
without a knob to disable setgroups permanently in a user namespace.
Cc: stable@vger.kernel.org
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
|
|
Pull networking updates from David Miller:
1) New offloading infrastructure and example 'rocker' driver for
offloading of switching and routing to hardware.
This work was done by a large group of dedicated individuals, not
limited to: Scott Feldman, Jiri Pirko, Thomas Graf, John Fastabend,
Jamal Hadi Salim, Andy Gospodarek, Florian Fainelli, Roopa Prabhu
2) Start making the networking operate on IOV iterators instead of
modifying iov objects in-situ during transfers. Thanks to Al Viro
and Herbert Xu.
3) A set of new netlink interfaces for the TIPC stack, from Richard
Alpe.
4) Remove unnecessary looping during ipv6 routing lookups, from Martin
KaFai Lau.
5) Add PAUSE frame generation support to gianfar driver, from Matei
Pavaluca.
6) Allow for larger reordering levels in TCP, which are easily
achievable in the real world right now, from Eric Dumazet.
7) Add a variant of napi_schedule that doesn't need to disable cpu
interrupts, from Eric Dumazet.
8) Use a doubly linked list to optimize neigh_parms_release(), from
Nicolas Dichtel.
9) Various enhancements to the kernel BPF verifier, and allow eBPF
programs to actually be attached to sockets. From Alexei
Starovoitov.
10) Support TSO/LSO in sunvnet driver, from David L Stevens.
11) Allow controlling ECN usage via routing metrics, from Florian
Westphal.
12) Remote checksum offload, from Tom Herbert.
13) Add split-header receive, BQL, and xmit_more support to amd-xgbe
driver, from Thomas Lendacky.
14) Add MPLS support to openvswitch, from Simon Horman.
15) Support wildcard tunnel endpoints in ipv6 tunnels, from Steffen
Klassert.
16) Do gro flushes on a per-device basis using a timer, from Eric
Dumazet. This tries to resolve the conflicting goals between the
desired handling of bulk vs. RPC-like traffic.
17) Allow userspace to ask for the CPU upon which a packet was
received/steered, via SO_INCOMING_CPU. From Eric Dumazet.
18) Limit GSO packets to half the current congestion window, from Eric
Dumazet.
19) Add a generic helper so that all drivers set their RSS keys in a
consistent way, from Eric Dumazet.
20) Add xmit_more support to enic driver, from Govindarajulu
Varadarajan.
21) Add VLAN packet scheduler action, from Jiri Pirko.
22) Support configurable RSS hash functions via ethtool, from Eyal
Perry.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1820 commits)
Fix race condition between vxlan_sock_add and vxlan_sock_release
net/macb: fix compilation warning for print_hex_dump() called with skb->mac_header
net/mlx4: Add support for A0 steering
net/mlx4: Refactor QUERY_PORT
net/mlx4_core: Add explicit error message when rule doesn't meet configuration
net/mlx4: Add A0 hybrid steering
net/mlx4: Add mlx4_bitmap zone allocator
net/mlx4: Add a check if there are too many reserved QPs
net/mlx4: Change QP allocation scheme
net/mlx4_core: Use tasklet for user-space CQ completion events
net/mlx4_core: Mask out host side virtualization features for guests
net/mlx4_en: Set csum level for encapsulated packets
be2net: Export tunnel offloads only when a VxLAN tunnel is created
gianfar: Fix dma check map error when DMA_API_DEBUG is enabled
cxgb4/csiostor: Don't use MASTER_MUST for fw_hello call
net: fec: only enable mdio interrupt before phy device link up
net: fec: clear all interrupt events to support i.MX6SX
net: fec: reset fep link status in suspend function
net: sock: fix access via invalid file descriptor
net: introduce helper macro for_each_cmsghdr
...
|
|
Balance a hid_device_io_start() call with hid_device_io_stop() in the
error path. This avoids processing of HID reports when the probe fails
which possibly leads to invalid memory access in hid_device_probe() as
report_enum->report_id_hash might already be freed via
hid_close_report().
hid_set_drvdata() is called before wtp_allocate, be consistent and clear
drvdata too on the error path of wtp_allocate.
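A sketch of the balanced probe error path described above (simplified; not
the driver's full probe sequence):

static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
{
	struct hidpp_device *hidpp;
	int ret;

	hidpp = devm_kzalloc(&hdev->dev, sizeof(*hidpp), GFP_KERNEL);
	if (!hidpp)
		return -ENOMEM;
	hid_set_drvdata(hdev, hidpp);

	hid_device_io_start(hdev);		/* reports may be processed during probe */
	ret = wtp_allocate(hdev, id);
	if (ret) {
		hid_device_io_stop(hdev);	/* balance io_start on the error path */
		hid_set_drvdata(hdev, NULL);	/* keep drvdata consistent as well */
		return ret;
	}
	return 0;
}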
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
|
|
The HID response has a limited size. Do not trust the value returned by
hardware, check that it really fits in the message.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
|
|
hidpp_devicenametype_get_device_name() may return a negative value on
protocol errors (for example, when the device is powered off).
Explicitly check this condition to avoid a long-running loop.
(0 cannot be returned as __name_length - index > 0, but check for it
anyway as it would otherwise result in an infinite loop.)
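A hedged sketch of the bounded loop (argument list abbreviated; the helper
is the one named above):

while (index < __name_length) {
	ret = hidpp_devicenametype_get_device_name(hidpp, char_index,
						   name + index,
						   __name_length - index);
	if (ret <= 0) {		/* protocol error (< 0) or no progress (== 0) */
		kfree(name);
		return NULL;
	}
	index += ret;
	char_index += ret;
}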
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
|
|
We do not make any use of the actual name length obtained through
hidpp_get_device_name(). Original patch by Benjamin Tissoires; this
patch also replaces a (now) unnecessary goto with a return NULL.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
|
|
The existing generic touch code only reports events after reading an
entire HID report, which practically means that only data about the last
contact in a report will ever be provided to userspace. This patch uses
a trick from hid-multitouch.c to discover what type of field is at the
end of each contact; when such a field is encountered all the stored
contact data will be reported.
Signed-off-by: Jason Gerecke <killertofu@gmail.com>
Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
|
|
On some ARMs the memory can be mapped pgprot_noncached() and still
be working for atomic operations. As pointed out by Colin Cross
<ccross@android.com>, in some cases you do want to use
pgprot_noncached() if the SoC supports it to see a debug printk
just before a write hanging the system.
On ARMs, the atomic operations on strongly ordered memory are
implementation defined. So let's provide an optional kernel parameter
for configuring pgprot_noncached(), and use pgprot_writecombine() by
default.
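A sketch of such a knob (the parameter name and plumbing are illustrative,
not necessarily what was merged):

static int mem_type;
module_param(mem_type, int, 0400);
MODULE_PARM_DESC(mem_type, "0 = write-combined (default), 1 = uncached");

/* the chosen protection is then used when vmapping/ioremapping the
 * persistent region */
static pgprot_t persistent_ram_prot(void)
{
	return mem_type ? pgprot_noncached(PAGE_KERNEL)
			: pgprot_writecombine(PAGE_KERNEL);
}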
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Rob Herring <robherring2@gmail.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Anton Vorontsov <anton@enomsg.org>
Cc: Colin Cross <ccross@android.com>
Cc: Olof Johansson <olof@lixom.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: stable@vger.kernel.org
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
Currently trying to use pstore on at least ARMs can hang as we're
mapping the persistent RAM with pgprot_noncached().
On ARMs, pgprot_noncached() will actually make the memory strongly
ordered, and as the atomic operations pstore uses are implementation
defined for strongly ordered memory, they may not work. So basically
atomic operations have undefined behavior on ARM for device or strongly
ordered memory types.
Let's fix the issue by using write-combine variants for mappings. This
corresponds to normal, non-cacheable memory on ARM. For many other
architectures, this change does not change the mapping type as by
default we have:
#define pgprot_writecombine pgprot_noncached
The reason why pgprot_noncached() was originally used for pstore
is because Colin Cross <ccross@android.com> had observed lost
debug prints right before a device-hanging write operation on some
systems. For the platforms supporting pgprot_noncached(), we can
add an optional configuration option to support that. But let's
get pstore working first before adding new features.
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Anton Vorontsov <cbouatmailru@gmail.com>
Cc: Colin Cross <ccross@android.com>
Cc: Olof Johansson <olof@lixom.net>
Cc: linux-kernel@vger.kernel.org
Cc: stable@vger.kernel.org
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
[tony@atomide.com: updated description]
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
We don't need the mask since we obtain the channels via DT.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
|
|
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
All callers of path_init() proceed to do the identical cleanup when
they are done with nameidata. Don't open-code it...
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|