|
Fix a memory leak in the ieee802154_add_iface() error handling path.
Detected by Coverity: CID 710490.
Signed-off-by: Christian Engelmayer <cengelma@gmx.at>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Merge tag 'versatile-for-v3.14' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-integrator into next/soc
From Linus Walleij:
Versatile patches for v3.14:
- Move GPIO2 and GPIO3 to be registered from the core boardfile.
- Update the defconfig.
Defconfig changes:
- Enable GPIOLIB and PL061 for the Versatile.
- Build the Versatile using EABI.
- Enable the new LEDs in the defconfig.
* tag 'versatile-for-v3.14' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-integrator:
ARM: versatile: enable LEDs by default
ARM: versatile: build using EABI
ARM: versatile: enable GPIOLIB and PL061 by default
ARM: versatile: update defconfig
ARM: versatile: move GPIO2 and GPIO3 to core
|
|
When doing a function/slot/bus reset PCI grabs the device_lock for each
device to block things like suspend and driver probes, but call paths exist
where this lock may already be held. This creates an opportunity for
deadlock. For instance, vfio allows userspace to issue resets so long as
it owns the device(s). If a driver unbind .remove callback races with
userspace issuing a reset, we have a deadlock as userspace gets stuck
waiting on device_lock while another thread has device_lock and waits for
.remove to complete. To resolve this, we can make a version of the reset
interfaces which use trylock. With this, we can safely attempt a reset and
return error to userspace if there is contention.
[bhelgaas: the deadlock happens when A (userspace) has a file descriptor for
the device, and B waits in this path:
driver_detach
device_lock # take device_lock
__device_release_driver
pci_device_remove # pci_bus_type.remove
vfio_pci_remove # pci_driver .remove
vfio_del_group_dev
wait_event(vfio.release_q, !vfio_dev_present) # wait (holding device_lock)
Now B is stuck until A gives up the file descriptor. If A tries to acquire
device_lock for any reason, we deadlock because A is waiting for B to release
the lock, and B is waiting for A to release the file descriptor.]
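A minimal sketch of the trylock approach (illustrative only, not the exact
upstream code; the helper name is made up, while device_trylock() and
__pci_reset_function_locked() are existing kernel interfaces):

    /* Try to reset the function without blocking on device_lock.  If the
     * lock is already held (e.g. by a pending .remove), report -EAGAIN so
     * userspace can back off instead of deadlocking. */
    static int try_reset_function_sketch(struct pci_dev *dev)
    {
            int rc;

            if (!device_trylock(&dev->dev))
                    return -EAGAIN;

            rc = __pci_reset_function_locked(dev);
            device_unlock(&dev->dev);

            return rc;
    }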
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
|
|
Marc Kleine-Budde says:
====================
this is a pull request of three patches for net-next/master.
Oleg Moroz added support for a new PCI card to the generic SJA1000 PCI
driver, Guenter Roeck's patch limits the flexcan driver to little
endian arm (and powerpc) and I fixed a sparse warning found by the
kbuild robot in the ti_hecc driver.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Merge tag 'sunxi-core-for-3.14-2' of https://github.com/mripard/linux into next/soc
From Maxime Ripard:
Second round of core additions for the Allwinner SoCs
Fixes to select missing configuration options, and update of the maintainer
file.
* tag 'sunxi-core-for-3.14-2' of https://github.com/mripard/linux:
ARM: sunxi: select ARM_PSCI
MAINTAINERS: Update Allwinner sunXi maintainer files
ARM: sunxi: Select RESET_CONTROLLER
ARM: sun6i: Add SMP support for the Allwinner A31
dt-bindings: fix example of allwinner interrupt controller
ARM: sunxi: Register the A31 reset IP in init_time
ARM: sunxi: Select ARCH_HAS_RESET_CONTROLLER
reset: Add Allwinner SoCs Reset Controller Driver
Signed-off-by: Kevin Hilman <khilman@linaro.org>
|
|
This patch removes the net_random and net_srandom macros and replaces
them with direct calls to the prandom ones. As new commits only seem to
use prandom_u32 there is no reason to keep them around.
This change makes it easier to grep for users of prandom_u32.
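The conversion itself is mechanical; an illustrative hunk (variable names are
made up for the example):

    -       timeout = net_random() % max_timeout;
    +       timeout = prandom_u32() % max_timeout;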
Signed-off-by: Aruna-Hewapathirane <aruna.hewapathirane@gmail.com>
Suggested-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
device_lock is much too prone to lockups. For instance if we have a
pending .remove then device_lock is already held. If userspace
attempts to modify AER signaling after that point, a deadlock occurs.
eventfd setup/teardown is already protected in vfio with the igate
mutex. AER is not a high performance interrupt, so we can also use
the same mutex to protect signaling versus setup races.
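A rough sketch of the signaling side (field names follow the description
above; the function name is illustrative):

    static pci_ers_result_t vfio_aer_err_detected_sketch(struct vfio_pci_device *vdev)
    {
            /* Take the same mutex that the eventfd setup/teardown paths use,
             * so the eventfd context cannot be torn down while we signal. */
            mutex_lock(&vdev->igate);
            if (vdev->err_trigger)
                    eventfd_signal(vdev->err_trigger, 1);
            mutex_unlock(&vdev->igate);

            return PCI_ERS_RESULT_CAN_RECOVER;
    }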
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
|
|
Suggested-by: Simon Schneider <simon-schneider@gmx.net>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
We still need this notifier even when we don't configure PROC_FS.
It should be rare to have a kernel without PROC_FS, so this is
just for completeness.
Cc: Stephen Hemminger <stephen@networkplumber.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Patrick McHardy <kaber@trash.net>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Ben Hutchings says:
====================
Improve tracing at the driver/core boundary
These patches add static tracepoints at the driver/core boundary which
record various skb fields likely to be useful for datapath debugging.
On the TX side the boundary is where the core calls ndo_start_xmit, and
on the RX side it is where any of the various exported receive functions
is called.
The set of skb fields is mostly based on what I thought would be
interesting for sfc.
These patches are basically the same as what I sent as an RFC in
November, but rebased. They now depend on 'net: core: explicitly select
a txq before doing l2 forwarding', so please merge net into net-next
before trying to apply them. The first patch fixes a code formatting
error left behind after that fix.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The existing net/netif_rx and net/netif_receive_skb trace events
provide little information about the skb, nor do they indicate how it
entered the stack.
Add trace events at entry of each of the exported functions, including
most fields that are likely to be interesting for debugging driver
datapath behaviour. Split netif_rx() and netif_receive_skb() so that
internal calls are not traced.
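Sketch of the split (the tracepoint and internal helper names are assumptions
based on the description above):

    int netif_receive_skb(struct sk_buff *skb)
    {
            /* Trace only at the exported entry point... */
            trace_netif_receive_skb_entry(skb);

            /* ...and do the real work in an untraced internal helper that
             * in-stack callers use directly. */
            return netif_receive_skb_internal(skb);
    }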
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The existing net/net_dev_xmit trace event provides little information
about the skb that has been passed to the driver, and it is not
simple to add more since the skb may already have been freed at
the point the event is emitted.
Add a separate trace event before the skb is passed to the driver,
including most fields that are likely to be interesting for debugging
driver datapath behaviour.
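Roughly, the new event fires just before the skb is handed to the driver
(sketch; the wrapper name is illustrative, the tracepoint names follow the
description):

    static netdev_tx_t xmit_one_sketch(struct sk_buff *skb, struct net_device *dev,
                                       const struct net_device_ops *ops)
    {
            unsigned int skb_len = skb->len;        /* driver may free the skb */
            netdev_tx_t rc;

            /* New event: recorded while the skb is still intact. */
            trace_net_dev_start_xmit(skb, dev);
            rc = ops->ndo_start_xmit(skb, dev);
            /* Existing event: only the return code and saved length remain. */
            trace_net_dev_xmit(skb, rc, dev, skb_len);

            return rc;
    }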
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
|
|
Paul Durrant says:
====================
make skb_checksum_setup generally available
Both xen-netfront and xen-netback need to be able to set up the partial
checksum offset of an skb and may also need to recalculate the pseudo-
header checksum in the process. This functionality is currently private
and duplicated between the two drivers.
Patch #1 of this series moves the implementation into the core network code
as there is nothing xen-specific about it and it is potentially useful to
any network driver.
Patch #2 removes the private implementation from netback.
Patch #3 removes the private implementation from netfront.
v2:
- Put skb_checksum_setup in skbuff.c rather than dev.c
- remove inline
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Use skb_checksum_setup to set up partial checksum offsets rather
than a private implementation.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Use skb_checksum_setup to set up partial checksum offsets rather
than a private implementation.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch adds a function to the core network code that sets up the partial
checksum offset for IP packets (and optionally re-calculates the pseudo-header
checksum).
The implementation was previously private and duplicated between xen-netback
and xen-netfront, however it is not xen-specific and is potentially useful
to any network driver.
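A driver would then use the helper roughly like this (sketch; the call site is
driver-specific):

    /* Fill in the partial checksum offsets for this skb and recalculate
     * the pseudo-header checksum (second argument). */
    err = skb_checksum_setup(skb, true);
    if (err)
            goto drop;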
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: David Miller <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Veaceslav Falico <vfalico@redhat.com>
Cc: Alexander Duyck <alexander.h.duyck@intel.com>
Cc: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch adds support for Ethernet L2 attributes in the
verbs/cm/cma structures.
When dealing with L2 Ethernet, we should use smac, dmac, vlan ID and priority
in a similar manner to the way the IB L2 (and the L4 PKEY) attributes are used.
Thus, those attributes were added to the following structures:
* ib_ah_attr - added dmac
* ib_qp_attr - added smac and vlan_id, (sl remains vlan priority)
* ib_wc - added smac, vlan_id
* ib_sa_path_rec - added smac, dmac, vlan_id
* cm_av - added smac and vlan_id
For the path record structure, extra care was taken to avoid the new
fields when packing it into wire format, so we don't break the IB CM
and SA wire protocol.
On the active side, the CM fills its internal structures from the
path provided by the ULP. We add code there to take the ETH L2 attributes
and place them into the CM Address Handle (struct cm_av).
On the passive side, the CM fills its internal structures from the WC
associated with the REQ message. We add code there to take the ETH L2
attributes from the WC.
When the HW driver provides the required ETH L2 attributes in the WC,
it sets the IB_WC_WITH_SMAC and IB_WC_WITH_VLAN flags. The IB core
code checks for the presence of these flags, and in their absence does
address resolution from the ib_init_ah_from_wc() helper function.
ib_modify_qp_is_ok is also updated to consider the link layer. Some
parameters are mandatory for Ethernet link layer, while they are
irrelevant for IB. Vendor drivers are modified to support the new
function signature.
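Sketch of the core-side check (simplified; the flag and field names are the
ones introduced above):

    /* In ib_init_ah_from_wc(): use the L2 info from the WC when the driver
     * provided it, otherwise fall back to address resolution. */
    if ((wc->wc_flags & IB_WC_WITH_SMAC) &&
        (wc->wc_flags & IB_WC_WITH_VLAN)) {
            memcpy(ah_attr.dmac, wc->smac, ETH_ALEN);
            ah_attr.vlan_id = wc->vlan_id;
    } else {
            /* resolve dmac/vlan_id from the GRH and neighbour tables */
    }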
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
|
|
Merge tag 'omap-for-v3.14/fixes-not-urgent-signed' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap into next/fixes-non-critical
From Tony Lindgren:
Some non-urgent fixes to enable am335x features, update documentation,
and to remove unnecessary double initialization for the GPMC code.
* tag 'omap-for-v3.14/fixes-not-urgent-signed' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap: (238 commits)
ARM: OMAP2+: gpmc: Move legacy GPMC width setting
ARM: OMAP2+: gpmc: Introduce gpmc_set_legacy()
ARM: OMAP2+: gpmc: Move initialization outside the gpmc_t condition
ARM: OMAP2+: board-generic: update SoC compatibility strings
Documentation: dt: OMAP: explicitly state SoC compatible strings
ARM: OMAP2+: enable AM33xx SOC EVM audio
ARM: OMAP2+: Select USB PHY for AM335x SoC
Linux 3.13-rc5
|
|
|
|
This patch adds support for steerable (NETIF) QP creation. When we
create the device, we allocate a range of steerable QPs.
Afterwards, when a QP is created with the NETIF flag, it is allocated
from this range. Allocation is managed by a bitmap allocator.
Internal steering rules for those QPs are automatically generated on
their creation.
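A minimal sketch of the bitmap-based allocation (names are illustrative, not
the driver's exact code):

    /* Reserve one QP number out of the pre-allocated steerable range. */
    static int alloc_netif_qpn_sketch(unsigned long *bitmap, u32 base_qpn,
                                      unsigned int count)
    {
            unsigned int idx = find_first_zero_bit(bitmap, count);

            if (idx >= count)
                    return -ENOMEM;

            set_bit(idx, bitmap);
            return base_qpn + idx;  /* steering rule is generated for this QPN */
    }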
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
|
|
The mlx4 device requires adding an IB flow spec to rules that apply over the
InfiniBand link layer. This patch adds a mechanism to add such a rule.
If higher levels e.g. IP/UDP/TCP flow specs are provided, the device
requires us to add an empty wild-carded IB rule. Furthermore, the device
requires the QPN to be put in the rule.
Add here specific parsing support for IB empty rules and the ability
to self-generate missing specs based on existing ones.
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
|
|
Up until now, flow steering wasn't supported when using IB ports.
This patch enables flow steering support when all hardware ports support it,
as indicated for example by the new MLX4_DEV_CAP_FLAG2_DMFS_IPOIB mlx4
device capability.
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
|
|
This patch adds support for allocating IB UD QPs that we can steer
traffic from. We introduce a new firmware command FLOW_STEERING_IB_UC_QP_RANGE
and a capability bit.
This command isn't supported for VFs.
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
|
|
This patch adds preliminary support for IB L2 device-managed steering,
currently exposed only in the kernel.
This flow spec can be used by low-level drivers that need to indicate
the link layer type when creating device-managed flow rules.
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
|
|
When creating an IPoIB UD QP, provide a hint to the low level driver
that the QP should support flow-steering. This means that privileged
user space applications can steer TCP/IP IPoIB traffic from the
network stack, in a similar manner to what is done with Ethernet RAW_PACKET QPs.
The hint is provided through a new QP creation flag called NETIF_QP.
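From the ULP side the hint looks roughly like this (sketch; the exact enum
spelling of the creation flag is an assumption):

    struct ib_qp_init_attr init_attr = {
            .send_cq        = send_cq,
            .recv_cq        = recv_cq,
            .qp_type        = IB_QPT_UD,
            .create_flags   = IB_QP_CREATE_NETIF_QP,  /* assumed spelling */
    };

    qp = ib_create_qp(pd, &init_attr);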
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
|
|
The micro UAR (uuar) allocator had a bug which resulted from the fact
that in each UAR we only have two micro UARs available, those at
index 0 and 1. This patch defines iterators to aid in traversing the
list of available micro UARs when allocating a uuar.
In addition, change the logic in create_user_qp() so that if high
class allocation fails (high class means lower latency), we revert to
medium class and not to the low class.
Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
|
|
Remove leftover debug code.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
|
|
The variable start in struct mlx5_ib_mr is never used. Remove it.
Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
|
|
The Samsung dmaengine ASoC driver is used with two different dmaengine drivers:
the pl80x, which properly supports residue reporting, and the pl330, which
reports that it does not. So there is no need to manually set the NO_RESIDUE
flag. This has the advantage that a proper (race
condition free) PCM pointer() implementation is used when the pl80x driver is
used. Also once the pl330 driver supports residue reporting the ASoC PCM driver
will automatically start using it.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@linaro.org>
|
|
The pl330 driver properly reports that it does not have residue reporting
support, which means the PCM dmaengine driver is able to figure this out on its
own. So there is no need to set the flag manually. Removing the flag has the
advantage that once the pl330 driver gains support for residue reporting it will
automatically be used by the generic dmaengine PCM driver.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@linaro.org>
|
|
The dmaengine framework now exposes the granularity with which it is able to
report the transfer residue for a certain DMA channel. Check the granularity in
the generic dmaengine PCM driver and
a) Set the SNDRV_PCM_INFO_BATCH flag if the granularity is per period or worse.
b) Fall back to the (race condition prone) period counting if the driver does
not support any residue reporting.
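Conceptually the check looks like this (sketch, not the exact driver code;
use_period_counting is a hypothetical local):

    struct dma_slave_caps caps;
    bool use_period_counting = false;

    if (dma_get_slave_caps(chan, &caps) == 0) {
            /* a) residue only advances per period (or worse) */
            if (caps.residue_granularity <= DMA_RESIDUE_GRANULARITY_SEGMENT)
                    hw.info |= SNDRV_PCM_INFO_BATCH;
            /* b) no residue reporting at all */
            if (caps.residue_granularity == DMA_RESIDUE_GRANULARITY_DESCRIPTOR)
                    use_period_counting = true;
    }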
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@linaro.org>
|
|
Currently we have two different snd_soc_platform_driver structs in the generic
dmaengine PCM driver: one for dmaengine drivers that support residue reporting
and one for those which do not. When registering the PCM component we check
whether the NO_RESIDUE flag is set or not and use the corresponding
snd_soc_platform_driver. This patch modifies the driver to only have one
snd_soc_platform_driver struct where the pointer() callback checks the
NO_RESIDUE flag at runtime. This allows us to set the NO_RESIDUE flag after the
PCM component has been registered. This becomes necessary when querying the
dmaengine driver itself about its residue reporting support, since the DMA
channel might only be requested after the PCM component has been
registered.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@linaro.org>
|
|
The pl330 driver currently does not support residue reporting, so set the
residue granularity to DMA_RESIDUE_GRANULARITY_DESCRIPTOR.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
|
|
This patch adds a new field to the dma_slave_caps struct which indicates the
granularity with which the driver is able to update the residue field of the
dma_tx_state struct. Making this information available to dmaengine users allows
them to make better decisions on how to operate. E.g. for audio certain features
like wakeup-less operation or timer-based scheduling only make sense and work
correctly if the reported residue is fine-grained enough.
Right now three different levels of granularity are supported (a sketch of the
corresponding enum follows the list):
* DESCRIPTOR: The DMA channel is only able to tell whether a descriptor has
been completed or not, which means residue reporting is not supported by
this channel. The residue field of the dma_tx_state field will always be
0.
* SEGMENT: The DMA channel updates the residue field after each successfully
completed segment of the transfer (For cyclic transfers this is after each
period). This is typically implemented by having the hardware generate an
interrupt after each transferred segment and then the driver updates the
outstanding residue by the size of the segment. Another possibility is if
the hardware supports SG and the segment descriptor has a field which gets
set after the segment has been completed. The driver then counts the
number of segments without the flag set to compute the residue.
* BURST: The DMA channel updates the residue field after each transferred
burst. This is typically only supported if the hardware has a progress
register of some sort (E.g. a register with the current read/write address
or a register with the amount of bursts/beats/bytes that have been
transferred or still need to be transferred).
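A sketch of the corresponding enum and the new capability field (DESCRIPTOR
doubles as "no residue reporting"):

    enum dma_residue_granularity {
            DMA_RESIDUE_GRANULARITY_DESCRIPTOR = 0, /* no residue reporting */
            DMA_RESIDUE_GRANULARITY_SEGMENT = 1,    /* updated per segment/period */
            DMA_RESIDUE_GRANULARITY_BURST = 2,      /* updated per burst */
    };

    struct dma_slave_caps {
            /* ... existing capability fields ... */
            enum dma_residue_granularity residue_granularity;
    };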
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound into asoc-dma
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound into asoc-dma
|
|
Writing to the platform data directly is a bug, since it should be
constant. So just copy it before writing to it.
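Something along these lines (sketch; the surrounding probe context and pdata
type are assumed):

    /* Take a writable copy instead of modifying the const platform data. */
    pdata = kmemdup(dev_get_platdata(&pdev->dev), sizeof(*pdata), GFP_KERNEL);
    if (!pdata)
            return -ENOMEM;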
Signed-off-by: Xiubo Li <Li.Xiubo@freescale.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
|
|
To be consistent with the equivalent option in 'stat'; for the same
reason, also use -D as the one-letter alias.
Suggested-by: Ingo Molnar <mingo@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-p5yjnopajb3a8x0xha7yl5w8@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
That is how the option summary describes it, and it also frees up --delay
to replace --initial-delay, making it consistent with stat's equivalent
--delay option.
Suggested-by: Ingo Molnar <mingo@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-f8hd2010uhjl2zzb34hepbmi@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Instead of open-coding the intersection of two rate masks (and getting it
slightly wrong in some of the corner cases), use the new snd_pcm_rate_mask_intersect()
helper function.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@linaro.org>
|
|
A bit of special care is necessary when creating the intersection of two rate
masks because of the special meaning of the SNDRV_PCM_RATE_CONTINUOUS and
SNDRV_PCM_RATE_KNOT bits.
SNDRV_PCM_RATE_CONTINUOUS means the hardware supports all rates in a
specific interval. SNDRV_PCM_RATE_KNOT means the hardware supports a set of
discrete rates specified by a list constraint. For all other cases the supported
rates are specified directly in the rate mask.
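The resulting helper behaves roughly like this (a sketch of the rules above,
not necessarily the exact upstream code):

    static unsigned int rate_mask_intersect_sketch(unsigned int rates_a,
                                                   unsigned int rates_b)
    {
            /* CONTINUOUS covers every rate in an interval, so the
             * intersection is whatever the other mask supports. */
            if (rates_a == SNDRV_PCM_RATE_CONTINUOUS)
                    return rates_b;
            if (rates_b == SNDRV_PCM_RATE_CONTINUOUS)
                    return rates_a;

            /* KNOT means a list-constrained set of discrete rates; again
             * the other mask is the best description we can return here. */
            if (rates_a == SNDRV_PCM_RATE_KNOT)
                    return rates_b;
            if (rates_b == SNDRV_PCM_RATE_KNOT)
                    return rates_a;

            /* Plain rate bits: a simple AND is correct. */
            return rates_a & rates_b;
    }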
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Reviewed-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Mark Brown <broonie@linaro.org>
|
|
SNDRV_PCM_RATE_CONTINUOUS means that all rates (possibly limited to a certain
interval) are supported. There is no need to manually set other rate bits.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Daniel Glöckner <daniel-gl@gmx.net>
Signed-off-by: Mark Brown <broonie@linaro.org>
|
|
SNDRV_PCM_RATE_CONTINUOUS means that all rates (possibly limited to a certain
interval) are supported. There is no need to manually set other rate bits.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@linaro.org>
|
|
If none of the components (CODEC or CPU DAI) sets a maximum sample rate we'll
end up with the rate_max field of the runtime hardware set to 0. (Note that it
is still possible for the components to constrain the supported sample rates
using other methods, e.g. setting a list constraint.) If rate_max is 0 this means
that the sound card doesn't support any rates at all, which is not the desired
result. So initialize rate_max to UINT_MAX. For symmetry reasons also set
rate_min to 0.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@linaro.org>
|
|
Linux 3.13-rc3
|
|
A proper ndelay implementation allows for a faster IO rate with drivers that
use ndelay to access their device registers; otherwise ndelay is
emulated with udelay.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
|
|
Replace division with shift, use ccount_freq instead of loops_per_jiffy.
Introduce __MAX_UDELAY and (undefined) __bad_udelay, and break the build for
too large udelay constants.
Remove irrelevant comment, clean up code style.
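The build-breaking guard is the usual kernel idiom; roughly (simplified
sketch, with __udelay standing in for the non-constant path):

    /* Never defined anywhere: referencing it turns an over-large constant
     * udelay() into a link-time error. */
    extern void __bad_udelay(void);

    #define udelay(usec)                                            \
            (__builtin_constant_p(usec) && (usec) > __MAX_UDELAY ?  \
                    __bad_udelay() : __udelay(usec))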
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
|
|
This allows the perf tool to monitor kernel tracepoint events.
Signed-off-by: Baruch Siach <baruch@tkos.co.il>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
|