The code related to health->recover_work was removed in
commit 63cbc552eebf ("net/mlx5: Handle SW reset of FW in error flow").
Fix struct mlx5_core_health accordingly.
Signed-off-by: Mikhael Goikhman <migo@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
The "backlog" argument in listen() specifies
the maximom length of pending connections,
so the accept queue should be considered full
if there are exactly "backlog" elements.
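As a minimal illustration of the corrected check (stand-in types only,
not the real struct sock), the comparison must be >=, not >:
-------------------------------------------------------------------
/* Stand-in type for illustration; the real fields live on struct sock. */
struct listen_sock {
	unsigned int ack_backlog;	/* currently queued connections */
	unsigned int max_ack_backlog;	/* the listen() backlog value */
};

/* Full once the queue holds exactly "backlog" entries. */
static inline int acceptq_is_full(const struct listen_sock *sk)
{
	return sk->ack_backlog >= sk->max_ack_backlog;
}
-------------------------------------------------------------------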
Signed-off-by: liuyacan <yacanliu@163.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator
Pull regulator fixes from Mark Brown:
"A small collection fo driver specific fixes that have arrived since
the merge window"
* tag 'regulator-fix-v5.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator:
regulator: mt6315: Fix off-by-one for .n_voltages
regulator: rt4831: Fix return value check in rt4831_regulator_probe()
regulator: pca9450: Clear PRESET_EN bit to fix BUCK1/2/3 voltage setting
regulator: qcom-rpmh: Use correct buck for S1C regulator
regulator: qcom-rpmh: Correct the pmic5_hfsmps515 buck
regulator: pca9450: Fix return value when failing to get sd-vsel GPIO
regulator: mt6315: Return REGULATOR_MODE_INVALID for invalid mode
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Will Deacon:
"We've got a smattering of changes all over the place which we've
accrued since -rc1. To my knowledge, there aren't any pending issues at
the moment, but there's still plenty of time for something else to
crop up...
Summary:
- Fix booting a 52-bit-VA-aware kernel on Qualcomm Amberwing
- Fix pfn_valid() not to reject all ZONE_DEVICE memory
- Fix memory tagging setup for hotplugged memory regions
- Fix KASAN tagging in page_alloc() when DEBUG_VIRTUAL is enabled
- Fix accidental truncation of CPU PMU event counters
- Fix error code initialisation when failing probe of DMC620 PMU
- Fix return value initialisation for sve-ptrace selftest
- Drop broken support for CMDLINE_EXTEND"
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
perf/arm_dmc620_pmu: Fix error return code in dmc620_pmu_device_probe()
arm64: mm: remove unused __cpu_uses_extended_idmap[_level()]
arm64: mm: use a 48-bit ID map when possible on 52-bit VA builds
arm64: perf: Fix 64-bit event counter read truncation
arm64/mm: Fix __enable_mmu() for new TGRAN range values
kselftest: arm64: Fix exit code of sve-ptrace
arm64: mte: Map hotplugged memory as Normal Tagged
arm64: kasan: fix page_alloc tagging with DEBUG_VIRTUAL
arm64/mm: Reorganize pfn_valid()
arm64/mm: Fix pfn_valid() for ZONE_DEVICE based memory
arm64/mm: Drop THP conditionality from FORCE_MAX_ZONEORDER
arm64/mm: Drop redundant ARCH_WANT_HUGE_PMD_SHARE
arm64: Drop support for CMDLINE_EXTEND
arm64: cpufeatures: Fix handling of CONFIG_CMDLINE for idreg overrides
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip
Pull xen fixes from Juergen Gross:
"Two fix series and a single cleanup:
- a small cleanup patch to remove unneeded symbol exports
- a series to clean up Xen grant handling (avoiding allocations in
some cases, and using common defines for "invalid" values)
- a series to address a race issue in Xen event channel handling"
* tag 'for-linus-5.12b-rc3-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
Xen/gntdev: don't needlessly use kvcalloc()
Xen/gnttab: introduce common INVALID_GRANT_{HANDLE,REF}
Xen/gntdev: don't needlessly allocate k{,un}map_ops[]
Xen: drop exports of {set,clear}_foreign_p2m_mapping()
xen/events: avoid handling the same event on two cpus at the same time
xen/events: don't unmask an event channel when an eoi is pending
xen/events: reset affinity of 2-level event when tearing it down
|
|
The 'delay' field in the spi_transfer struct is meant to replace the
'delay_usecs' field. However, some cleanup was required to remove the
uses of 'delay_usecs'. Now that it's been cleaned up, we can remove it
from the kernel tree.
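For reference, a short sketch of the replacement usage (field names as
in include/linux/spi/spi.h):
-------------------------------------------------------------------
#include <linux/spi/spi.h>

/* A transfer that previously used ".delay_usecs = 10" now expresses
 * the same delay through the 'delay' field: */
static const struct spi_transfer xfer = {
	.delay = {
		.value = 10,
		.unit  = SPI_DELAY_UNIT_USECS,
	},
};
-------------------------------------------------------------------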
Signed-off-by: Alexandru Ardelean <aardelean@deviqon.com>
Link: https://lore.kernel.org/r/20210308145502.1075689-10-aardelean@deviqon.com
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
soc-pcm has very similar but different DPCM BE DAI stop operation at
1) dpcm_be_dai_startup() error case rollback
2) dpcm_be_dai_startup_unwind()
3) dpcm_be_dai_shutdown()
The differences are:
1) used for rollback
2) doesn't check via snd_soc_dpcm_be_can_update() (is this a bug?)
3) does soc_pcm_hw_free() if the BE is neither still OPENed nor
already HW_FREEed, and then calls soc_pcm_close().
We can share the same code:
1) hw_free is not needed; the last dpcm is needed for the rollback.
2) hw_free is not needed.
3) hw_free is needed.
This patch adds a new dpcm_be_dai_stop() and shares it among these 3.
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Link: https://lore.kernel.org/r/87a6rduoam.wl-kuninori.morimoto.gx@renesas.com
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
There's no need to give the page an address_space. Leaving the
page->mapping as NULL will cause the VM to handle set_page_dirty()
the same way that it's handled now, and that was the only reason to
set the address_space in the first place.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210310185530.1053320-1-willy@infradead.org
|
|
1. Add curve25519 parameters in 'crypto/ecc_curve_defs.h';
2. Add the curve25519 interface 'ecc_get_curve25519_param' in
'include/crypto/ecc_curve.h', so that its parameters are exposed
to everyone in the kernel tree.
Signed-off-by: Meng Yu <yumeng18@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Move 'ecc_get_curve' to 'include/crypto/ecc_curve.h', so everyone
in the kernel tree can easily get the ecc curve parameters.
Signed-off-by: Meng Yu <yumeng18@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
1. crypto and crypto/atmel-ecc:
Move the ECDH curve id from the key into the algorithm name in
crypto and atmel-ecc, so the ECDH algorithm name changes from 'ecdh'
to 'ecdh-nist-pxxx', and 'curve_id' can no longer be used in
'struct ecdh';
2. crypto/testmgr and net/bluetooth:
Modify 'testmgr.c', 'testmgr.h' and 'net/bluetooth' to adapt to
this modification.
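A sketch of the resulting usage ('p256' picked here purely as an
example curve name):
-------------------------------------------------------------------
#include <crypto/kpp.h>

/* Before: the curve was selected via 'curve_id' in struct ecdh.
 * After: the curve is encoded in the algorithm name itself. */
static struct crypto_kpp *ecdh_alloc_p256(void)
{
	return crypto_alloc_kpp("ecdh-nist-p256", 0, 0);
}
-------------------------------------------------------------------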
Signed-off-by: Meng Yu <yumeng18@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Given that crypto_alloc_tfm() may return ERR pointers, and to avoid
crashes on obscure error paths where such pointers are presented to
crypto_destroy_tfm() (such as [0]), add an ERR_PTR check there
before dereferencing the second argument as a struct crypto_tfm
pointer.
[0] https://lore.kernel.org/linux-crypto/000000000000de949705bc59e0f6@google.com/
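The guard amounts to roughly the following at the top of
crypto_destroy_tfm() (a sketch, not the verbatim diff):
-------------------------------------------------------------------
#include <linux/err.h>
#include <linux/crypto.h>

void crypto_destroy_tfm(void *mem, struct crypto_tfm *tfm)
{
	/* 'mem' may be an ERR_PTR() from a failed allocation; bail out
	 * before dereferencing 'tfm', which lives inside 'mem'. */
	if (IS_ERR_OR_NULL(mem))
		return;

	/* ... normal teardown of the tfm and free of 'mem' ... */
}
-------------------------------------------------------------------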
Reported-by: syzbot+12cf5fbfdeba210a89dd@syzkaller.appspotmail.com
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Jakub and Neil reported an increase of RTO timers whenever
TX completions are delayed a bit more (by increasing
NIC TX coalescing parameters).
The main issue is that the TCP stack has logic preventing a packet
from being retransmitted if the prior clone has not yet been
orphaned or freed.
This logic came with commit 1f3279ae0c13 ("tcp: avoid
retransmits of TCP packets hanging in host queues")
Thankfully, when skb_still_in_host_queue() detects that
the initial clone is still in flight, it can use the TSQ logic,
which will eventually retry later, at the moment the clone
is freed or orphaned.
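Roughly, the detection path then looks like this sketch (not the
verbatim upstream code):
-------------------------------------------------------------------
/* Sketch: when the clone is still in a host queue, arm TSQ so the
 * retransmit is retried once the clone is freed or orphaned. */
static bool skb_still_in_host_queue(struct sock *sk,
				    const struct sk_buff *skb)
{
	if (unlikely(skb_fclone_busy(sk, skb))) {
		set_bit(TSQ_THROTTLED, &sk->sk_tsq_flags);
		smp_mb__after_atomic();
		if (skb_fclone_busy(sk, skb))
			return true;	/* TSQ will retry later */
	}
	return false;
}
-------------------------------------------------------------------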
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Neil Spring <ntspring@fb.com>
Reported-by: Jakub Kicinski <kuba@kernel.org>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add compatible and constants for the power domains exposed by the RPMH
in the Qualcomm Technologies Inc sc7280 platform.
Signed-off-by: Rajendra Nayak <rnayak@codeaurora.org>
Link: https://lore.kernel.org/r/1614664092-9394-1-git-send-email-rnayak@codeaurora.org
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
|
|
Add RPM power domain bindings for the SM8350 SoC
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Link: https://lore.kernel.org/r/20210210104257.339462-1-vkoul@kernel.org
[bjorn: Added dt-bindings include file changes from the driver patch]
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
|
|
Pull drm fixes from Dave Airlie:
"Regular fixes for rc3. The i915 pull was based on the rc1 tag so I
just cherry-picked the single fix from there to avoid it. The misc and
amd trees seem to be on okay bases.
It's a bunch of fixes across the tree; amdgpu has most of them, a few
ttm fixes around qxl, and nouveau.
core:
- Clear holes when converting compat ioctl's between 32-bits and
64-bits.
docs:
- Use gitlab for drm bugzilla now.
ttm:
- Fix ttm page pool accounting.
fbdev:
- Fix oops in drm_fbdev_cleanup()
shmem:
- Assorted fixes for shmem helpers.
qxl:
- unpin qxl bos created as pinned when freeing them, and make ttm
only warn once on this behavior.
- Zero head.surface_id correctly in qxl.
atyfb:
- Use LCD management for atyfb on PPC_MAC.
meson:
- Shutdown kms poll helper in meson correctly.
nouveau:
- fix regression in bo syncing
i915:
- Wedge the GPU if command parser setup fails
amdgpu:
- Fix aux backlight control
- Add a backlight override parameter
- Various display fixes
- PCIe DPM fix for vega
- Polaris watermark fixes
- Additional S0ix fix
radeon:
- Fix GEM regression
- Fix AGP dependency handling"
* tag 'drm-fixes-2021-03-12-1' of git://anongit.freedesktop.org/drm/drm: (33 commits)
drm/nouveau: fix dma syncing for loops (v2)
drm/i915: Wedge the GPU if command parser setup fails
drm/compat: Clear bounce structures
drm/shmem-helpers: vunmap: Don't put pages for dma-buf
drm: meson_drv add shutdown function
drm/shmem-helper: Don't remove the offset in vm_area_struct pgoff
drm/shmem-helper: Check for purged buffers in fault handler
qxl: Fix uninitialised struct field head.surface_id
drm/ttm: Fix TTM page pool accounting
drm/ttm: soften TTM warnings
drm: Use USB controller's DMA mask when importing dmabufs
MAINTAINERS: update drm bug reporting URL
fbdev: atyfb: use LCD management functions for PPC_PMAC also
fbdev: atyfb: always declare aty_{ld,st}_lcd()
drm/qxl: fix lockdep issue in qxl_alloc_release_reserved
drm/qxl: unpin release objects
drm/fb-helper: only unmap if buffer not null
drm/amdgpu: fix S0ix handling when the CONFIG_AMD_PMC=m
drm/radeon: fix AGP dependency
drm/radeon: also init GEM funcs in radeon_gem_prime_import_sg_table
...
|
|
The umem DMA list calculation was locked at 4k pages due to confusion
around how this API works and is used when larger pages are present.
The conclusion is:
- umems cannot extend past what is mapped into the process, so creating
a large page size and referring to a sub-range is not allowed
- umems must always have a page offset of zero, except for sub-PAGE_SIZE
umems
- The feature of umem_offset to create multiple objects inside a umem
is buggy and isn't used anywhere. Thus we can assume all users of the
current API have umem_offset == 0 as well
Provide a new page size calculator that limits the DMA list to the VA
range and enforces umem_offset == 0.
Allow user space to specify the page sizes which it can accept, this
bitmap must be derived from the intended use of the umem, based on
per-usage HW limitations.
Link: https://lore.kernel.org/r/20210304130501.1102577-4-leon@kernel.org
Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
Change uverbs_get_const/uverbs_get_const_default to work properly with
both signed/unsigned parameters.
The current APIs mix s64 and u64, which leads to an incorrect check when
a u64 value is supplied with its upper bit set. In that case the
uverbs_get_const() / uverbs_get_const_default() lower bound check may
fail unexpectedly: the target is unsigned (lower bound is 0) but the
value becomes negative because of the s64 usage.
Split to have two different APIs, no change to callers as the required
API will be called internally according to the target type.
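The failure mode can be demonstrated in isolation (plain C,
illustration only):
-------------------------------------------------------------------
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t val = 1ULL << 63;	/* valid u64, upper bit set */
	int64_t lower_bound = 0;	/* unsigned target's lower bound */

	/* Reinterpreted as s64, the value is negative and fails the
	 * "val >= lower_bound" check even though it is a valid u64. */
	printf("%d\n", (int64_t)val >= lower_bound);	/* prints 0 */
	return 0;
}
-------------------------------------------------------------------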
Link: https://lore.kernel.org/r/20210304130501.1102577-3-leon@kernel.org
Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
The kernel periodically checks the idle time of nexthop buckets to
determine if they are idle and can be re-populated with a new nexthop.
When the resilient nexthop group is offloaded to hardware, the kernel
will not see activity on nexthop buckets unless it is reported from
hardware.
Add a function that can be periodically called by device drivers to
report activity on nexthop buckets after querying it from the underlying
device.
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add a function that can be called by device drivers to set "offload" or
"trap" indication on nexthop buckets following nexthop notifications and
other changes such as a neighbour becoming invalid.
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add data structures that will be used for in-kernel notifications about
addition / deletion of a resilient nexthop group and about changes to a
hash bucket within a resilient group.
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
At this moment, there is only one type of next-hop group: an mpath group,
which implements the hash-threshold algorithm.
To select a next hop, hash-threshold algorithm first assigns a range of
hashes to each next hop in the group, and then selects the next hop by
comparing the SKB hash with the individual ranges. When a next hop is
removed from the group, the ranges are recomputed, which leads to
reassignment of parts of hash space from one next hop to another. While
there will usually be some overlap between the previous and the new
distribution, some traffic flows change the next hop that they resolve to.
That causes problems e.g. as established TCP connections are reset, because
the traffic is forwarded to a server that is not familiar with the
connection.
Resilient hashing is a technique to address the above problem. Resilient
next-hop group has another layer of indirection between the group itself
and its constituent next hops: a hash table. The selection algorithm uses a
straightforward modulo operation to choose a hash bucket, and then reads
the next hop that this bucket contains, and forwards traffic there.
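As an illustration of the selection step (stand-in types, not the
kernel structures):
-------------------------------------------------------------------
struct nh_bucket {
	int nh_id;	/* stand-in for a next-hop reference */
};

/* Pick a bucket by simple modulo, then forward to the next hop
 * that the bucket currently holds. */
static int resilient_select_nh(const struct nh_bucket *tbl,
			       unsigned int num_buckets,
			       unsigned int skb_hash)
{
	return tbl[skb_hash % num_buckets].nh_id;
}
-------------------------------------------------------------------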
This indirection brings an important feature. In the hash-threshold
algorithm, the range of hashes associated with a next hop must be
continuous. With a hash table, mapping between the hash table buckets and
the individual next hops is arbitrary. Therefore when a next hop is deleted
the buckets that held it are simply reassigned to other next hops. When
weights of next hops in a group are altered, it may be possible to choose a
subset of buckets that are currently not used for forwarding traffic, and
use those to satisfy the new next-hop distribution demands, keeping the
"busy" buckets intact. This way, established flows are ideally kept being
forwarded to the same endpoints through the same paths as before the
next-hop group change.
In a nutshell, the algorithm works as follows. Each next hop has a number
of buckets that it wants to have, according to its weight and the number of
buckets in the hash table. In case of an event that might cause bucket
allocation change, the numbers for individual next hops are updated,
similarly to how ranges are updated for mpath group next hops. Following
that, a new "upkeep" algorithm runs, and for idle buckets that belong to a
next hop that is currently occupying more buckets than it wants (it is
"overweight"), it migrates the buckets to one of the next hops that has
fewer buckets than it wants (it is "underweight"). If, after this, there
are still underweight next hops, another upkeep run is scheduled to a
future time.
Chances are there are not enough "idle" buckets to satisfy the new demands.
The algorithm has knobs to select both what it means for a bucket to be
idle, and for whether and when to forcefully migrate buckets if there keeps
being an insufficient number of idle buckets.
There are three users of the resilient data structures.
- The forwarding code accesses them under RCU, and does not modify them
except for updating the time a selected bucket was last used.
- Netlink code, running under RTNL, which may modify the data.
- The delayed upkeep code, which may modify the data. This runs unlocked,
and mutual exclusion between the RTNL code and the delayed upkeep is
maintained by canceling the delayed work synchronously before the RTNL
code touches anything. Later it restarts the delayed work if necessary.
The RTNL code has to implement next-hop group replacement, next hop
removal, etc. For removal, the mpath code uses a neat trick of having a
backup next hop group structure, doing the necessary changes offline, and
then RCU-swapping them in. However, the hash tables for resilient hashing
are about an order of magnitude larger than the groups themselves (the size
might be e.g. 4K entries), and it was felt that keeping two of them is an
overkill. Both the primary next-hop group and the spare therefore use the
same resilient table, and writers are careful to keep all references valid
for the forwarding code. The hash table references next-hop group entries
from the next-hop group that is currently in the primary role (i.e. not
spare). During the transition from primary to spare, the table references a
mix of both the primary group and the spare. When a next hop is deleted,
the corresponding buckets are not set to NULL, but instead marked as empty,
so that the pointer is valid and can be used by the forwarding code. The
buckets are then migrated to a new next-hop group entry during upkeep. The
only times that the hash table is invalid are the very beginning and very
end of its lifetime. Between those points, it is always kept valid.
This patch introduces the core support code itself. It does not handle
notifications towards drivers, which are kept as if the group were an mpath
one. It does not handle netlink either. The only bit currently exposed to
user space is the new next-hop group type, and that is currently bounced.
There is therefore no way to actually access this code.
Signed-off-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
- RTM_NEWNEXTHOP et al. that handle resilient groups will have a new nested
attribute, NHA_RES_GROUP, whose elements are attributes NHA_RES_GROUP_*.
- RTM_NEWNEXTHOPBUCKET et al. is a suite of new messages that will
currently serve only for dumping of individual buckets of resilient next
hop groups. For nexthop group buckets, these messages will carry a nested
attribute NHA_RES_BUCKET, whose elements are attributes NHA_RES_BUCKET_*.
There are several reasons why a new suite of messages is created for
nexthop buckets instead of overloading the information on the existing
RTM_{NEW,DEL,GET}NEXTHOP messages.
First, a nexthop group can contain a large number of nexthop buckets (4k
is not unheard of). This imposes limits on the amount of information that
can be encoded for each nexthop bucket given a netlink message is limited
to 64k bytes.
Second, while RTM_NEWNEXTHOPBUCKET is only used for notifications at
this point, in the future it can be extended to provide user space with
control over nexthop buckets configuration.
- The new group type is NEXTHOP_GRP_TYPE_RES. Note that nexthop code is
adjusted to bounce groups with that type for now.
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
With the introduction of resilient nexthop groups, there will be two types
of multipath groups: the current hash-threshold "mpath" ones, and resilient
groups. Both are multipath, but to determine this, the system needs to
consider two flags. This might prove costly in the datapath. Therefore,
introduce a new flag, that should be set for next-hop groups that have more
than one nexthop, and should be considered multipath.
Signed-off-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
mlx5_is_roce_enabled returns the devlink RoCE init value, therefore it
should be used only when the driver is loaded. Instead we just need to read
the roce_en field.
In addition, rename mlx5_is_roce_enabled to mlx5_is_roce_init_enabled.
Fixes: 7a58779edd75 ("IB/mlx5: Improve query port for representor port")
Link: https://lore.kernel.org/r/20210304124517.1100608-2-leon@kernel.org
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
As specified in IETF RFC 8754, section 4.3.1.2, if the upper layer
header is IPv4 or IPv6, perform IPv6 decapsulation and resubmit the
decapsulated packet to the IPv4 or IPv6 module.
Only IPv6 decapsulation was implemented. This patch adds support for IPv4
decapsulation.
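Schematically, the dispatch after decapsulation becomes (illustration,
not the exact seg6 code):
-------------------------------------------------------------------
#include <netinet/in.h>

/* The upper-layer header after the SRH decides the resubmit family:
 * IPPROTO_IPIP (4) marks an inner IPv4 packet, IPPROTO_IPV6 (41)
 * an inner IPv6 packet. */
static int resubmit_family(int upper_proto)
{
	switch (upper_proto) {
	case IPPROTO_IPV6:	/* previously the only supported case */
		return 6;
	case IPPROTO_IPIP:	/* inner IPv4, added by this patch */
		return 4;
	default:
		return -1;	/* not IPv4/IPv6: no decapsulation */
	}
}
-------------------------------------------------------------------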
Link: https://tools.ietf.org/html/rfc8754#section-4.3.1.2
Signed-off-by: Julien Massonneau <julien.massonneau@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The HIP09 supports the XRC transport service, which greatly reduces the
number of QPs required to connect all processes in a large cluster.
Link: https://lore.kernel.org/r/1614826558-35423-1-git-send-email-liweihang@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
Add some helpers to lock and unlock the device. As this is such a simple
change, we update all the users that were using the lock already in this
patch.
Signed-off-by: Nuno Sa <nuno.sa@analog.com>
Link: https://lore.kernel.org/r/20210218114039.216091-5-nuno.sa@analog.com
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
|
|
The channels are of type iio_chan_spec, not axi_adc_chan_spec. They were
axi_adc_chan_spec in an earlier version, but the doc-string was not
updated when they were renamed.
Fixes: ef04070692a21 ("iio: adc: adi-axi-adc: add support for AXI ADC IP core")
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Link: https://lore.kernel.org/r/20210219090134.48057-1-alexandru.ardelean@analog.com
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
|
|
Some hid sensors, such as the als sensor, may use relative sensitivity.
This patch adds relative sensitivity checking for all hid sensors.
Signed-off-by: Ye Xiang <xiang.ye@intel.com>
Acked-by: Jiri Kosina <jkosina@suse.cz>
Link: https://lore.kernel.org/r/20210207070048.23935-2-xiang.ye@intel.com
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
|
|
No functional change has been made with this patch. The main intent here
is to reduce code repetition of getting sensitivity attribute.
In the current implementation, sensor_hub_input_get_attribute_info() is
called from multiple drivers to get the attribute info for the sensitivity
field. Moving this to a common place will avoid code repetition.
Signed-off-by: Ye Xiang <xiang.ye@intel.com>
Acked-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Link: https://lore.kernel.org/r/20210201054921.18214-2-xiang.ye@intel.com
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
|
|
With this change, an ioctl() call is added to open a character device for a
buffer. The ioctl() number is 'i' 0x91, which follows the
IIO_GET_EVENT_FD_IOCTL ioctl.
The ioctl() will return an FD for the requested buffer index. The indexes
are the same as in /sys/bus/iio/devices/iio:deviceX/bufferY (i.e. the Y
variable).
Since there doesn't seem to be a sane way to make the FD for buffer0
the same FD as for /dev/iio:deviceX, this ioctl() will return another
FD for buffer0 (or the first buffer). This duplicate FD will be able to
access the same buffer object (for buffer0) as accessing the
/dev/iio:deviceX chardev directly.
Also, there is no IIO_BUFFER_GET_BUFFER_COUNT ioctl() implemented, as the
index for each buffer (and the count) can be deduced from the
'/sys/bus/iio/devices/iio:deviceX/bufferY' folders (i.e. the number of
bufferY folders).
Used the following C code to test this:
-------------------------------------------------------------------
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <errno.h>

#define IIO_BUFFER_GET_FD_IOCTL _IOWR('i', 0x91, int)

int main(int argc, char *argv[])
{
	int fd;
	int fd1;
	int ret;

	if (argc < 2) {
		fprintf(stderr, "Usage: %s <buffer index>\n", argv[0]);
		return -1;
	}

	if ((fd = open("/dev/iio:device0", O_RDWR)) < 0) {
		fprintf(stderr, "Error open() %d errno %d\n", fd, errno);
		return -1;
	}

	fprintf(stderr, "Using FD %d\n", fd);

	fd1 = atoi(argv[1]);

	ret = ioctl(fd, IIO_BUFFER_GET_FD_IOCTL, &fd1);
	if (ret < 0) {
		fprintf(stderr, "Error for buffer %d ioctl() %d errno %d\n",
			fd1, ret, errno);
		close(fd);
		return -1;
	}

	fprintf(stderr, "Got FD %d\n", fd1);
	close(fd1);
	close(fd);
	return 0;
}
-------------------------------------------------------------------
Results are:
-------------------------------------------------------------------
# ./test 0
Using FD 3
Got FD 4
# ./test 1
Using FD 3
Got FD 4
# ./test 2
Using FD 3
Got FD 4
# ./test 3
Using FD 3
Got FD 4
# ls /sys/bus/iio/devices/iio\:device0
buffer buffer0 buffer1 buffer2 buffer3 dev
in_voltage_sampling_frequency in_voltage_scale
in_voltage_scale_available
name of_node power scan_elements subsystem uevent
-------------------------------------------------------------------
iio:device0 has some fake kfifo buffers attached to an IIO device.
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Link: https://lore.kernel.org/r/20210215104043.91251-21-alexandru.ardelean@analog.com
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
|
|
With this change, calling iio_device_attach_buffer() will actually attach
more buffers.
Right now this doesn't do any validation of whether a buffer is attached
twice; maybe that can be added later (if needed). Attaching a buffer more
than once should yield noticeably bad results.
The first buffer is the legacy buffer, so a reference is kept to it.
At this point, accessing the data for the extra buffers (that are added
after the first one) isn't possible yet.
The iio_device_attach_buffer() is also changed to return an error code,
which for now is -ENOMEM if the array could not be realloc-ed for more
buffers.
To adapt to this new change, iio_device_attach_buffer() is called last in
all places where it's used. The realloc failure is a bit difficult to
handle during un-managed calls when unwinding, so it's better to have this
as the last error in the setup_buffer calls.
At this point, no driver should call iio_device_attach_buffer() directly;
it should call one of the {devm_}iio_triggered_buffer_setup(),
devm_iio_kfifo_buffer_setup() or devm_iio_dmaengine_buffer_setup()
functions. This makes iio_device_attach_buffer() a bit easier to handle.
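A sketch of the new call pattern inside a setup helper (a sketch only;
the helper name is illustrative):
-------------------------------------------------------------------
#include <linux/iio/iio.h>
#include <linux/iio/buffer.h>

static int example_setup(struct iio_dev *indio_dev,
			 struct iio_buffer *buffer)
{
	/* Attached last: a -ENOMEM from the array realloc then needs
	 * no extra unwinding in this helper. */
	return iio_device_attach_buffer(indio_dev, buffer);
}
-------------------------------------------------------------------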
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Link: https://lore.kernel.org/r/20210215104043.91251-20-alexandru.ardelean@analog.com
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
|
|
This change wraps all buffer attributes into iio_dev_attr objects, and
assigns a reference to the IIO buffer they belong to.
With the addition of multiple IIO buffers per one IIO device, we need a way
to know which IIO buffer is being enabled/disabled/controlled.
We know that all buffer attributes are device_attributes, so we can wrap
them with iio_dev_attr types. In the iio_dev_attr type, we can also hold
a reference to an IIO buffer.
So, we end up being able to allocate wrapped attributes for all buffer
attributes (even the ones from other drivers).
The neat part with this mechanism, is that we don't need to add any extra
cleanup, because these attributes are being added to a dynamic list that
will get cleaned up via iio_free_chan_devattr_list().
With this change, the 'buffer->scan_el_dev_attr_list' list is being renamed
to 'buffer->buffer_attr_list', effectively merging (or finalizing the
merge) of the buffer/ & scan_elements/ attributes internally.
Accessing these new buffer attributes can now be done via
'to_iio_dev_attr(attr)->buffer' inside the show/store handlers.
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Link: https://lore.kernel.org/r/20210215104043.91251-15-alexandru.ardelean@analog.com
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
|
|
This change adds a reference to a 'struct iio_buffer' object on the
iio_dev_attr object. This way, we can use the created iio_dev_attr objects
on per-buffer basis (since they're allocated anyway).
A minor downside of this change is that the number of parameters on
__iio_add_chan_devattr() grows by 1. This looks like it could do with a bit
of a re-think.
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Link: https://lore.kernel.org/r/20210215104043.91251-14-alexandru.ardelean@analog.com
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
|
|
With this change, we create a new directory for the IIO device called
buffer0, under which both the old buffer/ and scan_elements/ are stored.
This is done to simplify the addition of multiple IIO buffers per IIO
device. Otherwise we would need to add a bufferX/ and scan_elementsX/
directory for each IIO buffer.
With the current way of storing attribute groups, we can't have directories
stored under each other (i.e. scan_elements/ under buffer/), so the best
approach moving forward is to merge their attributes.
The old/legacy buffer/ & scan_elements/ groups are not stored on the opaque
IIO device object. This way the IIO buffer can have just a single
attribute_group object, saving a bit of memory when adding multiple IIO
buffers.
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Link: https://lore.kernel.org/r/20210215104043.91251-13-alexandru.ardelean@analog.com
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
|
|
Up until now, the device groups that an IIO device had were limited to 6.
Two of these groups would account for buffer attributes (the buffer/ and
scan_elements/ directories).
Since we want to add multiple buffers per IIO device, this number may not
be enough when adding a second buffer. So, this change reallocates the
groups array whenever an IIO device group is added, via an
iio_device_register_sysfs_group() helper.
This also means that the groups array should be assigned to
'indio_dev.dev.groups' really late, right before {cdev_}device_add() is
called to do the entire setup.
And we also must take care to free this array when the sysfs resources are
being cleaned up.
With this change we can also move the 'groups' & 'groupcounter' fields to
the iio_dev_opaque object. Up until now, this didn't make a whole lot of
sense (especially since we weren't sure what multibuffer support would
look like in the end).
But doing it now kills two birds with one stone.
An alternative, would be to add a configurable Kconfig symbol
CONFIG_IIO_MAX_BUFFERS_PER_DEVICE (or something like that) and compute a
static maximum of the groups we can support per IIO device. But that would
probably annoy a few people since that would make the system less
configurable.
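The helper then amounts to roughly this (a rough shape, not the
verbatim implementation):
-------------------------------------------------------------------
/* Grow the groups array by one entry and keep it NULL-terminated,
 * so it can be handed to dev.groups right before device_add(). */
int iio_device_register_sysfs_group(struct iio_dev *indio_dev,
				    const struct attribute_group *group)
{
	struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev);
	const struct attribute_group **new;

	new = krealloc(iio_dev_opaque->groups,
		       sizeof(*new) * (iio_dev_opaque->groupcounter + 2),
		       GFP_KERNEL);
	if (!new)
		return -ENOMEM;

	new[iio_dev_opaque->groupcounter++] = group;
	new[iio_dev_opaque->groupcounter] = NULL;
	iio_dev_opaque->groups = new;

	return 0;
}
-------------------------------------------------------------------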
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Link: https://lore.kernel.org/r/20210215104043.91251-11-alexandru.ardelean@analog.com
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
|
|
This change does a conversion of the devm_iio_dmaengine_buffer_alloc() to
devm_iio_dmaengine_buffer_setup(). This will allocate an IIO DMA buffer and
attach it to the IIO device, similar to devm_iio_triggered_buffer_setup()
(though the underlying code is different, the final logic is the same).
Since the only user of the devm_iio_dmaengine_buffer_alloc() was the
adi-axi-adc driver, this change does the replacement in a single go in the
driver.
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Link: https://lore.kernel.org/r/20210215104043.91251-7-alexandru.ardelean@analog.com
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
|
|
At this point all drivers should use devm_iio_kfifo_buffer_setup() instead
of manually allocating via devm_iio_kfifo_allocate() and assigning ops and
modes.
With this change, the devm_iio_kfifo_allocate() will be made private to the
IIO core, since all drivers should call either
devm_iio_kfifo_buffer_setup() or devm_iio_triggered_buffer_setup() to
create a kfifo buffer.
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Link: https://lore.kernel.org/r/20210215104043.91251-6-alexandru.ardelean@analog.com
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
|
|
This change adds the devm_iio_kfifo_buffer_setup() helper/short-hand,
which groups the simple routine of allocating a kfifo buffer via
devm_iio_kfifo_allocate() and calling iio_device_attach_buffer().
The mode_flags parameter is required, as the IIO kfifo supports 2 modes:
INDIO_BUFFER_SOFTWARE & INDIO_BUFFER_TRIGGERED.
The setup_ops parameter is optional.
This function will be a bit more useful when needing to define multiple
buffers per IIO device.
The naming for this function has been inspired by
iio_triggered_buffer_setup(), since that one does a kfifo alloc + a
pollfunc alloc. So, this should have a familiar ring to it.
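Typical driver usage then reduces to something like this sketch
(parameters as described above; the probe helper name is illustrative):
-------------------------------------------------------------------
#include <linux/iio/iio.h>
#include <linux/iio/kfifo_buf.h>

static int example_probe_buffer(struct device *dev,
				struct iio_dev *indio_dev)
{
	/* mode_flags is required; setup_ops (last arg) is optional */
	return devm_iio_kfifo_buffer_setup(dev, indio_dev,
					   INDIO_BUFFER_SOFTWARE, NULL);
}
-------------------------------------------------------------------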
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Link: https://lore.kernel.org/r/20210215104043.91251-3-alexandru.ardelean@analog.com
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
|
|
I tested commit 43042b90cae1 ("svcrdma: Reduce Receive doorbell
rate") with mlx4 (IB) and software iWARP and didn't find any
issues. However, I recently got my hardware iWARP setup back on
line (FastLinQ) and it's crashing hard on this commit (confirmed
via bisect).
The failure mode is complex.
- After a connection is established, the first Receive completes
normally.
- But the second and third Receives have garbage in their Receive
buffers. The server responds with ERR_VERS as a result.
- When the client tears down the connection to retry, a couple
of posted Receives flush twice, and that corrupts the recv_ctxt
free list.
- __svc_rdma_free then faults or loops infinitely while destroying
the xprt's recv_ctxts.
Since 43042b90cae1 ("svcrdma: Reduce Receive doorbell rate") does
not fix a bug but is a scalability enhancement, it's safe and
appropriate to revert it while working on a replacement.
Fixes: 43042b90cae1 ("svcrdma: Reduce Receive doorbell rate")
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media
Pull media fixes from Mauro Carvalho Chehab:
"A couple of fixes:
- fix a build issue with CEC
- fix a deadlock at usbtv driver
- fix some null pointer address issues at vsp1 driver
- fix a wrong bitmap setting at rkisp1 driver"
* tag 'media/v5.12-2' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media:
media: rkisp1: params: fix wrong bits settings
media: v4l: vsp1: Fix uif null pointer access
media: v4l: vsp1: Fix bru null pointer access
media: usbtv: Fix deadlock on suspend
media: rc: compile rc-cec.c into rc-core
|
|
Introduce an extra parameter, is_iomem, to da_to_va, so that the caller
can treat the memory as normal memory or io-mapped memory.
Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Peng Fan <peng.fan@nxp.com>
Link: https://lore.kernel.org/r/1615029865-23312-5-git-send-email-peng.fan@oss.nxp.com
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
|
|
Introduce is_iomem to indicate whether this piece of memory is iomem or not.
Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Peng Fan <peng.fan@nxp.com>
Link: https://lore.kernel.org/r/1615029865-23312-4-git-send-email-peng.fan@oss.nxp.com
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
|
|
The request_ncomp_notif device operation and function are unused; remove
them.
Link: https://lore.kernel.org/r/20210311150921.23726-1-galpress@amazon.com
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
This fixes CVE-2020-26541.
The Secure Boot Forbidden Signature Database, dbx, contains a list of now
revoked signatures and keys previously approved to boot with UEFI Secure
Boot enabled. The dbx is capable of containing any number of
EFI_CERT_X509_SHA256_GUID, EFI_CERT_SHA256_GUID, and EFI_CERT_X509_GUID
entries.
Currently when EFI_CERT_X509_GUID are contained in the dbx, the entries are
skipped.
Add support for EFI_CERT_X509_GUID dbx entries. When an EFI_CERT_X509_GUID
is found, it is added as an asymmetric key to the .blacklist keyring.
Anytime the .platform keyring is used, the keys in the .blacklist keyring
are referenced; if a matching key is found, the key will be rejected.
[DH: Made the following changes:
- Added a config option to enable the facility. This allows a
Kconfig solution to make sure that pkcs7_validate_trust() is
enabled.[1][2]
- Moved the functions out from the middle of the blacklist functions.
- Added kerneldoc comments.]
Signed-off-by: Eric Snowberg <eric.snowberg@oracle.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
cc: Randy Dunlap <rdunlap@infradead.org>
cc: Mickaël Salaün <mic@digikod.net>
cc: Arnd Bergmann <arnd@kernel.org>
cc: keyrings@vger.kernel.org
Link: https://lore.kernel.org/r/20200901165143.10295-1-eric.snowberg@oracle.com/ # rfc
Link: https://lore.kernel.org/r/20200909172736.73003-1-eric.snowberg@oracle.com/ # v2
Link: https://lore.kernel.org/r/20200911182230.62266-1-eric.snowberg@oracle.com/ # v3
Link: https://lore.kernel.org/r/20200916004927.64276-1-eric.snowberg@oracle.com/ # v4
Link: https://lore.kernel.org/r/20210122181054.32635-2-eric.snowberg@oracle.com/ # v5
Link: https://lore.kernel.org/r/161428672051.677100.11064981943343605138.stgit@warthog.procyon.org.uk/
Link: https://lore.kernel.org/r/161433310942.902181.4901864302675874242.stgit@warthog.procyon.org.uk/ # v2
Link: https://lore.kernel.org/r/161529605075.163428.14625520893961300757.stgit@warthog.procyon.org.uk/ # v3
Link: https://lore.kernel.org/r/bc2c24e3-ed68-2521-0bf4-a1f6be4a895d@infradead.org/ [1]
Link: https://lore.kernel.org/r/20210225125638.1841436-1-arnd@kernel.org/ [2]
|
|
Some users of paravirtualized functions need to query which function
has been specified in a pv_ops vector element. In order to be able to
switch such paravirtualized functions to static_calls instead, there
needs to be a function to query the function which will be called via
static_call().
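A sketch of what such a query accessor can look like (the pv name in
the usage comment is purely illustrative):
-------------------------------------------------------------------
/* Return the function currently installed behind a static_call key,
 * so callers can compare it, e.g.:
 *
 *	if (static_call_query(pv_sched_clock) == native_sched_clock)
 *		...
 */
#define static_call_query(name)	(READ_ONCE(STATIC_CALL_KEY(name).func))
-------------------------------------------------------------------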
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210311142319.4723-4-jgross@suse.com
|
|
Having the definition of static_call() in static_call_types.h makes
no sense as long as struct static_call_key isn't defined there, as the
generic implementation of static_call() is referencing this structure.
So move the definition of struct static_call_key to static_call_types.h.
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210311142319.4723-3-jgross@suse.com
|
|
Ever since the addition of multipage bio_vecs, BIO_MAX_PAGES has been
horribly confusingly misnamed. Rename it to BIO_MAX_VECS to stop
confusing users of the bio API.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20210311110137.1132391-2-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
There is a spelling mistake in a comment, fix it.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Link: https://lore.kernel.org/r/20210311093054.5338-1-colin.king@canonical.com
|