path: root/include/linux
Age  Commit message  Author
2020-09-11  Merge branch 'kvm-arm64/pt-new' into kvmarm-master/next  (Marc Zyngier)
Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-09-11  Merge branch 'powercap'  (Rafael J. Wysocki)
* powercap:
  powercap: make documentation reflect code
  powercap/intel_rapl: add support for AlderLake
  powercap/intel_rapl: add support for RocketLake
  powercap/intel_rapl: add support for TigerLake Desktop
2020-09-11  qede: Notify qedr when mtu has changed  (Michal Kalderon)
MTU change on ethtool is currently not supported for iWARP. Notify qedr driver so that appropriate logging can take place. Link: https://lore.kernel.org/r/20200902165741.8355-6-michal.kalderon@marvell.com Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-09-11  RDMA/qedr: Remove fbo and zbva from the MR  (Jason Gunthorpe)
zbva is always false, so fbo is never read. A 'zero-based-virtual-address' is simply IOVA == 0, and the driver already supports this. Link: https://lore.kernel.org/r/16-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com Acked-by: Michal Kalderon <michal.kalderon@marvell.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-09-11  Merge tag 'v5.9-rc4' into next  (Vinod Koul)
Linux 5.9-rc4
2020-09-11  dma-direct: rename and cleanup __phys_to_dma  (Christoph Hellwig)
The __phys_to_dma vs phys_to_dma distinction isn't exactly obvious. Try to improve the situation by renaming __phys_to_dma to phys_to_dma_unencrypted, and by no longer forcing architectures that want to override phys_to_dma to also provide __phys_to_dma. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
2020-09-11  dma-direct: remove __dma_to_phys  (Christoph Hellwig)
There is no harm in just always clearing the SME encryption bit, while significantly simplifying the interface. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
2020-09-11  dma-direct: remove dma_direct_{alloc,free}_pages  (Christoph Hellwig)
Just merge these helpers into the main dma_direct_{alloc,free} routines, as the additional checks are always false for the two callers. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
2020-09-11  dma-mapping: add (back) arch_dma_mark_clean for ia64  (Christoph Hellwig)
Add back a hook to optimize dcache flushing after reading executable code using DMA. This gets ia64 out of the business of pretending to be dma incoherent just for this optimization. Signed-off-by: Christoph Hellwig <hch@lst.de>
2020-09-11  include: pe.h: Add RISC-V related PE definition  (Atish Patra)
Define RISC-V related machine types. Signed-off-by: Atish Patra <atish.patra@wdc.com> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Link: https://lore.kernel.org/r/20200415195422.19866-3-atish.patra@wdc.com Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-09-11  SUNRPC: remove RC4-HMAC-MD5 support from KerberosV  (Ard Biesheuvel)
The RC4-HMAC-MD5 KerberosV algorithm is based on RFC 4757 [0], which was specifically issued for interoperability with Windows 2000, but was never intended to receive the same level of support. The RFC says:

  The IETF Kerberos community supports publishing this specification as an informational document in order to describe this widely implemented technology. However, while these encryption types provide the operations necessary to implement the base Kerberos specification [RFC4120], they do not provide all the required operations in the Kerberos cryptography framework [RFC3961]. As a result, it is not generally possible to implement potential extensions to Kerberos using these encryption types. The Kerberos encryption type negotiation mechanism [RFC4537] provides one approach for using such extensions even when a Kerberos infrastructure uses long-term RC4 keys.

  Because this specification does not implement operations required by RFC 3961 and because of security concerns with the use of RC4 and MD4 discussed in Section 8, this specification is not appropriate for publication on the standards track.

  The RC4-HMAC encryption types are used to ease upgrade of existing Windows NT environments, provide strong cryptography (128-bit key lengths), and provide exportable (meet United States government export restriction requirements) encryption. This document describes the implementation of those encryption types.

Furthermore, this RFC was re-classified as 'historic' by RFC 8429 [1] in 2018, stating that 'none of the encryption types it specifies should be used'.

Note that other outdated algorithms are left in place (some of which are guarded by CONFIG_SUNRPC_DISABLE_INSECURE_ENCTYPES), so this should only adversely affect interoperability with Windows NT/2000 systems that have not received any updates since 2008 (but are connected to a network nonetheless).

[0] https://tools.ietf.org/html/rfc4757
[1] https://tools.ietf.org/html/rfc8429

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-09-10  bpf: Plug hole in struct bpf_sk_lookup_kern  (Lorenz Bauer)
As Alexei points out, struct bpf_sk_lookup_kern has two 4-byte holes. This leads to suboptimal instructions being generated (IPv4, x86):

  1372  struct bpf_sk_lookup_kern ctx = {
    0xffffffff81b87f30 <+624>: xor    %eax,%eax
    0xffffffff81b87f32 <+626>: mov    $0x6,%ecx
    0xffffffff81b87f37 <+631>: lea    0x90(%rsp),%rdi
    0xffffffff81b87f3f <+639>: movl   $0x110002,0x88(%rsp)
    0xffffffff81b87f4a <+650>: rep stos %rax,%es:(%rdi)
    0xffffffff81b87f4d <+653>: mov    0x8(%rsp),%eax
    0xffffffff81b87f51 <+657>: mov    %r13d,0x90(%rsp)
    0xffffffff81b87f59 <+665>: incl   %gs:0x7e4970a0(%rip)
    0xffffffff81b87f60 <+672>: mov    %eax,0x8c(%rsp)
    0xffffffff81b87f67 <+679>: movzwl 0x10(%rsp),%eax
    0xffffffff81b87f6c <+684>: mov    %ax,0xa8(%rsp)
    0xffffffff81b87f74 <+692>: movzwl 0x38(%rsp),%eax
    0xffffffff81b87f79 <+697>: mov    %ax,0xaa(%rsp)

Fix this by moving around sport and dport. pahole confirms there are no more holes:

  struct bpf_sk_lookup_kern {
        u16                     family;        /*  0  2 */
        u16                     protocol;      /*  2  2 */
        __be16                  sport;         /*  4  2 */
        u16                     dport;         /*  6  2 */
        struct {
                __be32          saddr;         /*  8  4 */
                __be32          daddr;         /* 12  4 */
        } v4;                                  /*  8  8 */
        struct {
                const struct in6_addr * saddr; /* 16  8 */
                const struct in6_addr * daddr; /* 24  8 */
        } v6;                                  /* 16 16 */
        struct sock *           selected_sk;   /* 32  8 */
        bool                    no_reuseport;  /* 40  1 */

        /* size: 48, cachelines: 1, members: 8 */
        /* padding: 7 */
        /* last cacheline: 48 bytes */
  };

The assembly also doesn't contain the pesky rep stos anymore:

  1372  struct bpf_sk_lookup_kern ctx = {
    0xffffffff81b87f60 <+624>: movzwl 0x10(%rsp),%eax
    0xffffffff81b87f65 <+629>: movq   $0x0,0xa8(%rsp)
    0xffffffff81b87f71 <+641>: movq   $0x0,0xb0(%rsp)
    0xffffffff81b87f7d <+653>: mov    %ax,0x9c(%rsp)
    0xffffffff81b87f85 <+661>: movzwl 0x38(%rsp),%eax
    0xffffffff81b87f8a <+666>: movq   $0x0,0xb8(%rsp)
    0xffffffff81b87f96 <+678>: mov    %ax,0x9e(%rsp)
    0xffffffff81b87f9e <+686>: mov    0x8(%rsp),%eax
    0xffffffff81b87fa2 <+690>: movq   $0x0,0xc0(%rsp)
    0xffffffff81b87fae <+702>: movl   $0x110002,0x98(%rsp)
    0xffffffff81b87fb9 <+713>: mov    %eax,0xa0(%rsp)
    0xffffffff81b87fc0 <+720>: mov    %r13d,0xa4(%rsp)

1: https://lore.kernel.org/bpf/CAADnVQKE6y9h2fwX6OS837v-Uf+aBXnT_JXiN_bbo2gitZQ3tA@mail.gmail.com/

Fixes: e9ddbb7707ff ("bpf: Introduce SK_LOOKUP program type with a dedicated attach point")
Suggested-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/bpf/20200910110248.198326-1-lmb@cloudflare.com
2020-09-10  f2fs: Use generic casefolding support  (Daniel Rosenberg)
This switches f2fs over to the generic support provided in the previous patch. Since casefolded dentries behave the same in ext4 and f2fs, we decrease the maintenance burden by unifying them, and any optimizations will immediately apply to both. Signed-off-by: Daniel Rosenberg <drosen@google.com> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2020-09-10  fs: Add standard casefolding support  (Daniel Rosenberg)
This adds general supporting functions for filesystems that use utf8 casefolding. It provides standard dentry_operations and adds the necessary structures in struct super_block to allow this standardization. The new dentry operations are functionally equivalent to the existing operations in ext4 and f2fs, apart from the use of utf8_casefold_hash to avoid an allocation. By providing a common implementation, all users can benefit from any optimizations without needing to port over improvements. Signed-off-by: Daniel Rosenberg <drosen@google.com> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2020-09-10  unicode: Add utf8_casefold_hash  (Daniel Rosenberg)
This adds a case insensitive hash function to allow taking the hash without needing to allocate a casefolded copy of the string. The existing d_hash implementations for casefolding allocate memory within rcu-walk, by avoiding it we can be more efficient and avoid worrying about a failed allocation. Signed-off-by: Daniel Rosenberg <drosen@google.com> Reviewed-by: Gabriel Krisman Bertazi <krisman@collabora.com> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
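For illustration only, a rough sketch of a d_hash operation built on the new helper. utf8_casefold_hash() is the helper this patch adds; needs_casefold() and the super_block s_encoding field come from the companion "fs: Add standard casefolding support" patch, and the exact argument order of the helper is an assumption here:

  static int sketch_ci_d_hash(const struct dentry *dentry, struct qstr *str)
  {
          const struct inode *dir = READ_ONCE(dentry->d_inode);
          struct super_block *sb = dentry->d_sb;

          /* Plain name hash is fine unless the directory is casefolded. */
          if (!dir || !needs_casefold(dir))
                  return 0;

          /* Hash without allocating a casefolded copy -- safe in rcu-walk. */
          return utf8_casefold_hash(sb->s_encoding, dir, str);
  }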
2020-09-10  net: Fix broken NETIF_F_CSUM_MASK spell in netdev_features.h  (Miaohe Lin)
Remove the weird space inside the NETIF_F_CSUM_MASK. Signed-off-by: Miaohe Lin <linmiaohe@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-09-10  tcp: record received TOS value in the request socket  (Wei Wang)
A new field is added to the request sock to record the TOS value received on the listening socket during the 3WHS: when not under SYN flood, it records the TOS value sent in the SYN; when under SYN flood, it records the TOS value sent in the ACK. This is a preparation patch in order to do TOS reflection in a later commit. Signed-off-by: Wei Wang <weiwan@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-09-10  net: manage napi add/del idempotence explicitly  (Jakub Kicinski)
To RCUify napi->dev_list we need to replace list_del_init() with list_del_rcu(). There is no _init() version for RCU for obvious reasons. Up until now netif_napi_del() was idempotent, so to make sure it remains such, add a bit which is set when the NAPI is listed and cleared when it is removed. Since we don't expect multiple calls to netif_napi_add() to be correct, add a warning on that side. Now that napi_hash_add / napi_hash_del are only called by napi add / del we can actually steal its bit. We just need to make sure the hash node is initialized correctly. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-09-10  net: remove napi_hash_del() from driver-facing API  (Jakub Kicinski)
We allow drivers to call napi_hash_del() before calling netif_napi_del() to batch RCU grace periods. This makes the API asymmetric and leaks internal implementation details. Soon we will want the grace period to protect more than just the NAPI hash table.

Restructure the API and have drivers call a new function - __netif_napi_del() - if they want to take care of RCU waits. Note that only core was checking the return status from napi_hash_del(), so the new helper does not report if the NAPI was actually deleted.

Some notes on driver oddness:
 - veth observed the grace period before calling netif_napi_del() but that should not matter
 - myri10ge observed normal RCU flavor
 - bnx2x and enic did not actually observe the grace period (unless they did so implicitly)
 - virtio_net and enic only unhashed Rx NAPIs

The last two points seem to indicate that the calls to napi_hash_del() were a left over rather than an optimization. Regardless, it's easy enough to correct them.

This patch may introduce extra synchronize_net() calls for interfaces which set NAPI_STATE_NO_BUSY_POLL and depend on free_netdev() to call netif_napi_del(). This seems inevitable since we want to use RCU for netpoll dev->napi_list traversal, and almost no drivers set IFF_DISABLE_NETPOLL.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
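For drivers that do want to batch the waits, the restructured API boils down to something like the following (a hedged sketch; the priv/ring names are made up):

  /* Unlink and unhash every NAPI first, without waiting per instance. */
  for (i = 0; i < priv->num_rx_rings; i++)
          __netif_napi_del(&priv->rx_rings[i].napi);

  /*
   * A single grace period then covers every instance removed above;
   * calling plain netif_napi_del() would have implied one wait per NAPI.
   */
  synchronize_net();

  /* Only now is it safe to free the memory backing the napi structs. */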
2020-09-10  powercap: make documentation reflect code  (Amit Kucheria)
Fix up the documentation of the struct powercap_control_type members to match the code. Also fixup stray whitespace. Signed-off-by: Amit Kucheria <amitk@kernel.org> [ rjw: Changelog edits ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2020-09-10  PM: <linux/device.h>: fix @em_pd kernel-doc warning  (Randy Dunlap)
Fix kernel-doc warning in <linux/device.h>:

  ../include/linux/device.h:613: warning: Function parameter or member 'em_pd' not described in 'device'

Fixes: 1bc138c62295 ("PM / EM: add support for other devices than CPUs in Energy Model")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Reviewed-by: Lukasz Luba <lukasz.luba@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2020-09-10  soc: qcom-geni-se: Don't use relaxed writes when writing commands  (Douglas Anderson)
Writing the command is the final step in kicking off a transfer. Let's use writel() to ensure that any other memory accesses are done before the command kicks off. It's expected that this is mostly relevant if we're in DMA mode, but since it doesn't appear to regress performance in a measurable way [1] even in PIO mode, and it's easier to reason about, let's just always use it. NOTE: this patch came about due to code inspection. No actual problems were observed that this patch fixes. [1] Tested by timing "flashrom -p ec" on a Chromebook which stresses GENI SPI a lot. Reviewed-by: Mukesh Kumar Savaliya <msavaliy@codeaurora.org> Reviewed-by: Akash Asthana <akashast@codeaurora.org> Reviewed-by: Stephen Boyd <swboyd@chromium.org> Suggested-by: Stephen Boyd <swboyd@chromium.org> Signed-off-by: Douglas Anderson <dianders@chromium.org> Link: https://lore.kernel.org/r/20200722150113.1.Ia50ab5cb8a6d3a73d302e6bdc25542d48ffd27f4@changeid Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
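In code terms the change is roughly the following inside the geni_se command helpers (register name as found in <linux/qcom-geni-se.h>; shown as a sketch, not the exact diff):

  /* Before: relaxed write gives no ordering vs. earlier FIFO/DMA setup. */
  writel_relaxed(m_cmd, se->base + SE_GENI_M_CMD0);

  /*
   * After: writel() ensures all prior memory accesses are visible to the
   * hardware before the command that actually kicks off the transfer.
   */
  writel(m_cmd, se->base + SE_GENI_M_CMD0);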
2020-09-10  test_firmware: Test platform fw loading on non-EFI systems  (Kees Cook)
On non-EFI systems, it wasn't possible to test the platform firmware loader because it will have never set "checked_fw" during __init. Instead, allow the test code to override this check. Additionally split the declarations into a private symbol namespace so there is greater enforcement of the symbol visibility. Fixes: 548193cba2a7 ("test_firmware: add support for firmware_request_platform") Cc: stable@vger.kernel.org Signed-off-by: Kees Cook <keescook@chromium.org> Acked-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20200909225354.3118328-1-keescook@chromium.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
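The private symbol namespace mentioned above relies on the kernel's EXPORT_SYMBOL_NS_GPL()/MODULE_IMPORT_NS() machinery; a minimal sketch with placeholder symbol and namespace names (not the ones the patch actually uses):

  /*
   * Exporting side: the symbol is only visible to modules that import
   * the namespace, keeping the test hook out of the general-purpose API.
   */
  EXPORT_SYMBOL_NS_GPL(fw_loader_test_override, TEST_FIRMWARE);

  /*
   * Importing side (the test module) has to opt in explicitly, otherwise
   * modpost flags the unresolved namespace at build time.
   */
  MODULE_IMPORT_NS(TEST_FIRMWARE);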
2020-09-10  objtool: Make unwind hint definitions available to other architectures  (Julien Thierry)
Unwind hints are useful to provide objtool with information about stack states in non-standard functions/code. While the type of information being provided might be very arch specific, the mechanism to provide the information can be useful for other architectures. Move the relevant unwind hint definitions to a header that all architectures can see. [ jpoimboe: REGS_IRET -> REGS_PARTIAL ] Signed-off-by: Julien Thierry <jthierry@redhat.com> Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
2020-09-10  objtool: Only include valid definitions depending on source file type  (Julien Thierry)
Header include/linux/objtool.h contains both C and assembly definitions that are visible regardless of the file including them. Place the definitions under __ASSEMBLY__ conditionals so that only the valid ones are seen by each source file type. Reviewed-by: Miroslav Benes <mbenes@suse.cz> Signed-off-by: Julien Thierry <jthierry@redhat.com> Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
2020-09-10  objtool: Rename frame.h -> objtool.h  (Julien Thierry)
Header frame.h is getting more code annotations to help objtool analyze object files. Rename the file to objtool.h. [ jpoimboe: add objtool.h to MAINTAINERS ] Signed-off-by: Julien Thierry <jthierry@redhat.com> Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
2020-09-10  block: remove check_disk_change  (Christoph Hellwig)
Remove the now unused check_disk_change helper. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-09-10  ide-gd: stop using the disk events mechanism  (Christoph Hellwig)
ide-gd is only using the disk events mechanism to be able to force an invalidation and partition scan on opening removable media. Just open code the logic without involving the block layer. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-09-10  block: add a bdev_check_media_change helper  (Christoph Hellwig)
Like check_disk_change, except that it does not call ->revalidate_disk but leaves that to the caller. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
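A hedged sketch of the resulting open-path pattern in a block driver (the driver function names are invented for illustration):

  static void sketchdev_revalidate(struct gendisk *disk);   /* driver-specific */

  static int sketchdev_open(struct block_device *bdev, fmode_t mode)
  {
          /*
           * Returns true if a media change event was pending and got
           * cleared; unlike check_disk_change(), revalidation is now
           * the caller's job.
           */
          if (bdev_check_media_change(bdev))
                  sketchdev_revalidate(bdev->bd_disk);

          return 0;
  }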
2020-09-10  swiotlb: Declare swiotlb_late_init_with_default_size() in header  (Andy Shevchenko)
Compiler is not happy about one function prototype:

  CC      kernel/dma/swiotlb.o
  kernel/dma/swiotlb.c:275:1: warning: no previous prototype for ‘swiotlb_late_init_with_default_size’ [-Wmissing-prototypes]
    275 | swiotlb_late_init_with_default_size(size_t default_size)
        | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Since it's used outside of the module, move its declaration to the header from the user.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2020-09-10  dax: Create a range version of dax_layout_busy_page()  (Vivek Goyal)
The virtiofs device has a range of memory which is mapped into file inodes using dax. This memory is mapped in qemu on the host and maps different sections of a real file on the host. The size of this memory is limited (determined by the administrator) and, depending on filesystem size, we will soon reach a situation where all the memory is in use and we need to reclaim some.

As part of the reclaim process, we will need to make sure that there are no active references to pages (taken by get_user_pages()) on the memory range we are trying to reclaim. I am planning to use dax_layout_busy_page() for this. But in its current form this is per inode and scans through all the pages of the inode. We want to reclaim only a portion of memory (say a 2MB page), so we want to make sure that only that 2MB range of pages does not have any references (and we don't want to unmap all the pages of the inode).

Hence, create a range version of this function named dax_layout_busy_page_range() which can be used to pass a range which needs to be unmapped.

Cc: Dan Williams <dan.j.williams@intel.com>
Cc: linux-nvdimm@lists.01.org
Cc: Jan Kara <jack@suse.cz>
Cc: Vishal L Verma <vishal.l.verma@intel.com>
Cc: "Weiny, Ira" <ira.weiny@intel.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
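A rough usage sketch, assuming the prototype mirrors dax_layout_busy_page() with an added byte range (the exact start/end semantics are whatever the patch defines):

  struct page *page;

  /* Check only the range being reclaimed instead of the whole inode. */
  page = dax_layout_busy_page_range(inode->i_mapping, start, end);
  if (page) {
          /*
           * Some page in the range still has references taken via
           * get_user_pages(); wait or back off before reclaiming it.
           */
  }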
2020-09-10  Merge branch 'virtio-shm' into for-next  (Miklos Szeredi)
Pull virtio shared memory region patches for virtiofs shared memory (DAX) support.
2020-09-10  perf/core: Pull pmu::sched_task() into perf_event_context_sched_out()  (Kan Liang)
The pmu::sched_task() is a context switch callback. It passes the cpuctx->task_ctx as a parameter to the lower code. To find the cpuctx->task_ctx, the current code iterates a cpuctx list. The same context will be iterated in perf_event_context_sched_out() soon. Sharing the cpuctx->task_ctx avoids the unnecessary iteration of the cpuctx list.

The pmu::sched_task() is also required for the optimization case for equivalent contexts.

The task_ctx_sched_out() will eventually disable and reenable the PMU when scheduling out events. Adding perf_pmu_disable() and perf_pmu_enable() around task_ctx_sched_out() doesn't break anything.

Drop the cpuctx->ctx.lock for the pmu::sched_task(). The lock is for the per-CPU context, which is not necessary for the per-task context schedule.

No one uses sched_cb_entry, perf_sched_cb_usages, sched_cb_list, and perf_pmu_sched_task() any more.

Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200821195754.20159-2-kan.liang@linux.intel.com
2020-09-10  seqlock: PREEMPT_RT: Do not starve seqlock_t writers  (Ahmed S. Darwish)
On PREEMPT_RT, seqlock_t is transformed to a sleeping lock that does not disable preemption. A seqlock_t reader can thus preempt its write side section and spin for the entire scheduler tick. If that reader belongs to a real-time scheduling class, it can spin forever and the kernel will livelock. To break this livelock possibility on PREEMPT_RT, implement seqlock_t in terms of "seqcount_spinlock_t" instead of plain "seqcount_t". Beside its pure annotational value, this will leverage the existing seqcount_LOCKNAME_t PREEMPT_RT anti-livelock mechanisms, without adding any extra code. Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20200904153231.11994-6-a.darwish@linutronix.de
2020-09-10  seqlock: seqcount_LOCKNAME_t: Introduce PREEMPT_RT support  (Ahmed S. Darwish)
Preemption must be disabled before entering a sequence counter write side critical section. Otherwise the read side section can preempt the write side section and spin for the entire scheduler tick. If that reader belongs to a real-time scheduling class, it can spin forever and the kernel will livelock. Disabling preemption cannot be done for PREEMPT_RT though: it can lead to higher latencies, and the write side sections will not be able to acquire locks which become sleeping locks (e.g. spinlock_t). To remain preemptible, while avoiding a possible livelock caused by the reader preempting the writer, use a different technique: let the reader detect if a seqcount_LOCKNAME_t writer is in progress. If that's the case, acquire then release the associated LOCKNAME writer serialization lock. This will allow any possibly-preempted writer to make progress until the end of its writer serialization lock critical section. Implement this lock-unlock technique for all seqcount_LOCKNAME_t with an associated (PREEMPT_RT) sleeping lock. References: 55f3560df975 ("seqlock: Extend seqcount API with associated locks") Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20200519214547.352050-1-a.darwish@linutronix.de
2020-09-10  seqlock: seqcount_t: Implement all read APIs as statement expressions  (Ahmed S. Darwish)
The sequence counters read APIs are implemented as CPP macros, so they can take either seqcount_t or any of the seqcount_LOCKNAME_t variants. Such macros then get *directly* transformed to internal C functions that only take plain seqcount_t. Further commits need access to seqcount_LOCKNAME_t inside of the actual read APIs code. Thus transform all of the seqcount read APIs to pure GCC statement expressions instead. This will not break type-safety: all of the transformed APIs resolve to a _Generic() selection that does not have a "default" case. This will also not affect the transformed APIs readability: previously added kernel-doc above all of seqlock.h functions makes the expectations quite clear for call-site developers. Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20200904153231.11994-4-a.darwish@linutronix.de
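The construct being adopted, reduced to a self-contained toy example (not the actual seqlock.h macros): a _Generic() selection with no default case, wrapped in a GCC statement expression, so one macro accepts several counter types while staying type-safe:

  typedef struct { unsigned sequence; } toy_seqcount_t;
  typedef struct { toy_seqcount_t seqcount; /* + associated lock */ } toy_seqcount_spinlock_t;

  static inline unsigned toy_value_plain(const toy_seqcount_t *s)
  {
          return s->sequence;
  }

  static inline unsigned toy_value_spinlock(const toy_seqcount_spinlock_t *s)
  {
          return s->seqcount.sequence;
  }

  /* No "default:" case: passing an unsupported type is a compile error. */
  #define toy_read_value(s)                                            \
  ({                                                                   \
          unsigned __ret = _Generic(*(s),                              \
                  toy_seqcount_t:          toy_value_plain,            \
                  toy_seqcount_spinlock_t: toy_value_spinlock)(s);     \
          __ret;                                                       \
  })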
2020-09-10  seqlock: Use unique prefix for seqcount_t property accessors  (Ahmed S. Darwish)
At seqlock.h, the following set of functions:

 - __seqcount_ptr()
 - __seqcount_preemptible()
 - __seqcount_assert()

act as plain seqcount_t "property" accessors. Meanwhile, the following group:

 - __seqcount_ptr()
 - __seqcount_lock_preemptible()
 - __seqcount_assert_lock_held()

act as the equivalent set, but in the generic form, taking either seqcount_t or any of the seqcount_LOCKNAME_t variants.

This is quite confusing, especially the first member where it is called exactly the same in both groups. Differentiate the first group by using "__seqprop" as prefix, and also use that same prefix for all of seqcount_LOCKNAME_t property accessors. While at it, constify the property accessors first parameter when appropriate.

References: 55f3560df975 ("seqlock: Extend seqcount API with associated locks")
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200904153231.11994-3-a.darwish@linutronix.de
2020-09-10  seqlock: seqcount_LOCKNAME_t: Standardize naming convention  (Ahmed S. Darwish)
At seqlock.h, sequence counters with associated locks are either called seqcount_LOCKNAME_t, seqcount_LOCKTYPE_t, or seqcount_locktype_t. Standardize on seqcount_LOCKNAME_t for all instances in comments, kernel-doc, and SEQCOUNT_LOCKNAME() generative macro parameters. Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20200904153231.11994-2-a.darwish@linutronix.de
2020-09-10  seqlock: seqcount latch APIs: Only allow seqcount_latch_t  (Ahmed S. Darwish)
All latch sequence counter call-sites have now been converted from plain seqcount_t to the new seqcount_latch_t data type. Enforce type-safety by modifying seqlock.h latch APIs to only accept seqcount_latch_t. Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20200827114044.11173-9-a.darwish@linutronix.de
2020-09-10  rbtree_latch: Use seqcount_latch_t  (Ahmed S. Darwish)
Latch sequence counters have unique read and write APIs, and thus seqcount_latch_t was recently introduced at seqlock.h. Use that new data type instead of plain seqcount_t. This adds the necessary type-safety and ensures that only latching-safe seqcount APIs are to be used. Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20200827114044.11173-8-a.darwish@linutronix.de
2020-09-10  seqlock: Introduce seqcount_latch_t  (Ahmed S. Darwish)
Latch sequence counters are a multiversion concurrency control mechanism where the seqcount_t counter even/odd value is used to switch between two copies of protected data. This allows the seqcount_t read path to safely interrupt its write side critical section (e.g. from NMIs).

Initially, latch sequence counters were implemented as a single write function above plain seqcount_t: raw_write_seqcount_latch(). The read side was expected to use plain seqcount_t raw_read_seqcount(). A specialized latch read function, raw_read_seqcount_latch(), was later added. It became the standardized way for latch read paths. Due to the dependent load, it has one read memory barrier less than the plain seqcount_t raw_read_seqcount() API.

Only raw_write_seqcount_latch() and raw_read_seqcount_latch() should be used with latch sequence counters. Having *unique* read and write path APIs means that latch sequence counters are actually a data type of their own -- just inappropriately overloading plain seqcount_t.

Introduce seqcount_latch_t. This adds type-safety and ensures that only the correct latch-safe APIs are to be used. Not to break bisection, let the latch APIs also accept plain seqcount_t or seqcount_raw_spinlock_t. After converting all call sites to seqcount_latch_t, only that new data type will be allowed.

References: 9b0fd802e8c0 ("seqcount: Add raw_write_seqcount_latch()")
References: 7fc26327b756 ("seqlock: Introduce raw_read_seqcount_latch()")
References: aadd6e5caaac ("time/sched_clock: Use raw_read_seqcount_latch()")
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200827114044.11173-4-a.darwish@linutronix.de
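As background, the even/odd latch scheme the new type encodes, sketched in the shape of the long-standing seqlock.h example. The payload struct, the surrounding functions, the external writer serialization and the retry helper name are assumptions for illustration, not a specific call-site:

  struct payload { u64 a, b; };

  struct latched_data {
          seqcount_latch_t  seq;
          struct payload    copies[2];   /* two versions of the data */
  };

  /* Callers must serialize writers externally (e.g. with a spinlock). */
  static void latched_update(struct latched_data *l, const struct payload *new)
  {
          raw_write_seqcount_latch(&l->seq);   /* readers now see copies[1] */
          l->copies[0] = *new;
          raw_write_seqcount_latch(&l->seq);   /* readers now see copies[0] */
          l->copies[1] = *new;
  }

  /* Safe even from NMI context: readers never spin on a writer. */
  static struct payload latched_read(struct latched_data *l)
  {
          struct payload p;
          unsigned int seq;

          do {
                  seq = raw_read_seqcount_latch(&l->seq);
                  p = l->copies[seq & 0x01];   /* low bit picks the stable copy */
          } while (read_seqcount_latch_retry(&l->seq, seq));

          return p;
  }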
2020-09-10  virtio: Add get_shm_region method  (Sebastien Boeuf)
Virtio defines 'shared memory regions' that provide a continuously shared region between the host and guest. Provide a method to find a particular region on a device. Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Cc: kvm@vger.kernel.org Cc: virtualization@lists.linux-foundation.org Cc: "Michael S. Tsirkin" <mst@redhat.com> Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
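A hedged sketch of a driver-side lookup through the new method's virtio_config helper (field and helper names as described in this series; the region id is device-specific and used here only as a placeholder):

  struct virtio_shm_region region;

  /* Ask the transport whether shared memory region 'id' exists on this device. */
  if (!virtio_get_shm_region(vdev, &region, 0 /* device-defined region id */))
          return -ENODEV;

  dev_dbg(&vdev->dev, "shm region: addr 0x%llx len 0x%llx\n",
          region.addr, region.len);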
2020-09-10  soundwire: bus: use quirk to filter out invalid parity errors  (Pierre-Louis Bossart)
If a Slave device reports with a quirk that its initial parity check may be incorrect, filter it but keep the parity checks active in steady state. Signed-off-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> Signed-off-by: Bard Liao <yung-chuan.liao@linux.intel.com> Reviewed-by: Kai Vehmanen <kai.vehmanen@linux.intel.com> Reviewed-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com> Link: https://lore.kernel.org/r/20200908134521.6781-5-yung-chuan.liao@linux.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-09-10  soundwire: slave: add first_interrupt_done status  (Pierre-Louis Bossart)
Some Slaves report incorrect information in their interrupt status registers after a master/bus reset. Track the initial interrupt handling so that quirks can be introduced to filter out incorrect information while keeping interrupts enabled in steady state. Signed-off-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> Signed-off-by: Bard Liao <yung-chuan.liao@linux.intel.com> Reviewed-by: Kai Vehmanen <kai.vehmanen@linux.intel.com> Reviewed-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com> Link: https://lore.kernel.org/r/20200908134521.6781-4-yung-chuan.liao@linux.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-09-10  ASoC/soundwire: bus: use property to set interrupt masks  (Pierre-Louis Bossart)
Add a slave-level property and program the SCP_INT1_MASK as desired by the codec driver. Since there is no DisCo property this has to be an implementation-specific firmware property or hard-coded in the driver. The only functionality change is that implementation-defined interrupts are no longer set for amplifiers - those interrupts are typically for jack detection or acoustic event detection/hotwording. Signed-off-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> Signed-off-by: Bard Liao <yung-chuan.liao@linux.intel.com> Tested-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org> Reviewed-by: Kai Vehmanen <kai.vehmanen@linux.intel.com> Reviewed-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com> Acked-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20200908134521.6781-2-yung-chuan.liao@linux.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-09-09  net: qed: Disable aRFS for NPAR and 100G  (Dmitry Bogdanov)
In CMT and NPAR the PF is unknown when the GFS block processes the packet. Therefore the searcher cannot be used, as it has a per-PF database, and thus aRFS must be disabled. Fixes: d51e4af5c209 ("qed: aRFS infrastructure support") Signed-off-by: Manish Chopra <manishc@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com> Signed-off-by: Dmitry Bogdanov <dbogdanov@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-09-09  Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-next  (David S. Miller)
Pablo Neira Ayuso says:

====================
Netfilter updates for net-next

The following patchset contains Netfilter updates for net-next:

1) Rewrite inner header IPv6 in ICMPv6 messages in ip6t_NPT, from Michael Zhou.
2) do_ip_vs_set_ctl() dereferences uninitialized value, from Peilin Ye.
3) Support for userdata in tables, from Jose M. Guisado.
4) Do not increment ct error and invalid stats at the same time, from Florian Westphal.
5) Remove ct ignore stats, also from Florian.
6) Add ct stats for clash resolution, from Florian Westphal.
7) Bump reference counter bump on ct clash resolution only, this is safe because bucket lock is held, again from Florian.
8) Use ip_is_fragment() in xt_HMARK, from YueHaibing.
9) Add wildcard support for nft_socket, from Balazs Scheidler.
10) Remove superfluous IPVS dependency on iptables, from Yaroslav Bolyukin.
11) Remove unused definition in ebt_stp, from Wang Hai.
12) Replace CONFIG_NFT_CHAIN_NAT_{IPV4,IPV6} by CONFIG_NFT_NAT in selftests/net, from Fabian Frederick.
13) Add userdata support for nft_object, from Jose M. Guisado.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2020-09-09  net: add __must_check to skb_put_padto()  (Eric Dumazet)
skb_put_padto() and __skb_put_padto() callers must check return values or risk use-after-free. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
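With __must_check in place, a typical xmit path has to handle the error case explicitly; on failure the skb has already been freed, so the only safe action is to account the drop and return. A hedged sketch:

  /* Pad runt frames to the minimum Ethernet frame size. */
  if (skb_put_padto(skb, ETH_ZLEN)) {
          /* skb_put_padto() freed the skb on error: do not touch it again. */
          dev->stats.tx_dropped++;
          return NETDEV_TX_OK;
  }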
2020-09-09  Merge series "opp: Unconditionally call dev_pm_opp_of_remove_table()" from Viresh Kumar <viresh.kumar@linaro.org>  (Mark Brown)
Hello,

This cleans up some of the user code around calls to dev_pm_opp_of_remove_table(). All the patches can be picked by respective maintainers directly except for the last patch, which needs the previous two to get merged first. These are based for 5.9-rc1.

Rajendra, since most of these changes are related to qcom stuff, it would be great if you can give them a try. I wasn't able to test them due to lack of hardware.

Ulf, I had to revise the sdhci patch, sorry about that. Please pick this one. Diff between V1 and V2 is mentioned in each of the patches separately.

Viresh Kumar (8):
  cpufreq: imx6q: Unconditionally call dev_pm_opp_of_remove_table()
  drm/lima: Unconditionally call dev_pm_opp_of_remove_table()
  drm/msm: Unconditionally call dev_pm_opp_of_remove_table()
  mmc: sdhci-msm: Unconditionally call dev_pm_opp_of_remove_table()
  spi: spi-geni-qcom: Unconditionally call dev_pm_opp_of_remove_table()
  spi: spi-qcom-qspi: Unconditionally call dev_pm_opp_of_remove_table()
  tty: serial: qcom_geni_serial: Unconditionally call dev_pm_opp_of_remove_table()
  qcom-geni-se: remove has_opp_table

 drivers/cpufreq/imx6q-cpufreq.c         | 10 ++--------
 drivers/gpu/drm/lima/lima_devfreq.c     |  6 +-----
 drivers/gpu/drm/lima/lima_devfreq.h     |  1 -
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 14 +++++---------
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h |  1 -
 drivers/gpu/drm/msm/dsi/dsi_host.c      |  8 ++------
 drivers/mmc/host/sdhci-msm.c            | 14 +++++---------
 drivers/spi/spi-geni-qcom.c             | 13 +++++--------
 drivers/spi/spi-qcom-qspi.c             | 15 ++++++---------
 drivers/tty/serial/qcom_geni_serial.c   | 13 +++++--------
 include/linux/qcom-geni-se.h            |  2 --
 11 files changed, 31 insertions(+), 66 deletions(-)

base-commit: f4d51dffc6c01a9e94650d95ce0104964f8ae822
--
2.25.0.rc1.19.g042ed3e048af
2020-09-09  gpio: tc35894: Disable Direct KBD interrupts to enable gpio irq  (dillon min)
On tc35894, direct keypad interrupts have to be disabled to make the general purpose interrupt functionality work. Otherwise, after chip reset, IRQST(0x91) always reads 0x20 and IRQN stays low and cannot be cleared. Configure DIRECTx to enable general purpose gpio mode, else reading the GPIOMISx register always returns zero in the irq routine. Verified on tc35894; needs more testing on other tc3589x. Signed-off-by: dillon min <dillon.minfei@gmail.com> Acked-by: Lee Jones <lee.jones@linaro.org> Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>