path: root/include
Age  Commit message  Author
2020-03-16  drm/sched: implement and export drm_sched_pick_best  (Nirmoy Das)
Remove drm_sched_entity_get_free_sched() and use its logic of picking the least loaded drm scheduler from a drm scheduler list to implement drm_sched_pick_best(). This patch also exports drm_sched_pick_best() so that it can be utilized by other drm drivers.

Signed-off-by: Nirmoy Das <nirmoy.das@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
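A minimal standalone sketch of the least-loaded selection that drm_sched_pick_best() performs, reduced to plain C with stub types; the field names (score, ready) are assumptions based on the commit description, not the exact kernel structures:

  #include <stddef.h>
  #include <limits.h>

  struct drm_gpu_scheduler {
          unsigned int score;   /* current load on this scheduler */
          int ready;            /* scheduler is accepting jobs */
  };

  static struct drm_gpu_scheduler *
  pick_best(struct drm_gpu_scheduler **sched_list, unsigned int num_sched)
  {
          struct drm_gpu_scheduler *best = NULL;
          unsigned int lowest = UINT_MAX;

          for (unsigned int i = 0; i < num_sched; i++) {
                  if (!sched_list[i]->ready)
                          continue;
                  /* keep the scheduler with the smallest load score */
                  if (sched_list[i]->score < lowest) {
                          lowest = sched_list[i]->score;
                          best = sched_list[i];
                  }
          }
          return best;
  }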
2020-03-16  Revert "drm/scheduler: improve job distribution with multiple queues"  (changzhu)
Revert this patch to avoid an amdgpu_test compute hang problem on Picasso.

This reverts commit 56822db194232c089601728d68ed078dccb97f8b.

Signed-off-by: changzhu <Changfeng.Zhu@amd.com>
Reviewed-by: Feifei Xu <Feifei.Xu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2020-03-16  kernfs: Add option to enable user xattrs  (Daniel Xu)
User extended attributes are useful as metadata storage for kernfs consumers like cgroups. Especially in the case of cgroups, it is useful to have a central metadata store that multiple processes/services can use to coordinate actions.

A concrete example is userspace out-of-memory killers. We want to let delegated cgroup subtree owners (running as non-root) say "please avoid killing this cgroup". This is especially important for desktop Linux, as delegated subtree owners are less likely to run as root.

This patch introduces a new flag, KERNFS_ROOT_SUPPORT_USER_XATTR, that lets kernfs consumers enable user xattr support. An initial limit of 128 entries or 128KB -- whichever is hit first -- is placed per cgroup, because xattrs come from kernel memory and we don't want to let unprivileged users accidentally eat up too much kernel memory.

Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
Acked-by: Chris Down <chris@chrisdown.name>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Tejun Heo <tj@kernel.org>
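A minimal sketch of how a kernfs consumer would opt in, modeled on cgroup's kernfs_create_root() call; the scops/priv arguments and the wrapper function here are placeholders, not kernel code:

  #include <linux/kernfs.h>

  struct kernfs_root *example_init(struct kernfs_syscall_ops *scops, void *priv)
  {
          /* KERNFS_ROOT_SUPPORT_USER_XATTR enables user.* xattrs on nodes */
          return kernfs_create_root(scops,
                                    KERNFS_ROOT_CREATE_DEACTIVATED |
                                    KERNFS_ROOT_SUPPORT_USER_XATTR,
                                    priv);
  }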
2020-03-16  kernfs: Add removed_size out param for simple_xattr_set  (Daniel Xu)
This helps set up size accounting in the next commit. Without this out param, it's difficult to find out the removed xattr size without taking a lock for longer and walking the xattr linked list twice.

Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
Acked-by: Chris Down <chris@chrisdown.name>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Tejun Heo <tj@kernel.org>
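A sketch of the assumed shape of the updated helper after this change (not the verbatim kernel prototype): the new out param reports how many bytes the replaced or removed xattr occupied, so the caller can adjust its size accounting without re-walking the list.

  #include <linux/xattr.h>

  int example_set(struct simple_xattrs *xattrs, const char *name,
                  const void *value, size_t size, int flags)
  {
          ssize_t removed_size = 0;
          int ret = simple_xattr_set(xattrs, name, value, size, flags,
                                     &removed_size);

          /* removed_size >= 0 when an existing entry was replaced/removed */
          return ret;
  }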
2020-03-16  Merge series "ALSA: compress: Add wma, alac and ape support" from Vinod Koul <vkoul@kernel.org>  (Mark Brown)
This series adds more WMA profiles and WMA decoder parameters to UAPI, and then support for these in the qcom driver. It also adds FLAC and APE IDs and decoder parameters to UAPI, and then support in the qcom driver. This was tested on a Dragonboard RB3. Last, it bumps up the compressed version so that userspace can check for the support.

Since the series touches compress uapi and asoc, it would make sense to go thru the asoc tree with acks.

Changes in v3:
 - add r-b from Srini
 - use macros for FLAC channel layout tags

Changes in v2:
 - use bitflags for wma profiles

Vinod Koul (9):
  ALSA: compress: add wma codec profiles
  ALSA: compress: Add wma decoder params
  ASoC: qcom: q6asm: pass codec profile to q6asm_open_write
  ASoC: qcom: q6asm: add support to wma config
  ASoC: qcom: q6asm-dai: add support to wma decoder
  ALSA: compress: add alac & ape decoder params
  ASoC: qcom: q6asm: add support for alac and ape configs
  ASoC: qcom: q6asm-dai: add support for ALAC and APE decoders
  ALSA: compress: bump the version

 include/uapi/sound/compress_offload.h |   2 +-
 include/uapi/sound/compress_params.h  |  37 +++-
 sound/soc/qcom/qdsp6/q6asm-dai.c      | 139 ++++++++++++++-
 sound/soc/qcom/qdsp6/q6asm.c          | 243 +++++++++++++++++++++++++-
 sound/soc/qcom/qdsp6/q6asm.h          |  51 +++++-
 5 files changed, 465 insertions(+), 7 deletions(-)

--
2.24.1
2020-03-16  ASoC: soc-dai: return proper error for get_sdw_stream()  (Srinivas Kandagatla)
snd_soc_dai_get_sdw_stream() returns NULL if the dai does not support this callback, which is not very useful: the caller cannot differentiate between an error and an unsupported call for the dai. Return -ENOTSUPP in cases where this callback is not supported.

Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Link: https://lore.kernel.org/r/20200316151110.2580-1-srinivas.kandagatla@linaro.org
Signed-off-by: Mark Brown <broonie@kernel.org>
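A sketch of a caller after this change, assuming the error is returned as an ERR_PTR-encoded pointer so the usual IS_ERR() idiom applies; the wrapper function is illustrative:

  #include <linux/err.h>
  #include <sound/soc-dai.h>

  void *example_get_stream(struct snd_soc_dai *dai, int direction)
  {
          void *stream = snd_soc_dai_get_sdw_stream(dai, direction);

          /* distinguishes "unsupported/failed" from a valid stream */
          if (IS_ERR(stream))
                  return NULL;
          return stream;
  }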
2020-03-16  ALSA: compress: bump the version  (Vinod Koul)
We have added support for a bunch of new decoders and parameters for decoders. To help users detect this support, bump the version up to 0.2.0.

Signed-off-by: Vinod Koul <vkoul@kernel.org>
Reviewed-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20200316055221.1944464-10-vkoul@kernel.org
Signed-off-by: Mark Brown <broonie@kernel.org>
2020-03-16  ALSA: compress: add alac & ape decoder params  (Vinod Koul)
Add ALAC (Apple Lossless Audio Codec) and APE (Monkey's Audio, another lossless codec) defines and the parameters required to configure them.

Signed-off-by: Vinod Koul <vkoul@kernel.org>
Reviewed-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20200316055221.1944464-7-vkoul@kernel.org
Signed-off-by: Mark Brown <broonie@kernel.org>
2020-03-16  ALSA: compress: Add wma decoder params  (Vinod Koul)
Some WMA decoders, such as WMAv10, need additional encoder option parameters, so add these as WMA decoder params.

Signed-off-by: Vinod Koul <vkoul@kernel.org>
Reviewed-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20200316055221.1944464-3-vkoul@kernel.org
Signed-off-by: Mark Brown <broonie@kernel.org>
2020-03-16  ALSA: compress: add wma codec profiles  (Vinod Koul)
Some codec profiles were missing for WMA, like WMA9/10 Lossless and WMA10 Pro, so add these profiles.

Signed-off-by: Vinod Koul <vkoul@kernel.org>
Reviewed-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20200316055221.1944464-2-vkoul@kernel.org
Signed-off-by: Mark Brown <broonie@kernel.org>
2020-03-16  Merge tag 'kvm-s390-next-5.7-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD  (Paolo Bonzini)
KVM: s390: Features and Enhancements for 5.7 part 1

1. Allow disabling the GISA
2. Protected virtual machines

Protected VMs (PVM) are KVM VMs where KVM can't access the VM's state, like guest memory and guest registers, anymore. Instead, the PVMs are mostly managed by a new entity called the Ultravisor (UV), which provides an API so KVM and the PV can request management actions.

PVMs are encrypted at rest and protected from hypervisor access while running. They switch from normal operation into protected mode, so we can still use the standard boot process to load an encrypted blob and then move it into protected mode. Rebooting is only possible by passing through unprotected/normal mode and switching to protected mode again.

One mm-related patch will go via Andrew's mm tree (mm/gup/writeback: add callbacks for inaccessible pages).
2020-03-16  KVM: Drop largepages_enabled and its accessor/mutator  (Sean Christopherson)
Drop largepages_enabled, kvm_largepages_enabled() and kvm_disable_largepages() now that all users are gone.

Note, largepages_enabled was an x86-only flag that got left in common KVM code when KVM gained support for multiple architectures.

No functional change intended.

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16  KVM: Drop gfn_to_pfn_atomic()  (Peter Xu)
It's never used anywhere now.

Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16  KVM: x86: enable dirty log gradually in small chunks  (Jay Zhou)
It could take kvm->mmu_lock for an extended period of time when enabling dirty log for the first time. The main cost is to clear all the D-bits of last level SPTEs. This situation can benefit from manual dirty log protect as well, which can reduce the mmu_lock time taken. The sequence is like this:

1. Initialize all the bits of the dirty bitmap to 1 when enabling dirty log for the first time
2. Only write protect the huge pages
3. KVM_GET_DIRTY_LOG returns the dirty bitmap info
4. KVM_CLEAR_DIRTY_LOG will clear the D-bit for each of the leaf level SPTEs gradually in small chunks

Under an Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz environment, I did some tests with a 128G Windows VM and counted the time taken by memory_global_dirty_log_start; here are the numbers:

  VM Size    Before    After optimization
  128G       460ms     10ms

Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
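A userspace-side sketch of opting in to this behavior, assuming an already-open VM fd; KVM_DIRTY_LOG_INITIALLY_SET asks KVM to start with an all-ones bitmap and only write-protect huge pages up front, per the sequence above:

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  int enable_gradual_dirty_log(int vm_fd)
  {
          struct kvm_enable_cap cap = {
                  .cap = KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2,
                  .args[0] = KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE |
                             KVM_DIRTY_LOG_INITIALLY_SET,
          };

          /* returns 0 on success; requires a kernel with this series */
          return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
  }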
2020-03-16  KVM: Dynamically size memslot array based on number of used slots  (Sean Christopherson)
Now that the memslot logic doesn't assume memslots are always non-NULL, dynamically size the array of memslots instead of unconditionally allocating memory for the maximum number of memslots.

Note, because a to-be-deleted memslot must first be invalidated, the array size cannot be immediately reduced when deleting a memslot. However, consecutive deletions will realize the memory savings, i.e. a second deletion will trim the entry.

Tested-by: Christoffer Dall <christoffer.dall@arm.com>
Tested-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16  KVM: Terminate memslot walks via used_slots  (Sean Christopherson)
Refactor memslot handling to treat the number of used slots as the de facto size of the memslot array, e.g. return NULL from id_to_memslot() when an invalid index is provided instead of relying on npages==0 to detect an invalid memslot.

Rework the sorting and walking of memslots in advance of dynamically sizing memslots to aid bisection and debug, e.g. with luck, a bug in the refactoring will bisect here and/or hit a WARN instead of randomly corrupting memory.

Alternatively, a global null/invalid memslot could be returned, i.e. so callers of id_to_memslot() don't have to explicitly check for a NULL memslot, but that approach runs the risk of introducing difficult-to-debug issues, e.g. if the global null slot is modified. Constifying the return from id_to_memslot() to combat such issues is possible, but would require a massive refactoring of arch specific code and would still be susceptible to casting shenanigans.

Add function comments to update_memslots() and search_memslots() to explicitly (and loudly) state how memslots are sorted.

Opportunistically stuff @hva with a non-canonical value when deleting a private memslot on x86 to detect bogus usage of the freed slot.

No functional change intended.

Tested-by: Christoffer Dall <christoffer.dall@arm.com>
Tested-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
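A plain-C sketch of a memslot lookup bounded by used_slots rather than allocated capacity, assuming (as KVM does) an array sorted by base gfn in descending order; the types and fixed capacity here are simplified assumptions for illustration:

  #include <stddef.h>

  struct memslot {
          unsigned long long base_gfn;
          unsigned long npages;
  };

  struct memslots {
          int used_slots;          /* de facto array size */
          struct memslot slot[32];
  };

  static struct memslot *find_slot(struct memslots *slots,
                                   unsigned long long gfn)
  {
          int lo = 0, hi = slots->used_slots - 1;

          while (lo <= hi) {
                  int mid = lo + (hi - lo) / 2;
                  struct memslot *s = &slots->slot[mid];

                  if (gfn >= s->base_gfn + s->npages)
                          hi = mid - 1;  /* gfn above this slot (descending sort) */
                  else if (gfn < s->base_gfn)
                          lo = mid + 1;
                  else
                          return s;
          }
          return NULL;                   /* invalid lookup -> NULL, not npages==0 */
  }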
2020-03-16  KVM: Ensure validity of memslot with respect to kvm_get_dirty_log()  (Sean Christopherson)
Rework kvm_get_dirty_log() so that it "returns" the associated memslot on success. A future patch will rework memslot handling such that id_to_memslot() can return NULL; returning the memslot makes it more obvious that the validity of the memslot has been verified, i.e. precludes the need to add validity checks in the arch code that are technically unnecessary.

To maintain ordering in s390, move the call to kvm_arch_sync_dirty_log() from s390's kvm_vm_ioctl_get_dirty_log() to the new kvm_get_dirty_log(). This is a nop for PPC, the only other arch that doesn't select KVM_GENERIC_DIRTYLOG_READ_PROTECT, as its sync_dirty_log() is empty.

Ideally, moving the sync_dirty_log() call would be done in a separate patch, but it can't be done in a follow-on patch because that would temporarily break s390's ordering. Making the move in a preparatory patch would be functionally correct, but would create an odd scenario where the moved sync_dirty_log() would operate on a "different" memslot due to consuming the result of a different id_to_memslot(). The memslot couldn't actually be different, as slots_lock is held, but the code is confusing enough as it is, i.e. moving sync_dirty_log() in this patch is the lesser of all evils.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16  KVM: Provide common implementation for generic dirty log functions  (Sean Christopherson)
Move the implementations of KVM_GET_DIRTY_LOG and KVM_CLEAR_DIRTY_LOG for CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT into common KVM code. The arch specific implementations are extremely similar, differing only in whether the dirty log needs to be sync'd from hardware (x86) and how the TLBs are flushed. Add new arch hooks to handle sync and TLB flush; the sync will also be used for non-generic dirty log support in a future patch (s390).

The ulterior motive for providing a common implementation is to eliminate the dependency between arch and common code with respect to the memslot referenced by the dirty log, i.e. to make it obvious in the code that the validity of the memslot is guaranteed, as a future patch will rework memslot handling such that id_to_memslot() can return NULL.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16  KVM: Simplify kvm_free_memslot() and all its descendants  (Sean Christopherson)
Now that all callers of kvm_free_memslot() pass NULL for @dont, remove the param from the top-level routine and all arch's implementations.

No functional change intended.

Tested-by: Christoffer Dall <christoffer.dall@arm.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16  KVM: Drop "const" attribute from old memslot in commit_memory_region()  (Sean Christopherson)
Drop the "const" attribute from @old in kvm_arch_commit_memory_region() to allow arch specific code to free arch specific resources in the old memslot without having to cast away the attribute. Freeing resources in kvm_arch_commit_memory_region() paves the way for simplifying kvm_free_memslot() by eliminating the last usage of its @dont param.

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16  KVM: Drop kvm_arch_create_memslot()  (Sean Christopherson)
Remove kvm_arch_create_memslot() now that all arch implementations are effectively nops. Removing kvm_arch_create_memslot() eliminates the possibility for arch specific code to allocate memory prior to setting a memslot, which sets the stage for simplifying kvm_free_memslot().

Cc: Janosch Frank <frankja@linux.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16  mtd: spi-nor: Emphasise which is the generic set_4byte_addr_mode() method  (Tudor Ambarus)
Rename (*set_4byte)() to (*set_4byte_addr_mode)() for a better differentiation between the 4 byte address mode and opcodes. Rename macronix_set_4byte() to spi_nor_set_4byte_addr_mode(); it will be the only 4 byte address mode method exposed to the manufacturer drivers.

Here's how the manufacturers enter and exit the 4 byte address mode:
- eon, gigadevice, issi, macronix, xmc use EN4B/EX4B
- micron-st needs WEN. st_micron_set_4byte_addr_mode() will become a private method, as they are the only ones that need WEN before the EN4B/EX4B commands.
- newer spansion have a 4BAM opcode (this translates to a new, public command). Older spansion flashes use the BRWR command (legacy in core.c -> spansion_set_4byte_addr_mode())
- winbond's method is hackish and may be reason for just a flash fixup hook -> private method

Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Reviewed-by: Vignesh Raghavendra <vigneshr@ti.com>
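A hedged sketch of the generic method's shape: issue EN4B or EX4B depending on the enable flag. spi_nor_simple_cmd() is a made-up stand-in for the driver's internal opcode-write plumbing, and the wrapper name is illustrative:

  #include <linux/mtd/spi-nor.h>

  static int spi_nor_simple_cmd(struct spi_nor *nor, u8 opcode)
  {
          /* placeholder for the real opcode-only write */
          return 0;
  }

  /* generic EN4B/EX4B entry/exit, as used by eon/gigadevice/issi/macronix/xmc */
  static int example_set_4byte_addr_mode(struct spi_nor *nor, bool enable)
  {
          return spi_nor_simple_cmd(nor,
                                    enable ? SPINOR_OP_EN4B : SPINOR_OP_EX4B);
  }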
2020-03-16  sunrpc: Add tracing for cache events  (Trond Myklebust)
Add basic tracing for debugging the sunrpc cache events.

Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2020-03-16  SUNRPC/cache: Allow garbage collection of invalid cache entries  (Trond Myklebust)
If the cache entry never gets initialised, we want the garbage collector to be able to evict it. Otherwise, if the upcall daemon fails to initialise the entry, we end up never expiring it.

Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
[ cel: resolved a merge conflict ]
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2020-03-16  nfsd: export upcalls must not return ESTALE when mountd is down  (Trond Myklebust)
If the rpc.mountd daemon goes down, that should not cause all exports to start failing with ESTALE errors. Let's explicitly distinguish between the cache upcall cases that need to time out and those that do not.

Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2020-03-16  SUNRPC: Refactor xs_sendpages()  (Chuck Lever)
Re-locate xs_sendpages() so that it can be shared with server code.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2020-03-16  svcrdma: Avoid DMA mapping small RPC Replies  (Chuck Lever)
On some platforms, DMA mapping part of a page is more costly than copying bytes. Indeed, not involving the I/O MMU can help the RPC/RDMA transport scale better for tiny I/Os across more RDMA devices. This is because interaction with the I/O MMU is eliminated for each of these small I/Os. Without the explicit unmapping, the NIC no longer needs to do a costly internal TLB shoot down for buffers that are just a handful of bytes.

Since pull-up is now a more frequent operation, I've introduced a trace point in the pull-up path. It can be used for debugging or by user-space tools that count pull-up frequency.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2020-03-16  svcrdma: Fix double sync of transport header buffer  (Chuck Lever)
Performance optimization: Avoid syncing the transport buffer twice when Reply buffer pull-up is necessary.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2020-03-16  svcrdma: Refactor chunk list encoders  (Chuck Lever)
Same idea as the receive-side changes I did a while back: use xdr_stream helpers rather than open-coding the XDR chunk list encoders. This builds the Reply transport header from beginning to end without backtracking.

As additional clean-ups, fill in documenting comments for the XDR encoders and sprinkle some trace points in the new encoding functions.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2020-03-16  SUNRPC: Add encoders for list item discriminators  (Chuck Lever)
Clean up. These are moved from the client-side RPC/RDMA transport to a more global header file so they can be used elsewhere.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2020-03-16  svcrdma: Rename svcrdma_encode trace points in send routines  (Chuck Lever)
These trace points are misnamed, because they actually trace posting on the Send Queue:

  trace_svcrdma_encode_wseg
  trace_svcrdma_encode_write
  trace_svcrdma_encode_reply
  trace_svcrdma_encode_rseg
  trace_svcrdma_encode_read
  trace_svcrdma_encode_pzr

Let's rename them so that I can add trace points in the chunk list encoders that actually do trace chunk list encoding events.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2020-03-16  svcrdma: Update synopsis of svc_rdma_map_reply_msg()  (Chuck Lever)
Preparing for subsequent patches; no behavior change expected.

Pass the RPC Call's svc_rdma_recv_ctxt deeper into the sendto() path. This enables passing more information about Requester-provided Write and Reply chunks into those lower-level functions.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2020-03-16  svcrdma: Update synopsis of svc_rdma_send_reply_chunk()  (Chuck Lever)
Preparing for subsequent patches; no behavior change expected.

Pass the RPC Call's svc_rdma_recv_ctxt deeper into the sendto() path. This enables passing more information about Requester-provided Write and Reply chunks into the lower-level send functions.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2020-03-16  svcrdma: De-duplicate code that locates Write and Reply chunks  (Chuck Lever)
Cache the locations of the Requester-provided Write list and Reply chunk so that the Send path doesn't need to parse the Call header again.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2020-03-16  svcrdma: Use struct xdr_stream to decode ingress transport headers  (Chuck Lever)
The logic that checks incoming network headers has to be scrupulous.

De-duplicate: replace open-coded buffer overflow checks with the xdr_stream helpers that are used almost everywhere else XDR decoding is done.

One minor change to the sanity checks: instead of checking the length of individual segments, cap the length of the whole chunk to be sure it can fit in the set of pages available in rq_pages. This should be a better test of whether the server can handle the chunks in each request.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
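A hedged sketch of the decoding style this change adopts: bounds checking comes from the xdr_stream helpers instead of hand-rolled pointer arithmetic. The segment layout (handle, length, offset) follows the generic RDMA segment, and the function name is illustrative:

  #include <linux/sunrpc/xdr.h>

  static int decode_rdma_segment(struct xdr_stream *xdr, u32 *handle,
                                 u32 *length, u64 *offset)
  {
          /* 4 XDR words: handle (1), length (1), offset (2) */
          __be32 *p = xdr_inline_decode(xdr, 4 * sizeof(*p));

          if (!p)                 /* helper already verified buffer bounds */
                  return -EMSGSIZE;

          *handle = be32_to_cpup(p++);
          *length = be32_to_cpup(p++);
          xdr_decode_hyper(p, offset);
          return 0;
  }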
2020-03-16  svcrdma: Remove svcrdma_cm_event() trace point  (Chuck Lever)
Clean up. This trace point is no longer needed because the RDMA/core CMA code has an equivalent trace point that was added by commit ed999f820a6c ("RDMA/cma: Add trace points in RDMA Connection Manager").

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2020-03-16  svcrdma: Create a generic tracing class for displaying xdr_buf layout  (Chuck Lever)
This class can be used to create trace points in either the RPC client or RPC server paths. It simply displays the length of each part of an xdr_buf, which is useful to determine that the transport and XDR codecs are operating correctly.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2020-03-16  SUNRPC: Clean up: Replace dprintk and BUG_ON call sites in svcauth_gss.c  (Chuck Lever)
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2020-03-16  SUNRPC: Add xdr_pad_size() helper  (Chuck Lever)
Introduce a helper function to compute the XDR pad size of a variable-length XDR object.

Clean up: Replace open-coded calculation of XDR pad sizes. I'm sure I haven't found every instance of this calculation.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
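A minimal standalone sketch of the pad computation the helper centralizes: XDR objects are rounded up to 4-byte boundaries, and the pad size is only the filler bytes, not the rounded length.

  #include <stddef.h>

  static inline size_t xdr_pad_size(size_t len)
  {
          /* bytes needed to round len up to the next XDR quad boundary */
          return ((len + 3) & ~(size_t)3) - len;
  }

  /* xdr_pad_size(0) == 0, xdr_pad_size(5) == 3, xdr_pad_size(8) == 0 */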
2020-03-16  nfsd: Fix NFSv4 READ on RDMA when using readv  (Chuck Lever)
svcrdma expects that the payload falls precisely into the xdr_buf page vector. This does not seem to be the case for nfsd4_encode_readv(). This code is called only when fops->splice_read is missing or when RQ_SPLICE_OK is clear, so it's not a noticeable problem in many common cases.

Add a new transport method, ->xpo_read_payload, so that when a READ payload does not fit exactly in rq_res's page vector, the XDR encoder can inform the RPC transport exactly where that payload is, without the payload's XDR pad. That way, when a Write chunk is present, the transport knows what byte range in the Reply message is supposed to be matched with the chunk.

Note that the Linux NFS server implementation of NFS/RDMA can currently handle only one Write chunk per RPC-over-RDMA message. This simplifies the implementation of this fix.

Fixes: b04209806384 ("nfsd4: allow exotic read compounds")
Buglink: https://bugzilla.kernel.org/show_bug.cgi?id=198053
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
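A sketch of the assumed shape of the new hook (not the verbatim kernel declaration; the containing struct here is a stand-in): the XDR encoder reports where the READ payload sits in rq_res so the transport can line it up with the Write chunk.

  struct svc_rqst;

  struct svc_xprt_ops_fragment {
          /* offset/length locate the payload within the Reply message */
          int (*xpo_read_payload)(struct svc_rqst *rqstp,
                                  unsigned int offset,
                                  unsigned int length);
  };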
2020-03-16  sunrpc: Replace zero-length array with flexible-array member  (Gustavo A. R. Silva)
The current codebase makes use of the zero-length array language extension to the C90 standard, but the preferred mechanism to declare variable-length types such as these is a flexible array member[1][2], introduced in C99:

  struct foo {
          int stuff;
          struct boo array[];
  };

By making use of the mechanism above, we will get a compiler warning in case the flexible array does not occur last in the structure, which will help us prevent some kinds of undefined behavior bugs from being inadvertently introduced[3] to the codebase from now on.

Also, notice that dynamic memory allocations won't be affected by this change: "Flexible array members have incomplete type, and so the sizeof operator may not be applied. As a quirk of the original implementation of zero-length arrays, sizeof evaluates to zero."[1]

This issue was found with the help of Coccinelle.

[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour")

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
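A self-contained illustration of the conversion and the sizeof quirk the message quotes; the struct name follows the commit's own example:

  #include <stdio.h>
  #include <stdlib.h>

  struct foo {
          int stuff;
          int array[];      /* flexible array member: must be last */
  };

  int main(void)
  {
          /* sizeof(struct foo) counts only 'stuff'; the array adds nothing */
          struct foo *f = malloc(sizeof(*f) + 4 * sizeof(int));

          printf("sizeof(struct foo) = %zu\n", sizeof(struct foo));
          free(f);
          return 0;
  }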
2020-03-16  irqchip/sifive-plic: Enable/Disable external interrupts upon cpu online/offline  (Atish Patra)
Currently, the PLIC threshold is only initialized once in the beginning. However, the threshold can be set to disabled if a CPU is marked offline with the CPU hotplug feature. This prevents changing the irq affinity to a CPU that just came online.

Add PLIC-specific CPU hotplug callbacks and enable the threshold when a CPU comes online. Take this opportunity to move the external interrupt enable code from trap init to the PLIC driver as well. On the cpu offline path, the driver performs the exact opposite operations, i.e. disables the interrupt and the threshold.

Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20200302231146.15530-2-atish.patra@wdc.com
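A sketch of the hotplug wiring described above; the callback bodies are elided, and the cpuhp state name and string are assumptions about what this patch adds, not verified identifiers:

  #include <linux/cpuhotplug.h>

  static int plic_starting_cpu(unsigned int cpu)
  {
          /* enable the external interrupt and open the priority threshold */
          return 0;
  }

  static int plic_dying_cpu(unsigned int cpu)
  {
          /* close the threshold and disable the external interrupt */
          return 0;
  }

  static int __init plic_register_hotplug(void)
  {
          return cpuhp_setup_state(CPUHP_AP_IRQ_SIFIVE_PLIC_STARTING,
                                   "irqchip/sifive/plic:starting",
                                   plic_starting_cpu, plic_dying_cpu);
  }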
2020-03-16  floppy: separate the FDC's base address from its registers  (Willy Tarreau)
FDC registers FD_STATUS, FD_DATA, FD_DOR, FD_DIR and FD_DCR used to be defined relative to FD_IOPORT, which is the FDC's base address, itself a macro depending on the "fdc" local or global variable.

This patch changes this so that the register macros above now only reference the address offset, and the FDC's address is explicitly passed in each call to fd_inb() and fd_outb(), thus removing the macro. With this change there is no more implicit usage of the local/global "fdc" variable.

One place in the ARM code used to check if the port was equal to FD_DOR; this was changed to testing the register by applying a mask to the port, as was already done in the sparc code.

There are still occurrences of fd_inb() and fd_outb() in the PARISC code, and these remain unaffected since they already used to work with a base address and a register offset.

The sparc, m68k and parisc code could now be slightly cleaned up to benefit from the macro definitions above instead of the equivalent hard-coded values.

Link: https://lore.kernel.org/r/20200301195555.11154-6-w@1wt.eu
Cc: Ian Molton <spyro@f2s.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
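A standalone sketch of the new convention: register macros become pure offsets and the FDC base address travels as an explicit argument. The offset value, the outb stub and the fdc_base name are illustrative assumptions:

  #define FD_DOR 2  /* digital output register, now a bare offset */

  static void outb(unsigned char value, unsigned int port)
  {
          /* stub standing in for the real port write */
          (void)value; (void)port;
  }

  /* after the change: no hidden "fdc" variable, base is explicit */
  static void fd_outb(unsigned char value, unsigned int base, unsigned int reg)
  {
          outb(value, base + reg);
  }

  /* usage: fd_outb(dor, fdc_base, FD_DOR); */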
2020-03-16  SUNRPC: Remove xdr_buf_read_mic()  (Chuck Lever)
Clean up: this function is no longer used.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Benjamin Coddington <bcodding@redhat.com>
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2020-03-16  SUNRPC: Add a flag to avoid reference counts on credentials  (Trond Myklebust)
Add a flag to signal to the RPC layer that the credential is already pinned for the duration of the RPC call.

Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2020-03-16  Merge branch 'timers/drivers/timer-ti-dm' into timers/drivers/next  (Daniel Lezcano)
2020-03-16  clocksource/drivers/timer-ti-dm: Enable autoreload in set_pwm  (Lokesh Vutla)
The dm timer ops set_load() API allows configuring the load value and setting the auto-reload feature. But the auto-reload feature is independent of the load value and should be part of configuring pwm. This way pwm can be disabled by disabling the auto-reload feature using set_pwm(), so that the current pwm cycle will be completed; otherwise, disabling pwm causes the cycle to be stopped abruptly.

Signed-off-by: Lokesh Vutla <lokeshvutla@ti.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Link: https://lore.kernel.org/r/20200305082715.15861-7-lokeshvutla@ti.com
2020-03-16  clocksource/drivers/timer-ti-dm: Add support to get pwm current status  (Lokesh Vutla)
omap_dm_timer_ops provide support to configure the pwm, but there is no support to get the current status. For configuring pwm it is advised to check the current hw status instead of relying on the pwm framework. So implement a new timer op to get the current status of pwm.

Signed-off-by: Lokesh Vutla <lokeshvutla@ti.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Link: https://lore.kernel.org/r/20200305082715.15861-6-lokeshvutla@ti.com
2020-03-16  clocksource/drivers/timer-ti-dm: Implement cpu_pm notifier for context save and restore  (Lokesh Vutla)
omap_dm_timer_enable() restores the entire context (including the counter) based on 2 conditions:
- If get_context_loss_count is populated and context is lost.
- If get_context_loss_count is not populated, update unconditionally.

Case 2 has a side effect of updating the counter register even though context is not lost. When the timer is configured in pwm mode, this causes undesired behaviour in the pwm period.

Instead of using the get_context_loss_count callback, implement a cpu_pm notifier with context save and restore support. And delete the get_context_loss_count callback altogether.

Suggested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Lokesh Vutla <lokeshvutla@ti.com>
[tony@atomide.com: removed pm_runtime calls from cpuidle calls]
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Link: https://lore.kernel.org/r/20200316111453.15441-1-lokeshvutla@ti.com
2020-03-16  clocksource/drivers/timer-ti-dm: Prepare for using cpuidle  (Tony Lindgren)
Let's add runtime_suspend and resume functions and an atomic enabled flag. This way we can use these when converting to use cpuidle for saving and restoring device context.

And we need to maintain the driver state in the driver, as documented in "9. Autosuspend, or automatically-delayed suspends" in the Documentation/power/runtime_pm.rst document, related to using a driver private lock and races with runtime_suspend().

Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Lokesh Vutla <lokeshvutla@ti.com>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Link: https://lore.kernel.org/r/20200305082715.15861-3-lokeshvutla@ti.com