path: root/arch
Age | Commit message | Author
2022-11-10  riscv: dts: renesas: Add minimal DTS for Renesas RZ/Five SMARC EVK  (Lad Prabhakar)
Enable the minimal blocks required for booting the Renesas RZ/Five SMARC EVK with initramfs. The following blocks are enabled:

- CPG
- CPU0
- DDR (memory regions)
- PINCTRL
- PLIC
- SCIF0

The RZ/G2UL SoC base DTSI [0] and the RZ/G2UL SMARC SoM [1] and carrier [2] board DTSIs are reused; these enable almost all the blocks supported by the RZ/G2UL SMARC EVK, whereas on the RZ/Five SoC the blocks will be enabled gradually. Hence the aliases for ETH/I2C are deleted and the remaining IP blocks are marked as disabled or deleted.

[0] arch/arm64/boot/dts/renesas/r9a07g043.dtsi
[1] arch/arm64/boot/dts/renesas/rzg2ul-smarc-som.dtsi
[2] arch/arm64/boot/dts/renesas/rzg2ul-smarc.dtsi

Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Reviewed-by: Conor Dooley <conor.dooley@microchip.com>
Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
Link: https://lore.kernel.org/r/20221028165921.94487-6-prabhakar.mahadev-lad.rj@bp.renesas.com
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
2022-11-10  riscv: dts: renesas: Add initial devicetree for Renesas RZ/Five SoC  (Lad Prabhakar)
Add an initial device tree for the Renesas RZ/Five RISC-V CPU core (AX45MP single). The RZ/Five SoC is almost identical to the RZ/G2UL Type-1 SoC (ARM64), hence r9a07g043.dtsi [0] is reused as a base DTSI for both SoCs. r9a07g043f.dtsi includes the RZ/Five SoC specific blocks. The following RZ/Five specific blocks are added in the initial DTSI, which is enough to boot via initramfs on the RZ/Five SMARC EVK:

- AX45MP CPU
- PLIC

[0] arch/arm64/boot/dts/renesas/r9a07g043.dtsi

Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Reviewed-by: Conor Dooley <conor.dooley@microchip.com>
Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
Link: https://lore.kernel.org/r/20221028165921.94487-5-prabhakar.mahadev-lad.rj@bp.renesas.com
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
2022-11-10  riscv: Kconfig.socs: Add ARCH_RENESAS kconfig option  (Lad Prabhakar)
Add the ARCH_RENESAS config option to allow selecting Renesas RISC-V SoCs. Currently this covers the newly added RZ/Five (R9A07G043) RISC-V based SoC.

Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Reviewed-by: Conor Dooley <conor.dooley@microchip.com>
Reviewed-by: Guo Ren <guoren@kernel.org>
Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
Link: https://lore.kernel.org/r/20221028165921.94487-4-prabhakar.mahadev-lad.rj@bp.renesas.com
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
2022-11-10  KVM: arm64: Handle stage-2 faults in parallel  (Oliver Upton)
The stage-2 map walker has been made parallel-aware, and as such can be called while only holding the read side of the MMU lock. Rip out the conditional locking in user_mem_abort() and instead grab the read lock. Continue to take the write lock from other callsites to kvm_pgtable_stage2_map(). Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221107220033.1895655-1-oliver.upton@linux.dev
2022-11-10  KVM: arm64: Make table->block changes parallel-aware  (Oliver Upton)
stage2_map_walker_try_leaf() and friends now handle stage-2 PTEs generically, and perform the correct flush when a table PTE is removed. Additionally, they've been made parallel-aware, using an atomic break to take ownership of the PTE. Stop clearing the PTE in the pre-order callback and instead let stage2_map_walker_try_leaf() deal with it. Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221107220006.1895572-1-oliver.upton@linux.dev
2022-11-10  KVM: arm64: Make leaf->leaf PTE changes parallel-aware  (Oliver Upton)
Convert stage2_map_walker_try_leaf() to use the new break-before-make helpers, thereby making the handler parallel-aware. As before, avoid the break-before-make if recreating the existing mapping. Additionally, retry execution if another vCPU thread is modifying the same PTE. Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Reviewed-by: Ben Gardon <bgardon@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221107215934.1895478-1-oliver.upton@linux.dev
2022-11-10  KVM: arm64: Make block->table PTE changes parallel-aware  (Oliver Upton)
In order to service stage-2 faults in parallel, stage-2 table walkers must take exclusive ownership of the PTE being worked on. An additional requirement of the architecture is that software must perform a 'break-before-make' operation when changing the block size used for mapping memory. Roll these two concepts together into helpers for performing a 'break-before-make' sequence. Use a special PTE value to indicate a PTE has been locked by a software walker. Additionally, use an atomic compare-exchange to 'break' the PTE when the stage-2 page tables are possibly shared with another software walker. Elide the DSB + TLBI if the evicted PTE was invalid (and thus not subject to break-before-make). All of the atomics do nothing for now, as the stage-2 walker isn't fully ready to perform parallel walks. Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221107215855.1895367-1-oliver.upton@linux.dev
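As a self-contained illustration of the technique described above (a userspace analogue with invented PTE bits and helper names, not the kernel code), a walker can 'break' a PTE by compare-exchanging it to a reserved locked value, skip the invalidation step when the old entry was already invalid, and then 'make' the new entry with a release store:

  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  typedef uint64_t pte_t;

  #define PTE_VALID   ((pte_t)1 << 0)
  #define PTE_LOCKED  ((pte_t)1 << 1)  /* invented sentinel: "owned by a software walker" */

  /* "Break": try to take exclusive ownership of the slot. */
  static bool try_break_pte(_Atomic pte_t *slot, pte_t *old)
  {
      *old = atomic_load_explicit(slot, memory_order_relaxed);
      if (*old & PTE_LOCKED)
          return false;  /* another walker owns it; caller retries */
      if (!atomic_compare_exchange_strong(slot, old, PTE_LOCKED))
          return false;  /* lost the race */
      if (*old & PTE_VALID)
          printf("invalidate stale translation for %#llx\n",
                 (unsigned long long)*old);  /* DSB + TLBI in the real code */
      return true;
  }

  /* "Make": publish the replacement and release ownership. */
  static void make_pte(_Atomic pte_t *slot, pte_t new_pte)
  {
      atomic_store_explicit(slot, new_pte, memory_order_release);
  }

  int main(void)
  {
      _Atomic pte_t slot = PTE_VALID | 0x1000;
      pte_t old;

      if (try_break_pte(&slot, &old))
          make_pte(&slot, PTE_VALID | 0x2000);
      printf("final pte: %#llx\n", (unsigned long long)atomic_load(&slot));
      return 0;
  }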
2022-11-10  KVM: arm64: Split init and set for table PTE  (Oliver Upton)
Create a helper to initialize a table and directly call smp_store_release() to install it (for now). Prepare for a subsequent change that generalizes PTE writes with a helper. Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221107215644.1895162-11-oliver.upton@linux.dev
2022-11-10  KVM: arm64: Atomically update stage 2 leaf attributes in parallel walks  (Oliver Upton)
The stage2 attr walker is already used for parallel walks. Since commit f783ef1c0e82 ("KVM: arm64: Add fast path to handle permission relaxation during dirty logging"), KVM acquires the read lock when write-unprotecting a PTE. However, the walker only uses a simple store to update the PTE. This is safe as the only possible race is with hardware updates to the access flag, which is benign. However, a subsequent change to KVM will allow more changes to the stage 2 page tables to be done in parallel. Prepare the stage 2 attribute walker by performing atomic updates to the PTE when walking in parallel. Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221107215644.1895162-10-oliver.upton@linux.dev
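As a standalone analogue of that preparation (invented bit names, not the kernel code), setting an attribute bit with a compare-and-exchange loop instead of a plain store keeps the update correct when another walker may rewrite the same PTE concurrently:

  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdint.h>

  typedef uint64_t pte_t;

  #define PTE_VALID ((pte_t)1 << 0)
  #define PTE_AF    ((pte_t)1 << 10)  /* access-flag style attribute, for the example */

  /* Returns false if the PTE stopped being valid, so the caller can retry. */
  static bool pte_set_attr(_Atomic pte_t *slot, pte_t attr)
  {
      pte_t old = atomic_load_explicit(slot, memory_order_relaxed);

      do {
          if (!(old & PTE_VALID))
              return false;
      } while (!atomic_compare_exchange_weak(slot, &old, old | attr));

      return true;
  }

  int main(void)
  {
      _Atomic pte_t pte = PTE_VALID;

      return pte_set_attr(&pte, PTE_AF) ? 0 : 1;
  }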
2022-11-10  KVM: arm64: Protect stage-2 traversal with RCU  (Oliver Upton)
Use RCU to safely walk the stage-2 page tables in parallel. Acquire and release the RCU read lock when traversing the page tables. Defer the freeing of table memory to an RCU callback. Indirect the calls into RCU and provide stubs for hypervisor code, as RCU is not available in such a context.

The RCU protection doesn't amount to much at the moment, as readers are already protected by the read-write lock (all walkers that free table memory take the write lock). Nonetheless, a subsequent change will further relax the locking requirements around the stage-2 MMU, thereby depending on RCU.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20221107215644.1895162-9-oliver.upton@linux.dev
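Conceptually, the pattern is the one sketched below in kernel style; the structure and function names are invented for illustration and this is not the actual implementation. Readers traverse under rcu_read_lock(), and a table page that has been unlinked from its parent is only freed once an RCU grace period has elapsed:

  struct unlinked_table {
      struct rcu_head rcu;
      void *table;
  };

  static void free_table_rcu(struct rcu_head *head)
  {
      struct unlinked_table *p = container_of(head, struct unlinked_table, rcu);

      free_page((unsigned long)p->table);
      kfree(p);
  }

  /* Writer: clear the parent PTE first, then defer the free past all readers. */
  static void put_unlinked_table(struct unlinked_table *p)
  {
      call_rcu(&p->rcu, free_table_rcu);
  }

  /* Reader: the whole walk sits in an RCU read-side critical section. */
  static void walk_stage2(void)
  {
      rcu_read_lock();
      /* ... traverse, loading table pointers with rcu_dereference() ... */
      rcu_read_unlock();
  }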
2022-11-10  KVM: arm64: Tear down unlinked stage-2 subtree after break-before-make  (Oliver Upton)
The break-before-make sequence is a bit annoying as it opens a window wherein memory is unmapped from the guest. KVM should replace the PTE as quickly as possible and avoid unnecessary work in between. Presently, the stage-2 map walker tears down a removed table before installing a block mapping when coalescing a table into a block. As the removed table is no longer visible to hardware walkers after the DSB+TLBI, it is possible to move the remaining cleanup to happen after installing the new PTE. Reshuffle the stage-2 map walker to install the new block entry in the pre-order callback. Unwire all of the teardown logic and replace it with a call to kvm_pgtable_stage2_free_removed() after fixing the PTE. The post-order visitor is now completely unnecessary, so drop it. Finally, touch up the comments to better represent the now simplified map walker. Note that the call to tear down the unlinked stage-2 is indirected as a subsequent change will use an RCU callback to trigger tear down. RCU is not available to pKVM, so there is a need to use different implementations on pKVM and non-pKVM VMs. Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Reviewed-by: Ben Gardon <bgardon@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221107215644.1895162-8-oliver.upton@linux.dev
2022-11-10  KVM: arm64: Use an opaque type for pteps  (Oliver Upton)
Use an opaque type for pteps and require visitors to explicitly dereference the pointer before use. Protecting page table memory with RCU requires that KVM dereferences RCU-annotated pointers before use. However, RCU is not available for use in the nVHE hypervisor, and the opaque type can be conditionally annotated with RCU for the stage-2 MMU. Call the type a 'pteref' to avoid a naming collision with raw pteps. No functional change intended.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20221107215644.1895162-7-oliver.upton@linux.dev
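One way to picture it (a sketch with invented names, not the kernel's exact definitions): the walker carries an annotated handle type, and only an explicit dereference helper converts it into a usable pointer, which is where the RCU dereference slots in for the stage-2 case while a hypervisor build can fall back to a plain access:

  typedef u64 pte_val_t;
  typedef pte_val_t __rcu *pteref_t;  /* opaque as far as visitors are concerned */

  static inline pte_val_t *deref_pteref(pteref_t pteref)
  {
      /* Stage-2 MMU: tables are RCU-protected, so an RCU dereference is
       * required; the nVHE hypervisor build would use a plain cast instead. */
      return rcu_dereference(pteref);
  }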
2022-11-10  KVM: arm64: Add a helper to tear down unlinked stage-2 subtrees  (Oliver Upton)
A subsequent change to KVM will move the tear down of an unlinked stage-2 subtree out of the critical path of the break-before-make sequence. Introduce a new helper for tearing down unlinked stage-2 subtrees. Leverage the existing stage-2 free walkers to do so, with a deep call into __kvm_pgtable_walk() as the subtree is no longer reachable from the root. Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221107215644.1895162-6-oliver.upton@linux.dev
2022-11-10  KVM: arm64: Don't pass kvm_pgtable through kvm_pgtable_walk_data  (Oliver Upton)
In order to tear down page tables from outside the context of kvm_pgtable (such as an RCU callback), stop passing a pointer through kvm_pgtable_walk_data. No functional change intended. Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Reviewed-by: Ben Gardon <bgardon@google.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221107215644.1895162-5-oliver.upton@linux.dev
2022-11-10  KVM: arm64: Pass mm_ops through the visitor context  (Oliver Upton)
As a prerequisite for getting visitors off of struct kvm_pgtable, pass mm_ops through the visitor context. No functional change intended. Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Reviewed-by: Ben Gardon <bgardon@google.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221107215644.1895162-4-oliver.upton@linux.dev
2022-11-10  KVM: arm64: Stash observed pte value in visitor context  (Oliver Upton)
Rather than reading the ptep all over the shop, read the ptep once from __kvm_pgtable_visit() and stick it in the visitor context. Reread the ptep after visiting a leaf in case the callback installed a new table underneath. No functional change intended. Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Reviewed-by: Ben Gardon <bgardon@google.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221107215644.1895162-3-oliver.upton@linux.dev
2022-11-10  KVM: arm64: Combine visitor arguments into a context structure  (Oliver Upton)
Passing new arguments by value to the visitor callbacks is extremely inflexible for stuffing new parameters used by only some of the visitors. Use a context structure instead and pass the pointer through to the visitor callback. While at it, redefine the 'flags' parameter to the visitor to contain the bit indicating the phase of the walk. Pass the entire set of flags through the context structure such that the walker can communicate additional state to the visitor callback. No functional change intended. Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Reviewed-by: Ben Gardon <bgardon@google.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221107215644.1895162-2-oliver.upton@linux.dev
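The shape of that change can be sketched as follows (standalone C with simplified stand-in names, not the kernel's exact types):

  #include <stdint.h>

  typedef uint64_t pte_t;

  /* Before: adding state means changing every visitor's signature. */
  typedef int (*visitor_fn_t)(uint64_t addr, uint32_t level, pte_t *ptep,
                              uint64_t flags, void *arg);

  /* After: a context structure carries the walk state, so new fields can be
   * added without touching the callback signature, and the phase of the walk
   * travels in the flags. */
  struct pgtable_visit_ctx {
      uint64_t addr;
      uint32_t level;
      pte_t    *ptep;
      pte_t    old;       /* observed PTE value */
      void     *arg;      /* caller-private data */
      uint64_t flags;     /* includes the walk-phase bit */
  };

  typedef int (*ctx_visitor_fn_t)(const struct pgtable_visit_ctx *ctx);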
2022-11-10  arm64: dts: mt7986: fix trng node name  (Frank Wunderlich)
The binding requires the node name to be 'rng', not 'trng':

  trng@1020f000: $nodename:0: 'trng@1020f000' does not match '^rng@[0-9a-f]+$'

Fixes: 50137c150f5f ("arm64: dts: mediatek: add basic mt7986 support")
Signed-off-by: Frank Wunderlich <frank-w@public-files.de>
Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
Link: https://lore.kernel.org/r/20221027151022.5541-1-linux@fw-web.de
Signed-off-by: Matthias Brugger <matthias.bgg@gmail.com>
2022-11-10  KVM: arm64: Enable ring-based dirty memory tracking  (Gavin Shan)
Enable ring-based dirty memory tracking on ARM64:

- Enable CONFIG_HAVE_KVM_DIRTY_RING_ACQ_REL.
- Enable CONFIG_NEED_KVM_DIRTY_RING_WITH_BITMAP.
- Set KVM_DIRTY_LOG_PAGE_OFFSET for the ring buffer's physical page offset.
- Add ARM64 specific kvm_arch_allow_write_without_running_vcpu() to keep the site of saving vgic/its tables out of the no-running-vcpu radar.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20221110104914.31280-5-gshan@redhat.com
2022-11-10  KVM: Move declaration of kvm_cpu_dirty_log_size() to kvm_dirty_ring.h  (Gavin Shan)
Not all architectures need to override the function; ARM64, for example, does not. Move its declaration to kvm_dirty_ring.h to avoid the following compile warning on ARM64 when the feature is enabled:

  arch/arm64/kvm/../../../virt/kvm/dirty_ring.c:14:12: \
  warning: no previous prototype for 'kvm_cpu_dirty_log_size' \
  [-Wmissing-prototypes] \
  int __weak kvm_cpu_dirty_log_size(void)

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20221110104914.31280-3-gshan@redhat.com
2022-11-10  KVM: x86: Introduce KVM_REQ_DIRTY_RING_SOFT_FULL  (Gavin Shan)
The VCPU isn't expected to be runnable when the dirty ring becomes soft full, until the dirty pages are harvested and the dirty ring is reset from userspace. So there is a check at each guest entry to see whether the dirty ring is soft full. The VCPU is stopped from running if its dirty ring has become soft full. A similar check will be needed when the feature is supported on ARM64. As Marc Zyngier suggested, a new event avoids the pointless overhead of checking the size of the dirty ring ('vcpu->kvm->dirty_ring_size') at each guest entry.

Add KVM_REQ_DIRTY_RING_SOFT_FULL. The event is raised when the dirty ring becomes soft full in kvm_dirty_ring_push(). The event is only cleared in the check, done in the newly added helper kvm_dirty_ring_check_request(). Since the VCPU is not runnable when the dirty ring becomes soft full, the KVM_REQ_DIRTY_RING_SOFT_FULL event is always set to prevent the VCPU from running until the dirty pages are harvested and the dirty ring is reset by userspace.

kvm_dirty_ring_soft_full() becomes a private function with the newly added helper kvm_dirty_ring_check_request(). The alignment of the various event definitions in kvm_host.h is changed to tab characters along the way. In order to avoid using 'container_of()', the argument @ring is replaced by @vcpu in kvm_dirty_ring_push().

Link: https://lore.kernel.org/kvmarm/87lerkwtm5.wl-maz@kernel.org
Suggested-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20221110104914.31280-2-gshan@redhat.com
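Roughly, the mechanism builds on KVM's vCPU request machinery, along the lines of the sketch below (illustrative, not the literal patch):

  /* Producer: pushing a dirty GFN may tip the ring over the soft limit. */
  if (kvm_dirty_ring_soft_full(&vcpu->dirty_ring))
      kvm_make_request(KVM_REQ_DIRTY_RING_SOFT_FULL, vcpu);

  /* Guest-entry path: bail out to userspace while the ring stays soft full. */
  bool kvm_dirty_ring_check_request(struct kvm_vcpu *vcpu)
  {
      if (kvm_check_request(KVM_REQ_DIRTY_RING_SOFT_FULL, vcpu) &&
          kvm_dirty_ring_soft_full(&vcpu->dirty_ring)) {
          /* Keep the request pending until userspace resets the ring. */
          kvm_make_request(KVM_REQ_DIRTY_RING_SOFT_FULL, vcpu);
          vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
          return true;
      }

      return false;
  }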
2022-11-10  Merge tag 'kvmarm-fixes-6.1-3' into kvm-arm64/dirty-ring  (Marc Zyngier)
KVM/arm64 fixes for 6.1, take #3:

- Fix the pKVM stage-1 walker erroneously using the stage-2 accessor
- Correctly convert vcpu->kvm to a hyp pointer when generating an exception in a nVHE+MTE configuration
- Check that KVM_CAP_DIRTY_LOG_* are valid before enabling them
- Fix SMPRI_EL1/TPIDR2_EL0 trapping on VHE
- Document the boot requirements for FGT when entering the kernel at EL1

Signed-off-by: Marc Zyngier <maz@kernel.org>
2022-11-10  x86/mtrr: Simplify mtrr_ops initialization  (Juergen Gross)
The way mtrr_if is initialized with the correct mtrr_ops structure is quite weird. Simplify that by dropping the vendor specific init functions and the mtrr_ops[] array. Replace those with direct assignments of the related vendor specific ops array to mtrr_if. Note that a direct assignment is okay even for 64-bit builds, where the symbol isn't present, as the related code will be subject to "dead code elimination" due to how cpu_feature_enabled() is implemented. Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20221102074713.21493-17-jgross@suse.com Signed-off-by: Borislav Petkov <bp@suse.de>
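The resulting selection is conceptually a handful of direct assignments, roughly as sketched below (the exact feature checks may differ from the patch); because cpu_feature_enabled() compiles to a constant for features a given build cannot have, the unreachable branches and the ops structures they reference get dropped by the compiler:

  /* Sketch of the boot-time selection in mtrr_bp_init(). */
  if (cpu_feature_enabled(X86_FEATURE_MTRR))
      mtrr_if = &generic_mtrr_ops;
  else if (cpu_feature_enabled(X86_FEATURE_K6_MTRR))
      mtrr_if = &amd_mtrr_ops;
  else if (cpu_feature_enabled(X86_FEATURE_CYRIX_ARR))
      mtrr_if = &cyrix_mtrr_ops;
  else if (cpu_feature_enabled(X86_FEATURE_CENTAUR_MCR))
      mtrr_if = &centaur_mtrr_ops;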
2022-11-10  x86/cacheinfo: Switch cache_ap_init() to hotplug callback  (Juergen Gross)
Instead of explicitly calling cache_ap_init() in identify_secondary_cpu(), use a CPU hotplug callback. By registering the callback only after having started the non-boot CPUs and initializing cache_aps_delayed_init with "true", calling set_cache_aps_delayed_init() at boot time can be dropped.

It should be noted that this change results in cache_ap_init() being called a little bit later when hotplugging CPUs. By using a new hotplug slot right at the start of the low level bringup this is not problematic, as no operations requiring a specific caching mode are performed that early in CPU initialization.

Suggested-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20221102074713.21493-15-jgross@suse.com
Signed-off-by: Borislav Petkov <bp@suse.de>
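For reference, registering such a callback generally takes the form below; the hotplug state, callback, and string names here are illustrative rather than necessarily the exact ones used by the patch:

  static int cache_ap_online(unsigned int cpu)
  {
      cache_ap_init();  /* runs on the incoming CPU during early bringup */
      return 0;
  }

  static int __init cache_ap_register(void)
  {
      /* Registered only after the non-boot CPUs were brought up at boot, so
       * boot-time APs keep going through the existing delayed-init path. */
      return cpuhp_setup_state_nocalls(CPUHP_AP_CACHECTRL_STARTING,
                                       "x86/cachectrl:starting",
                                       cache_ap_online, NULL);
  }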
2022-11-10  x86: Decouple PAT and MTRR handling  (Juergen Gross)
Today, PAT is usable only with MTRR being active, with some nasty tweaks to make PAT usable when running as a Xen PV guest which doesn't support MTRR. The reason for this coupling is that both PAT MSR changes and MTRR changes require a similar sequence, and so full PAT support was added using the already available MTRR handling.

Xen PV PAT handling can work without MTRR, as it just needs to consume the PAT MSR setting done by the hypervisor without the ability and need to change it. This in turn has resulted in a convoluted initialization sequence and wrong decisions regarding cache mode availability due to misguiding PAT availability flags.

Fix all of that by allowing PAT to be used without MTRR and by reworking the current PAT initialization sequence to match better with the newly introduced generic cache initialization. This removes the need for the recently added pat_force_disabled flag, so remove the remnants of the patch adding it.

Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20221102074713.21493-14-jgross@suse.com
Signed-off-by: Borislav Petkov <bp@suse.de>
2022-11-10  x86/mtrr: Add a stop_machine() handler calling only cache_cpu_init()  (Juergen Gross)
Instead of having a stop_machine() handler for either a specific MTRR register or all state at once, add a handler just for calling cache_cpu_init() if appropriate. Add functions for calling stop_machine() with this handler as well. Add a generic replacement for mtrr_bp_restore() and a wrapper for mtrr_bp_init(). Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20221102074713.21493-13-jgross@suse.com Signed-off-by: Borislav Petkov <bp@suse.de>
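The handler itself is conceptually tiny, along the lines of this sketch (names and the gating condition are approximate, not the literal patch):

  static int cache_rendezvous_handler(void *unused)
  {
      if (cache_cpu_needs_init())  /* placeholder for the real condition */
          cache_cpu_init();

      return 0;
  }

  static void cache_cpu_init_all(void)
  {
      /* Run the handler on all online CPUs at once, with interrupts disabled,
       * as the MTRR/PAT programming sequence requires. */
      stop_machine(cache_rendezvous_handler, NULL, cpu_online_mask);
  }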
2022-11-10  x86/mtrr: Let cache_aps_delayed_init replace mtrr_aps_delayed_init  (Juergen Gross)
In order to prepare decoupling MTRR and PAT replace the MTRR-specific mtrr_aps_delayed_init flag with a more generic cache_aps_delayed_init one. Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20221102074713.21493-12-jgross@suse.com Signed-off-by: Borislav Petkov <bp@suse.de>
2022-11-10  x86/mtrr: Get rid of __mtrr_enabled bool  (Juergen Gross)
There is no need for keeping __mtrr_enabled as it can easily be replaced by testing mtrr_if to be not NULL. Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20221102074713.21493-11-jgross@suse.com Signed-off-by: Borislav Petkov <bp@suse.de>
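That is, the check reduces to something like:

  static bool mtrr_enabled(void)
  {
      return !!mtrr_if;  /* a vendor ops structure has been assigned */
  }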
2022-11-10  x86/mtrr: Simplify mtrr_bp_init()  (Juergen Gross)
In case the generic cache interface is being used (Intel CPUs or a 64-bit system), the initialization sequence of the boot CPU is more complicated than necessary:

- check if MTRR is enabled; if yes, call mtrr_bp_pat_init(), which will disable caching, set the PAT MSR, and reenable caching
- call mtrr_cleanup(); in case that changed anything, call cache_cpu_init(), doing the same caching disable/enable dance as above, but this time setting the (modified) MTRR state (even if MTRR was disabled) AND setting the PAT MSR (again, even with MTRR disabled)

The sequence can be simplified a lot while removing potential inconsistencies:

- check if MTRR is enabled; if yes, call mtrr_cleanup() and then cache_cpu_init()

This ensures that:

- caching is no longer disabled and enabled more than once
- MTRRs and/or the PAT MSR are not set on the boot processor in case of MTRR cleanups even if MTRRs were meant to be disabled

With that, mtrr_bp_pat_init() can be removed.

Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20221102074713.21493-10-jgross@suse.com
Signed-off-by: Borislav Petkov <bp@suse.de>
2022-11-10  x86/mtrr: Remove set_all callback from struct mtrr_ops  (Juergen Gross)
Instead of using an indirect call to mtrr_if->set_all just call the only possible target cache_cpu_init() directly. Remove the set_all function pointer from struct mtrr_ops. Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20221102074713.21493-9-jgross@suse.com Signed-off-by: Borislav Petkov <bp@suse.de>
2022-11-10  x86/mtrr: Disentangle MTRR init from PAT init  (Juergen Gross)
Add a main cache_cpu_init() init routine which initializes MTRR and/or PAT support depending on what has been detected on the system.

Leave the MTRR-specific initialization in an MTRR-specific init function, where the smp_changes_mask setting now happens with caches disabled. This global mask update was previously done with caches enabled, probably because atomic operations while running uncached might have been quite expensive. But since only systems with a broken BIOS should ever require setting any bit in smp_changes_mask, hurting those devices with a penalty of a few microseconds during boot shouldn't be a real issue.

Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20221102074713.21493-8-jgross@suse.com
Signed-off-by: Borislav Petkov <bp@suse.de>
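Schematically, the new routine dispatches like the sketch below (flag and helper names are illustrative, not necessarily the final ones):

  void cache_cpu_init(void)
  {
      unsigned long flags;

      local_irq_save(flags);
      cache_disable();

      if (memory_caching_control & CACHE_MTRR)
          mtrr_generic_set_state();
      if (memory_caching_control & CACHE_PAT)
          pat_cpu_init();

      cache_enable();
      local_irq_restore(flags);
  }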
2022-11-10  x86/mtrr: Move cache control code to cacheinfo.c  (Juergen Gross)
Prepare making PAT and MTRR support independent from each other by moving some code needed by both out of the MTRR-specific sources. [ bp: Massage commit message. ] Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20221102074713.21493-7-jgross@suse.com Signed-off-by: Borislav Petkov <bp@suse.de>
2022-11-10  x86/mtrr: Split MTRR-specific handling from cache dis/enabling  (Juergen Gross)
Split the MTRR-specific actions from cache_disable() and cache_enable() into new functions mtrr_disable() and mtrr_enable(). Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20221102074713.21493-6-jgross@suse.com Signed-off-by: Borislav Petkov <bp@suse.de>
2022-11-10  x86/mtrr: Rename prepare_set() and post_set()  (Juergen Gross)
Rename the currently MTRR-specific functions prepare_set() and post_set() in preparation for moving them. Make them non-static and put their prototypes into cacheinfo.h, where they will end up after being moved to their final position anyway.

Expand the comment before the functions with an introductory line and rename two related static variables, too.

[ bp: Massage commit message. ]

Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20221102074713.21493-5-jgross@suse.com
Signed-off-by: Borislav Petkov <bp@suse.de>
2022-11-10  x86/mtrr: Replace use_intel() with a local flag  (Juergen Gross)
In MTRR code use_intel() is only used in one source file, and the relevant use_intel_if member of struct mtrr_ops is set only in generic_mtrr_ops.

Replace use_intel() with a single flag in cacheinfo.c which can be set when assigning generic_mtrr_ops to mtrr_if. This allows dropping use_intel_if from mtrr_ops, while preparing to decouple PAT from MTRR.

As another preparation for the PAT/MTRR decoupling, use one bit for MTRR control and one for PAT control. For now, set both bits together; this can be changed later.

As the new flag will be set only if mtrr_enabled is set, the test for mtrr_enabled can be dropped at some places.

[ bp: Massage commit message. ]

Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20221102074713.21493-4-jgross@suse.com
Signed-off-by: Borislav Petkov <bp@suse.de>
2022-11-10  ARM: imx_v6_v7_defconfig: Enable the cyttsp5 touchscreen  (Alistair Francis)
The imx6/7 based devices Remarkable 2, Kobo Clara HD, Kobo Libra H2O, Tolino Shine 3, Tolino Vision 5 all contain a Cypress TT2100 touchscreen so enable the corresponding driver. Signed-off-by: Alistair Francis <alistair@alistair23.me> Signed-off-by: Shawn Guo <shawnguo@kernel.org>
2022-11-10  s390: select ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP  (Gerald Schaefer)
Enable HUGETLB_PAGE_OPTIMIZE_VMEMMAP for s390. With this, vmemmap pages used to back struct pages for compound tail pages of hugetlb pages are freed and remapped to the compound head page frame as RO, see also Documentation/vm/vmemmap_dedup.rst.

For 1M hugetlb pages, this results in freeing 3 of 4 vmemmap pages, saving 12K of memory for each 1M hugetlb page (~1.2%). /sys/kernel/debug/kernel_page_tables will show the impact:

  ---[ vmemmap Area Start ]---
  [...]
  0x0000037202d84000-0x0000037202d85000     4K PTE RW NX
  0x0000037202d85000-0x0000037202d88000    12K PTE RO NX

For 2G hugetlb pages, this results in freeing 8191 of 8192 vmemmap pages, saving 32764K of memory for each 2G hugetlb page (~1.6%). /sys/kernel/debug/kernel_page_tables will show the impact:

  ---[ vmemmap Area Start ]---
  [...]
  0x000003720a000000-0x000003720a001000        4K PTE RW NX
  0x000003720a001000-0x000003720c000000    32764K PTE RO NX

The memory savings come with some costs:

- The vmemmap mapping for compound hugetlb pages is no longer a PMD mapping, but is split into 4K PTE mappings, and it will not be coalesced back to a PMD mapping after freeing hugetlb pages from the pool. Apart from the theoretical performance impact, this also (slightly) reduces the memory savings because of the additional 2K PTE pagetable allocations.
- Workloads using "on the fly" hugetlb allocations via "nr_overcommit_hugepages" instead of using the hugetlb pool via "nr_hugepages" will suffer from considerably increased fault handling time, see also the description in commit 78f39084b41d ("mm: hugetlb_vmemmap: add hugetlb_optimize_vmemmap sysctl").
- Freeing hugetlb pages from the pool will require re-allocation of the freed struct pages, and therefore needs some memory available to the kernel. This might fail in memory constrained scenarios.
- For the same reason, memory offline might fail even for ZONE_MOVABLE when hugetlb pages are present (but not for s390, since we do not support ARCH_ENABLE_HUGEPAGE_MIGRATION, and therefore cannot have hugetlb pages in ZONE_MOVABLE).
- General increased complexity and overhead in kernel handling of compound (head) pages.

Therefore, this feature is disabled by default, and has to be enabled explicitly either by adding the "hugetlb_free_vmemmap=on" kernel parameter, or at run-time via the "/proc/sys/vm/hugetlb_optimize_vmemmap" sysctl.

Acked-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
2022-11-09  arm64: dts: qcom: sc8280xp-x13s: Add LID switch  (Bjorn Andersson)
Add gpio-keys for exposing the LID switch state. Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org> Reviewed-by: Johan Hovold <johan+linaro@kernel.org> Tested-by: Johan Hovold <johan+linaro@kernel.org> Reviewed-by: Konrad Dybcio <konrad.dybcio@somainline.org> Tested-by: Steev Klimaszewski <steev@kali.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20220730193617.1688563-1-bjorn.andersson@linaro.org
2022-11-09  ARM: dts: qcom-msm8960-cdp: align TLMM pin configuration with DT schema  (Krzysztof Kozlowski)
DT schema expects TLMM pin configuration nodes to be named with '-state' suffix and their optional children with '-pins' suffix. Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Reviewed-by: Konrad Dybcio <konrad.dybcio@linaro.org> [bjorn: Fixed up expected typo in cs-pins function] Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20221109105140.48196-3-krzysztof.kozlowski@linaro.org
2022-11-09  ARM: dts: qcom-msm8960: use define for interrupt constants  (Krzysztof Kozlowski)
Replace GIC_PPI, GIC_SPI and interrupt type numbers with appropriate defines. Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Reviewed-by: Konrad Dybcio <konrad.dybcio@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20221109105140.48196-2-krzysztof.kozlowski@linaro.org
2022-11-09  arm64: dts: qcom: ipq8074: align TLMM pin configuration with DT schema  (Krzysztof Kozlowski)
DT schema expects TLMM pin configuration nodes to be named with '-state' suffix and their optional children with '-pins' suffix. Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Reviewed-by: Konrad Dybcio <konrad.dybcio@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20221108142357.67202-2-krzysztof.kozlowski@linaro.org
2022-11-09  arm64: dts: qcom: sc7280: Remove redundant soundwire property  (Srinivasa Rao Mandadapu)
Remove redundant and undocumented property qcom,port-offset in soundwire controller nodes. This patch is required to avoid dtbs_check errors with qcom,soundwire.yaml Fixes: 12ef689f09ab ("arm64: dts: qcom: sc7280: Add nodes for soundwire and va tx rx digital macro codecs") Signed-off-by: Srinivasa Rao Mandadapu <quic_srivasam@quicinc.com> Co-developed-by: Ratna Deepthi Kudaravalli <quic_rkudarav@quicinc.com> Signed-off-by: Ratna Deepthi Kudaravalli <quic_rkudarav@quicinc.com> Reviewed-by: Konrad Dybcio <konrad.dybcio@linaro.org> Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/1667918763-32445-4-git-send-email-quic_srivasam@quicinc.com
2022-11-09  arm64: dts: qcom: sm8250: Remove redundant soundwire property  (Srinivasa Rao Mandadapu)
Remove redundant and undocumented property qcom,port-offset in soundwire controller nodes. This patch is required to avoid dtbs_check errors with qcom,soundwire.yaml Fixes: 24f52ef0c4bf ("arm64: dts: qcom: sm8250: Add nodes for tx and rx macros with soundwire masters") Signed-off-by: Srinivasa Rao Mandadapu <quic_srivasam@quicinc.com> Co-developed-by: Ratna Deepthi Kudaravalli <quic_rkudarav@quicinc.com> Signed-off-by: Ratna Deepthi Kudaravalli <quic_rkudarav@quicinc.com> Reviewed-by: Konrad Dybcio <konrad.dybcio@linaro.org> Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/1667918763-32445-3-git-send-email-quic_srivasam@quicinc.com
2022-11-09  arm64: dts: qcom: Update soundwire secondary node names  (Srinivasa Rao Mandadapu)
Update the soundwire secondary nodes of the WSA speaker to match the dt-bindings pattern properties regular expression. This modification is required to avoid dtbs_check errors that occur with qcom,soundwire.yaml.

Signed-off-by: Srinivasa Rao Mandadapu <quic_srivasam@quicinc.com>
Co-developed-by: Ratna Deepthi Kudaravalli <quic_rkudarav@quicinc.com>
Signed-off-by: Ratna Deepthi Kudaravalli <quic_rkudarav@quicinc.com>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: Bjorn Andersson <andersson@kernel.org>
Link: https://lore.kernel.org/r/1667918763-32445-2-git-send-email-quic_srivasam@quicinc.com
2022-11-09  arm64: dts: qcom: sc7280-idp: don't modify &ipa twice  (Alex Elder)
In "sc7280-idp.dts", the IPA node is modified after being defined. However, that file includes "sc7280-idp.dtsi", which also modifies the IPA node (in the same way). This only needs to be done in "sc7280-idp.dtsi".

Signed-off-by: Alex Elder <elder@linaro.org>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Reviewed-by: Konrad Dybcio <konrad.dybcio@linaro.org>
Signed-off-by: Bjorn Andersson <andersson@kernel.org>
Link: https://lore.kernel.org/r/20221108201625.1220919-1-elder@linaro.org
2022-11-09  arm64: dts: qcom: Add power-domains property for apps_rsc  (Maulik Shah)
Add power-domains property which allows apps_rsc device to attach to cluster power domain on sm8150, sm8250, sm8350 and sm8450. Signed-off-by: Maulik Shah <quic_mkshah@quicinc.com> Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org> Tested-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> # SM8450 Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20221018152837.619426-4-ulf.hansson@linaro.org
2022-11-10  arm64: defconfig: Enable Tegra186 timer support  (Jon Hunter)
Enable Tegra186 timer support which is needed for Tegra186, Tegra194 and Tegra234 devices. Signed-off-by: Jon Hunter <jonathanh@nvidia.com> Signed-off-by: Thierry Reding <treding@nvidia.com>
2022-11-09  arm64: dts: broadcom: bcmbca: bcm6858: add TWD block  (Rafał Miłecki)
BCM6858 contains a TWD block with timer, watchdog, and reset subblocks. Describe it.

Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
Link: https://lore.kernel.org/r/20221103110015.21761-1-zajec5@gmail.com
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
2022-11-09  arm64: dts: broadcom: bcmbca: bcm4908: add TWD block timer  (Rafał Miłecki)
The BCM4908 TWD contains a block with 4 timers. Add a binding for it.

Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
Link: https://lore.kernel.org/r/20221103105316.21294-1-zajec5@gmail.com
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
2022-11-09  ARM: dts: bcm53016: Add devicetree for D-Link DWL-8610AP  (Linus Walleij)
This adds a device tree for the BCM53016-based D-Link DWL-8610AP access point wireless router.

The TRX-format partitions had to be named "firmware" due to an OpenWrt patch that only accepts parsing such nodes if they are named "firmware".

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Link: https://lore.kernel.org/r/20221019193449.3036010-2-linus.walleij@linaro.org
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>