path: root/arch
Age  Commit message  Author
2020-10-08powerpc/lmb-size: Use addr #size-cells value when fetching lmb-sizeAneesh Kumar K.V
Make it consistent with other usages. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201007114836.282468-5-aneesh.kumar@linux.ibm.com
2020-10-08powerpc/book3s64/radix: Make radix_mem_block_size 64bitAneesh Kumar K.V
Similar to commit 89c140bbaeee ("pseries: Fix 64 bit logical memory block panic") make sure different variables tracking lmb_size are updated to be 64 bit. Fixes: af9d00e93a4f ("powerpc/mm/radix: Create separate mappings for hot-plugged memory") Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201007114836.282468-4-aneesh.kumar@linux.ibm.com
2020-10-08powerpc/memhotplug: Make lmb size 64bitAneesh Kumar K.V
Similar to commit 89c140bbaeee ("pseries: Fix 64 bit logical memory block panic") make sure different variables tracking lmb_size are updated to be 64 bit. This was found by code audit. Cc: stable@vger.kernel.org Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201007114836.282468-3-aneesh.kumar@linux.ibm.com
2020-10-08powerpc/drmem: Make lmb_size 64 bitAneesh Kumar K.V
Similar to commit 89c140bbaeee ("pseries: Fix 64 bit logical memory block panic") make sure different variables tracking lmb_size are updated to be 64 bit. This was found by code audit. Cc: stable@vger.kernel.org Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Acked-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201007114836.282468-2-aneesh.kumar@linux.ibm.com
2020-10-08powerpc/security: Fix link stack flush instructionNicholas Piggin
The inline execution path for the hardware assisted branch flush instruction failed to set CTR to the correct value before bcctr, causing a crash when the feature is enabled. Fixes: 4d24e21cc694 ("powerpc/security: Allow for processors that flush the link stack using the special bcctr") Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201007080605.64423-1-npiggin@gmail.com
2020-10-07s390/pci: track whether util_str is valid in the zpci_devMatthew Rosato
We'll need to keep track of whether or not the byte string in util_str is valid and thus needs to be passed to a vfio-pci passthrough device. Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com> Acked-by: Niklas Schnelle <schnelle@linux.ibm.com> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Acked-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2020-10-07s390/pci: stash version in the zpci_devMatthew Rosato
In preparation for passing the info on to vfio-pci devices, stash the supported PCI version for the target device in the zpci_dev. Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com> Acked-by: Niklas Schnelle <schnelle@linux.ibm.com> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Acked-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2020-10-07s390/sie: fix typo in SIGP code descriptionJulian Wiedmann
s/ait address/at address Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2020-10-07s390/lib: fix kernel doc for memcmp()Julian Wiedmann
s/count/n Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2020-10-07s390/sclp: Add support for SCLP AP adapter config/deconfigHarald Freudenberger
Add support for AP bus adapter config and deconfig to the sclp core code. The code is statically built into the kernel when ZCRYPT is configured either as a module or with static support. This is the base functionality for having configure/deconfigure support in the AP bus and card code. Another patch will exploit this soon. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Suggested-by: Pierre Morel <pmorel@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2020-10-07x86/asm: Add an enqcmds() wrapper for the ENQCMDS instructionDave Jiang
Currently, the MOVDIR64B instruction is used to atomically submit 64-byte work descriptors to devices. Although it can encounter errors like device queue full, command not accepted, device not ready, etc. when writing to a device MMIO, MOVDIR64B cannot report back on errors from the device itself. This means that MOVDIR64B users need to separately interact with a device to see if a descriptor was successfully queued, which slows down device interactions. ENQCMD and ENQCMDS also atomically submit 64-byte work descriptors to devices. But, they *can* report back errors directly from the device, such as the device being busy, not enabled, or not supporting the command. This immediate feedback from the submission instruction itself reduces the number of interactions with the device and can greatly increase efficiency. ENQCMD can be used at any privilege level, but can effectively only submit work on behalf of the current process. ENQCMDS is a ring0-only instruction and can explicitly specify a process context instead of being tied to the current process or needing to reprogram the IA32_PASID MSR. Use ENQCMDS for work submission within the kernel because a Process Address ID (PASID) is set up to translate the kernel virtual address space. This PASID is provided to ENQCMDS from the descriptor structure submitted to the device and not retrieved from the IA32_PASID MSR, which is set up for the current user address space. See the Intel Software Developer’s Manual for more information on the instructions. [ bp: - Make operand constraints like movdir64b() because both insns are basically doing the same thing, more or less. - Fixup comments and cleanup. ] Link: https://lkml.kernel.org/r/20200924180041.34056-3-dave.jiang@intel.com Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Tony Luck <tony.luck@intel.com> Link: https://lkml.kernel.org/r/20201005151126.657029-3-dave.jiang@intel.com
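A minimal sketch of how a kernel driver might consume such a wrapper, assuming enqcmds() takes an MMIO portal plus a 64-byte descriptor and returns a negative value (e.g. -EAGAIN) when the device does not accept the submission; the helper name, retry bound, and delay policy below are illustrative, not taken from this patch:

    #include <linux/io.h>
    #include <linux/delay.h>
    #include <asm/special_insns.h>

    /* Hypothetical submission helper: retry a bounded number of times when
     * the device rejects the descriptor (e.g. its queue is full). */
    static int submit_desc(void __iomem *portal, const void *desc)
    {
            int retries = 10;       /* illustrative bound */
            int rc;

            do {
                    /* Ring-0 64-byte atomic submission; the return value
                     * reflects whether the device accepted the descriptor. */
                    rc = enqcmds(portal, desc);
                    if (!rc)
                            return 0;
                    udelay(1);
            } while (--retries);

            return rc;      /* device kept rejecting the work */
    }

The point of the example is the control flow: no second MMIO read is needed to learn whether the descriptor was queued, because the submission instruction itself carries the answer back.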
2020-10-07x86/asm: Carve out a generic movdir64b() helper for general usageDave Jiang
Carve out the MOVDIR64B inline asm primitive into a generic helper so that it can be used by other functions. Move it to special_insns.h and have iosubmit_cmds512() call it. [ bp: Massage commit message. ] Suggested-by: Michael Matz <matz@suse.de> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Tony Luck <tony.luck@intel.com> Reviewed-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20201005151126.657029-2-dave.jiang@intel.com
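For reference, a generic MOVDIR64B helper along these lines looks roughly like the sketch below; treat it as an illustration of the operand-constraint scheme described in the commit rather than the exact special_insns.h code:

    /* Sketch: atomically write one 64-byte descriptor from src to the
     * device portal at dst using MOVDIR64B (opcode 66 0F 38 F8 /r). */
    static inline void movdir64b(void __iomem *dst, const void *src)
    {
            const struct { char _[64]; } *__src = src;
            struct { char _[64]; } __iomem *__dst = dst;

            /* MOVDIR64B (%rdx), %rax: both operands are also passed as
             * memory constraints so the compiler knows the full 64 bytes
             * are accessed and does not reorder other accesses around
             * the instruction. */
            asm volatile(".byte 0x66, 0x0f, 0x38, 0xf8, 0x02"
                         : "+m" (*__dst)
                         : "m" (*__src), "a" (__dst), "d" (__src));
    }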
2020-10-07Merge branch 'for-next/late-arrivals' into for-next/coreWill Deacon
Late patches for 5.10: MTE selftests, minor KCSAN preparation and removal of some unused prototypes. (Amit Daniel Kachhap and others) * for-next/late-arrivals: arm64: random: Remove no longer needed prototypes arm64: initialize per-cpu offsets earlier kselftest/arm64: Check mte tagged user address in kernel kselftest/arm64: Verify KSM page merge for MTE pages kselftest/arm64: Verify all different mmap MTE options kselftest/arm64: Check forked child mte memory accessibility kselftest/arm64: Verify mte tag inclusion via prctl kselftest/arm64: Add utilities and a test to validate mte memory
2020-10-07arm64: random: Remove no longer needed prototypesAndre Przywara
Commit 9bceb80b3cc4 ("arm64: kaslr: Use standard early random function") removed the direct calls of the __arm64_rndr() and __early_cpu_has_rndr() functions, but left the dummy prototypes in the #else branch of the #ifdef CONFIG_ARCH_RANDOM guard. Remove the redundant prototypes, as they have no users outside of this header file. Signed-off-by: Andre Przywara <andre.przywara@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20201006194453.36519-1-andre.przywara@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2020-10-07powerpc/hv-gpci: Add sysfs files inside hv-gpci device to show cpumaskKajol Jain
This patch adds a cpumask attr to the hv_gpci PMU along with ABI documentation. The primary reason to expose the cpumask is for the perf tool, which can parse the driver's sysfs folder and understand the cpumask file. Having a cpumask file reduces the number of perf command-line parameters (it avoids the "-C" option in the perf tool command line). It also tells the user which CPU is currently used to retrieve the counter data. command:# cat /sys/devices/hv_gpci/cpumask 0 Signed-off-by: Kajol Jain <kjain@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201003074943.338618-5-kjain@linux.ibm.com
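The usual shape of such an attribute is a read-only sysfs show routine that prints the driver's cpumask; a sketch, with the mask variable name being illustrative:

    #include <linux/cpumask.h>
    #include <linux/device.h>

    static cpumask_t hv_gpci_cpumask;   /* CPU currently used for the HCALL */

    static ssize_t cpumask_show(struct device *dev,
                                struct device_attribute *attr, char *buf)
    {
            /* "true" selects the list format, e.g. "0", matching the
             * example output quoted in the commit message above. */
            return cpumap_print_to_pagebuf(true, buf, &hv_gpci_cpumask);
    }

    static DEVICE_ATTR_RO(cpumask);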
2020-10-07powerpc/perf/hv-gpci: Add cpu hotplug supportKajol Jain
This patch adds CPU hotplug functions to the hv_gpci PMU. A new cpuhp_state enum, "CPUHP_AP_PERF_POWERPC_HV_GPCI_ONLINE", is added. The online callback updates the cpumask only if it is empty, since the primary intention of adding hotplug support is to designate one CPU to make the HCALL that collects the counter data. The offline callback tests and clears the corresponding CPU in the cpumask and moves the designation to any other active CPU. Signed-off-by: Kajol Jain <kjain@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201003074943.338618-4-kjain@linux.ibm.com
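A sketch of the online/offline pairing described above; the callback and mask names are illustrative, while the cpuhp enum value is the one named in this patch:

    #include <linux/cpuhotplug.h>
    #include <linux/cpumask.h>

    static cpumask_t hv_gpci_cpumask;   /* same mask exported via sysfs */

    static int ppc_hv_gpci_cpu_online(unsigned int cpu)
    {
            /* Claim ownership of the HCALL only if no CPU owns it yet. */
            if (cpumask_empty(&hv_gpci_cpumask))
                    cpumask_set_cpu(cpu, &hv_gpci_cpumask);
            return 0;
    }

    static int ppc_hv_gpci_cpu_offline(unsigned int cpu)
    {
            unsigned int target;

            /* Nothing to do unless the departing CPU was the designated one. */
            if (!cpumask_test_and_clear_cpu(cpu, &hv_gpci_cpumask))
                    return 0;

            /* Hand the HCALL duty to any other online CPU. */
            target = cpumask_any_but(cpu_online_mask, cpu);
            if (target < nr_cpu_ids)
                    cpumask_set_cpu(target, &hv_gpci_cpumask);
            return 0;
    }

    static int hv_gpci_init_hotplug(void)
    {
            return cpuhp_setup_state(CPUHP_AP_PERF_POWERPC_HV_GPCI_ONLINE,
                                     "perf/powerpc/hv_gpci:online",
                                     ppc_hv_gpci_cpu_online,
                                     ppc_hv_gpci_cpu_offline);
    }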
2020-10-07powerpc/perf/hv-gpci: Fix starting index valueKajol Jain
Commit 9e9f60108423f ("powerpc/perf/{hv-gpci, hv-common}: generate requests with counters annotated") adds a framework for defining gpci counters. That patch sets the starting_index value to '0xffffffffffffffff', which is wrong because starting_index is only 32 bits wide. Because of this, trying to run an hv-gpci event fails. On a Power9 machine: command#: perf stat -e hv_gpci/system_tlbie_count_and_time_tlbie_instructions_issued/ -C 0 -I 1000 event syntax error: '..bie_count_and_time_tlbie_instructions_issued/' \___ value too big for format, maximum is 4294967295 This patch fixes the issue by changing the starting_index value to '0xffffffff'. After this patch: command#: perf stat -e hv_gpci/system_tlbie_count_and_time_tlbie_instructions_issued/ -C 0 -I 1000 1.000085786 1,024 hv_gpci/system_tlbie_count_and_time_tlbie_instructions_issued/ 2.000287818 1,024 hv_gpci/system_tlbie_count_and_time_tlbie_instructions_issued/ 2.439113909 17,408 hv_gpci/system_tlbie_count_and_time_tlbie_instructions_issued/ Fixes: 9e9f60108423 ("powerpc/perf/{hv-gpci, hv-common}: generate requests with counters annotated") Signed-off-by: Kajol Jain <kjain@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201003074943.338618-1-kjain@linux.ibm.com
2020-10-07powerpc/pseries/eeh: Fix use of uninitialised variableOliver O'Halloran
If the RTAS call to query the PE address for a device fails we jump to the err: label, where an error message is printed along with the return code. However, the printed return code comes from the "ret" variable, which isn't set at that point since the result was assigned to "addr" instead. Fix this by consistently using the "ret" variable for the result of the RTAS call helpers and dropping the "addr" local variable. Fixes: 98ba956f6a38 ("powerpc/pseries/eeh: Rework device EEH PE determination") Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201007040903.819081-2-oohall@gmail.com
2020-10-07powerpc/eeh: Delete eeh_pe->config_addrOliver O'Halloran
The eeh_pe->config_addr field was supposed to be removed in commit 35d64734b643 ("powerpc/eeh: Clean up PE addressing") which made it largely unused. Finish the job. Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201007040903.819081-1-oohall@gmail.com
2020-10-07Merge branches 'arm/allwinner', 'arm/mediatek', 'arm/renesas', 'arm/tegra', 'arm/qcom', 'arm/smmu', 'ppc/pamu', 'x86/amd', 'x86/vt-d' and 'core' into nextJoerg Roedel
2020-10-07x86/mce: Decode a kernel instruction to determine if it is copying from userTony Luck
All instructions copying data between kernel and user memory are tagged with either _ASM_EXTABLE_UA or _ASM_EXTABLE_CPY entries in the exception table. ex_fault_handler_type() returns EX_HANDLER_UACCESS for both of these. Recovery is only possible when the machine check was triggered on a read from user memory. In this case the same strategy for recovery applies as if the user had made the access in ring3. If the fault was in kernel memory while copying to user there is no current recovery plan. For MOV and MOVZ instructions a full decode of the instruction is done to find the source address. For MOVS instructions the source address is in the %rsi register. The function fault_in_kernel_space() determines whether the source address is kernel or user; upgrade it from "static" so it can be used here. Co-developed-by: Youquan Song <youquan.song@intel.com> Signed-off-by: Youquan Song <youquan.song@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20201006210910.21062-7-tony.luck@intel.com
2020-10-07x86/mce: Recover from poison found while copying from user spaceTony Luck
Existing kernel code can only recover from a machine check on code that is tagged in the exception table with a fault handling recovery path. Add two new fields in the task structure to pass information from machine check handler to the "task_work" that is queued to run before the task returns to user mode: + mce_vaddr: will be initialized to the user virtual address of the fault in the case where the fault occurred in the kernel copying data from a user address. This is so that kill_me_maybe() can provide that information to the user SIGBUS handler. + mce_kflags: copy of the struct mce.kflags needed by kill_me_maybe() to determine if mce_vaddr is applicable to this error. Add code to recover from a machine check while copying data from user space to the kernel. Action for this case is the same as if the user touched the poison directly; unmap the page and send a SIGBUS to the task. Use a new helper function to share common code between the "fault in user mode" case and the "fault while copying from user" case. New code paths will be activated by the next patch which sets MCE_IN_KERNEL_COPYIN. Suggested-by: Borislav Petkov <bp@alien8.de> Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20201006210910.21062-6-tony.luck@intel.com
2020-10-07x86/mce: Avoid tail copy when machine check terminated a copy from userTony Luck
In the page fault case it is ok to see if a few more unaligned bytes can be copied from the source address. Worst case is that the page fault will be triggered again. Machine checks are more serious. Just give up at the point where the main copy loop triggered the #MC and return from the copy code as if the copy succeeded. The machine check handler will use task_work_add() to make sure that the task is sent a SIGBUS. Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20201006210910.21062-5-tony.luck@intel.com
2020-10-07x86/mce: Add _ASM_EXTABLE_CPY for copy user accessYouquan Song
_ASM_EXTABLE_UA is a general exception entry to record the exception fixup for all exception spots between kernel and user space access. To enable recovery from machine checks while copying data from user addresses it is necessary to be able to distinguish the places that are looping copying data from those that copy a single byte/word/etc. Add a new macro _ASM_EXTABLE_CPY and use it in place of _ASM_EXTABLE_UA in the copy functions. Record the exception reason number in regs->ax at ex_handler_uaccess, which is used to check whether an MCE was triggered. The new fixup routine ex_handler_copy() is almost an exact copy of ex_handler_uaccess(). The difference is that it sets regs->ax to the trap number. Following patches use this to avoid trying to copy remaining bytes from the tail of the copy and possibly hitting the poison again. The new mce.kflags bit MCE_IN_KERNEL_COPYIN will be used by the mce_severity() calculation to indicate that a machine check is recoverable because the kernel was copying from user space. Signed-off-by: Youquan Song <youquan.song@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20201006210910.21062-4-tony.luck@intel.com
2020-10-07x86/mce: Provide method to find out the type of an exception handlerTony Luck
Avoid a proliferation of ex_has_*_handler() functions by having just one function that returns the type of the handler (if any). Drop the __visible attribute for this function. It is not called from assembler so the attribute is not necessary. Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20201006210910.21062-3-tony.luck@intel.com
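A sketch of what such a single classifier can look like, assuming the handler symbols named elsewhere in this series; the function name, the enum, and the exact set of handlers compared against are illustrative, not the exact upstream code:

    /* Sketch, in the spirit of arch/x86/mm/extable.c. */
    enum handler_type {
            EX_HANDLER_NONE,
            EX_HANDLER_FAULT,
            EX_HANDLER_UACCESS,
            EX_HANDLER_OTHER,
    };

    /* Classify the exception-table fixup (if any) registered for an IP. */
    static enum handler_type get_fault_handler_type(unsigned long ip)
    {
            const struct exception_table_entry *e;
            ex_handler_t handler;

            e = search_exception_tables(ip);
            if (!e)
                    return EX_HANDLER_NONE;

            handler = ex_fixup_handler(e);  /* decode the handler pointer */
            if (handler == ex_handler_fault)
                    return EX_HANDLER_FAULT;
            if (handler == ex_handler_uaccess || handler == ex_handler_copy)
                    return EX_HANDLER_UACCESS;
            return EX_HANDLER_OTHER;
    }

Callers such as the machine check severity code can then branch on the returned type instead of asking a separate ex_has_*_handler() question for each handler kind.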
2020-10-07x86/mce: Pass pointer to saved pt_regs to severity calculation routinesYouquan Song
New recovery features require additional information about processor state when a machine check occurred. Pass pt_regs down to the routines that need it. No functional change. Signed-off-by: Youquan Song <youquan.song@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20201006210910.21062-2-tony.luck@intel.com
2020-10-07x86/platform/uv: Update Copyrights to conform to HPE standardsMike Travis
Add Copyrights to those files that have been updated for UV5 changes. Signed-off-by: Mike Travis <mike.travis@hpe.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20201005203929.148656-14-mike.travis@hpe.com
2020-10-07x86/platform/uv: Update for UV5 NMI MMR changesMike Travis
The UV NMI MMR addresses and fields moved between UV4 and UV5 necessitating a rewrite of the UV NMI handler. Adjust references to accommodate those changes. Signed-off-by: Mike Travis <mike.travis@hpe.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Dimitri Sivanich <dimitri.sivanich@hpe.com> Reviewed-by: Steve Wahl <steve.wahl@hpe.com> Link: https://lkml.kernel.org/r/20201005203929.148656-13-mike.travis@hpe.com
2020-10-07x86/platform/uv: Update UV5 TSC checkingMike Travis
Update check of BIOS TSC sync status to include both possible "invalid" states provided by newer UV5 BIOS. Signed-off-by: Mike Travis <mike.travis@hpe.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Steve Wahl <steve.wahl@hpe.com> Link: https://lkml.kernel.org/r/20201005203929.148656-12-mike.travis@hpe.com
2020-10-07x86/platform/uv: Update node present countingMike Travis
The changes in the UV5 arch shrank the NODE PRESENT table to just 2x64 entries (128 total), so it now lives in two 64-bit MMRs instead of an array that is 64 bits deep. Adjust references when counting up the nodes present. Signed-off-by: Mike Travis <mike.travis@hpe.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Dimitri Sivanich <dimitri.sivanich@hpe.com> Reviewed-by: Steve Wahl <steve.wahl@hpe.com> Link: https://lkml.kernel.org/r/20201005203929.148656-11-mike.travis@hpe.com
2020-10-07x86/platform/uv: Update UV5 MMR references in UV GRUMike Travis
Make modifications to the GRU mappings to accommodate changes for UV5. Signed-off-by: Mike Travis <mike.travis@hpe.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Dimitri Sivanich <dimitri.sivanich@hpe.com> Reviewed-by: Steve Wahl <steve.wahl@hpe.com> Link: https://lkml.kernel.org/r/20201005203929.148656-10-mike.travis@hpe.com
2020-10-07x86/platform/uv: Adjust GAM MMR references affected by UV5 updatesMike Travis
Make modifications to the GAM MMR mappings to accommodate changes for UV5. Signed-off-by: Mike Travis <mike.travis@hpe.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Dimitri Sivanich <dimitri.sivanich@hpe.com> Reviewed-by: Steve Wahl <steve.wahl@hpe.com> Link: https://lkml.kernel.org/r/20201005203929.148656-9-mike.travis@hpe.com
2020-10-07x86/platform/uv: Update MMIOH references based on new UV5 MMRsMike Travis
Make modifications to the MMIOH mappings to accommodate changes for UV5. [ Fix W=1 build warnings. ] Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Mike Travis <mike.travis@hpe.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Steve Wahl <steve.wahl@hpe.com> Link: https://lkml.kernel.org/r/20201005203929.148656-8-mike.travis@hpe.com
2020-10-07x86/platform/uv: Add and decode Arch Type in UVsystabMike Travis
When the UV BIOS starts the kernel it passes the UVsystab info struct to the kernel which contains information elements more specific than ACPI, and generally pertinent only to the MMRs. These are read only fields so information is passed one way only. A new field starting with UV5 is the UV architecture type so the ACPI OEM_ID field can be used for other purposes going forward. The UV Arch Type selects the entirety of the MMRs available, with their addresses and fields defined in uv_mmrs.h. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Mike Travis <mike.travis@hpe.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Dimitri Sivanich <dimitri.sivanich@hpe.com> Reviewed-by: Steve Wahl <steve.wahl@hpe.com> Link: https://lkml.kernel.org/r/20201005203929.148656-7-mike.travis@hpe.com
2020-10-07x86/platform/uv: Add UV5 direct referencesMike Travis
Add new references to UV5 (and UVY class) system MMR addresses and fields primarily caused by the expansion from 46 to 52 bits of physical memory address. Signed-off-by: Mike Travis <mike.travis@hpe.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Dimitri Sivanich <dimitri.sivanich@hpe.com> Reviewed-by: Steve Wahl <steve.wahl@hpe.com> Link: https://lkml.kernel.org/r/20201005203929.148656-6-mike.travis@hpe.com
2020-10-07x86/platform/uv: Update UV MMRs for UV5Mike Travis
Update UV MMRs in uv_mmrs.h for UV5 based on Verilog output from the UV Hub hardware design files. This is the next UV architecture with a new class (UVY) being defined for 52 bit physical address masks. Uses a bitmask for UV arch identification so a single test can cover multiple versions. Includes other adjustments to match the uv_mmrs.h file to keep from encountering compile errors. New UV5 functionality is added in the patches that follow. [ Fix W=1 build warnings. ] Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Mike Travis <mike.travis@hpe.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Steve Wahl <steve.wahl@hpe.com> Link: https://lkml.kernel.org/r/20201005203929.148656-5-mike.travis@hpe.com
2020-10-07x86/platform/uv: Remove SCIR MMR references for UV systemsMike Travis
UV class systems no longer use System Controller for monitoring of CPU activity provided by this driver. Other methods have been developed for BIOS and the management controller (BMC). Remove that supporting code. Signed-off-by: Mike Travis <mike.travis@hpe.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Dimitri Sivanich <dimitri.sivanich@hpe.com> Link: https://lkml.kernel.org/r/20201005203929.148656-3-mike.travis@hpe.com
2020-10-07x86/platform/uv: Remove UV BAU TLB Shootdown HandlerMike Travis
The Broadcast Assist Unit (BAU) TLB shootdown handler is being rewritten to become the UV BAU APIC driver. It is designed to speed up sending IPIs to selective CPUs within the system. Remove the current TLB shootdown handler (tlb_uv.c) file and a couple of kernel hooks in the interim. Signed-off-by: Mike Travis <mike.travis@hpe.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Dimitri Sivanich <dimitri.sivanich@hpe.com> Link: https://lkml.kernel.org/r/20201005203929.148656-2-mike.travis@hpe.com
2020-10-07arm/arm64: xen: Fix to convert percpu address to gfn correctlyMasami Hiramatsu
Use per_cpu_ptr_to_phys() instead of virt_to_phys() for per-cpu address conversion. In xen_starting_cpu(), per-cpu xen_vcpu_info address is converted to gfn by virt_to_gfn() macro. However, since the virt_to_gfn(v) assumes the given virtual address is in linear mapped kernel memory area, it can not convert the per-cpu memory if it is allocated on vmalloc area. This depends on CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK. If it is enabled, the first chunk of percpu memory is linear mapped. In the other case, that is allocated from vmalloc area. Moreover, if the first chunk of percpu has run out until allocating xen_vcpu_info, it will be allocated on the 2nd chunk, which is based on kernel memory or vmalloc memory (depends on CONFIG_NEED_PER_CPU_KM). Without this fix and kernel configured to use vmalloc area for the percpu memory, the Dom0 kernel will fail to boot with following errors. [ 0.466172] Xen: initializing cpu0 [ 0.469601] ------------[ cut here ]------------ [ 0.474295] WARNING: CPU: 0 PID: 1 at arch/arm64/xen/../../arm/xen/enlighten.c:153 xen_starting_cpu+0x160/0x180 [ 0.484435] Modules linked in: [ 0.487565] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.9.0-rc4+ #4 [ 0.493895] Hardware name: Socionext Developer Box (DT) [ 0.499194] pstate: 00000005 (nzcv daif -PAN -UAO BTYPE=--) [ 0.504836] pc : xen_starting_cpu+0x160/0x180 [ 0.509263] lr : xen_starting_cpu+0xb0/0x180 [ 0.513599] sp : ffff8000116cbb60 [ 0.516984] x29: ffff8000116cbb60 x28: ffff80000abec000 [ 0.522366] x27: 0000000000000000 x26: 0000000000000000 [ 0.527754] x25: ffff80001156c000 x24: fffffdffbfcdb600 [ 0.533129] x23: 0000000000000000 x22: 0000000000000000 [ 0.538511] x21: ffff8000113a99c8 x20: ffff800010fe4f68 [ 0.543892] x19: ffff8000113a9988 x18: 0000000000000010 [ 0.549274] x17: 0000000094fe0f81 x16: 00000000deadbeef [ 0.554655] x15: ffffffffffffffff x14: 0720072007200720 [ 0.560037] x13: 0720072007200720 x12: 0720072007200720 [ 0.565418] x11: 0720072007200720 x10: 0720072007200720 [ 0.570801] x9 : ffff8000100fbdc0 x8 : ffff800010715208 [ 0.576182] x7 : 0000000000000054 x6 : ffff00001b790f00 [ 0.581564] x5 : ffff800010bbf880 x4 : 0000000000000000 [ 0.586945] x3 : 0000000000000000 x2 : ffff80000abec000 [ 0.592327] x1 : 000000000000002f x0 : 0000800000000000 [ 0.597716] Call trace: [ 0.600232] xen_starting_cpu+0x160/0x180 [ 0.604309] cpuhp_invoke_callback+0xac/0x640 [ 0.608736] cpuhp_issue_call+0xf4/0x150 [ 0.612728] __cpuhp_setup_state_cpuslocked+0x128/0x2c8 [ 0.618030] __cpuhp_setup_state+0x84/0xf8 [ 0.622192] xen_guest_init+0x324/0x364 [ 0.626097] do_one_initcall+0x54/0x250 [ 0.630003] kernel_init_freeable+0x12c/0x2c8 [ 0.634428] kernel_init+0x1c/0x128 [ 0.637988] ret_from_fork+0x10/0x18 [ 0.641635] ---[ end trace d95b5309a33f8b27 ]--- [ 0.646337] ------------[ cut here ]------------ [ 0.651005] kernel BUG at arch/arm64/xen/../../arm/xen/enlighten.c:158! 
[ 0.657697] Internal error: Oops - BUG: 0 [#1] SMP [ 0.662548] Modules linked in: [ 0.665676] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W 5.9.0-rc4+ #4 [ 0.673398] Hardware name: Socionext Developer Box (DT) [ 0.678695] pstate: 00000005 (nzcv daif -PAN -UAO BTYPE=--) [ 0.684338] pc : xen_starting_cpu+0x178/0x180 [ 0.688765] lr : xen_starting_cpu+0x144/0x180 [ 0.693188] sp : ffff8000116cbb60 [ 0.696573] x29: ffff8000116cbb60 x28: ffff80000abec000 [ 0.701955] x27: 0000000000000000 x26: 0000000000000000 [ 0.707344] x25: ffff80001156c000 x24: fffffdffbfcdb600 [ 0.712718] x23: 0000000000000000 x22: 0000000000000000 [ 0.718107] x21: ffff8000113a99c8 x20: ffff800010fe4f68 [ 0.723481] x19: ffff8000113a9988 x18: 0000000000000010 [ 0.728863] x17: 0000000094fe0f81 x16: 00000000deadbeef [ 0.734245] x15: ffffffffffffffff x14: 0720072007200720 [ 0.739626] x13: 0720072007200720 x12: 0720072007200720 [ 0.745008] x11: 0720072007200720 x10: 0720072007200720 [ 0.750390] x9 : ffff8000100fbdc0 x8 : ffff800010715208 [ 0.755771] x7 : 0000000000000054 x6 : ffff00001b790f00 [ 0.761153] x5 : ffff800010bbf880 x4 : 0000000000000000 [ 0.766534] x3 : 0000000000000000 x2 : 00000000deadbeef [ 0.771916] x1 : 00000000deadbeef x0 : ffffffffffffffea [ 0.777304] Call trace: [ 0.779819] xen_starting_cpu+0x178/0x180 [ 0.783898] cpuhp_invoke_callback+0xac/0x640 [ 0.788325] cpuhp_issue_call+0xf4/0x150 [ 0.792317] __cpuhp_setup_state_cpuslocked+0x128/0x2c8 [ 0.797619] __cpuhp_setup_state+0x84/0xf8 [ 0.801779] xen_guest_init+0x324/0x364 [ 0.805683] do_one_initcall+0x54/0x250 [ 0.809590] kernel_init_freeable+0x12c/0x2c8 [ 0.814016] kernel_init+0x1c/0x128 [ 0.817583] ret_from_fork+0x10/0x18 [ 0.821226] Code: d0006980 f9427c00 cb000300 17ffffea (d4210000) [ 0.827415] ---[ end trace d95b5309a33f8b28 ]--- [ 0.832076] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b [ 0.839815] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b ]--- Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org> Link: https://lore.kernel.org/r/160196697165.60224.17470743378683334995.stgit@devnote2 Signed-off-by: Juergen Gross <jgross@suse.com>
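A sketch of the address conversion the fix describes, using per_cpu_ptr_to_phys() so the result is correct whether the per-cpu area is linear-mapped or vmalloc-backed; the helper and variable names are illustrative:

    #include <linux/percpu.h>
    #include <xen/page.h>
    #include <xen/interface/xen.h>

    /* Derive the gfn of a CPU's vcpu_info from its physical address rather
     * than via virt_to_gfn(), which assumes a linear-map virtual address. */
    static xen_pfn_t vcpu_info_gfn(struct vcpu_info __percpu *vcpu_info, int cpu)
    {
            struct vcpu_info *v = per_cpu_ptr(vcpu_info, cpu);

            return per_cpu_ptr_to_phys(v) >> XEN_PAGE_SHIFT;
    }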
2020-10-06riscv: Fixup bootup failure with HARDENED_USERCOPYGuo Ren
6184358da000 ("riscv: Fixup static_obj() fail") attempted to elide a lockdep failure by rearranging our kernel image to place all initdata within [_stext, _end], thus triggering lockdep to treat these as static objects. These objects are released and eventually reallocated, causing check_kernel_text_object() to trigger a BUG(). This backs out the change to make [_stext, _end] all-encompassing, instead just moving initdata. This results in initdata being outside of [__init_begin, __init_end], which means initdata can't be freed. Link: https://lore.kernel.org/linux-riscv/1593266228-61125-1-git-send-email-guoren@kernel.org/T/#t Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Reported-by: Aurelien Jarno <aurelien@aurel32.net> Tested-by: Aurelien Jarno <aurelien@aurel32.net> [Palmer: Clean up commit text] Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-10-06Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linuxLinus Torvalds
Pull arm64 fix from Catalin Marinas: "Fix a kernel panic in the AES crypto code caused by a BR tail call not matching the target BTI instruction (when branch target identification is enabled)" * tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: crypto: arm64: Use x16 with indirect branch to bti_c
2020-10-06crypto: arm64: Use x16 with indirect branch to bti_cJeremy Linton
The AES code uses a 'br x7' as part of a function called by a macro. That branch needs a bti_j as a target. This results in a panic as seen below. Using x16 (or x17) with an indirect branch keeps the target bti_c. Bad mode in Synchronous Abort handler detected on CPU1, code 0x34000003 -- BTI CPU: 1 PID: 265 Comm: cryptomgr_test Not tainted 5.8.11-300.fc33.aarch64 #1 pstate: 20400c05 (nzCv daif +PAN -UAO BTYPE=j-) pc : aesbs_encrypt8+0x0/0x5f0 [aes_neon_bs] lr : aesbs_xts_encrypt+0x48/0xe0 [aes_neon_bs] sp : ffff80001052b730 aesbs_encrypt8+0x0/0x5f0 [aes_neon_bs] __xts_crypt+0xb0/0x2dc [aes_neon_bs] xts_encrypt+0x28/0x3c [aes_neon_bs] crypto_skcipher_encrypt+0x50/0x84 simd_skcipher_encrypt+0xc8/0xe0 crypto_skcipher_encrypt+0x50/0x84 test_skcipher_vec_cfg+0x224/0x5f0 test_skcipher+0xbc/0x120 alg_test_skcipher+0xa0/0x1b0 alg_test+0x3dc/0x47c cryptomgr_test+0x38/0x60 Fixes: 0e89640b640d ("crypto: arm64 - Use modern annotations for assembly functions") Cc: <stable@vger.kernel.org> # 5.6.x- Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Suggested-by: Dave P Martin <Dave.Martin@arm.com> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20201006163326.2780619-1-jeremy.linton@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-10-06Merge tag 'v5.9-rc5' into asoc-5.10Mark Brown
Linux 5.9-rc5
2020-10-06perf/x86: Fix n_metric for cancelled txnPeter Zijlstra
When a group that has TopDown members fails to be scheduled, any later TopDown groups will not return valid values. Here is an example: a background perf that occupies all the GP counters and fixed counter 1. $perf stat -e "{cycles,cycles,cycles,cycles,cycles,cycles,cycles, cycles,cycles}:D" -a A user monitors a TopDown group. It works well, because the fixed counter 3 and the PERF_METRICS are available. $perf stat -x, --topdown -- ./workload retiring,bad speculation,frontend bound,backend bound, 18.0,16.1,40.4,25.5, Then the user tries to monitor a group that has TopDown members. Because of the cycles event, the group fails to be scheduled. $perf stat -x, -e '{slots,topdown-retiring,topdown-be-bound, topdown-fe-bound,topdown-bad-spec,cycles}' -- ./workload <not counted>,,slots,0,0.00,, <not counted>,,topdown-retiring,0,0.00,, <not counted>,,topdown-be-bound,0,0.00,, <not counted>,,topdown-fe-bound,0,0.00,, <not counted>,,topdown-bad-spec,0,0.00,, <not counted>,,cycles,0,0.00,, The user tries to monitor a TopDown group again. It doesn't work anymore. $perf stat -x, --topdown -- ./workload ,,,,, In a txn, cancel_txn() truncates the event_list for a cancelled group and updates the number of events added in the transaction. However, the number of TopDown events added in this transaction is not updated. The kernel will probably fail to add new Topdown events. Fixes: 7b2c05a15d29 ("perf/x86/intel: Generic support for hardware TopDown metrics") Reported-by: Andi Kleen <ak@linux.intel.com> Reported-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Kan Liang <kan.liang@linux.intel.com> Link: https://lkml.kernel.org/r/20201005082611.GH2628@hirez.programming.kicks-ass.net
2020-10-06perf/x86: Fix n_pair for cancelled txnPeter Zijlstra
Kan reported that n_metric gets corrupted for cancelled transactions; a similar issue exists for n_pair for AMD's Large Increment thing. The problem was confirmed and confirmed fixed by Kim using: sudo perf stat -e "{cycles,cycles,cycles,cycles}:D" -a sleep 10 & # should succeed: sudo perf stat -e "{fp_ret_sse_avx_ops.all}:D" -a workload # should fail: sudo perf stat -e "{fp_ret_sse_avx_ops.all,fp_ret_sse_avx_ops.all,cycles}:D" -a workload # previously failed, now succeeds with this patch: sudo perf stat -e "{fp_ret_sse_avx_ops.all}:D" -a workload Fixes: 5738891229a2 ("perf/x86/amd: Add support for Large Increment per Cycle Events") Reported-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Kim Phillips <kim.phillips@amd.com> Link: https://lkml.kernel.org/r/20201005082516.GG2628@hirez.programming.kicks-ass.net
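Both fixes amount to the same bookkeeping rule: whatever per-type counts (n_metric for TopDown, n_pair for Large Increment) a transaction bumps while adding events must be rolled back when the transaction is cancelled. A sketch of the idea; the n_txn_* field names stand in for the per-transaction counts and are illustrative rather than the exact upstream code:

    /* Sketch: undo everything the cancelled transaction added, including
     * the special-purpose counts, not just n_added/n_events. */
    static void x86_pmu_cancel_txn(struct pmu *pmu)
    {
            struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
            unsigned int txn_flags = cpuc->txn_flags;

            cpuc->txn_flags = 0;
            if (txn_flags & ~PERF_PMU_TXN_ADD)
                    return;

            /* Truncate the collected array by the events added in this txn. */
            __this_cpu_sub(cpu_hw_events.n_added, cpuc->n_txn);
            __this_cpu_sub(cpu_hw_events.n_events, cpuc->n_txn);
            /* ...and roll back the per-transaction metric/pair counts too. */
            __this_cpu_sub(cpu_hw_events.n_pair, cpuc->n_txn_pair);
            __this_cpu_sub(cpu_hw_events.n_metric, cpuc->n_txn_metric);

            perf_pmu_enable(pmu);
    }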
2020-10-06pseries/hotplug-memory: hot-add: skip redundant LMB lookupScott Cheloha
During memory hot-add, dlpar_add_lmb() calls memory_add_physaddr_to_nid() to determine which node id (nid) to use when later calling __add_memory(). This is wasteful. On pseries, memory_add_physaddr_to_nid() finds an appropriate nid for a given address by looking up the LMB containing the address and then passing that LMB to of_drconf_to_nid_single() to get the nid. In dlpar_add_lmb() we get this address from the LMB itself. In short, we have a pointer to an LMB and then we are searching for that LMB *again* in order to find its nid. If we call of_drconf_to_nid_single() directly from dlpar_add_lmb() we can skip the redundant lookup. The only error handling we need to duplicate from memory_add_physaddr_to_nid() is the fallback to the default nid when of_drconf_to_nid_single() returns -1 (NUMA_NO_NODE) or an invalid nid. Skipping the extra lookup makes hot-add operations faster, especially on machines with many LMBs. Consider an LPAR with 126976 LMBs. In one test, hot-adding 126000 LMBs on an unpatched kernel took ~3.5 hours while a patched kernel completed the same operation in ~2 hours: Unpatched (12450 seconds): Sep 9 04:06:31 ltc-brazos1 drmgr[810169]: drmgr: -c mem -a -q 126000 Sep 9 04:06:31 ltc-brazos1 kernel: pseries-hotplug-mem: Attempting to hot-add 126000 LMB(s) [...] Sep 9 07:34:01 ltc-brazos1 kernel: pseries-hotplug-mem: Memory at 20000000 (drc index 80000002) was hot-added Patched (7065 seconds): Sep 8 21:49:57 ltc-brazos1 drmgr[877703]: drmgr: -c mem -a -q 126000 Sep 8 21:49:57 ltc-brazos1 kernel: pseries-hotplug-mem: Attempting to hot-add 126000 LMB(s) [...] Sep 8 23:27:42 ltc-brazos1 kernel: pseries-hotplug-mem: Memory at 20000000 (drc index 80000002) was hot-added It should be noted that the speedup grows more substantial when hot-adding LMBs at the end of the drconf range. This is because we are skipping a linear LMB search. To see the distinction, consider a smaller hot-add test on the same LPAR. 
A perf-stat run with 10 iterations showed that hot-adding 4096 LMBs completed less than 1 second faster on a patched kernel: Unpatched: Performance counter stats for 'drmgr -c mem -a -q 4096' (10 runs): 104,753.42 msec task-clock # 0.992 CPUs utilized ( +- 0.55% ) 4,708 context-switches # 0.045 K/sec ( +- 0.69% ) 2,444 cpu-migrations # 0.023 K/sec ( +- 1.25% ) 394 page-faults # 0.004 K/sec ( +- 0.22% ) 445,902,503,057 cycles # 4.257 GHz ( +- 0.55% ) (66.67%) 8,558,376,740 stalled-cycles-frontend # 1.92% frontend cycles idle ( +- 0.88% ) (49.99%) 300,346,181,651 stalled-cycles-backend # 67.36% backend cycles idle ( +- 0.76% ) (50.01%) 258,091,488,691 instructions # 0.58 insn per cycle # 1.16 stalled cycles per insn ( +- 0.22% ) (66.67%) 70,568,169,256 branches # 673.660 M/sec ( +- 0.17% ) (50.01%) 3,100,725,426 branch-misses # 4.39% of all branches ( +- 0.20% ) (49.99%) 105.583 +- 0.589 seconds time elapsed ( +- 0.56% ) Patched: Performance counter stats for 'drmgr -c mem -a -q 4096' (10 runs): 104,055.69 msec task-clock # 0.993 CPUs utilized ( +- 0.32% ) 4,606 context-switches # 0.044 K/sec ( +- 0.20% ) 2,463 cpu-migrations # 0.024 K/sec ( +- 0.93% ) 394 page-faults # 0.004 K/sec ( +- 0.25% ) 442,951,129,921 cycles # 4.257 GHz ( +- 0.32% ) (66.66%) 8,710,413,329 stalled-cycles-frontend # 1.97% frontend cycles idle ( +- 0.47% ) (50.06%) 299,656,905,836 stalled-cycles-backend # 67.65% backend cycles idle ( +- 0.39% ) (50.02%) 252,731,168,193 instructions # 0.57 insn per cycle # 1.19 stalled cycles per insn ( +- 0.20% ) (66.66%) 68,902,851,121 branches # 662.173 M/sec ( +- 0.13% ) (49.94%) 3,100,242,882 branch-misses # 4.50% of all branches ( +- 0.15% ) (49.98%) 104.829 +- 0.325 seconds time elapsed ( +- 0.31% ) This is consistent. An add-by-count hot-add operation adds LMBs greedily, so LMBs near the start of the drconf range are considered first. On an otherwise idle LPAR with so many LMBs we would expect to find the LMBs we need near the start of the drconf range, hence the smaller speedup. Signed-off-by: Scott Cheloha <cheloha@linux.ibm.com> Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200916145122.3408129-1-cheloha@linux.ibm.com
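A sketch of the shortened path, with the helper name and fallback handling modelled on the description above rather than copied from the upstream diff:

    /* Sketch: we already hold the LMB, so ask for its nid directly instead
     * of re-searching the drconf range via memory_add_physaddr_to_nid(). */
    static int lmb_to_nid(struct drmem_lmb *lmb)
    {
            int nid = of_drconf_to_nid_single(lmb);

            /* Same fallback memory_add_physaddr_to_nid() would have applied. */
            if (nid < 0 || !node_possible(nid))
                    nid = first_online_node;

            return nid;
    }

dlpar_add_lmb() can then pass this nid straight to __add_memory(), turning a per-LMB linear search into a constant-time lookup.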
2020-10-06powerpc/rtas: Restrict RTAS requests from userspaceAndrew Donnellan
A number of userspace utilities depend on making calls to RTAS to retrieve information and update various things. The existing API through which we expose RTAS to userspace exposes more RTAS functionality than we actually need, through the sys_rtas syscall, which allows root (or anyone with CAP_SYS_ADMIN) to make any RTAS call they want with arbitrary arguments. Many RTAS calls take the address of a buffer as an argument, and it's up to the caller to specify the physical address of the buffer as an argument. We allocate a buffer (the "RMO buffer") in the Real Memory Area that RTAS can access, and then expose the physical address and size of this buffer in /proc/powerpc/rtas/rmo_buffer. Userspace is expected to read this address, poke at the buffer using /dev/mem, and pass an address in the RMO buffer to the RTAS call. However, there's nothing stopping the caller from specifying whatever address they want in the RTAS call, and it's easy to construct a series of RTAS calls that can overwrite arbitrary bytes (even without /dev/mem access). Additionally, there are some RTAS calls that do potentially dangerous things and for which there are no legitimate userspace use cases. In the past, this would not have been a particularly big deal as it was assumed that root could modify all system state freely, but with Secure Boot and lockdown we need to care about this. We can't fundamentally change the ABI at this point, however we can address this by implementing a filter that checks RTAS calls against a list of permitted calls and forces the caller to use addresses within the RMO buffer. The list is based off the list of calls that are used by the librtas userspace library, and has been tested with a number of existing userspace RTAS utilities. For compatibility with any applications we are not aware of that require other calls, the filter can be turned off at build time. Cc: stable@vger.kernel.org Reported-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Andrew Donnellan <ajd@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200820044512.7543-1-ajd@linux.ibm.com
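A sketch of the filtering idea described above; the allow-list entry layout, the example entries, and helpers such as in_rmo_buf() are illustrative, not the exact upstream table or code:

    struct rtas_filter {
            const char *name;   /* RTAS call name, e.g. "get-time-of-day" */
            int token;          /* resolved at boot from the device tree */
            int buf_idx;        /* argument index that must point into the
                                   RMO buffer, or -1 if no buffer is taken */
    };

    static struct rtas_filter rtas_filters[] = {
            { "get-time-of-day",            -1, -1 },
            { "ibm,get-system-parameter",   -1,  0 },
            /* ... */
    };

    /* Return true if the sys_rtas request must be rejected. */
    static bool block_rtas_call(int token, struct rtas_args *args)
    {
            int i;

            for (i = 0; i < ARRAY_SIZE(rtas_filters); i++) {
                    struct rtas_filter *f = &rtas_filters[i];

                    if (token != f->token)
                            continue;
                    if (f->buf_idx >= 0 &&
                        !in_rmo_buf(be32_to_cpu(args->args[f->buf_idx])))
                            return true;    /* buffer outside the RMO buffer */
                    return false;           /* allowed call, sane arguments */
            }

            return true;    /* not on the allow-list */
    }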
2020-10-06powerpc/perf: Exclude pmc5/6 from the irrelevant PMU group constraintsAthira Rajeev
The PMU counter support functions enforce event constraints for a group of events to check whether all events in the group can be monitored. For event codes using PMC5 and PMC6 (500fa and 600f4 respectively), not all constraints are applicable, such as the threshold or sample bits. But the current code includes PMC5 and PMC6 in some group constraints (like the IC_DC qualifier bits) that do not actually apply, and hence those events are not counted when scheduled along with a group of other events. The patch fixes this by excluding PMC5/6 from constraints which are not relevant for them. Fixes: 7ffd948 ("powerpc/perf: factor out power8 pmu functions") Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Reviewed-by: Madhavan Srinivasan <maddy@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1600672204-1610-1-git-send-email-atrajeev@linux.vnet.ibm.com
2020-10-06powerpc/smp: Optimize update_coregroup_maskSrikar Dronamraju
All threads of an SMT4/SMT8 core are either part of a CPU's coregroup mask or outside the coregroup. Use this relation to reduce the number of iterations needed to find all the CPUs that share the same coregroup. Use a temporary mask to iterate through the CPUs that may share the coregroup mask. Also, instead of setting one CPU at a time into cpu_coregroup_mask, copy the SMT4/SMT8 submask in one shot. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200921095653.9701-12-srikar@linux.vnet.ibm.com
2020-10-06powerpc/smp: Move coregroup mask updation to a new functionSrikar Dronamraju
Move the logic for updating the coregroup mask of a CPU into its own function. This will help in reworking the updating of the coregroup mask in a subsequent patch. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200921095653.9701-11-srikar@linux.vnet.ibm.com