2020-09-28  firmware: arm_sdei: Retrieve event number from event instance  (Gavin Shan)
In sdei_event_create(), the event number is retrieved from the variable @event_num for the shared event. The event number is also stored in the event instance, so we can fetch it from there instead, similar to what we already do for the private event. Signed-off-by: Gavin Shan <gshan@redhat.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Acked-by: James Morse <james.morse@arm.com> Link: https://lore.kernel.org/r/20200922130423.10173-4-gshan@redhat.com Signed-off-by: Will Deacon <will@kernel.org>
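The change itself is small; a sketch of the shared-event branch after the patch (simplified, field names taken from context):

    if (event->type == SDEI_EVENT_TYPE_SHARED) {
        reg = kzalloc(sizeof(*reg), GFP_KERNEL);
        if (!reg) {
            err = -ENOMEM;
            goto fail;
        }

        /* Take the number from the event we just set up,
         * matching the private-event path. */
        reg->event_num = event->event_num;
    }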
2020-09-28  firmware: arm_sdei: Common block for failing path in sdei_event_create()  (Gavin Shan)
There are multiple calls of kfree(event) in the failing path of sdei_event_create() to free the SDEI event. Every time more code is added to the failing path, another call has to be added, and it is easy to miss one and introduce a memory leak. Introduce a common block for the failing path in sdei_event_create() to resolve the issue. This shouldn't cause functional changes. Signed-off-by: Gavin Shan <gshan@redhat.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Acked-by: James Morse <james.morse@arm.com> Link: https://lore.kernel.org/r/20200922130423.10173-3-gshan@redhat.com Signed-off-by: Will Deacon <will@kernel.org>
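The result is the usual kernel goto-unwind shape; a minimal sketch (not the exact diff; helper names assumed from the driver):

    static struct sdei_event *sdei_event_create(u32 event_num,
                                                sdei_event_callback *cb,
                                                void *cb_arg)
    {
        struct sdei_event *event;
        u64 result;
        int err;

        event = kzalloc(sizeof(*event), GFP_KERNEL);
        if (!event) {
            err = -ENOMEM;
            goto fail;
        }

        err = sdei_api_event_get_info(event_num, SDEI_EVENT_INFO_EV_TYPE,
                                      &result);
        if (err)
            goto fail;
        event->type = result;

        /* ... further setup; every failure branches to 'fail' ... */

        return event;

    fail:
        kfree(event);   /* kfree(NULL) is a no-op, so this is safe */
        return ERR_PTR(err);
    }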
2020-09-28  firmware: arm_sdei: Remove sdei_is_err()  (Gavin Shan)
sdei_is_err() is only called by sdei_to_linux_errno(), and the logic of checking the error number is common to both, so they can be neatly combined. This removes sdei_is_err() and folds its logic into sdei_to_linux_errno(). The assignment of @err to zero in invoke_sdei_fn() is also dropped because it's always overridden afterwards. This shouldn't cause functional changes. Signed-off-by: Gavin Shan <gshan@redhat.com> Reviewed-by: James Morse <james.morse@arm.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20200922130423.10173-2-gshan@redhat.com Signed-off-by: Will Deacon <will@kernel.org>
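After the patch the whole mapping lives in one function; a sketch of the combined form (the SDEI_* constants are from include/uapi/linux/arm_sdei.h; the exact errno mapping is illustrative):

    static int sdei_to_linux_errno(unsigned long sdei_err)
    {
        switch (sdei_err) {
        case SDEI_NOT_SUPPORTED:
            return -EOPNOTSUPP;
        case SDEI_INVALID_PARAMETERS:
            return -EINVAL;
        case SDEI_DENIED:
            return -EPERM;
        case SDEI_PENDING:
            return -EINPROGRESS;
        case SDEI_OUT_OF_RESOURCE:
            return -ENOMEM;
        }

        /* Anything else is a successful (non-error) return */
        return 0;
    }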
2020-09-28  arm_pmu: arm64: Use NMIs for PMU  (Julien Thierry)
Add required PMU interrupt operations for NMIs. Request interrupt lines as NMIs when possible, otherwise fall back to normal interrupts. NMIs are only supported on the arm64 architecture with a GICv3 irqchip. [Alexandru E.: Added that NMIs only work on arm64 + GICv3, print message when PMU is using NMIs] Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox) Cc: Julien Thierry <julien.thierry.kdev@gmail.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200924110706.254996-8-alexandru.elisei@arm.com Signed-off-by: Will Deacon <will@kernel.org>
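The request path tries the NMI variant first and falls back; a hedged sketch for the per-CPU interrupt case (the handler and per-CPU device pointer names stand in for the driver's real ones):

    /* Ask for a pseudo-NMI; this only succeeds on arm64 with a
     * GICv3 irqchip. */
    err = request_percpu_nmi(irq, handler, "arm-pmu", &cpu_armpmu);

    /* No NMI support: fall back to a normal per-CPU interrupt. */
    if (err)
        err = request_percpu_irq(irq, handler, "arm-pmu", &cpu_armpmu);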
2020-09-28  arm_pmu: Introduce pmu_irq_ops  (Julien Thierry)
Currently the PMU interrupt can either be a normal irq or a percpu irq. Supporting NMI will introduce two cases for each existing one. It becomes a mess of 'if's when managing the interrupt. Define sets of callbacks for operations commonly done on the interrupt. The appropriate set of callbacks is selected at interrupt request time and simplifies interrupt enabling/disabling and freeing. Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox) Cc: Julien Thierry <julien.thierry.kdev@gmail.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200924110706.254996-7-alexandru.elisei@arm.com Signed-off-by: Will Deacon <will@kernel.org>
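A sketch of the likely shape of the ops table (field names are an assumption based on the description). The right instance, one of irq/percpu_irq/nmi/percpu_nmi, is picked once at request time, so enabling, disabling and freeing each become a single indirect call instead of a nest of conditionals:

    struct pmu_irq_ops {
        void (*enable_pmuirq)(unsigned int irq);
        void (*disable_pmuirq)(unsigned int irq);
        void (*free_pmuirq)(unsigned int irq, int cpu,
                            void __percpu *devid);
    };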
2020-09-28  KVM: arm64: pmu: Make overflow handler NMI safe  (Julien Thierry)
kvm_vcpu_kick() is not NMI safe. When the overflow handler is called from NMI context, defer waking the vcpu to an irq_work queue. A vcpu can be freed while it's not running by kvm_destroy_vm(). Prevent running the irq_work for a non-existent vcpu by calling irq_work_sync() on the PMU destroy path. [Alexandru E.: Added irq_work_sync()] Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox) Cc: Julien Thierry <julien.thierry.kdev@gmail.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Suzuki K Pouloze <suzuki.poulose@arm.com> Cc: kvm@vger.kernel.org Cc: kvmarm@lists.cs.columbia.edu Link: https://lore.kernel.org/r/20200924110706.254996-6-alexandru.elisei@arm.com Signed-off-by: Will Deacon <will@kernel.org>
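A sketch of the deferral (an overflow_work member on struct kvm_pmu is assumed from the description; kvm_pmc_to_vcpu() is the existing counter-to-vcpu helper):

    static void kvm_pmu_perf_overflow_notify_vcpu(struct irq_work *work)
    {
        struct kvm_pmu *pmu = container_of(work, struct kvm_pmu,
                                           overflow_work);

        kvm_vcpu_kick(kvm_pmc_to_vcpu(&pmu->pmc[0]));
    }

    /* In the overflow handler, only kick directly when it is safe: */
    if (!in_nmi())
        kvm_vcpu_kick(vcpu);
    else
        irq_work_queue(&vcpu->arch.pmu.overflow_work);

    /* And on the PMU destroy path, so the work never runs against a
     * freed vcpu: */
    irq_work_sync(&vcpu->arch.pmu.overflow_work);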
2020-09-28  arm64: perf: Defer irq_work to IPI_IRQ_WORK  (Julien Thierry)
When handling events, armv8pmu_handle_irq() calls perf_event_overflow(), and subsequently calls irq_work_run() to handle any work queued by perf_event_overflow(). As perf_event_overflow() raises IPI_IRQ_WORK when queuing the work, this isn't strictly necessary and the work could be handled as part of the IPI_IRQ_WORK handler. In the common case the IPI handler will run immediately after the PMU IRQ handler, and where the PE is heavily loaded with interrupts other handlers may run first, widening the window where some counters are disabled. In practice this window is unlikely to be a significant issue, and removing the call to irq_work_run() would make the PMU IRQ handler NMI safe in addition to making it simpler, so let's do that. [Alexandru E.: Reworded commit message] Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Cc: Julien Thierry <julien.thierry.kdev@gmail.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200924110706.254996-5-alexandru.elisei@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-28  arm64: perf: Remove PMU locking  (Julien Thierry)
The PMU is disabled and enabled, and the counters are programmed from contexts where interrupts or preemption is disabled. The functions to toggle the PMU and to program the PMU counters access the registers directly and don't access data modified by the interrupt handler. That, and the fact that they're always called from non-preemptible contexts, means that we don't need to disable interrupts or use a spinlock. [Alexandru E.: Explained why locking is not needed, removed WARN_ONs] Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox) Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200924110706.254996-4-alexandru.elisei@arm.com Signed-off-by: Will Deacon <will@kernel.org>
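For illustration, the start path after the patch reduces to a plain register update; a sketch (assuming the driver's existing pmcr accessors):

    static void armv8pmu_start(struct arm_pmu *cpu_pmu)
    {
        /*
         * Only ever called on the owning CPU with interrupts or
         * preemption disabled, and the IRQ handler doesn't touch
         * this data: no spinlock or local_irq_save() required.
         */
        armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
    }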
2020-09-28  arm64: perf: Avoid PMXEV* indirection  (Mark Rutland)
Currently we access the counter registers and their respective type registers indirectly. This requires us to write to PMSELR, issue an ISB, then access the relevant PMXEV* registers. This is unfortunate, because: * Under virtualization, accessing one register requires two traps to the hypervisor, even though we could access the register directly with a single trap. * We have to issue an ISB which we could otherwise avoid the cost of. * When we use NMIs, the NMI handler will have to save/restore the select register in case the code it preempted was attempting to access a counter or its type register. We can avoid these issues by directly accessing the relevant registers. This patch adds helpers to do so. In armv8pmu_enable_event() we still need the ISB to prevent the PE from reordering the write to PMINTENSET_EL1 register. If the interrupt is enabled before we disable the counter and the new event is configured, we might get an interrupt triggered by the previously programmed event overflowing, but which we wrongly attribute to the event that we are enabling. Execute an ISB after we disable the counter. In the process, remove the comment that refers to the ARMv7 PMU. [Julien T.: Don't inline read/write functions to avoid big code-size increase, remove unused read_pmevtypern function, fix counter index issue.] [Alexandru E.: Removed comment, removed trailing semicolons in macros, added ISB] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox) Cc: Julien Thierry <julien.thierry.kdev@gmail.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200924110706.254996-3-alexandru.elisei@arm.com Signed-off-by: Will Deacon <will@kernel.org>
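The helpers reduce to a switch over the named counter registers, so no PMSELR write or ISB is needed on the access path; a simplified sketch (the real code generates the cases with preprocessor macros):

    static inline u64 read_pmevcntrn(int n)
    {
        switch (n) {
        case 0:  return read_sysreg(pmevcntr0_el0);
        case 1:  return read_sysreg(pmevcntr1_el0);
        /* ... one case per implemented counter ... */
        case 30: return read_sysreg(pmevcntr30_el0);
        default:
            WARN(1, "invalid PMEV* index %d\n", n);
            return 0;
        }
    }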
2020-09-28  arm64: perf: Add missing ISB in armv8pmu_enable_counter()  (Alexandru Elisei)
Writes to the PMXEVTYPER_EL0 register are not self-synchronising. In armv8pmu_enable_event(), the PE can reorder configuring the event type after we have enabled the counter and the interrupt. This can lead to an interrupt being asserted because of the previous event type that we were counting using the same counter, not the one that we've just configured. The same rationale applies to writes to the PMINTENSET_EL1 register. The PE can reorder enabling the interrupt at any point in the future after we have enabled the event. Prevent both situations from happening by adding an ISB just before we enable the event counter. Fixes: 030896885ade ("arm64: Performance counters support") Reported-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox) Cc: Julien Thierry <julien.thierry.kdev@gmail.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200924110706.254996-2-alexandru.elisei@arm.com Signed-off-by: Will Deacon <will@kernel.org>
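A sketch of the fix (ARMV8_IDX_TO_COUNTER() is the driver's existing index-translation helper):

    static inline void armv8pmu_enable_counter(int idx)
    {
        u32 counter = ARMV8_IDX_TO_COUNTER(idx);

        /*
         * Make sure the event type and interrupt configuration
         * writes have completed before the counter starts counting.
         */
        isb();
        write_sysreg(BIT(counter), pmcntenset_el0);
    }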
2020-09-28  perf: Add Arm CMN-600 PMU driver  (Robin Murphy)
Initial driver for PMU event counting on the Arm CMN-600 interconnect. CMN sports an obnoxiously complex distributed PMU system as part of its debug and trace features, which can do all manner of things like sampling, cross-triggering and generating CoreSight trace. This driver covers the PMU functionality, plus the relevant aspects of watchpoints for simply counting matching flits. Tested-by: Tsahi Zidenberg <tsahee@amazon.com> Tested-by: Tuan Phan <tuanphan@os.amperecomputing.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-28  perf: Add Arm CMN-600 DT binding  (Robin Murphy)
Document the requirements for the CMN-600 DT binding. The internal topology is almost entirely discoverable by walking a tree of ID registers, but sadly both the starting point for that walk and the exact format of those registers are configuration-dependent and not discoverable from some sane fixed location. Oh well. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-28  arm64: perf: Add support for caps under sysfs  (Shaokun Zhang)
ARMv8.4-PMU introduces the PMMIR_EL1 register, and some new PMU events, like STALL_SLOT, are related to it. Add a caps directory to /sys/bus/event_source/devices/armv8_pmuv3_0/ and expose the slots value from PMMIR_EL1 there, so user programs can read it directly from sysfs as /sys/bus/event_source/devices/armv8_pmuv3_0/caps/slots. If both ARMv8.4-PMU and the STALL_SLOT event are implemented, the entry returns the slots value from PMMIR_EL1; otherwise it returns 0. Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/1600754025-53535-1-git-send-email-zhangshaokun@hisilicon.com Signed-off-by: Will Deacon <will@kernel.org>
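A sketch of the sysfs attribute behind the entry (reg_pmmir and ARMV8_PMU_SLOTS_MASK are assumed names for the cached PMMIR_EL1 value and its SLOTS field):

    static ssize_t slots_show(struct device *dev,
                              struct device_attribute *attr, char *page)
    {
        struct pmu *pmu = dev_get_drvdata(dev);
        struct arm_pmu *cpu_pmu = container_of(pmu, struct arm_pmu, pmu);
        u32 slots = cpu_pmu->reg_pmmir & ARMV8_PMU_SLOTS_MASK;

        /* reg_pmmir stays zero without ARMv8.4-PMU, so the entry
         * naturally reads back as 0 there. */
        return snprintf(page, PAGE_SIZE, "0x%08x\n", slots);
    }

    static DEVICE_ATTR_RO(slots);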
2020-09-21  arm64/mm: return cpu_all_mask when node is NUMA_NO_NODE  (Zhengyuan Liu)
The @node passed to cpumask_of_node() can be NUMA_NO_NODE, in that case it will trigger the following WARN_ON(node >= nr_node_ids) due to mismatched data types of @node and @nr_node_ids. Actually we should return cpu_all_mask just like most other architectures do if passed NUMA_NO_NODE. Also add a similar check to the inline cpumask_of_node() in numa.h. Signed-off-by: Zhengyuan Liu <liuzhengyuan@tj.kylinos.cn> Reviewed-by: Gavin Shan <gshan@redhat.com> Link: https://lore.kernel.org/r/20200921023936.21846-1-liuzhengyuan@tj.kylinos.cn Signed-off-by: Will Deacon <will@kernel.org>
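A sketch of the fixed lookup, following the shape most other architectures use (node_to_cpumask_map is arm64's existing per-node map):

    const struct cpumask *cpumask_of_node(int node)
    {
        /* NUMA_NO_NODE means "no affinity": any CPU will do. */
        if (node == NUMA_NO_NODE)
            return cpu_all_mask;

        if (WARN_ON(node < 0 || node >= nr_node_ids))
            return cpu_none_mask;

        if (node_to_cpumask_map[node] == NULL)
            return cpu_online_mask;

        return node_to_cpumask_map[node];
    }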
2020-09-21  arm64: Move console stack display code to stacktrace.c  (Mark Brown)
Currently the code for displaying a stack trace on the console is located in traps.c rather than stacktrace.c, using the unwinding code that is in stacktrace.c. This can be confusing and make the code hard to find since such output is often referred to as a stack trace which might mislead the unwary. Due to this and since traps.c doesn't interact with this code except for via the public interfaces move the code to stacktrace.c to make it easier to find. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20200921122341.11280-1-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-21  arm64: Run ARCH_WORKAROUND_1 enabling code on all CPUs  (Marc Zyngier)
Commit 73f381660959 ("arm64: Advertise mitigation of Spectre-v2, or lack thereof") changed the way we deal with ARCH_WORKAROUND_1, by moving most of the enabling code to the .matches() callback. This has the unfortunate effect that the workaround gets only enabled on the first affected CPU, and no other. In order to address this, forcefully call the .matches() callback from a .cpu_enable() callback, which brings us back to the original behaviour. Fixes: 73f381660959 ("arm64: Advertise mitigation of Spectre-v2, or lack thereof") Cc: <stable@vger.kernel.org> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-21  arm64: Make use of ARCH_WORKAROUND_1 even when KVM is not enabled  (Marc Zyngier)
We seem to be pretending that we don't have any firmware mitigation when KVM is not compiled in, which is not quite expected. Bring back the mitigation in this case. Fixes: 4db61fef16a1 ("arm64: kvm: Modernize __smccc_workaround_1_smc_start annotations") Cc: <stable@vger.kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-21  arm64/sve: Implement a helper to load SVE registers from FPSIMD state  (Julien Grall)
In a follow-up patch, we may save the FPSIMD rather than the full SVE state when the state has to be zeroed on return to userspace (e.g. during a syscall). Introduce a helper to load SVE vectors from FPSIMD state and zero the rest of the SVE registers. Signed-off-by: Julien Grall <julien.grall@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://lore.kernel.org/r/20200828181155.17745-7-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-21  arm64/sve: Implement a helper to flush SVE registers  (Julien Grall)
Introduce a new helper that will zero all SVE registers but the first 128 bits of each vector. This will be used by subsequent patches to avoid costly store/manipulate/reload sequences in places like do_sve_acc(). Signed-off-by: Julien Grall <julien.grall@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://lore.kernel.org/r/20200828181155.17745-6-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-21  arm64/fpsimdmacros: Allow the macro "for" to be used in more cases  (Julien Grall)
The current version of the macro "for" is not able to work when the counter is used to generate registers using mnemonics. This is because gas is not able to evaluate the expression generated if used in a register's name (i.e. x\n). Gas offers a way to evaluate macro arguments by using % in front of them under the alternate macro mode. The implementation of "for" is updated to use the alternate macro mode and %, so we can use the macro in more cases. As the alternate macro mode may have side-effects, this is disabled when expanding the body. While it is enough to prefix the argument of the macro "__for_body" with %, the arguments of "__for" are also prefixed to get a more bearable value in case of compilation error. Suggested-by: Dave Martin <dave.martin@arm.com> Signed-off-by: Julien Grall <julien.grall@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://lore.kernel.org/r/20200828181155.17745-4-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
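A stripped-down gas sketch of the trick (not the kernel's exact macro): under .altmacro, % forces numeric evaluation of an argument, so the expansion yields a literal register index, and the mode is switched off again while the body expands:

    .altmacro                       // alternate macro mode: %expr evaluates
    .macro  _zero_v nr
        .noaltmacro                 // altmacro has side-effects: off for the body
        movi    v\nr\().16b, #0     // \nr is a literal, e.g. "movi v3.16b, #0"
        .altmacro
    .endm

    .macro  _for from, to
        _zero_v %\from              // % evaluates \from to a plain number
        .if     \to - \from
        _for    %(\from + 1), \to
        .endif
    .endm

        _for    0, 31
        .noaltmacro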
2020-09-21  arm64/fpsimdmacros: Introduce a macro to update ZCR_EL1.LEN  (Julien Grall)
A follow-up patch will need to update ZCR_EL1.LEN. Add a macro that could be re-used in the current and new places to avoid code duplication. Signed-off-by: Julien Grall <julien.grall@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://lore.kernel.org/r/20200828181155.17745-5-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-21  arm64/signal: Update the comment in preserve_sve_context  (Julien Grall)
The SVE state is saved by fpsimd_signal_preserve_current_state() and not preserve_fpsimd_context(). Update the comment in preserve_sve_context to reflect the current behavior. Signed-off-by: Julien Grall <julien.grall@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://lore.kernel.org/r/20200828181155.17745-3-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-21  arm64/fpsimd: Update documentation of do_sve_acc  (Julien Grall)
fpsimd_restore_current_state() enables and disables the SVE access trap based on TIF_SVE, not task_fpsimd_load(). Update the documentation of do_sve_acc to reflect this behavior. Signed-off-by: Julien Grall <julien.grall@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://lore.kernel.org/r/20200828181155.17745-2-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-18  arm64: Improve diagnostics when trapping BRK with FAULT_BRK_IMM  (Will Deacon)
When generating instructions at runtime, for example due to kernel text patching or the BPF JIT, we can emit a trapping BRK instruction if we are asked to encode an invalid instruction such as an out-of-range branch. This is indicative of a bug in the caller, and will result in a crash on executing the generated code. Unfortunately, the message from the crash is really unhelpful, and mumbles something about ptrace:

| Unexpected kernel BRK exception at EL1
| Internal error: ptrace BRK handler: f2000100 [#1] SMP

We can do better than this. Install a break handler for FAULT_BRK_IMM, which is the immediate used to encode the "I've been asked to generate an invalid instruction" error, and triage the faulting PC to determine whether or not the failure occurred in the BPF JIT. Link: https://lore.kernel.org/r/20200915141707.GB26439@willie-the-truck Reported-by: Ilias Apalodimas <ilias.apalodimas@linaro.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-18  drivers/perf: thunderx2_pmu: Fix memory resource error handling  (Mark Salter)
In tx2_uncore_pmu_init_dev(), a call to acpi_dev_get_resources() is used to create a list of _CRS resources which is searched for the device base address. There is an error check following this:

	if (!rentry->res)
		return NULL;

In no case will rentry->res be NULL, so the test is useless. Even if the test worked, it comes before the resource list memory is freed. None of this really matters as long as the ACPI table has the memory resource. Let's clean it up so that it makes sense and will give a meaningful error should firmware leave out the memory resource. Fixes: 69c32972d593 ("drivers/perf: Add Cavium ThunderX2 SoC UNCORE PMU driver") Signed-off-by: Mark Salter <msalter@redhat.com> Link: https://lore.kernel.org/r/20200915204110.326138-2-msalter@redhat.com Signed-off-by: Will Deacon <will@kernel.org>
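Roughly how the fixed lookup reads (a sketch; surrounding identifiers such as adev and dev are assumed from the driver):

    struct list_head resource_list;
    struct resource_entry *rentry;
    struct resource res;
    bool found = false;

    INIT_LIST_HEAD(&resource_list);
    if (acpi_dev_get_resources(adev, &resource_list, NULL, NULL) <= 0)
        return NULL;

    list_for_each_entry(rentry, &resource_list, node) {
        if (resource_type(rentry->res) == IORESOURCE_MEM) {
            res = *rentry->res;     /* copy before the list is freed */
            found = true;
            break;
        }
    }
    acpi_dev_free_resource_list(&resource_list);

    if (!found) {
        dev_err(dev, "PMU device has no memory resource in _CRS\n");
        return NULL;
    }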
2020-09-18  drivers/perf: xgene_pmu: Fix uninitialized resource struct  (Mark Salter)
This splat was reported on newer Fedora kernels booting on certain X-gene based machines:

 xgene-pmu APMC0D83:00: X-Gene PMU version 3
 Unable to handle kernel read from unreadable memory at virtual address 0000000000004006
 ...
 Call trace:
  string+0x50/0x100
  vsnprintf+0x160/0x750
  devm_kvasprintf+0x5c/0xb4
  devm_kasprintf+0x54/0x60
  __devm_ioremap_resource+0xdc/0x1a0
  devm_ioremap_resource+0x14/0x20
  acpi_get_pmu_hw_inf.isra.0+0x84/0x15c
  acpi_pmu_dev_add+0xbc/0x21c
  acpi_ns_walk_namespace+0x16c/0x1e4
  acpi_walk_namespace+0xb4/0xfc
  xgene_pmu_probe_pmu_dev+0x7c/0xe0
  xgene_pmu_probe.part.0+0x2c0/0x310
  xgene_pmu_probe+0x54/0x64
  platform_drv_probe+0x60/0xb4
  really_probe+0xe8/0x4a0
  driver_probe_device+0xe4/0x100
  device_driver_attach+0xcc/0xd4
  __driver_attach+0xb0/0x17c
  bus_for_each_dev+0x6c/0xb0
  driver_attach+0x30/0x40
  bus_add_driver+0x154/0x250
  driver_register+0x84/0x140
  __platform_driver_register+0x54/0x60
  xgene_pmu_driver_init+0x28/0x34
  do_one_initcall+0x40/0x204
  do_initcalls+0x104/0x144
  kernel_init_freeable+0x198/0x210
  kernel_init+0x20/0x12c
  ret_from_fork+0x10/0x18
 Code: 91000400 110004e1 eb08009f 540000c0 (38646846)
 ---[ end trace f08c10566496a703 ]---

This is due to use of an uninitialized local resource struct in the xgene pmu driver. The thunderx2_pmu driver avoids this by using the resource list constructed by acpi_dev_get_resources() rather than using a callback from that function. The callback in the xgene driver didn't fully initialize the resource. So get rid of the callback and search the resource list as done by thunderx2. Fixes: 832c927d119b ("perf: xgene: Add APM X-Gene SoC Performance Monitoring Unit driver") Signed-off-by: Mark Salter <msalter@redhat.com> Link: https://lore.kernel.org/r/20200915204110.326138-1-msalter@redhat.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-18  arm64: mm: Fix missing-prototypes in pageattr.c  (Tian Tao)
Fix the following warnings:

arch/arm64/mm/pageattr.c:138:5: warning: no previous prototype for ‘set_memory_valid’ [-Wmissing-prototypes]
 int set_memory_valid(unsigned long addr, int numpages, int enable)
     ^
arch/arm64/mm/pageattr.c:150:5: warning: no previous prototype for ‘set_direct_map_invalid_noflush’ [-Wmissing-prototypes]
 int set_direct_map_invalid_noflush(struct page *page)
     ^
arch/arm64/mm/pageattr.c:165:5: warning: no previous prototype for ‘set_direct_map_default_noflush’ [-Wmissing-prototypes]
 int set_direct_map_default_noflush(struct page *page)
     ^

Signed-off-by: Tian Tao <tiantao6@hisilicon.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Link: https://lore.kernel.org/r/1600222847-56792-1-git-send-email-tiantao6@hisilicon.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-18  arm64/fpsimd: Fix missing-prototypes in fpsimd.c  (Tian Tao)
Fix the following warnings:

arch/arm64/kernel/fpsimd.c:935:6: warning: no previous prototype for ‘do_sve_acc’ [-Wmissing-prototypes]
arch/arm64/kernel/fpsimd.c:962:6: warning: no previous prototype for ‘do_fpsimd_acc’ [-Wmissing-prototypes]
arch/arm64/kernel/fpsimd.c:971:6: warning: no previous prototype for ‘do_fpsimd_exc’ [-Wmissing-prototypes]
arch/arm64/kernel/fpsimd.c:1266:6: warning: no previous prototype for ‘kernel_neon_begin’ [-Wmissing-prototypes]
arch/arm64/kernel/fpsimd.c:1292:6: warning: no previous prototype for ‘kernel_neon_end’ [-Wmissing-prototypes]

Signed-off-by: Tian Tao <tiantao6@hisilicon.com> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://lore.kernel.org/r/1600157999-14802-1-git-send-email-tiantao6@hisilicon.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-18  arm64: stacktrace: Convert to ARCH_STACKWALK  (Mark Brown)
Historically architectures have had duplicated code in their stack trace implementations for filtering what gets traced. In order to avoid this duplication some generic code has been provided using a new interface arch_stack_walk(), enabled by selecting ARCH_STACKWALK in Kconfig, which factors all this out into the generic stack trace code. Convert arm64 to use this common infrastructure. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Link: https://lore.kernel.org/r/20200914153409.25097-4-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-18  arm64: stacktrace: Make stack walk callback consistent with generic code  (Mark Brown)
As with the generic arch_stack_walk() code the arm64 stack walk code takes a callback that is called per stack frame. Currently the arm64 code always passes a struct stackframe to the callback and the generic code just passes the pc, however none of the users ever reference anything in the struct other than the pc value. The arm64 code also uses a return type of int while the generic code uses a return type of bool though in both cases the return value is a boolean value and the sense is inverted between the two. In order to reduce code duplication when arm64 is converted to use arch_stack_walk() change the signature and return sense of the arm64 specific callback to match that of the generic code. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Link: https://lore.kernel.org/r/20200914153409.25097-3-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
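For illustration, the two callback shapes side by side (the generic typedef is from include/linux/stacktrace.h; the old arm64 form is a paraphrase):

    /* Old arm64-specific shape: gets a whole stackframe, and a
     * non-zero return stops the walk. */
    int (*fn)(struct stackframe *frame, void *data);

    /* Generic arch_stack_walk() shape: only the PC is passed, and
     * returning false stops the walk. */
    typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long addr);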
2020-09-18  stacktrace: Remove reliable argument from arch_stack_walk() callback  (Mark Brown)
Currently the callback passed to arch_stack_walk() has an argument called reliable passed to it to indicate if the stack entry is reliable, a comment says that this is used by some printk() consumers. However in the current kernel none of the arch_stack_walk() implementations ever set this flag to true and the only callback implementation we have is in the generic stacktrace code which ignores the flag. It therefore appears that this flag is redundant so we can simplify and clarify things by removing it. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Link: https://lore.kernel.org/r/20200914153409.25097-2-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-18  selftests: arm64: Add build and documentation for FP tests  (Mark Brown)
Integrate the FP tests with the build system and add some documentation for the ones run outside the kselftest infrastructure. The content in the README was largely written by Dave Martin with edits by me. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Shuah Khan <skhan@linuxfoundation.org> Link: https://lore.kernel.org/r/20200819114837.51466-7-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-18  selftests: arm64: Add wrapper scripts for stress tests  (Mark Brown)
Add wrapper scripts which invoke fpsimd-test and sve-test with several copies per CPU such that the context switch code will be appropriately exercised. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Shuah Khan <skhan@linuxfoundation.org> Link: https://lore.kernel.org/r/20200819114837.51466-6-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-18  selftests: arm64: Add utility to set SVE vector lengths  (Mark Brown)
vlset is a small utility for use in conjunction with tests like the sve-test stress test which allows another executable to be invoked with a configured SVE vector length. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Shuah Khan <skhan@linuxfoundation.org> Link: https://lore.kernel.org/r/20200819114837.51466-5-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-18  selftests: arm64: Add stress tests for FPSIMD and SVE context switching  (Mark Brown)
Add programs sve-test and fpsimd-test which spin reading and writing to the SVE and FPSIMD registers, verifying the operations they perform. The intended use is to leave them running to stress the context switch code's handling of these registers. That usage isn't compatible with what kselftest does, so they're not integrated into the framework, but there's no other obvious testsuite where they fit, so let's store them here. These tests were written by Dave Martin and lightly adapted by me. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Shuah Khan <skhan@linuxfoundation.org> Link: https://lore.kernel.org/r/20200819114837.51466-4-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-18  selftests: arm64: Add test for the SVE ptrace interface  (Mark Brown)
Add a test case that does some basic verification of the SVE ptrace interface, forking off a child with known values in the registers and then using ptrace to inspect and manipulate the SVE registers of the child, including in FPSIMD mode to account for sharing between the SVE and FPSIMD registers. This program was written by Dave Martin and modified for kselftest by me. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Shuah Khan <skhan@linuxfoundation.org> Link: https://lore.kernel.org/r/20200819114837.51466-3-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-18  selftests: arm64: Test case for enumeration of SVE vector lengths  (Mark Brown)
Add a test case that verifies that we can enumerate the SVE vector lengths on systems where we detect SVE, and that those SVE vector lengths are valid. This program was written by Dave Martin and adapted to kselftest by me. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Shuah Khan <skhan@linuxfoundation.org> Link: https://lore.kernel.org/r/20200819114837.51466-2-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-18  kselftests/arm64: add PAuth tests for single threaded consistency and differently initialized keys  (Boyan Karatotev)
PAuth adds 5 different keys that can be used to sign addresses. Add a test that verifies that the kernel initializes them to different values and preserves them across context switches. Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com> Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com> Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Acked-by: Shuah Khan <skhan@linuxfoundation.org> Cc: Shuah Khan <shuah@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200918104715.182310-5-boian4o1@gmail.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-18  kselftests/arm64: add PAuth test for whether exec() changes keys  (Boyan Karatotev)
Kernel documentation states that it will change PAuth keys on exec() calls. Verify that all keys are correctly switched to new ones. Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com> Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com> Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Acked-by: Shuah Khan <skhan@linuxfoundation.org> Cc: Shuah Khan <shuah@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200918104715.182310-4-boian4o1@gmail.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-18  kselftests/arm64: add nop checks for PAuth tests  (Boyan Karatotev)
PAuth adds sign/verify controls to enable and disable groups of instructions in hardware for compatibility with libraries that do not implement PAuth. The kernel always enables them if it detects PAuth. Add a test that checks that each group of instructions is enabled, if the kernel reports PAuth as detected. Note: for the purposes of this patch, a "group" means the instructions that use a certain key. Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com> Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com> Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Acked-by: Shuah Khan <skhan@linuxfoundation.org> Cc: Shuah Khan <shuah@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200918104715.182310-3-boian4o1@gmail.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-18  kselftests/arm64: add a basic Pointer Authentication test  (Boyan Karatotev)
PAuth signs and verifies return addresses on the stack. It does so by inserting a Pointer Authentication code (PAC) into some of the unused top bits of an address. This is achieved by adding paciasp/autiasp instructions at the beginning and end of a function. This feature is partially backwards compatible with earlier versions of the ARM architecture. To coerce the compiler into emitting fully backwards compatible code the main file is compiled to target an earlier ARM version. This allows the tests to check for the feature and print meaningful error messages instead of crashing. Add a test to verify that corrupting the return address results in a SIGSEGV on return. Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com> Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com> Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Acked-by: Shuah Khan <skhan@linuxfoundation.org> Cc: Shuah Khan <shuah@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200918104715.182310-2-boian4o1@gmail.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-18  arm64: Enable PCI write-combine resources under sysfs  (Clint Sbisa)
This change exposes write-combine mappings under sysfs for prefetchable PCI resources on arm64. Originally, the usage of "write combine" here was driven by the x86 definition of write combine. This definition is specific to x86 and does not generalize to other architectures. However, the usage of WC has mutated to "write combine" semantics, which is implemented differently on each arch. Generally, prefetchable BARs are accepted to allow speculative accesses, write combining, and re-ordering; from the PCI perspective, this means there are no read side effects. (This contradicts the PCI spec which allows prefetchable BARs to have read side effects, but this definition is ill-advised as it is impossible to meet.) On x86, prefetchable BARs are mapped as WC as originally defined (with some conditionals on arch features). On arm64, WC is taken to mean normal non-cacheable memory. In practice, write combine semantics are used to minimize write operations. A common usage of this is minimizing PCI TLPs which can significantly improve performance with PCI devices. In order to provide the same benefits to userspace, we need to allow userspace to map prefetchable BARs with write combine semantics. The resourceX_wc mapping is used today by userspace programs and libraries. While this model is flawed as "write combine" is very ill-defined, it is already used by multiple non-x86 archs to expose write combine semantics to user space. We enable this on arm64 to give userspace on arm64 an equivalent mechanism for utilizing write combining with PCI devices. Signed-off-by: Clint Sbisa <csbisa@amazon.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Bjorn Helgaas <helgaas@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Jason Gunthorpe <jgg@nvidia.com> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200918033312.ddfpibgfylfjpex2@amazon.com Signed-off-by: Will Deacon <will@kernel.org>
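From userspace the mechanism is the resourceX_wc sysfs file; a minimal usage sketch (the device path and BAR size are placeholders):

    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Hypothetical device/BAR; on arm64 this maps the
         * prefetchable BAR as Normal non-cacheable memory. */
        int fd = open("/sys/bus/pci/devices/0000:01:00.0/resource0_wc",
                      O_RDWR);
        if (fd < 0)
            return 1;

        void *bar = mmap(NULL, 0x100000, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        if (bar == MAP_FAILED)
            return 1;

        /* Writes through 'bar' may now be merged by the CPU into
         * larger PCI transactions (fewer TLPs). */
        return 0;
    }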
2020-09-15  perf: arm_dsu: Support DSU ACPI devices  (Tuan Phan)
Add support for probing device from ACPI node. Each DSU ACPI node and its associated cpus are inside a cluster node. Signed-off-by: Tuan Phan <tuanphan@os.amperecomputing.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/1600106656-9542-1-git-send-email-tuanphan@os.amperecomputing.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-14  arm64: hibernate: Remove unused include of <linux/version.h>  (Tian Tao)
Remove the include of <linux/version.h>, which is not needed. Signed-off-by: Tian Tao <tiantao6@hisilicon.com> Link: https://lore.kernel.org/r/1600068522-54499-1-git-send-email-tiantao6@hisilicon.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-14  arm64/mm: Refactor {pgd, pud, pmd, pte}_ERROR()  (Gavin Shan)
The functions __{pgd, pud, pmd, pte}_error() were introduced so that they can be called by {pgd, pud, pmd, pte}_ERROR(). However, some of the functions can never be called when the corresponding page table level isn't enabled. For example, __{pud, pmd}_error() are unused when PUD and PMD are folded to PGD. This removes __{pgd, pud, pmd, pte}_error() and calls pr_err() from {pgd, pud, pmd, pte}_ERROR() directly, similar to what x86/powerpc are doing. With this, the code looks a bit simpler as well. Signed-off-by: Gavin Shan <gshan@redhat.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20200913234730.23145-1-gshan@redhat.com Signed-off-by: Will Deacon <will@kernel.org>
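A sketch of the direct form for the first two levels (the remaining levels follow the same pattern):

    #define pte_ERROR(e) \
        pr_err("%s:%d: bad pte %016llx.\n", __FILE__, __LINE__, pte_val(e))

    #define pmd_ERROR(e) \
        pr_err("%s:%d: bad pmd %016llx.\n", __FILE__, __LINE__, pmd_val(e))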
2020-09-14  arm64: kprobe: clarify the comment of steppable hint instructions  (Amit Daniel Kachhap)
The existing comment about steppable hint instructions is not complete and only describes NOP instructions as steppable. As the function aarch64_insn_is_steppable_hint() allows all white-listed instructions to be probed, the comment is updated to reflect this. Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Dave Martin <dave.martin@arm.com> Link: https://lore.kernel.org/r/20200914083656.21428-7-amit.kachhap@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-14  arm64: kprobe: disable probe of fault prone ptrauth instruction  (Amit Daniel Kachhap)
With the addition of the ARMv8.3-FPAC feature, the probe of authenticate ptrauth instructions (AUT*) may cause a ptrauth fault exception in case of authentication failure, so they cannot be safely single-stepped. Hence the probe of authenticate instructions is disallowed, but the corresponding pac ptrauth instructions (PAC*) are not affected and can still be probed. Also, AUT* instructions do not make sense at function entry points, so most realistic probes would be unaffected by this change. Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Dave Martin <dave.martin@arm.com> Link: https://lore.kernel.org/r/20200914083656.21428-6-amit.kachhap@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-14  arm64: cpufeature: Modify address authentication cpufeature to exact  (Amit Daniel Kachhap)
The current address authentication cpufeature levels are set as LOWER_SAFE, which is not compatible with the different configurations added for the Armv8.3 ptrauth enhancements, as the different levels have different behaviour and there is no tunable to enable the lower safe versions. This is rectified by setting those cpufeature types as EXACT.

The current cpufeature framework also does not interfere in the booting of non-exact secondary cpus but rather marks them as tainted. As a workaround this is fixed by replacing the generic match handler with a new handler specific to ptrauth. After this change, if there is any variation in ptrauth configurations in secondary cpus from the boot cpu, then those mismatched cpus are parked in an infinite loop.

The following ptrauth crash log is observed in an Arm fastmodel with simulated mismatched cpus without this fix:

 CPU features: SANITY CHECK: Unexpected variation in SYS_ID_AA64ISAR1_EL1. Boot CPU: 0x11111110211402, CPU4: 0x11111110211102
 CPU features: Unsupported CPU feature variation detected.
 GICv3: CPU4: found redistributor 100 region 0:0x000000002f180000
 CPU4: Booted secondary processor 0x0000000100 [0x410fd0f0]
 Unable to handle kernel paging request at virtual address bfff800010dadf3c
 Mem abort info:
   ESR = 0x86000004
   EC = 0x21: IABT (current EL), IL = 32 bits
   SET = 0, FnV = 0
   EA = 0, S1PTW = 0
 [bfff800010dadf3c] address between user and kernel address ranges
 Internal error: Oops: 86000004 [#1] PREEMPT SMP
 Modules linked in:
 CPU: 4 PID: 29 Comm: migration/4 Tainted: G S 5.8.0-rc4-00005-ge658591d66d1-dirty #158
 Hardware name: Foundation-v8A (DT)
 pstate: 60000089 (nZCv daIf -PAN -UAO BTYPE=--)
 pc : 0xbfff800010dadf3c
 lr : __schedule+0x2b4/0x5a8
 sp : ffff800012043d70
 x29: ffff800012043d70 x28: 0080000000000000
 x27: ffff800011cbe000 x26: ffff00087ad37580
 x25: ffff00087ad37000 x24: ffff800010de7d50
 x23: ffff800011674018 x22: 0784800010dae2a8
 x21: ffff00087ad37000 x20: ffff00087acb8000
 x19: ffff00087f742100 x18: 0000000000000030
 x17: 0000000000000000 x16: 0000000000000000
 x15: ffff800011ac1000 x14: 00000000000001bd
 x13: 0000000000000000 x12: 0000000000000000
 x11: 0000000000000000 x10: 71519a147ddfeb82
 x9 : 825d5ec0fb246314 x8 : ffff00087ad37dd8
 x7 : 0000000000000000 x6 : 00000000fffedb0e
 x5 : 00000000ffffffff x4 : 0000000000000000
 x3 : 0000000000000028 x2 : ffff80086e11e000
 x1 : ffff00087ad37000 x0 : ffff00087acdc600
 Call trace:
  0xbfff800010dadf3c
  schedule+0x78/0x110
  schedule_preempt_disabled+0x24/0x40
  __kthread_parkme+0x68/0xd0
  kthread+0x138/0x160
  ret_from_fork+0x10/0x34
 Code: bad PC value

After this fix, the mismatched CPU4 is parked as:

 CPU features: CPU4: Detected conflict for capability 39 (Address authentication (IMP DEF algorithm)), System: 1, CPU: 0
 CPU4: will not boot
 CPU4: failed to come online
 CPU4: died during early boot

[Suzuki: Introduce new matching function for address authentication] Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200914083656.21428-5-amit.kachhap@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-14  arm64: ptrauth: Introduce Armv8.3 pointer authentication enhancements  (Amit Daniel Kachhap)
Some Armv8.3 Pointer Authentication enhancements have been introduced which are mandatory for Armv8.6 and optional for Armv8.3. These features are:

* ARMv8.3-PAuth2 - An enhanced PAC generation logic is added which hardens finding the correct PAC value of the authenticated pointer.
* ARMv8.3-FPAC - A fault is now generated when a ptrauth authentication instruction fails to authenticate the PAC present in the address. This is different from the earlier behaviour, where such failures just added an error code in the top byte and waited for a subsequent load/store to abort. The ptrauth instructions which may cause this fault are autiasp, retaa etc.

The above features are now represented by additional configurations for the Address Authentication cpufeature and a new ESR exception class. The userspace fault received in the kernel due to ARMv8.3-FPAC is treated as an illegal instruction and hence signal SIGILL is injected with ILL_ILLOPN as the signal code. Note that this is different from earlier ARMv8.3 ptrauth, where signal SIGSEGV is issued due to Pointer Authentication failures. An in-kernel PAC fault causes the kernel to crash. Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://lore.kernel.org/r/20200914083656.21428-4-amit.kachhap@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-14  arm64: traps: Allow force_signal_inject to pass esr error code  (Amit Daniel Kachhap)
Some error signals need to pass a proper ARM ESR error code to userspace to better identify the cause of the signal. So the function force_signal_inject() is extended to pass this as a parameter. The existing code is not affected by this change. Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://lore.kernel.org/r/20200914083656.21428-3-amit.kachhap@arm.com Signed-off-by: Will Deacon <will@kernel.org>