path: root/arch
Age  Commit message  Author
2018-07-24  arm64: dts: meson-axg: add audio arb reset controller  (Jerome Brunet)
Add the audio memory arbiter, which controls the access of the audio FIFOs to the DDR. Signed-off-by: Jerome Brunet <jbrunet@baylibre.com> Signed-off-by: Kevin Hilman <khilman@baylibre.com>
2018-07-24  arm64: dts: meson-axg: add usb power regulator  (Jerome Brunet)
The usb power regulator is supplied by the vcc 5v regulator and controlled by a GPIO. This will be needed to enable usb. Signed-off-by: Jerome Brunet <jbrunet@baylibre.com> Signed-off-by: Kevin Hilman <khilman@baylibre.com>
2018-07-24  arm64: dts: meson-axg: add vcc 5v regulator on the s400  (Jerome Brunet)
This regulator is controlled by a GPIO and supplies various devices on the board, such as the lineout codec, the usb supply or the lcd controller. Signed-off-by: Jerome Brunet <jbrunet@baylibre.com> Signed-off-by: Kevin Hilman <khilman@baylibre.com>
2018-07-24  arm64: dts: meson-axg: improve power supplies description  (Jerome Brunet)
Add the parent supply of the s400 power supplies. Also add the 'regulator-always-on' property on the regulators which can't be disabled. Signed-off-by: Jerome Brunet <jbrunet@baylibre.com> Signed-off-by: Kevin Hilman <khilman@baylibre.com>
2018-07-24  media: sh: migor: Remove stale soc_camera include  (Jacopo Mondi)
Remove a stale inclusion for the soc_camera header. Signed-off-by: Jacopo Mondi <jacopo+renesas@jmondi.org> Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
2018-07-24  MIPS: Hardcode cpu_has_* where known at compile time due to ISA  (Paul Burton)
Many architectural features have over time moved from being optional to either be required or removed by newer architecture releases. This means that in many cases we can know at compile time whether a feature will be supported or not purely due to the knowledge we have about the ISA the kernel build is targeting. This patch introduces a bunch of utility macros for checking for supported options, ASEs & combinations of those with ISA revisions. It then makes use of these in the default definitions of cpu_has_* macros. The result is that many of the macros become compile-time constant, allowing more optimisation opportunities for the compiler - particularly with kernels built for later ISA revisions. To demonstrate the effect of this patch, the following table shows the size in bytes of the kernel binary as reported by scripts/bloat-o-meter for v4.12-rc4 maltasmvp_defconfig kernels with & without this patch. A variant of maltasmvp_defconfig with CONFIG_CPU_MIPS32_R6 selected is also shown, to demonstrate that MIPSr6 systems benefit more due to extra features becoming required by that architecture revision. Builds of pistachio_defconfig are also shown, as although this is a MIPSr2 platform it doesn't hardcode any features in a machine-specific cpu-feature-overrides.h, which allows it to gain more from this patch than the equivalent Malta r2 build.

  Config         | Before  | After   | Change
  ---------------|---------|---------|---------
  maltasmvp      | 7248316 | 7247714 | -602
  maltasmvp + r6 | 6955595 | 6950777 | -4818
  pistachio      | 8650977 | 8363898 | -287079

Signed-off-by: Paul Burton <paul.burton@mips.com> Patchwork: https://patchwork.linux-mips.org/patch/16360/ Cc: Joshua Kinard <kumba@gentoo.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org
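To illustrate the idea, here is a minimal C sketch of how an ISA-revision check can be folded into a cpu_has_* default; the macro and feature names (isa_rev_ge, cpu_has_example_feature, MIPS_CPU_EXAMPLE) are made up for illustration and are not the ones the patch adds:

#ifdef CONFIG_CPU_MIPSR6
# define ISA_REV 6
#else
# define ISA_REV 2          /* illustrative default */
#endif

/* True at compile time when the targeted ISA revision guarantees the feature. */
#define isa_rev_ge(rev) (ISA_REV >= (rev))

#ifndef cpu_has_example_feature
/* Required since MIPSr6: constant 1 on r6 kernels, runtime probe otherwise,
 * so the compiler can drop the runtime check entirely on r6 builds. */
# define cpu_has_example_feature \
	(isa_rev_ge(6) ? 1 : (cpu_data[0].options & MIPS_CPU_EXAMPLE))
#endif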
2018-07-24  MIPS: jz4780: DTS: Probe the spi-gpio driver from devicetree  (Mathieu Malaterre)
Make use of the spi-gpio driver to provide SPI support on the Ingenic JZ4780 SoC using the pins that can be used with the SSI0 device as GPIOs, until such time as we have support for the Ingenic SPI/SSI controller. [paul.burton@mips.com: Rewrite commit message.] Signed-off-by: Mathieu Malaterre <malat@debian.org> Signed-off-by: Paul Burton <paul.burton@mips.com> Patchwork: https://patchwork.linux-mips.org/patch/19489/ Cc: James Hogan <jhogan@kernel.org> Cc: Rob Herring <robh+dt@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: devicetree@vger.kernel.org Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org
2018-07-24  MIPS: Ci20: Enable SPI/GPIO driver  (Mathieu Malaterre)
Enable CONFIG_SPI_GPIO in ci20_defconfig, in order to make use of the spi-gpio driver in a further commit. [paul.burton@mips.com: Rewrite commit message.] Signed-off-by: Mathieu Malaterre <malat@debian.org> Signed-off-by: Paul Burton <paul.burton@mips.com> Patchwork: https://patchwork.linux-mips.org/patch/19488/ Cc: James Hogan <jhogan@kernel.org> Cc: Rob Herring <robh+dt@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: devicetree@vger.kernel.org Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org
2018-07-24  ARM: exynos: Define EINT_WAKEUP_MASK registers for S5Pv210 and Exynos5433  (Krzysztof Kozlowski)
S5Pv210 and Exynos5433/Exynos7 have different addresses for the EINT_WAKEUP_MASK register. Rename the existing S5P_EINT_WAKEUP_MASK to avoid confusion and add new ones. Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Cc: Tomasz Figa <tomasz.figa@gmail.com> Cc: Sylwester Nawrocki <snawrocki@kernel.org> Acked-by: Tomasz Figa <tomasz.figa@gmail.com> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
2018-07-24  MIPS: Octeon: Select HAS_RAPIDIO  (Alexander Sverdlin)
All Octeons starting with Octeon II have a RapidIO controller, which can function even with PCI disabled. Signed-off-by: Alexander Sverdlin <alexander.sverdlin@nokia.com> Acked-by: Alexandre Bounine <alex.bou9@gmail.com> Signed-off-by: Paul Burton <paul.burton@mips.com> Patchwork: https://patchwork.linux-mips.org/patch/19988/ Cc: linux-mips@linux-mips.org Cc: Ralf Baechle <ralf@linux-mips.org> Cc: James Hogan <jhogan@kernel.org> Cc: Matt Porter <mporter@kernel.crashing.org>
2018-07-24  MIPS: Introduce HAS_RAPIDIO Kconfig option  (Alexander Sverdlin)
Introduce the same option that PPC and ARM already have, because RapidIO can function in the absence of PCI. Signed-off-by: Alexander Sverdlin <alexander.sverdlin@nokia.com> Acked-by: Alexandre Bounine <alex.bou9@gmail.com> Signed-off-by: Paul Burton <paul.burton@mips.com> Patchwork: https://patchwork.linux-mips.org/patch/19987/ Cc: linux-mips@linux-mips.org Cc: Ralf Baechle <ralf@linux-mips.org> Cc: James Hogan <jhogan@kernel.org> Cc: Matt Porter <mporter@kernel.crashing.org>
2018-07-24  mips: use asm-generic version of msi.h  (Thomas Petazzoni)
This is necessary to be able to include <linux/msi.h> when CONFIG_GENERIC_MSI_IRQ_DOMAIN is enabled. Without this, a build with CONFIG_GENERIC_MSI_IRQ_DOMAIN fails with:

  In file included from include/linux/kvm_host.h:20:0,
                   from arch/mips/kernel/asm-offsets.c:24:
  >> include/linux/msi.h:197:10: fatal error: asm/msi.h: No such file or directory
      #include <asm/msi.h>
               ^~~~~~~~~~~
  compilation terminated.
  make[2]: *** [arch/mips/kernel/asm-offsets.s] Error 1
  make[2]: Target '__build' not remade because of errors.
  make[1]: *** [prepare0] Error 2
  make[1]: Target 'prepare' not remade because of errors.
  make: *** [sub-make] Error 2

Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com> Signed-off-by: Paul Burton <paul.burton@mips.com> Patchwork: https://patchwork.linux-mips.org/patch/19986/ Cc: Ralf Baechle <ralf@linux-mips.org> Cc: James Hogan <jhogan@kernel.org> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Hanna Hawa <hannah@marvell.com>
2018-07-24  ARM: exynos: Clear global variable on init error path  (Krzysztof Kozlowski)
For most Exynos SoCs, the Power Management Unit (PMU) address space is mapped into the global variable 'pmu_base_addr' very early, when initializing the PMU interrupt controller. A lot of other machine code depends on it, so when doing iounmap() on this address, clear the global as well to avoid usage of an invalid value (pointing to an unmapped memory region). A properly mapped PMU address space is a requirement for all other machine code, so this fix is purely theoretical. Boot will fail immediately in many other places after following this error path. Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
2018-07-24  ARM: exynos: Remove outdated maintainer information  (Krzysztof Kozlowski)
The current maintainers are specified in the MAINTAINERS file, so remove the in-source information with outdated e-mail addresses (Thomas Abraham's email does not work, Kukjin Kim uses @kernel.org). Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
2018-07-24  ARM: dts: sun4i: Add GPU node  (Steven Vanden Branden)
Add the Mali GPU node to sun4i A10 platforms. Tested with offscreen rendering with Lima Mesa (freedesktop GitLab). Signed-off-by: Steven Vanden Branden <stevenvandenbrandenstift@gmail.com> Signed-off-by: Maxime Ripard <maxime.ripard@bootlin.com>
2018-07-24  powerpc/powernv: implement opal_put_chars_atomic  (Nicholas Piggin)
The RAW console does not need writes to be atomic, so relax opal_put_chars to be able to do partial writes, and implement an _atomic variant which does not take a spinlock. This API is used in xmon, so the less locking that is used, the better chance there is that a crash can be debugged. Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-24  powerpc/powernv: move opal console flushing to udbg  (Nicholas Piggin)
OPAL console writes do not have to synchronously flush firmware / hardware buffers unless they are going through the udbg path. Remove the unconditional flushing from opal_put_chars. Flush if there was no space in the buffer as an optimisation (callers loop waiting for success in that case). udbg flushing is moved to udbg_opal_putc. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-24  powerpc/powernv: Remove OPALv1 support from opal console driver  (Nicholas Piggin)
opal_put_chars deals with partial writes because in OPALv1, opal_console_write_buffer_space did not work correctly. That firmware is not supported. This reworks the opal_put_chars code to no longer deal with partial writes by turning them into full writes. Partial write handling is still supported in terms of what gets returned to the caller, but it may not go to the console atomically. A warning message is printed in this case. This allows console flushing to be moved out of the opal_write_lock spinlock. That could cause the lock to be held for long periods if the console is busy (especially if it was being spammed by firmware), which is dangerous because the lock is taken by xmon to debug the system. Flushing outside the lock improves the situation a bit. Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-24  powerpc/powernv: Implement and use opal_flush_console  (Nicholas Piggin)
A new console flushing firmware API was introduced to replace event polling loops, and implemented in opal-kmsg with affddff69c55e ("powerpc/powernv: Add a kmsg_dumper that flushes console output on panic"), to flush the console in the panic path. The OPAL console driver has other situations where interrupts are off and it needs to flush the console synchronously. These still use a polling loop. So move the opal-kmsg flush code to opal_flush_console, and use the new function in opal-kmsg and opal_put_chars. Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Reviewed-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-24  powerpc/powernv: opal-kmsg use flush fallback from console code  (Nicholas Piggin)
Use the more refined and tested event polling loop from opal_put_chars as the fallback console flush in the opal-kmsg path. This loop is used by the console driver today, whereas the opal-kmsg fallback is not likely to have been used for years. Use WARN_ONCE rather than a printk when the fallback is invoked to prepare for moving the console flush into a common function. Reviewed-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-24  powerpc/powernv: opal-kmsg standardise OPAL_BUSY handling  (Nicholas Piggin)
OPAL_CONSOLE_FLUSH is documented as being able to return OPAL_BUSY, so implement the standard OPAL_BUSY handling for it. Reviewed-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-24  powerpc/powernv: Fix OPAL console driver OPAL_BUSY loops  (Nicholas Piggin)
The OPAL console driver does not delay in case it gets OPAL_BUSY or OPAL_BUSY_EVENT from firmware. It can't yet be made to sleep because it is called under spinlock, but it can be changed to the standard OPAL_BUSY loop form, and a delay added to keep it from hitting the firmware too frequently. Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
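For reference, a sketch of the standard OPAL_BUSY retry form the patch converts to, using the console write call as the example; the wrapper function and exact call signatures are simplified for illustration (OPAL_BUSY_DELAY_MS, mdelay() and opal_poll_events() are existing kernel/OPAL helpers):

static s64 opal_console_write_retry(u32 vtermno, __be64 *olen, const u8 *data)
{
	s64 rc = OPAL_BUSY;

	while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
		rc = opal_console_write(vtermno, olen, data);
		if (rc == OPAL_BUSY_EVENT) {
			mdelay(OPAL_BUSY_DELAY_MS);	/* back off ... */
			opal_poll_events(NULL);		/* ... and let firmware make progress */
		} else if (rc == OPAL_BUSY) {
			mdelay(OPAL_BUSY_DELAY_MS);	/* plain back-off */
		}
	}
	return rc;
}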
2018-07-24  powerpc/powernv: opal_put_chars partial write fix  (Nicholas Piggin)
The intention here is to consume and discard the remaining buffer upon error. This works if there has not been a previous partial write. If there has been, then total_len is no longer the total number of bytes to copy. total_len is always "bytes left to copy", so it should be added to the written bytes. This code may no longer be exercised if partial writes are never hit, but this is a small bugfix before a larger change. Reviewed-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-24  powerpc/powernv/opal-dump: Use IRQ_HANDLED instead of numbers in interrupt handler  (Mukesh Ojha)
Fixes: 8034f715f ("powernv/opal-dump: Convert to irq domain") Convert all explicit numeric return values to the proper IRQ_HANDLED, which is the correct thing for an interrupt handler to return. This also gets rid of the "nobody cared" error message that was triggered when the handler returned -1 or 0. Signed-off-by: Mukesh Ojha <mukesh02@linux.vnet.ibm.com> Reviewed-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-24  powerpc/powernv/opal-dump: Handles opal_dump_info properly  (Mukesh Ojha)
Move the return value check of 'opal_dump_info' to the proper place; previously, all the dump info was unnecessarily filled in even on failure. Signed-off-by: Mukesh Ojha <mukesh02@linux.vnet.ibm.com> Acked-by: Stewart Smith <stewart@linux.vnet.ibm.com> Acked-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-24  powerpc/tm: Remove struct thread_info param from tm_reclaim_thread()  (Cyril Bur)
Since commit dc3106690b20 ("powerpc: tm: Always use fp_state and vr_state to store live registers") tm_reclaim_thread() doesn't use the parameter anymore, yet both callers have to bother getting it even though they have no other need for a struct thread_info. Just remove it and adjust the callers. Signed-off-by: Cyril Bur <cyrilbur@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-24  powerpc/tm: Update function prototype comment  (Cyril Bur)
In commit eb5c3f1c8647 ("powerpc: Always save/restore checkpointed regs during treclaim/trecheckpoint") __tm_recheckpoint was modified to no longer take the second parameter 'unsigned long orig_msr', as part of a TM rewrite to simplify the reclaiming/recheckpointing process. The comment in the asm file where the function is declared still shows the old prototype with the 'orig_msr' parameter. This patch corrects the comment. Signed-off-by: Cyril Bur <cyrilbur@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-24  powerpc/64: add 32 bytes prechecking before using VMX optimization on memcmp()  (Simon Guo)
This patch is based on the previous VMX patch on memcmp(). To optimize ppc64 memcmp() with VMX instructions, we need to think about the VMX penalty this brings: if the kernel uses VMX instructions, it needs to save/restore the current thread's VMX registers. There are 32 x 128-bit VMX registers in PPC, which means 32 x 16 = 512 bytes for load and store. The major concern regarding memcmp() performance in the kernel is KSM, which uses memcmp() frequently to merge identical pages. So it makes sense to take some measures/enhancements on KSM to see whether any improvement can be done here. Cyril Bur indicated in the following mail that memcmp() for KSM has a higher probability of failing (mismatching) early, within the first bytes: https://patchwork.ozlabs.org/patch/817322/#1773629 This patch is a follow-up on that. Per some testing, KSM memcmp() tends to fail within the first 32 bytes. More specifically:
  - 76% of cases fail/mismatch before 16 bytes;
  - 83% of cases fail/mismatch before 32 bytes;
  - 84% of cases fail/mismatch before 64 bytes.
So 32 bytes looks like a better choice than the other sizes for pre-checking. The early failure also holds for memcmp() in the non-KSM case: with a non-typical call load, ~73% of cases fail before the first 32 bytes. This patch adds a 32-byte pre-check before jumping into VMX operations, to avoid the unnecessary VMX penalty. It is not limited to the KSM case. Testing shows a ~20% improvement in average memcmp() execution time with this patch. Note that the 32B pre-check is only performed when the compare size is long enough (>= 4K currently) to allow VMX operation. The detailed data and analysis are at: https://github.com/justdoitqd/publicFiles/blob/master/memcmp/README.md Signed-off-by: Simon Guo <wei.guo.simon@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
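A C sketch of the pre-checking idea (the real routine is powerpc assembly; the function name, constants and the scalar fallback here are illustrative):

#include <stddef.h>
#include <string.h>

#define VMX_THRESHOLD  4096  /* the vector path is only considered for >= 4K compares */
#define PRECHECK_BYTES   32  /* ~83% of KSM compares mismatch within this prefix */

int memcmp_sketch(const void *s1, const void *s2, size_t n)
{
	if (n >= VMX_THRESHOLD) {
		/* Cheap scalar pre-check: most compares fail here, so the
		 * VMX save/restore cost is never paid for them. */
		int rc = memcmp(s1, s2, PRECHECK_BYTES);
		if (rc)
			return rc;

		/* Prefix matched: the real implementation would now enable
		 * VMX and compare the remainder with vector loads. */
		return memcmp((const char *)s1 + PRECHECK_BYTES,
			      (const char *)s2 + PRECHECK_BYTES,
			      n - PRECHECK_BYTES);
	}
	return memcmp(s1, s2, n);  /* short compares: scalar path only */
}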
2018-07-24  powerpc/64: enhance memcmp() with VMX instruction for long bytes comparison  (Simon Guo)
This patch adds VMX primitives to do memcmp() in case the compare size is equal to or greater than 4K bytes. The KSM feature can benefit from this. Test result with the following test program:

------
# cat tools/testing/selftests/powerpc/stringloops/memcmp.c
#include <malloc.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include "utils.h"

#define SIZE (1024 * 1024 * 900)
#define ITERATIONS 40

int test_memcmp(const void *s1, const void *s2, size_t n);

static int testcase(void)
{
	char *s1;
	char *s2;
	unsigned long i;

	s1 = memalign(128, SIZE);
	if (!s1) {
		perror("memalign");
		exit(1);
	}

	s2 = memalign(128, SIZE);
	if (!s2) {
		perror("memalign");
		exit(1);
	}

	for (i = 0; i < SIZE; i++) {
		s1[i] = i & 0xff;
		s2[i] = i & 0xff;
	}

	for (i = 0; i < ITERATIONS; i++) {
		int ret = test_memcmp(s1, s2, SIZE);

		if (ret) {
			printf("return %d at[%ld]! should have returned zero\n", ret, i);
			abort();
		}
	}

	return 0;
}

int main(void)
{
	return test_harness(testcase, "memcmp");
}
------

Without this patch (but with the first patch "powerpc/64: Align bytes before fall back to .Lshort in powerpc64 memcmp()." in the series):

	4.726728762 seconds time elapsed	( +- 3.54%)

With the VMX patch:

	4.234335473 seconds time elapsed	( +- 2.63%)

There is a ~+10% improvement. Testing with an unaligned and different-offset version (making s1 and s2 shift by a random offset within 16 bytes) can achieve a higher improvement than 10%. Signed-off-by: Simon Guo <wei.guo.simon@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-24  powerpc: add vcmpequd/vcmpequb ppc instruction macro  (Simon Guo)
Some old toolchains don't know about instructions like vcmpequd. This patch adds .long macros for vcmpequd and vcmpequb, in preparation for optimizing ppc64 memcmp() with VMX instructions. Signed-off-by: Simon Guo <wei.guo.simon@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
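A minimal sketch of the .long-macro technique (not the kernel's actual ppc-opcode.h definitions; the instruction words below are placeholders and the real encodings come from the Power ISA):

/* Expand to the raw 32-bit instruction word instead of the mnemonic, so
 * assemblers that do not know vcmpequd/vcmpequb can still build the file. */
#define PPC_RAW_INST(word)	".long " #word "\n"

/* Placeholder encodings for illustration only. */
#define VCMPEQUB_RAW		PPC_RAW_INST(0x10000006)
#define VCMPEQUD_RAW		PPC_RAW_INST(0x100000c7)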
2018-07-24  powerpc/64: Align bytes before fall back to .Lshort in powerpc64 memcmp()  (Simon Guo)
Currently the 64-byte version of memcmp() in powerpc will fall back to .Lshort (compare-per-byte mode) if either the src or dst address is not 8-byte aligned. This can be optimized in 2 situations:

1) If both addresses have the same offset from an 8-byte boundary: memcmp() can compare the unaligned bytes within the 8-byte boundary first and then compare the rest of the 8-byte-aligned content with .Llong mode.

2) If the src/dst addresses do not have the same offset from an 8-byte boundary: memcmp() can align the src address to 8 bytes, increment the dst address accordingly, then load src with aligned mode and load dst with unaligned mode.

This patch optimizes memcmp() behaviour in the above 2 situations. Tested with both little/big endian. The performance results below are based on little endian. The following is the test result with src/dst having the same offset (a similar result was observed when src/dst have different offsets):

(1) 256 bytes
Test with the existing tools/testing/selftests/powerpc/stringloops/memcmp:
- without patch
	29.773018302 seconds time elapsed	( +- 0.09% )
- with patch
	16.485568173 seconds time elapsed	( +- 0.02% )
-> There is ~+80% improvement

(2) 32 bytes
To observe the performance impact on < 32 bytes, modify tools/testing/selftests/powerpc/stringloops/memcmp.c as follows:
-------
#include <string.h>
#include "utils.h"
-#define SIZE 256
+#define SIZE 32
#define ITERATIONS 10000
int test_memcmp(const void *s1, const void *s2, size_t n);
--------
- without patch
	0.244746482 seconds time elapsed	( +- 0.36%)
- with patch
	0.215069477 seconds time elapsed	( +- 0.51%)
-> There is ~+13% improvement

(3) 0~8 bytes
To observe the < 8 bytes performance impact, modify tools/testing/selftests/powerpc/stringloops/memcmp.c as follows:
-------
#include <string.h>
#include "utils.h"
-#define SIZE 256
-#define ITERATIONS 10000
+#define SIZE 8
+#define ITERATIONS 1000000
int test_memcmp(const void *s1, const void *s2, size_t n);
-------
- without patch
	1.845642503 seconds time elapsed	( +- 0.12% )
- with patch
	1.849767135 seconds time elapsed	( +- 0.26% )
-> They are nearly the same. (-0.2%)

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
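A C sketch of situation 1) above, assuming both pointers share the same offset from an 8-byte boundary (the real routine is powerpc assembly; memcmp() stands in here for the fast aligned .Llong loop):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

int memcmp_same_offset_sketch(const unsigned char *s1, const unsigned char *s2, size_t n)
{
	/* Bytes needed to reach the next 8-byte boundary from s1 (and,
	 * by assumption, from s2 as well). */
	size_t head = (8 - ((uintptr_t)s1 & 7)) & 7;
	size_t i;

	if (head > n)
		head = n;

	/* Compare the unaligned head byte by byte. */
	for (i = 0; i < head; i++)
		if (s1[i] != s2[i])
			return s1[i] < s2[i] ? -1 : 1;

	/* Both pointers are now 8-byte aligned; the remainder can go
	 * through the aligned fast path. */
	return memcmp(s1 + head, s2 + head, n - head);
}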
2018-07-24  powerpc/pseries/mm: Improve error reporting on HCALL failures  (Aneesh Kumar K.V)
This patch adds error reporting to the H_ENTER and H_READ hcalls. A failure of either of these hcalls is mostly fatal, and it would be good to log the failure reason. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> [mpe: Split out of larger patch] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-24  powerpc/pseries: Use pr_xxx() in lpar.c  (Aneesh Kumar K.V)
Switch from printk to pr_fmt() / pr_xxx(). Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> [mpe: Split out of larger patch] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
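A generic example of the pr_fmt()/pr_xxx() pattern being switched to; the prefix string and the helper function are illustrative, not the exact lpar.c code:

/* pr_fmt() must be defined before the printk headers are pulled in. */
#define pr_fmt(fmt) "lpar: " fmt

#include <linux/printk.h>

static void report_hcall_failure(long rc)
{
	/* Prints "lpar: H_ENTER failed: rc=<n>" thanks to pr_fmt(). */
	pr_err("H_ENTER failed: rc=%ld\n", rc);
}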
2018-07-24  powerpc/mm/hash: Reduce contention on hpte lock  (Aneesh Kumar K.V)
We already do this in some places. This patch makes sure we always try to search for the hpte without holding the lock, and redo the compare with the lock held once a match is found. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
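A simplified sketch of the lockless-search-then-recheck pattern described above; HPTE_V_COMPARE(), hpte_get_old_v(), native_lock_hpte()/native_unlock_hpte() and HPTES_PER_GROUP are assumed from the existing powerpc hash MMU code, and valid-bit handling is omitted:

static long find_matching_slot(struct hash_pte *hptep, unsigned long want_v)
{
	long i;

	for (i = 0; i < HPTES_PER_GROUP; i++, hptep++) {
		/* Cheap, lockless read first: skip obvious non-matches. */
		if (!HPTE_V_COMPARE(hpte_get_old_v(hptep), want_v))
			continue;

		/* Possible match: take the per-HPTE lock and re-check,
		 * since the entry may have changed between the two reads. */
		native_lock_hpte(hptep);
		if (HPTE_V_COMPARE(hpte_get_old_v(hptep), want_v))
			return i;	/* match confirmed, slot returned locked */
		native_unlock_hpte(hptep);
	}
	return -1;
}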
2018-07-24  powerpc/mm/hash: Add hpte_get_old_v and use that instead of opencoding  (Aneesh Kumar K.V)
No functional change Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-24  powerpc/mm/hash: Remove the superfluous bitwise operation when finding the hpte group  (Aneesh Kumar K.V)
When computing the starting slot number for a hash page table group we used to do this:

  hpte_group = ((hash & htab_hash_mask) * HPTES_PER_GROUP) & ~0x7UL;

Multiplying by 8 (HPTES_PER_GROUP) implies the last three bits are 0, hence we really don't need to clear them separately. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
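A self-contained check of why the trailing '& ~0x7UL' is superfluous: multiplying by HPTES_PER_GROUP (8) already leaves the low three bits clear for any hash value and mask (names mirror the snippet above):

#include <assert.h>

#define HPTES_PER_GROUP 8UL

static void check_equivalence(unsigned long hash, unsigned long htab_hash_mask)
{
	unsigned long with_clear    = ((hash & htab_hash_mask) * HPTES_PER_GROUP) & ~0x7UL;
	unsigned long without_clear = (hash & htab_hash_mask) * HPTES_PER_GROUP;

	/* Multiplication by 8 is a left shift by 3, so bits 0-2 are already 0. */
	assert(with_clear == without_clear);
}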
2018-07-24  powerpc/mm: Increase MAX_PHYSMEM_BITS to 128TB with SPARSEMEM_VMEMMAP config  (Aneesh Kumar K.V)
We do this only with the VMEMMAP config so that our page_to_nid()/page_to_section() etc. are not impacted. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
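A sketch (not the exact powerpc header) of how such a config-dependent limit is typically expressed; 2^46 bytes is 64TB and 2^47 bytes is 128TB:

#ifdef CONFIG_SPARSEMEM_VMEMMAP
#define MAX_PHYSMEM_BITS	47	/* 2^47 = 128TB */
#else
#define MAX_PHYSMEM_BITS	46	/* 2^46 = 64TB */
#endif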
2018-07-24  powerpc/mm: Check memblock_add against MAX_PHYSMEM_BITS range  (Aneesh Kumar K.V)
With the SPARSEMEM config enabled, we make sure that we don't add sections beyond the MAX_PHYSMEM_BITS range. This results in not building a vmemmap mapping for the range beyond that maximum. But our memblock layer looks at the device tree and creates mappings for the full memory range. Prevent this by checking against MAX_PHYSMEM_BITS when doing memblock_add. We don't do a similar check for memblock_reserve_range. If a reserved range is beyond MAX_PHYSMEM_BITS we expect it to be configured with 'nomap'. Any other reserved range should come from existing memblock ranges which we already filtered while adding. This avoids a crash like the one below when running on a system with system RAM configured above MAX_PHYSMEM_BITS:

  Unable to handle kernel paging request for data at address 0xc00a001000000440
  Faulting instruction address: 0xc000000001034118
  cpu 0x0: Vector: 300 (Data Access) at [c00000000124fb30]
      pc: c000000001034118: __free_pages_bootmem+0xc0/0x1c0
      lr: c00000000103b258: free_all_bootmem+0x19c/0x22c
      sp: c00000000124fdb0
     msr: 9000000002001033
     dar: c00a001000000440
   dsisr: 40000000
    current = 0xc00000000120dd00
    paca    = 0xc000000001f60000   irqmask: 0x03   irq_happened: 0x01
      pid   = 0, comm = swapper
  [c00000000124fe20] c00000000103b258 free_all_bootmem+0x19c/0x22c
  [c00000000124fee0] c000000001010a68 mem_init+0x3c/0x5c
  [c00000000124ff00] c00000000100401c start_kernel+0x298/0x5e4
  [c00000000124ff90] c00000000000b57c start_here_common+0x1c/0x520

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
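A hedged sketch of the check described above; the wrapper and its name are illustrative, only memblock_add() and MAX_PHYSMEM_BITS are taken from the kernel:

/* Clamp anything the device tree reports above 1 << MAX_PHYSMEM_BITS
 * before handing it to memblock, so no range without a vmemmap mapping
 * is ever added. */
static int __init bounded_memblock_add(u64 base, u64 size)
{
	u64 limit = 1UL << MAX_PHYSMEM_BITS;

	if (base >= limit)
		return 0;			/* entirely beyond the supported range: skip */
	if (base + size > limit)
		size = limit - base;		/* trim the part that sticks out */

	return memblock_add(base, size);
}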
2018-07-24  powerpc: Add ppc64le and ppc64_book3e allmodconfig targets  (Michael Ellerman)
Similarly as we just did for 32-bit, add phony targets for generating a little endian and Book3E allmodconfig. These aren't covered by the regular allmodconfig, which is big endian and Book3S due to the way the Kconfig symbols are structured. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-24  powerpc: Add ppc32_allmodconfig defconfig target  (Michael Ellerman)
Because the allmodconfig logic just sets every symbol to M or Y, it has the effect of always generating a 64-bit config, because CONFIG_PPC64 becomes Y. So to make it easier for folks to test 32-bit code, provide a phony defconfig target that generates a 32-bit allmodconfig. The 32-bit port has several mutually exclusive CPU types, we choose the Book3S variants as that's what the help text in Kconfig says is most common. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-24  powerpc64s: Show ori31 availability in spectre_v1 sysfs file not v2  (Michael Ellerman)
When I added the spectre_v2 information in sysfs, I included the availability of the ori31 speculation barrier. Although the ori31 barrier can be used to mitigate v2, it's primarily intended as a spectre v1 mitigation. Spectre v2 is mitigated by hardware changes. So rework the sysfs files to show the ori31 information in the spectre_v1 file, rather than v2. Currently we display eg:

  $ grep . spectre_v*
  spectre_v1:Mitigation: __user pointer sanitization
  spectre_v2:Mitigation: Indirect branch cache disabled, ori31 speculation barrier enabled

After:

  $ grep . spectre_v*
  spectre_v1:Mitigation: __user pointer sanitization, ori31 speculation barrier enabled
  spectre_v2:Mitigation: Indirect branch cache disabled

Fixes: d6fbe1c55c55 ("powerpc/64s: Wire up cpu_show_spectre_v2()") Cc: stable@vger.kernel.org # v4.17+ Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-24  powerpc: NMI IPI make NMI IPIs fully synchronous  (Nicholas Piggin)
There is an asynchronous aspect to smp_send_nmi_ipi. The caller waits for all CPUs to call in to the handler, but it does not wait for completion of the handler. This is a needless complication, so remove it and always wait synchronously. The synchronous wait allows the caller to easily time out and clear the wait for completion (zero nmi_ipi_busy_count) in the case of badly behaved handlers. This would have prevented the recent smp_send_stop NMI IPI bug from causing the system to hang. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-24  powerpc/64s: make PACA_IRQ_HARD_DIS track MSR[EE] closely  (Nicholas Piggin)
When the masked interrupt handler clears MSR[EE] for an interrupt in the PACA_IRQ_MUST_HARD_MASK set, it does not set PACA_IRQ_HARD_DIS. This makes them get out of synch. With that taken into account, it's only low level irq manipulation (and interrupt entry before reconcile) where they can be out of synch. This makes the code less surprising. It also allows the IRQ replay code to rely on the IRQ_HARD_DIS value and not have to mtmsrd again in this case (e.g., for an external interrupt that has been masked). The bigger benefit might just be that there is not such an element of surprise in these two bits of state. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-24  powerpc/pkeys: make protection key 0 less special  (Ram Pai)
Applications need the ability to associate an address range with some key and later revert to its initial default key. Pkey-0 comes close to providing this function but falls short, because the current implementation disallows applications from explicitly associating pkey-0 with an address range. Let's make pkey-0 less special and treat it almost like any other key. Thus it can be explicitly associated with any address range, and it can be freed. This gives the application more flexibility and power. The ability to free pkey-0 must be used responsibly, since pkey-0 is associated with almost all address ranges by default. Even with this change pkey-0 continues to be slightly more special from the following points of view:
  (a) it is implicitly allocated;
  (b) it is the default key assigned to any address range;
  (c) its permissions cannot be modified by userspace.
NOTE: (c) is specific to powerpc only. pkey-0 is associated by default with all pages, including kernel pages, and pkeys are also active in kernel mode. If any permission is denied on pkey-0, the kernel running in the context of the application will be unable to operate. Tested on powerpc. Signed-off-by: Ram Pai <linuxram@us.ibm.com> [mpe: Drop #define PKEY_0 0 in favour of plain old 0] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
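A userspace sketch of what the change enables: protecting a range with a freshly allocated key and later explicitly rebinding it to pkey 0. pkey_alloc(), pkey_mprotect() and pkey_free() are the standard Linux/glibc wrappers; the helper name is made up and error handling is trimmed for brevity:

#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>

int rebind_to_default_key(void *addr, size_t len)
{
	int key = pkey_alloc(0, PKEY_DISABLE_WRITE);	/* some non-zero key */
	if (key < 0)
		return -1;

	pkey_mprotect(addr, len, PROT_READ | PROT_WRITE, key);

	/* ... later: revert the range to the default key.  With this patch,
	 * powerpc accepts an explicit key 0 here instead of rejecting it. */
	pkey_mprotect(addr, len, PROT_READ | PROT_WRITE, 0);

	return pkey_free(key);
}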
2018-07-24  powerpc/pkeys: Preallocate execute-only key  (Ram Pai)
The execute-only key is allocated dynamically. This is a problem. When a thread implicitly creates an execute-only key and resets the UAMOR for that key, the UAMOR value does not percolate to all the other threads. Any other thread may ignorantly change the permissions on the key. This can cause the key to no longer be execute-only for that thread. Preallocate the execute-only key and ensure that no thread can change the permission of the key, by resetting the corresponding bit in UAMOR. Fixes: 5586cf61e108 ("powerpc: introduce execute-only pkey") Cc: stable@vger.kernel.org # v4.16+ Signed-off-by: Ram Pai <linuxram@us.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-24  powerpc/pkeys: Fix calculation of total pkeys  (Ram Pai)
The calculation of the total number of pkeys is off by one. Fix it. Fixes: 4fb158f65ac5 ("powerpc: track allocation status of all pkeys") Cc: stable@vger.kernel.org # v4.16+ Signed-off-by: Ram Pai <linuxram@us.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-24  arm64: fix ACPI dependencies  (Arnd Bergmann)
Kconfig reports a warning on x86 builds after the ARM64 dependency was added:

  drivers/acpi/Kconfig:6:error: recursive dependency detected!
  drivers/acpi/Kconfig:6: symbol ACPI depends on EFI

This rephrases the dependency to keep the ARM64 details out of the shared Kconfig file, so Kconfig no longer gets confused by it. For consistency, all three architectures that support ACPI now select ARCH_SUPPORTS_ACPI in exactly the configuration in which they allow it. We still need the 'default x86', as each one wants a different default: default-y on x86, default-n on arm64, and always-y on ia64. Fixes: 5bcd44083a08 ("drivers: acpi: add dependency of EFI for arm64") Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-07-24  powerpc/pkeys: Save the pkey registers before fork  (Ram Pai)
When a thread forks, the contents of the AMR, IAMR and UAMOR registers are not inherited by the newly forked thread. Save the registers before forking so that their contents are automatically copied into the new thread. Fixes: cf43d3b26452 ("powerpc: Enable pkey subsystem") Cc: stable@vger.kernel.org # v4.16+ Signed-off-by: Ram Pai <linuxram@us.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-24  powerpc/pkeys: key allocation/deallocation must not change pkey registers  (Ram Pai)
Key allocation and deallocation have the side effect of programming the UAMOR/AMR/IAMR registers. This is wrong, since it is the responsibility of the application, not the kernel, to modify the permissions on a key. Do not modify the pkey registers at key allocation/deallocation. This patch also fixes a bug where sys_pkey_free() resets the UAMOR bits of the key, thus making its permissions unmodifiable from user space. If the same key later gets reallocated by a different thread, that thread will no longer be able to change the permissions on the key. Fixes: cf43d3b26452 ("powerpc: Enable pkey subsystem") Cc: stable@vger.kernel.org # v4.16+ Reviewed-by: Thiago Jung Bauermann <bauerman@linux.ibm.com> Signed-off-by: Ram Pai <linuxram@us.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-24  powerpc/pkeys: Deny read/write/execute by default  (Ram Pai)
Deny all permissions on all keys, with some exceptions: pkey-0 must allow all permissions, or else everything comes to a screeching halt, and the execute-only key must allow execute permission. Fixes: cf43d3b26452 ("powerpc: Enable pkey subsystem") Cc: stable@vger.kernel.org # v4.16+ Signed-off-by: Ram Pai <linuxram@us.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>