Age        Commit message        Author
2016-09-09  Merge branch 'fixes' of git://git.armlinux.org.uk/~rmk/linux-arm  (Linus Torvalds)
Pull ARM fixes from Russell King:
 "A few ARM fixes:

   - Robin Murphy noticed that the non-secure privileged entry was
     relying on undefined behaviour, which needed to be fixed.

   - Vladimir Murzin noticed that proc-v7 fails to build for MMUless
     configurations because a required header file wasn't included.

   - A bunch of fixes for StrongARM regressions found while testing
     4.8-rc on such platforms"

* 'fixes' of git://git.armlinux.org.uk/~rmk/linux-arm:
  ARM: sa1100: clear reset status prior to reboot
  ARM: 8600/1: Enforce some NS-SVC initialisation
  ARM: 8599/1: mm: pull asm/memory.h explicitly
  ARM: sa1100: register clocks early
  ARM: sa1100: fix 3.6864MHz clock
2016-09-09  x86/efi: Allow invocation of arbitrary boot services  (Lukas Wunner)
We currently allow invocation of 8 boot services with efi_call_early(). Not
included are LocateHandleBuffer and LocateProtocol in particular. For
graphics output or to retrieve PCI ROMs and Apple device properties, we're
thus forced to use the LocateHandle + AllocatePool + LocateHandle combo,
which is cumbersome and needs more code.

The ARM folks allow invocation of the full set of boot services but are
restricted to our 8 boot services in functions shared across arches.

Thus, rather than adding just LocateHandleBuffer and LocateProtocol to
struct efi_config, let's rework efi_call_early() to allow invocation of
arbitrary boot services by selecting the 64 bit vs 32 bit code path in the
macro itself.

When compiling for 32 bit or for 64 bit without mixed mode, the unused code
path is optimized away and the binary code is the same as before. But on
64 bit with mixed mode enabled, this commit adds one compare instruction to
each invocation of a boot service and, depending on the code path selected,
two jump instructions. (Most of the time gcc arranges the jumps in the
32 bit code path.) The result is a minuscule performance penalty and the
binary code becomes slightly larger and more difficult to read when
disassembled. This isn't a hot path, so these drawbacks are arguably
outweighed by the attainable simplification of the C code. We have some
overhead anyway for thunking or conversion between calling conventions.

The 8 boot services can consequently be removed from struct efi_config.

No functional change intended (for now).

Example -- invocation of free_pool before (64 bit code path):

  0x2d4  movq  %ds:efi_early, %rdx          ; efi_early
  0x2db  movq  %ss:arg_0-0x20(%rsp), %rsi
  0x2e0  xorl  %eax, %eax
  0x2e2  movq  %ds:0x28(%rdx), %rdi         ; efi_early->free_pool
  0x2e6  callq *%ds:0x58(%rdx)              ; efi_early->call()

Example -- invocation of free_pool after (64 / 32 bit mixed code path):

  0x0dc  movq  %ds:efi_early, %rax          ; efi_early
  0x0e3  cmpb  $0, %ds:0x28(%rax)           ; !efi_early->is64 ?
  0x0e7  movq  %ds:0x20(%rax), %rdx         ; efi_early->call()
  0x0eb  movq  %ds:0x10(%rax), %rax         ; efi_early->boot_services
  0x0ef  je    $0x150
  0x0f1  movq  %ds:0x48(%rax), %rdi         ; free_pool (64 bit)
  0x0f5  xorl  %eax, %eax
  0x0f7  callq *%rdx
  ...
  0x150  movl  %ds:0x30(%rax), %edi         ; free_pool (32 bit)
  0x153  jmp   $0x0f5

Size of eboot.o text section:
  CONFIG_X86_32:                      6464 before, 6318 after
  CONFIG_X86_64 && !CONFIG_EFI_MIXED: 7670 before, 7573 after
  CONFIG_X86_64 && CONFIG_EFI_MIXED:  7670 before, 8319 after

Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
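As a rough sketch of the shape being described (simplified, with field and
type names taken from the disassembly comments above rather than the exact
eboot.c macro), selecting the 64 bit or 32 bit boot services table inside
the macro itself looks like:

  #define efi_call_early(f, ...)                                      \
          efi_early->call(efi_is_64bit() ?                            \
                  ((efi_boot_services_64_t *)(unsigned long)          \
                          efi_early->boot_services)->f :              \
                  ((efi_boot_services_32_t *)(unsigned long)          \
                          efi_early->boot_services)->f,               \
                  __VA_ARGS__)

Because both branches name the same member f, any boot service present in
either table can be invoked without first listing it in struct efi_config.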
2016-09-09  x86/efi: Optimize away setup_gop32/64 if unused  (Lukas Wunner)
Commit 2c23b73c2d02 ("x86/efi: Prepare GOP handling code for reuse as
generic code") introduced an efi_is_64bit() macro to x86 which previously
only existed for arm arches. The macro is used to choose between the 64 bit
or 32 bit code path in gop.c at runtime. However the code path that's going
to be taken is known at compile time when compiling for x86_32 or for
x86_64 with mixed mode disabled. Amend the macro to eliminate the unused
code path in those cases.

Size of gop.o text section:
  CONFIG_X86_32:                      1758 before, 1299 after
  CONFIG_X86_64 && !CONFIG_EFI_MIXED: 2201 before, 1406 after
  CONFIG_X86_64 && CONFIG_EFI_MIXED:  2201 before and after

Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
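Sketched in terms of the same efi_early->is64 field as the previous entry
(an assumption, not the literal gop.c change), the amended macro is roughly:

  #define efi_is_64bit()                                              \
          (IS_ENABLED(CONFIG_X86_64) &&                               \
           (!IS_ENABLED(CONFIG_EFI_MIXED) || efi_early->is64))

IS_ENABLED() expands to a compile time constant, so on x86_32 the whole
expression folds to false and on x86_64 without mixed mode to true, letting
the compiler discard the dead code path.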
2016-09-09  x86/efi: Use kmalloc_array() in efi_call_phys_prolog()  (Markus Elfring)
* The size of this memory allocation was computed as a multiplication,
  which indicates that an array is being allocated. Use the dedicated
  helper kmalloc_array() for it instead. This issue was detected by using
  the Coccinelle software.

* Replace the explicit data type in the size calculation with a pointer
  dereference, which makes the size determination a bit safer, in line
  with the Linux coding style conventions.

Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
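The shape of such a conversion (illustrative diff, not necessarily the
exact hunk):

  -	save_pgd = kmalloc(n_pgds * sizeof(pgd_t), GFP_KERNEL);
  +	save_pgd = kmalloc_array(n_pgds, sizeof(*save_pgd), GFP_KERNEL);

kmalloc_array() checks the multiplication for overflow, and sizeof(*save_pgd)
keeps the element size correct even if the pointer's type ever changes.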
2016-09-09  efi/arm64: Treat regions with WT/WC set but WB cleared as memory  (Ard Biesheuvel)
Currently, memory regions are only recorded in the memblock memory table if
they have the EFI_MEMORY_WB memory type attribute set. In case the region
is of a reserved type, it is also marked as MEMBLOCK_NOMAP, which will
leave it out of the linear mapping.

However, memory regions may legally have the EFI_MEMORY_WT or EFI_MEMORY_WC
attributes set, and the EFI_MEMORY_WB cleared, in which case the region in
question is obviously backed by normal memory, but is not recorded in the
memblock memory table at all. Since it would be useful to be able to
identify any UEFI reported memory region using memblock_is_memory(), it
makes sense to add all memory to the memblock memory table, and simply mark
it as MEMBLOCK_NOMAP if it lacks the EFI_MEMORY_WB attribute.

While implementing this, let's refactor the code slightly to make it easier
to understand: replace is_normal_ram() with is_memory(), and make it return
true for each region that has any of the WB|WT|WC bits set. (This follows
the AArch64 bindings in the UEFI spec, which state that those are the
attributes that map to normal memory.) Also, replace is_reserve_region()
with is_usable_memory(), and only invoke it if the region in question was
identified as memory by is_memory() in the first place. The net result is
the same (only reserved regions that are backed by memory end up in the
memblock memory table with the MEMBLOCK_NOMAP flag set) but carried out in
a more straightforward way.

Finally, we remove the trailing asterisk in the EFI debug output. Keeping
it clutters the code, and it serves no real purpose now that we no longer
temporarily reserve BootServices code and data regions like we did in the
early days of EFI support on arm64 Linux (which it inherited from the x86
implementation).

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: James Morse <james.morse@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
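A minimal sketch of the predicate described above (the attribute macros are
the standard ones from <linux/efi.h>):

  static bool is_memory(efi_memory_desc_t *md)
  {
          /* Any of WB, WT or WC implies the region is backed by normal memory. */
          return md->attribute & (EFI_MEMORY_WB | EFI_MEMORY_WT | EFI_MEMORY_WC);
  }

Regions for which this returns true but which lack EFI_MEMORY_WB can then
be added to memblock and marked MEMBLOCK_NOMAP.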
2016-09-09  efi: Add efi_test driver for exporting UEFI runtime service interfaces  (Ivan Hu)
This driver is used by the Firmware Test Suite (FWTS) for testing the UEFI
runtime interfaces readiness of the firmware. It exports the UEFI runtime
service interfaces to userspace, which allows userspace to use and test the
UEFI runtime services provided by the firmware.

The driver uses the efi.<service> function pointers directly instead of
going through the efivar API, to allow for direct testing of the UEFI
runtime service interfaces provided by the firmware.

Details of FWTS are available at:
  <https://wiki.ubuntu.com/FirmwareTestSuite>

Signed-off-by: Ivan Hu <ivan.hu@canonical.com>
Cc: joeyli <jlee@suse.com>
Cc: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09  x86/efi: Defer efi_esrt_init until after memblock_x86_fill  (Ricardo Neri)
Commit 7b02d53e7852 ("efi: Allow drivers to reserve boot services forever")
introduced a new efi_mem_reserve() to reserve the boot services memory
regions forever. This reservation involves allocating a new EFI memory
range descriptor. However, the allocation can only succeed if there is
memory available for it. Otherwise, an error such as the following may
occur:

  esrt: Reserving ESRT space from 0x000000003dd6a000 to 0x000000003dd6a010.
  Kernel panic - not syncing: ERROR: Failed to allocate 0x9f0 bytes below 0x0.
  CPU: 0 PID: 0 Comm: swapper Not tainted 4.7.0-rc5+ #503
   0000000000000000 ffffffff81e03ce0 ffffffff8131dae8 ffffffff81bb6c50
   ffffffff81e03d70 ffffffff81e03d60 ffffffff8111f4df 0000000000000018
   ffffffff81e03d70 ffffffff81e03d08 00000000000009f0 00000000000009f0
  Call Trace:
   [<ffffffff8131dae8>] dump_stack+0x4d/0x65
   [<ffffffff8111f4df>] panic+0xc5/0x206
   [<ffffffff81f7c6d3>] memblock_alloc_base+0x29/0x2e
   [<ffffffff81f7c6e3>] memblock_alloc+0xb/0xd
   [<ffffffff81f6c86d>] efi_arch_mem_reserve+0xbc/0x134
   [<ffffffff81fa3280>] efi_mem_reserve+0x2c/0x31
   [<ffffffff81fa3280>] ? efi_mem_reserve+0x2c/0x31
   [<ffffffff81fa40d3>] efi_esrt_init+0x19e/0x1b4
   [<ffffffff81f6d2dd>] efi_init+0x398/0x44a
   [<ffffffff81f5c782>] setup_arch+0x415/0xc30
   [<ffffffff81f55af1>] start_kernel+0x5b/0x3ef
   [<ffffffff81f55434>] x86_64_start_reservations+0x2f/0x31
   [<ffffffff81f55520>] x86_64_start_kernel+0xea/0xed
  ---[ end Kernel panic - not syncing: ERROR: Failed to allocate 0x9f0 bytes below 0x0.

An inspection of the memblock configuration reveals that there is no memory
available for the allocation:

  MEMBLOCK configuration:
   memory size = 0x0 reserved size = 0x4f339c0
   memory.cnt  = 0x1
   memory[0x0]   [0x00000000000000-0xffffffffffffffff], 0x0 bytes on node 0 flags: 0x0
   reserved.cnt  = 0x4
   reserved[0x0] [0x0000000008c000-0x0000000008c9bf], 0x9c0 bytes flags: 0x0
   reserved[0x1] [0x0000000009f000-0x000000000fffff], 0x61000 bytes flags: 0x0
   reserved[0x2] [0x00000002800000-0x0000000394bfff], 0x114c000 bytes flags: 0x0
   reserved[0x3] [0x000000304e4000-0x00000034269fff], 0x3d86000 bytes flags: 0x0

This situation can be avoided if we call efi_esrt_init() after memblock has
memory regions available for the allocation.

Also, the EFI ESRT driver makes use of early_memremap() mappings. Therefore,
we do not want to defer efi_esrt_init() for too long: we must call it while
calls to early_memremap() are still valid.

A good place to meet the two aforementioned conditions is right after
memblock_x86_fill(), grouped with other EFI-related functions.

Reported-by: Scott Lawson <scott.lawson@intel.com>
Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Peter Jones <pjones@redhat.com>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
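The intended ordering, sketched rather than quoted from setup_arch():

  	memblock_x86_fill();		/* memblock now has memory to allocate from */

  	if (efi_enabled(EFI_MEMMAP))
  		efi_esrt_init();	/* early_memremap() is still usable at this point */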
2016-09-09  efi/arm64: Add debugfs node to dump UEFI runtime page tables  (Ard Biesheuvel)
Register the debugfs node 'efi_page_tables' to allow the UEFI runtime page tables to be inspected. Note that ARM does not have 'asm/ptdump.h' [yet] so for now, this is arm64 only. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Cc: Leif Lindholm <leif.lindholm@linaro.org> Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09  x86/efi: Remove unused find_bits() function  (Lukas Wunner)
Left behind by commit fc37206427ce ("efi/libstub: Move Graphics Output Protocol handling to generic code"). Signed-off-by: Lukas Wunner <lukas@wunner.de> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09  fs/efivarfs: Fix double kfree() in error path  (Matt Fleming)
Julia reported that we may double free 'name' in efivarfs_callback(), and that this bug was introduced by commit 0d22f33bc37c ("efi: Don't use spinlocks for efi vars"). Move one of the kfree()s until after the point at which we know we are definitely on the success path. Reported-by: Julia Lawall <julia.lawall@lip6.fr> Acked-by: Julia Lawall <julia.lawall@lip6.fr> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Sylvain Chouleur <sylvain.chouleur@gmail.com> Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09  x86/efi: Map in physical addresses in efi_map_region_fixed  (Alex Thorlton)
This is a simple change to add in the physical mappings as well as the virtual mappings in efi_map_region_fixed. The motivation here is to get access to EFI runtime code that is only available via the 1:1 mappings on a kexec'd kernel. The added call is essentially the kexec analog of the first __map_region that Boris put in efi_map_region in commit d2f7cbe7b26a ("x86/efi: Runtime services virtual mapping"). Signed-off-by: Alex Thorlton <athorlton@sgi.com> Cc: Russ Anderson <rja@sgi.com> Cc: Dimitri Sivanich <sivanich@sgi.com> Cc: Mike Travis <travis@sgi.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Dave Young <dyoung@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
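In other words, a sketch under the assumption that __map_region() takes the
descriptor plus the target address, as in the commit referenced above:

  static void __init efi_map_region_fixed(efi_memory_desc_t *md)
  {
          __map_region(md, md->phys_addr);	/* 1:1 mapping for the kexec'd kernel */
          __map_region(md, md->virt_addr);	/* existing virtual mapping */
  }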
2016-09-09  lib/ucs2_string: Speed up ucs2_utf8size()  (Lukas Wunner)
No need to calculate the string length on every loop iteration. Signed-off-by: Lukas Wunner <lukas@wunner.de> Cc: Peter Jones <pjones@redhat.com> Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
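A sketch of the change: hoist the length calculation out of the loop
condition so ucs2_strlen() runs once rather than once per character (the
byte counts per UCS-2 code point follow the usual UTF-8 rules; treat the
body as illustrative, not the exact library code):

  unsigned long ucs2_utf8size(const ucs2_char_t *src)
  {
          unsigned long i, j = 0;
          unsigned long len = ucs2_strlen(src);	/* previously evaluated every iteration */

          for (i = 0; i < len; i++) {
                  u16 c = src[i];

                  if (c >= 0x800)
                          j += 3;
                  else if (c >= 0x80)
                          j += 2;
                  else
                          j += 1;
          }

          return j;
  }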
2016-09-09  firmware-gsmi: Delete an unnecessary check before the function call "dma_pool_destroy"  (Markus Elfring)
The dma_pool_destroy() function tests whether its argument is NULL and then
returns immediately. Thus the test around the call is not needed.

This issue was detected by using the Coccinelle software.

Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Julia Lawall <julia.lawall@lip6.fr>
Cc: Mike Waychison <mikew@google.com>
Cc: Michel Lespinasse <walken@google.com>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09  x86/efi: Initialize status to ensure garbage is not returned on small size  (Colin Ian King)
Although very unlikely, if size is too small or zero, then we end up with
status never being set and returning garbage. Instead, initialize status to
EFI_INVALID_PARAMETER to indicate that size is invalid in the calls to
setup_uga32() and setup_uga64().

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09  efi: Replace runtime services spinlock with semaphore  (Ard Biesheuvel)
The purpose of the efi_runtime_lock is to prevent concurrent calls into the firmware. There is no need to use spinlocks here, as long as we ensure that runtime service invocations from an atomic context (i.e., EFI pstore) cannot block. So use a semaphore instead, and use down_trylock() in the nonblocking case. We don't use a mutex here because the mutex_trylock() function must not be called from interrupt context, whereas the down_trylock() can. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Leif Lindholm <leif.lindholm@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Sylvain Chouleur <sylvain.chouleur@gmail.com> Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
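A minimal sketch of the locking pattern (helper names invented, return
values assumed):

  static DEFINE_SEMAPHORE(efi_runtime_lock);	/* binary semaphore, count starts at 1 */

  /* Non-blocking variant for atomic contexts such as efi-pstore. */
  static efi_status_t efi_runtime_lock_nonblocking(void)
  {
          if (down_trylock(&efi_runtime_lock))
                  return EFI_NOT_READY;	/* another firmware call is in flight */
          return EFI_SUCCESS;
  }

  static void efi_runtime_unlock(void)
  {
          up(&efi_runtime_lock);
  }

Blocking callers simply use down()/up() and may sleep while they wait.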
2016-09-09  efi: Don't use spinlocks for efi vars  (Sylvain Chouleur)
All efivars operations are protected by a spinlock which prevents
interruptions and preemption. This is too restrictive: we just need a lock
that prevents concurrency. The idea is to use a semaphore of count 1 and to
have two ways of locking, depending on the context:

 - in interrupt context, call down_trylock() and return an error if it
   fails;
 - in normal context, call down_interruptible().

We don't use a mutex here because the mutex_trylock() function must not be
called from interrupt context, whereas down_trylock() can.

Signed-off-by: Sylvain Chouleur <sylvain.chouleur@intel.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Sylvain Chouleur <sylvain.chouleur@gmail.com>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09  efi: Use a file local lock for efivars  (Sylvain Chouleur)
This patch replaces the spinlock in the efivars struct with a single lock
for the whole vars.c file. The goal of this lock is to protect concurrent
calls to the EFI variable services, as well as registering and
unregistering efivars operations. This allows us to register new efivars
operations without any call being in progress.

Signed-off-by: Sylvain Chouleur <sylvain.chouleur@intel.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Sylvain Chouleur <sylvain.chouleur@gmail.com>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09  efi/arm*: esrt: Add missing call to efi_esrt_init()  (Ard Biesheuvel)
ESRT support is built by default for all architectures that define CONFIG_EFI. However, this support was not wired up yet for ARM/arm64, since efi_esrt_init() was never called. So add the missing call. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Leif Lindholm <leif.lindholm@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Jones <pjones@redhat.com> Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09  efi/esrt: Use memremap not ioremap to access ESRT table in memory  (Ard Biesheuvel)
On ARM and arm64, ioremap() and memremap() are not interchangeable like on x86, and the use of ioremap() on ordinary RAM is typically flagged as an error if the memory region being mapped is also covered by the linear mapping, since that would lead to aliases with conflicting cacheability attributes. Since what we are dealing with is not an I/O region with side effects, using ioremap() here is arguably incorrect anyway, so let's replace it with memremap() instead. Acked-by: Peter Jones <pjones@redhat.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Leif Lindholm <leif.lindholm@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
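The shape of the change (illustrative, not the exact hunk):

  -	va = ioremap(efi.esrt, size);
  +	va = memremap(efi.esrt, size, MEMREMAP_WB);

MEMREMAP_WB asks for an ordinary cacheable mapping, which is compatible
with the linear-map alias on ARM/arm64.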
2016-09-09  x86/efi-bgrt: Use efi_mem_reserve() to avoid copying image data  (Matt Fleming)
efi_mem_reserve() allows us to permanently mark EFI boot services regions as reserved, which means we no longer need to copy the image data out and into a separate buffer. Leaving the data in the original boot services region has the added benefit that BGRT images can now be passed across kexec reboot. Reviewed-by: Josh Triplett <josh@joshtriplett.org> Tested-by: Dave Young <dyoung@redhat.com> [kexec/kdump] Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [arm] Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Leif Lindholm <leif.lindholm@linaro.org> Cc: Peter Jones <pjones@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Josh Boyer <jwboyer@fedoraproject.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Môshe van der Sterre <me@moshe.nl> Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09  efi/esrt: Use efi_mem_reserve() and avoid a kmalloc()  (Matt Fleming)
We can use the new efi_mem_reserve() API to mark the ESRT table as reserved forever and save ourselves the trouble of copying the data out into a kmalloc buffer. The added advantage is that now the ESRT driver will work across kexec reboot. Tested-by: Dave Young <dyoung@redhat.com> [kexec/kdump] Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [arm] Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Leif Lindholm <leif.lindholm@linaro.org> Cc: Peter Jones <pjones@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09  efi/runtime-map: Use efi.memmap directly instead of a copy  (Matt Fleming)
Now that efi.memmap is available all of the time there's no need to allocate and build a separate copy of the EFI memory map. Furthermore, efi.memmap contains boot services regions but only those regions that have been reserved via efi_mem_reserve(). Using efi.memmap allows us to pass boot services across kexec reboot so that the ESRT and BGRT drivers will now work. Tested-by: Dave Young <dyoung@redhat.com> [kexec/kdump] Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [arm] Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Leif Lindholm <leif.lindholm@linaro.org> Cc: Peter Jones <pjones@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09  efi: Allow drivers to reserve boot services forever  (Matt Fleming)
Today, it is not possible for drivers to reserve EFI boot services regions
for access after efi_free_boot_services() has been called on x86. For
ARM/arm64 it can be done simply by calling memblock_reserve().

Having this ability for all three architectures is desirable for a couple
of reasons:

 1) It saves drivers copying data out of those regions
 2) kexec reboot can now make use of things like the ESRT

Instead of using the standard memblock_reserve(), which is insufficient to
reserve the region on x86 (see efi_reserve_boot_services()), a new API is
introduced in this patch: efi_mem_reserve().

efi.memmap now always represents which EFI memory regions are available.
On x86 the EFI boot services regions that have not been reserved via
efi_mem_reserve() will be removed from efi.memmap during
efi_free_boot_services().

This has implications for kexec, since it is not possible for a newly
kexec'd kernel to access the same boot services regions that the initial
boot kernel had access to unless they are reserved by every kexec kernel
in the chain.

Tested-by: Dave Young <dyoung@redhat.com> [kexec/kdump]
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [arm]
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Peter Jones <pjones@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
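Usage is a single call from the driver that owns the region; a hedged
sketch with invented variable names:

  	/* 'table_pa' and 'table_size' stand in for the driver's own values. */
  	efi_mem_reserve(table_pa, table_size);	/* region survives efi_free_boot_services() */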
2016-09-09  efi: Add efi_memmap_install() for installing new EFI memory maps  (Matt Fleming)
While efi_memmap_init_{early,late}() exist for architecture code to install memory maps from firmware data and for the virtual memory regions respectively, drivers don't care which stage of the boot we're at and just want to swap the existing memmap for a modified one. efi_memmap_install() abstracts the details of how the new memory map should be mapped and the existing one unmapped. Tested-by: Dave Young <dyoung@redhat.com> [kexec/kdump] Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [arm] Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Leif Lindholm <leif.lindholm@linaro.org> Cc: Peter Jones <pjones@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Taku Izumi <izumi.taku@jp.fujitsu.com> Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09  efi: Split out EFI memory map functions into new file  (Matt Fleming)
Also move the functions from the EFI fake mem driver since future patches will require access to the memmap insertion code even if CONFIG_EFI_FAKE_MEM isn't enabled. This will be useful when we need to build custom EFI memory maps to allow drivers to mark regions as reserved. Tested-by: Dave Young <dyoung@redhat.com> [kexec/kdump] Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [arm] Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Leif Lindholm <leif.lindholm@linaro.org> Cc: Peter Jones <pjones@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Taku Izumi <izumi.taku@jp.fujitsu.com> Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09  efi/fake_mem: Refactor main two code chunks into functions  (Matt Fleming)
There is a whole load of generic EFI memory map code inside of the fake_mem driver which is better suited to being grouped with the rest of the generic EFI code for manipulating EFI memory maps. In preparation for that, this patch refactors the core code, so that it's possible to move entire functions later. Tested-by: Dave Young <dyoung@redhat.com> [kexec/kdump] Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [arm] Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Leif Lindholm <leif.lindholm@linaro.org> Cc: Peter Jones <pjones@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Taku Izumi <izumi.taku@jp.fujitsu.com> Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09  efi: Add efi_memmap_init_late() for permanent EFI memmap  (Matt Fleming)
Drivers need a way to access the EFI memory map at runtime. ARM and arm64
currently provide this by remapping the EFI memory map into the vmalloc
space before setting up the EFI virtual mappings.

x86 does not provide this functionality, which has resulted in the code in
efi_mem_desc_lookup() where it will manually map individual EFI memmap
entries if the memmap has already been torn down on x86:

	/*
	 * If a driver calls this after efi_free_boot_services,
	 * ->map will be NULL, and the target may also not be mapped.
	 * So just always get our own virtual map on the CPU.
	 */
	md = early_memremap(p, sizeof (*md));

There isn't a good reason for not providing a permanent EFI memory map for
runtime queries, especially since the EFI regions are not mapped into the
standard kernel page tables.

Tested-by: Dave Young <dyoung@redhat.com> [kexec/kdump]
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [arm]
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Peter Jones <pjones@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09  efi: Refactor efi_memmap_init_early() into arch-neutral code  (Matt Fleming)
Every EFI architecture apart from ia64 needs to set up the EFI memory map
at efi.memmap, and the code for doing that is essentially the same across
all implementations. Therefore, it makes sense to factor this out into the
common code under drivers/firmware/efi/.

The only slight variation is the data structure out of which we pull the
initial memory map information, such as physical address, memory descriptor
size and version, etc. We can address this by passing a generic data
structure (struct efi_memory_map_data) as the argument to
efi_memmap_init_early() which contains the minimum info required for
initialising the memory map.

In the process, this patch also fixes a few undesirable implementation
differences:

 - ARM and arm64 were failing to clear the EFI_MEMMAP bit when unmapping
   the early EFI memory map. EFI_MEMMAP indicates whether the EFI memory
   map is mapped (not the regions contained within) and can be traversed.
   It's more correct to set the bit as soon as we memremap() the passed in
   EFI memmap.

 - Rename efi_unmmap_memmap() to efi_memmap_unmap() to adhere to the
   regular naming scheme.

This patch also uses a read-write mapping for the memory map instead of the
read-only mapping currently used on ARM and arm64. x86 needs the ability to
update the memory map in-place when assigning virtual addresses to regions
(efi_map_region()) and tagging regions when reserving boot services
(efi_reserve_boot_services()).

There's no way for the generic fake_mem code to know which mapping to use
without introducing some arch-specific constant/hook, so just use
read-write since read-only is of dubious value for the EFI memory map.

Tested-by: Dave Young <dyoung@redhat.com> [kexec/kdump]
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [arm]
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Peter Jones <pjones@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
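Roughly, the generic descriptor and entry point look like this (field names
as described above; treat the details as a sketch):

  struct efi_memory_map_data {
          phys_addr_t	phys_map;	/* physical address of the firmware memory map */
          unsigned long	size;		/* total size of the map in bytes */
          unsigned long	desc_version;	/* memory descriptor version */
          unsigned long	desc_size;	/* size of a single descriptor */
  };

  int __init efi_memmap_init_early(struct efi_memory_map_data *data);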
2016-09-09  x86/efi: Consolidate region mapping logic  (Matt Fleming)
EFI regions are currently mapped in two separate places. The bulk of the
work is done in efi_map_regions(), but when CONFIG_EFI_MIXED is enabled the
additional regions that are required when operating in mixed mode are
mapped in efi_setup_page_tables().

Pull everything into efi_map_regions() and refactor the test for which
regions should be mapped into a should_map_region() function. Generously
sprinkle comments to clarify the different cases.

Acked-by: Borislav Petkov <bp@suse.de>
Tested-by: Dave Young <dyoung@redhat.com> [kexec/kdump]
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [arm]
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09  x86/efi: Test for EFI_MEMMAP functionality when iterating EFI memmap  (Matt Fleming)
Both efi_find_mirror() and efi_fake_memmap() really want to know whether the EFI memory map is available, not just whether the machine was booted using EFI. efi_fake_memmap() even has a check for EFI_MEMMAP at the start of the function. Since we've already got other code that has this dependency, merge everything under one if() conditional, and remove the now superfluous check from efi_fake_memmap(). Tested-by: Dave Young <dyoung@redhat.com> [kexec/kdump] Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [arm] Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Taku Izumi <izumi.taku@jp.fujitsu.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Xishi Qiu <qiuxishi@huawei.com> Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09  arm64: Remove shadowed asm-generic headers  (Robin Murphy)
We've grown our own versions of bug.h, ftrace.h, pci.h and topology.h, so generating the generic ones as well is unnecessary and a potential source of build hiccups. At the very least, having them present has confused my source-indexing tool, and that simply will not do. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09  arm64: Work around systems with mismatched cache line sizes  (Suzuki K Poulose)
Systems with differing CPU i-cache/d-cache line sizes can cause problems
with cache management by software when execution is migrated from one CPU
to another. Usually, the application reads the cache size on a CPU and then
uses that length to perform cache operations. However, if it gets migrated
to another CPU with a smaller cache line size, things could go completely
wrong. To prevent such cases, always use the smallest cache line size among
the CPUs. The kernel CPU feature infrastructure already keeps track of the
safe value for all CPUID registers, including CTR. This patch works around
the problem as follows:

For the kernel, dynamically patch the kernel to read the cache size from
the system wide copy of CTR_EL0.

For applications, trap read accesses to CTR_EL0 (by clearing SCTLR.UCT) and
emulate the mrs instruction to return the system wide safe value of
CTR_EL0.

For faster access (i.e. to avoid looking up the system wide value of
CTR_EL0 via read_system_reg()), we keep track of the pointer to the table
entry for CTR_EL0 in the CPU feature infrastructure.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09  arm64: Refactor sysinstr exception handling  (Suzuki K Poulose)
Right now we trap some of the user space data cache operations based on a few Errata (ARM 819472, 826319, 827319 and 824069). We need to trap userspace access to CTR_EL0, if we detect mismatched cache line size. Since both these traps share the EC, refactor the handler a little bit to make it a bit more reader friendly. Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09  arm64: Introduce raw_{d,i}cache_line_size  (Suzuki K Poulose)
On systems with mismatched i/d cache min line sizes, we need to use the
smallest size possible across all CPUs. This will be done by fetching the
system wide safe value from the CPU feature infrastructure.

However, some special users (e.g. kexec, hibernate) would need the line
size on the current CPU (rather than the system wide value), either because
the system wide value may not be accessible yet, or because the caller is
guaranteed to execute with no migration. Provide another helper which will
fetch the cache line size on the current CPU.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: James Morse <james.morse@arm.com>
Reviewed-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09  arm64: alternative: Add support for patching adrp instructions  (Suzuki K Poulose)
adrp uses PC-relative address offset to a page (of 4K size) of a symbol. If it appears in an alternative code patched in, we should adjust the offset to reflect the address where it will be run from. This patch adds support for fixing the offset for adrp instructions. Cc: Will Deacon <will.deacon@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Andre Przywara <andre.przywara@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09  arm64: insn: Add helpers for adrp offsets  (Suzuki K Poulose)
Adds helpers for decoding/encoding the PC relative addresses for adrp. This will be used for handling dynamic patching of 'adrp' instructions in alternative code patching. Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
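For reference, a self-contained sketch of how the page offset is recovered
from a raw ADRP word (immlo in bits [30:29], immhi in bits [23:5] of the
A64 encoding); the kernel helpers wrap this in the existing insn
encode/decode machinery rather than open-coding the bit twiddling:

  /* Decode the byte offset (to a 4K page) encoded in an ADRP instruction. */
  static long adrp_page_offset(unsigned int insn)
  {
          long immlo = (insn >> 29) & 0x3;
          long immhi = (insn >> 5) & 0x7ffff;
          long imm = (immhi << 2) | immlo;		/* 21-bit page count */

          imm = (imm ^ 0x100000) - 0x100000;	/* sign-extend from bit 20 */
          return imm << 12;			/* convert pages to bytes */
  }

Re-encoding for the new run address is the inverse: compute the new page
delta and write it back into the immhi/immlo fields.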
2016-09-09  arm64: alternative: Disallow patching instructions using literals  (Suzuki K Poulose)
The alternative code patching doesn't check if the replaced instruction uses a pc relative literal. This could cause silent corruption in the instruction stream as the instruction will be executed from a different address than what it was compiled for. Catch all such cases. Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Andre Przywara <andre.przywara@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Suggested-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09  arm64: Rearrange CPU errata workaround checks  (Suzuki K Poulose)
Right now we run through the workaround checks on a CPU from
__cpuinfo_store_cpu(). There are some problems with that:

1) We initialise the system wide CPU feature registers only after the boot
   CPU updates its cpuinfo. Now, if a workaround depends on the variance of
   a CPU ID feature (e.g. a check for cache line size mismatch), we have no
   way of performing it cleanly for the boot CPU.

2) It is out of place, invoked from __cpuinfo_store_cpu() in cpuinfo.c,
   which is not an obvious place for it.

This patch rearranges the CPU specific capability (aka workaround) checks:

1) At the moment we use verify_local_cpu_capabilities() to check if a new
   CPU has all the system advertised features. Use this for the secondary
   CPUs to perform the workaround check. For that we rename
   verify_local_cpu_capabilities() => check_local_cpu_capabilities(),
   which:

   - if the system wide capabilities haven't been initialised (i.e. the
     CPU is activated at boot), updates the system wide detected
     workarounds;
   - otherwise (i.e. a CPU hotplugged in later), verifies that this CPU
     conforms to the system wide capabilities.

2) The boot CPU updates the workarounds from smp_prepare_boot_cpu(), after
   we have initialised the system wide CPU feature values.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09  arm64: Use consistent naming for errata handling  (Suzuki K Poulose)
This is a cosmetic change to rename the functions dealing with the errata
workarounds so that their naming is more consistent:

1) check_local_cpu_errata() => update_cpu_errata_workarounds()
   check_local_cpu_errata() actually updates the system's errata
   workarounds, so rename it to reflect that.

2) verify_local_cpu_errata() => verify_local_cpu_errata_workarounds()
   Use errata_workarounds instead of _errata.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09  arm64: Set the safe value for L1 icache policy  (Suzuki K Poulose)
Right now we use 0 as the safe value for CTR_EL0:L1Ip, which is not defined at the moment. The safer value for the L1Ip should be the weakest of the policies, which happens to be AIVIVT. While at it, fix the comment about safe_val. Cc: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09  arm64/numa: remove the limitation that cpu0 must bind to node0  (Zhen Lei)
1. Remove the old binding code.
2. Read the nid of cpu0 from the devicetree.
3. Fall back to node 0 for cpu0 when numa=off is set in the boot arguments.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09  arm64/numa: remove some useless code  (Zhen Lei)
When the deleted code is executed, only the bit for cpu0 is set in
cpu_possible_mask, so only set_cpu_numa_node(0, NUMA_NO_NODE) will be
executed, and map_cpu_to_node(0, 0) will be called soon afterwards. This
code can therefore be safely removed.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09  arm64/numa: support HAVE_SETUP_PER_CPU_AREA  (Zhen Lei)
Make each percpu area be allocated from its local NUMA node. Without this
patch, all percpu areas are allocated from the node that cpu0 belongs to.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09  arm64: numa: Use pr_fmt()  (Kefeng Wang)
Use pr_fmt() to prefix the kernel output, and remove the duplicated
"NUMA turned off" message.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
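The usual kernel idiom (the exact prefix string and message here are
illustrative):

  /* Must appear before any #include so every pr_*() call in the file picks it up. */
  #define pr_fmt(fmt) "NUMA: " fmt

  #include <linux/printk.h>

  /*
   * A call such as pr_info("Faking a node at [mem ...]\n") is now emitted as
   * "NUMA: Faking a node at [mem ...]" without repeating the prefix by hand.
   */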
2016-09-09  of_numa: Use pr_fmt()  (Kefeng Wang)
Use pr_fmt to prefix kernel output. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09  of_numa: Use of_get_next_parent to simplify code  (Kefeng Wang)
Use of_get_next_parent() instead of open-code. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
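The open-coded pattern being replaced is typically:

  -	parent = of_get_parent(np);
  -	of_node_put(np);
  -	np = parent;
  +	np = of_get_next_parent(np);	/* returns the parent, drops the reference on np */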
2016-09-09  arm64/numa: avoid inconsistent information to be printed  (Zhen Lei)
numa_init() may return an error because of a NUMA configuration error, in
which case "No NUMA configuration found" is inaccurate. The specific
configuration error should instead be printed immediately by the branch
that detects it.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09  of/numa: remove a duplicated warning  (Zhen Lei)
This warning has already been printed by of_numa_parse_cpu_nodes().

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09  of/numa: add nid check for memory block  (Zhen Lei)
If the numa node id configured in a memory@ devicetree node is greater than
MAX_NUMNODES, we should report a warning. We already do this for the cpu
and distance-map devicetree nodes; this patch makes the memory nodes
consistent with them.

Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09  of/numa: fix a memory@ node can only contains one memory block  (Zhen Lei)
For a normal memory@ devicetree node, its reg property can contain several
memory blocks. Since we do not know in advance how many memory blocks it
contains, try from index 0 and increment the index by 1 until an error is
returned, which marks the end.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
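A sketch of the resulting walk (assuming numa_add_memblk() takes an
exclusive end address):

  	struct resource rsrc;
  	int i, ret = 0;

  	/* A memory@ node may carry several (base, size) tuples in its reg property. */
  	for (i = 0; !ret && !of_address_to_resource(np, i, &rsrc); i++)
  		ret = numa_add_memblk(nid, rsrc.start, rsrc.end + 1);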