path: root/arch
Age | Commit message | Author
2018-10-09s390/kasan: avoid instrumentation of early C codeVasily Gorbik
Instrumented C code cannot run without the kasan shadow area. Exempt from kasan instrumentation those source files that run before or are used during kasan initialization. Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2018-10-09s390/kasan: avoid vdso instrumentationVasily Gorbik
vdso is mapped into user space processes, which won't have the kasan shadow mapped. Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2018-10-09s390/mm: add missing pfn_to_kaddr helperVasily Gorbik
kasan common code uses pfn_to_kaddr, which is defined by many other architectures. Add it to s390 as well to avoid a build error. Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
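A minimal sketch of what such a helper commonly looks like on other architectures (illustrative; the exact s390 definition may differ):

    /* convert a page frame number into its kernel virtual address */
    #define pfn_to_kaddr(pfn)   __va((pfn) << PAGE_SHIFT)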
2018-10-09s390: move ipl block and cmd line handling to early boot phaseVasily Gorbik
To distinguish the zfcpdump case and to be able to parse some of the kernel command line arguments early (e.g. mem=), the ipl block retrieval and command line construction code is moved to the early boot phase. "memory_end" is now set up correctly, respecting "mem=" and hsa_size in the zfcpdump case. arch/s390/boot/string.c is introduced to provide string handling and command line parsing functions to early boot phase code for the compressed kernel image case. Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2018-10-09s390/sclp: introduce sclp_early_get_hsa_sizeVasily Gorbik
Introduce the sclp_early_get_hsa_size function to be used during early memory detection. This function makes it possible to find the memory limit imposed during zfcpdump. Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2018-10-09s390/mem_detect: add info source debug printVasily Gorbik
Print mem_detect info source when memblock=debug is specified. Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2018-10-09s390/mem_detect: replace tprot loop with binary searchVasily Gorbik
In situations where other memory detection methods are not available (no SCLP and no z/VM diag260), continuous online memory is assumed. Replace the tprot loop with a faster binary search, as only the end of online memory has to be found. Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2018-10-09s390/mem_detect: use SCLP info for continuous memory detectionVasily Gorbik
When neither SCLP storage info nor the z/VM diag260 "storage configuration" call is available, assume continuous online memory of the size specified by SCLP info. Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2018-10-09s390/mem_detect: introduce z/VM specific diag260 callVasily Gorbik
In the case when z/VM memory is defined with the "define storage config" command, SCLP storage info is not available. Utilize the diag260 "storage configuration" call to get information about z/VM specific guest memory definitions with potential memory holes. Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2018-10-09s390/mem_detect: introduce SCLP storage infoVasily Gorbik
SCLP storage info makes it possible to detect continuous and non-continuous online memory under LPAR, z/VM and KVM when standby memory is defined. Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2018-10-09s390: introduce .boot.data section compile time validationVasily Gorbik
Make sure that the .boot.data sections of vmlinux and arch/s390/compressed/vmlinux match before producing the compressed kernel image. Symbol presence, order and sizes are cross-checked. Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2018-10-09s390/mem_detect: move tprot loop to early boot phaseVasily Gorbik
Move memory detection to the early boot phase. To store online memory regions, "struct mem_detect_info" is introduced together with the for_each_mem_detect_block iterator. mem_detect_info is later converted to memblock. Also introduce the sclp_early_get_meminfo function to get the maximum physical memory size and the maximum increment number. Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
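A hedged sketch of how such an iterator is typically consumed when converting the detected regions to memblock (the iterator signature shown here is an assumption based on the description above):

    unsigned long start, end;
    int i;

    /* walk the regions recorded in mem_detect_info and hand them to memblock */
    for_each_mem_detect_block(i, &start, &end)
            memblock_add(start, end - start);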
2018-10-09s390/sclp: move sclp_early_read_info to sclp_early_core.cVasily Gorbik
To enable early online memory detection, sclp_early_read_info has been moved to sclp_early_core.c. sclp_info_sccb has been made part of .boot.data, which makes it possible to reuse it later during early kernel startup and to call sclp_early_read_info just once. Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2018-10-09s390: introduce .boot.data sectionVasily Gorbik
Introduce the .boot.data section, which is "shared" between the decompressor code and the decompressed kernel. The decompressor stores values in it and copies them over to the decompressed image before starting it. This method avoids using pre-defined addresses and other hacks to pass values between those boot phases. The .boot.data section is part of init data and is freed after kernel initialization is complete. For the uncompressed kernel image, .boot.data is basically the same as .init.data. Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
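One plausible way to place a variable into such a shared section (a sketch only; the actual s390 helper macro, section naming and the variable used here are assumptions):

    /* put a variable into the .boot.data section shared between the
     * decompressor and the decompressed kernel (illustrative) */
    #define __bootdata(var) __attribute__((__section__(".boot.data"))) var

    unsigned long __bootdata(memory_end);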
2018-10-09s390/decompressor: clean up and rename compressed/misc.cVasily Gorbik
Since compressed/misc.c is conditionally compiled, move the error reporting code to boot/main.c. With that done, compressed/misc.c has no "miscellaneous" functions left and is all about plain decompression now. Rename it accordingly. Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2018-10-09s390: rescue initrd as early as possibleVasily Gorbik
To avoid a multi-stage initrd rescue operation and to simplify assumptions during early memory allocations, move the initrd to its final safe destination as early as possible. This also allows us to drop .bss usage restrictions for some files. Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2018-10-09s390/decompressor: get rid of .bss usageVasily Gorbik
Using .bss in early code should be avoided: it might overlay the initrd image or not yet be initialized. Clean up the last couple of places in the decompressor's code where .bss is used and enforce a no-.bss-usage check on boot/compressed/misc.c. In particular: - initializing free_mem_ptr and free_mem_end_ptr with values guarantees that these variables won't end up in the .bss section. - define STATIC_RW_DATA to go into the .data section. Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
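A sketch of the first point: giving the heap pointers explicit initial values keeps them in .data instead of .bss (the symbol and constant names below are assumptions used for illustration):

    extern char _end[];     /* end of the decompressor image (assumed symbol) */

    /* explicitly initialized, so these land in .data rather than .bss */
    unsigned long free_mem_ptr = (unsigned long) _end;
    unsigned long free_mem_end_ptr = (unsigned long) _end + HEAP_SIZE;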
2018-10-09s390/decompressor: rework uncompressed image info collectionVasily Gorbik
The kernel decompressor has to know several bits of information about the uncompressed image. Currently this info is collected by running "nm" on the uncompressed vmlinux plus "sed", producing a sizes.h file. This method worked well, but it has several disadvantages: obscure symbol-name pattern matching is fragile, adding new values makes the pattern even longer, the logic is spread across code and makefiles, and there is limited ability to adjust symbol values (currently a magic lma value of 0x100000 is always subtracted). Apart from that, the same pieces of information (and more) would be needed for early memory detection and features like KASLR outside of the boot/compressed/ folder where sizes.h is generated. To overcome these limitations a new "struct vmlinux_info" is introduced to hold the values needed by the decompressor and the rest of the boot code. The only static instance of vmlinux_info is produced during the vmlinux link step by having the linker fill in the struct fields (like it is done with input_data in boot/compressed/vmlinux.scr.lds.S). This way individual values can be adjusted with all the knowledge the linker has and the arithmetic it supports. Later the .vmlinux.info section (which contains struct vmlinux_info) is transplanted into the decompressor image and dropped from the uncompressed image altogether. While doing that, replace the "compressed/vmlinux.scr.lds.S" linker script (whose purpose is to rename the .data section in piggy.o to .rodata.compressed) with a plain objcopy command, and simplify the decompressor's linker script. Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2018-10-09s390: remove decompressor's head.SVasily Gorbik
The decompressor's head.S provided a "data mover" whose sole purpose was to safely move the uncompressed kernel to 0x100000 and jump to it. With the current bzImage layout the entire decompressor code is guaranteed to be in a safe location below 0x100000, and hence cannot be overwritten during the kernel move. For that reason head.S can be replaced with a simple memmove function. To do so, introduce an early boot code phase which is executed from arch/s390/boot/head.S after "verify_facilities" and takes care of optional kernel image decompression and the transition to it. Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2018-10-09s390: unify stack size definitionsVasily Gorbik
Remove STACK_ORDER and STACK_SIZE in favour of the identical THREAD_SIZE_ORDER and THREAD_SIZE definitions. The THREAD_SIZE and THREAD_SIZE_ORDER naming is misleading, since these values are used as general kernel stack size information. But both definitions are used in common code and throughout architecture-specific code, so changing the naming is problematic. Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
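The replacement pattern, roughly (the order value below is illustrative, not necessarily what s390 uses):

    /* a single pair of definitions describes the kernel stack size */
    #define THREAD_SIZE_ORDER   2
    #define THREAD_SIZE         (PAGE_SIZE << THREAD_SIZE_ORDER)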
2018-10-09s390: add support for virtually mapped kernel stacksMartin Schwidefsky
With virtually mapped kernel stacks, kernel stack overflow detection is now fault based: every stack has a guard page in the vmalloc space. The panic_stack is renamed to nodat_stack and is used for all functions that need to run without DAT, e.g. memcpy_real or do_start_kdump. The main effect is a reduction in kernel image size, as with vmap stacks the old-style overflow checking that adds two instructions per function is not needed anymore. Result from bloat-o-meter: add/remove: 20/1 grow/shrink: 13/26854 up/down: 2198/-216240 (-214042) In regard to performance, the micro-benchmark for fork takes a hit of a few microseconds; allocating 4 pages in vmalloc space is more expensive compared to an order-2 page allocation. But with real workloads I could not find a noticeable difference. Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2018-10-09s390: add stack switch helperMartin Schwidefsky
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2018-10-09s390/pfault: do not use stack buffers for hardware dataMartin Schwidefsky
With CONFIG_VMAP_STACK=y the stack is allocated from the vmalloc space. Data structures passed to a hardware or a hypervisor interface that requires V=R can not be allocated on the stack anymore. Make the init and fini pfault parameter blocks static variables. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
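A hedged before/after sketch of the pattern described above (the structure layout and field values are assumptions used only for illustration):

    /* illustrative parameter block; the real pfault layout differs */
    struct pfault_refbk {
            u16 refdiagc;
            u64 refgaddr;
    };

    /* before: 'struct pfault_refbk refbk = { ... };' on the stack, which with
     * CONFIG_VMAP_STACK=y may live in vmalloc space and is no longer V=R */

    /* after: static storage, which stays in the identity-mapped kernel area */
    static struct pfault_refbk pfault_init_refbk = {
            .refdiagc = 0x258,      /* assumed function code, for illustration */
    };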
2018-10-09s390/hypfs: do not use stack buffers for hardware dataMartin Schwidefsky
With CONFIG_VMAP_STACK=y the stack is allocated from the vmalloc space. Data structures passed to a hardware or a hypervisor interface that requires V=R can not be allocated on the stack anymore. Use kmalloc to get memory for the hypfs_diag304 structure. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2018-10-09s390/appldata: do not use stack buffers for hardware dataMartin Schwidefsky
With CONFIG_VMAP_STACK=y the stack is allocated from the vmalloc space. Data structures passed to a hardware or a hypervisor interface that requires V=R can not be allocated on the stack anymore. Use kmalloc to get memory for the appldata_product_id and the appldata_parameter_list structures. Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2018-10-09s390/appldata: pass parameter list pointer to appldata_asmMartin Schwidefsky
In preparation for CONFIG_VMAP_STACK=y move the allocation of the struct appldata_parameter_list to the caller of appldata_asm(). Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2018-10-09Merge tag 'arc-4.19-rc8' of ↵Greg Kroah-Hartman
git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc Vineet writes: "ARC updates for 4.19-rc8 - Fix clone syscall to update Thread pointer register - Make/build updates (needed for AGL/OE builds) [Alexey] - Typo fix [Colin Ian King]" * tag 'arc-4.19-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc: ARC: clone syscall to setp r25 as thread pointer ARC: build: Don't set CROSS_COMPILE in arch's Makefile ARC: fix spelling mistake "entires" -> "entries" ARC: build: Get rid of toolchain check ARCv2: build: use mcpu=hs38 iso generic mcpu=archs
2018-10-09x86/mm: Avoid VLA in pgd_alloc()Kees Cook
Arnd Bergmann reported that turning on -Wvla found a new (unintended) VLA usage: arch/x86/mm/pgtable.c: In function 'pgd_alloc': include/linux/build_bug.h:29:45: error: ISO C90 forbids variable length array 'u_pmds' [-Werror=vla] arch/x86/mm/pgtable.c:190:34: note: in expansion of macro 'static_cpu_has' #define PREALLOCATED_USER_PMDS (static_cpu_has(X86_FEATURE_PTI) ? \ ^~~~~~~~~~~~~~ arch/x86/mm/pgtable.c:431:16: note: in expansion of macro 'PREALLOCATED_USER_PMDS' pmd_t *u_pmds[PREALLOCATED_USER_PMDS]; ^~~~~~~~~~~~~~~~~~~~~~ Use the actual size of the array that is used for X86_FEATURE_PTI, which is known at build time, instead of the variable size. [ mingo: Squashed original fix with followup fix to avoid bisection breakage, wrote new changelog. ] Reported-by: Arnd Bergmann <arnd@arndb.de> Original-written-by: Arnd Bergmann <arnd@arndb.de> Reported-by: Borislav Petkov <bp@alien8.de> Signed-off-by: Kees Cook <keescook@chromium.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Joerg Roedel <jroedel@suse.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Toshi Kani <toshi.kani@hpe.com> Fixes: 1be3f247c288 ("x86/mm: Avoid VLA in pgd_alloc()") Link: http://lkml.kernel.org/r/20181008235434.GA35035@beast Signed-off-by: Ingo Molnar <mingo@kernel.org>
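The shape of the fix, roughly (the constant name below is an assumption; the point is that the array bound becomes a build-time constant covering the PTI case):

    /* size the array for the worst case (PTI enabled), known at build time */
    #define MAX_PREALLOCATED_USER_PMDS  KERNEL_PGD_PTRS

    pmd_t *u_pmds[MAX_PREALLOCATED_USER_PMDS];  /* no longer a VLA */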
2018-10-09x86/intel_rdt: Fix initial allocation to consider CDPReinette Chatre
When a new resource group is created it is initialized with a default allocation that considers which portions of cache are currently available for sharing across all resource groups or which portions of cache are currently unused. If a CDP allocation forms part of a resource group that is in exclusive mode then it should be ensured that no new allocation overlaps with any resource that shares the underlying hardware. The current initial allocation does not take this sharing of hardware into account, and a new allocation in a resource that shares the same hardware would affect the exclusive resource group. Fix this by considering the allocation of a peer RDT domain - an RDT domain sharing the same hardware - as part of the test to determine which portion of cache is in use and available for use. Fixes: 95f0b77efa57 ("x86/intel_rdt: Initialize new resource group with sane defaults") Signed-off-by: Reinette Chatre <reinette.chatre@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Fenghua Yu <fenghua.yu@intel.com> Cc: tony.luck@intel.com Cc: jithu.joseph@intel.com Cc: gavin.hindman@intel.com Cc: dave.hansen@intel.com Cc: hpa@zytor.com Link: https://lkml.kernel.org/r/b1f7ec08b1695be067de416a4128466d49684317.1538603665.git.reinette.chatre@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-09x86/intel_rdt: CBM overlap should also check for overlap with CDP peerReinette Chatre
The CBM overlap test is used to manage the allocations of RDT resources where overlap is possible between resource groups. When a resource group is in exclusive mode then there should be no overlap between resource groups. The current overlap test only considers overlap between the same resources, for example, that usage of a RDT_RESOURCE_L2DATA resource in one resource group does not overlap with usage of a RDT_RESOURCE_L2DATA resource in another resource group. The problem with this is that it allows overlap between a RDT_RESOURCE_L2DATA resource in one resource group and a RDT_RESOURCE_L2CODE resource in another resource group - even if both resource groups are in exclusive mode. This is a problem because even though these appear to be different resources they end up sharing the same underlying hardware and thus do not fulfill the user's request for exclusive use of hardware resources. Fix this by including the CDP peer (if there is one) in every CBM overlap test. This does not impact the overlap between resources within the same exclusive resource group that is allowed. Fixes: 49f7b4efa110 ("x86/intel_rdt: Enable setting of exclusive mode") Reported-by: Jithu Joseph <jithu.joseph@intel.com> Signed-off-by: Reinette Chatre <reinette.chatre@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Jithu Joseph <jithu.joseph@intel.com> Acked-by: Fenghua Yu <fenghua.yu@intel.com> Cc: tony.luck@intel.com Cc: gavin.hindman@intel.com Cc: dave.hansen@intel.com Cc: hpa@zytor.com Link: https://lkml.kernel.org/r/e538b7f56f7ca15963dce2e00ac3be8edb8a68e1.1538603665.git.reinette.chatre@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-09x86/intel_rdt: Introduce utility to obtain CDP peerReinette Chatre
Introduce a utility that, when provided with an RDT resource and an instance of this RDT resource (an RDT domain), returns pointers to the RDT resource and RDT domain that share the same hardware. This is specific to the CDP resources that share the same hardware. For example, if a pointer to the RDT_RESOURCE_L2DATA resource (struct rdt_resource) and a pointer to an instance of this resource (struct rdt_domain) are provided, then it will return a pointer to the RDT_RESOURCE_L2CODE resource as well as the specific instance that shares the same hardware as the provided rdt_domain. This utility is created in support of the "exclusive" resource group mode where overlap of resource allocation between resource groups needs to be avoided. The overlap test needs to consider not just the matching resources, but also the resources that share the same hardware. Temporarily mark it as unused in support of patch testing to avoid compile warnings until it is used. Fixes: 49f7b4efa110 ("x86/intel_rdt: Enable setting of exclusive mode") Signed-off-by: Reinette Chatre <reinette.chatre@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Jithu Joseph <jithu.joseph@intel.com> Acked-by: Fenghua Yu <fenghua.yu@intel.com> Cc: tony.luck@intel.com Cc: gavin.hindman@intel.com Cc: dave.hansen@intel.com Cc: hpa@zytor.com Link: https://lkml.kernel.org/r/9b4bc4d59ba2e903b6a3eb17e16ef41a8e7b7c3e.1538603665.git.reinette.chatre@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
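A hedged sketch of the utility's shape (the function name and exact signature are assumptions based on the description above):

    /*
     * Given a CDP resource and one of its domains, return the resource and
     * domain that share the same underlying hardware (e.g. L2DATA <-> L2CODE).
     */
    static int rdt_cdp_peer_get(struct rdt_resource *r, struct rdt_domain *d,
                                struct rdt_resource **r_cdp,
                                struct rdt_domain **d_cdp);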
2018-10-09Merge branch 'x86/urgent' into x86/cache, to pick up dependent fixIngo Molnar
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-09x86/intel_rdt: Fix out-of-bounds memory access in CBM testsReinette Chatre
While the DOC at the beginning of lib/bitmap.c explicitly states that "The number of valid bits in a given bitmap does _not_ need to be an exact multiple of BITS_PER_LONG.", some of the bitmap operations do indeed access BITS_PER_LONG portions of the provided bitmap no matter the size of the provided bitmap. For example, if bitmap_intersects() is provided with an 8 bit bitmap the operation will access BITS_PER_LONG bits from the provided bitmap. While the operation ensures that these extra bits do not affect the result, the memory is still accessed. The capacity bitmasks (CBMs) are typically stored in u32 since they can never exceed 32 bits. A few instances exist where a bitmap_* operation is performed on a CBM by simply pointing the bitmap operation to the stored u32 value. The consequence of this pattern is that some bitmap_* operations will access out-of-bounds memory when interacting with the provided CBM. This is confirmed with a KASAN test that reports: BUG: KASAN: stack-out-of-bounds in __bitmap_intersects+0xa2/0x100 and BUG: KASAN: stack-out-of-bounds in __bitmap_weight+0x58/0x90 Fix this by moving any CBM provided to a bitmap operation needing BITS_PER_LONG to an 'unsigned long' variable. [ tglx: Changed related function arguments to unsigned long and got rid of the _cbm extra step ] Fixes: 72d505056604 ("x86/intel_rdt: Add utilities to test pseudo-locked region possibility") Fixes: 49f7b4efa110 ("x86/intel_rdt: Enable setting of exclusive mode") Fixes: d9b48c86eb38 ("x86/intel_rdt: Display resource groups' allocations' size in bytes") Fixes: 95f0b77efa57 ("x86/intel_rdt: Initialize new resource group with sane defaults") Signed-off-by: Reinette Chatre <reinette.chatre@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: fenghua.yu@intel.com Cc: tony.luck@intel.com Cc: gavin.hindman@intel.com Cc: jithu.joseph@intel.com Cc: dave.hansen@intel.com Cc: hpa@zytor.com Link: https://lkml.kernel.org/r/69a428613a53f10e80594679ac726246020ff94f.1538686926.git.reinette.chatre@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
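The fix pattern in a nutshell (a minimal sketch, assuming the CBM is stored as a u32 as described above):

    /* count the bits set in a 32-bit CBM without reading past it */
    static unsigned int cbm_weight(u32 cbm, unsigned int cbm_len)
    {
            unsigned long val = cbm;        /* widen to a full word first */

            /* bitmap helpers may access BITS_PER_LONG bits of their input,
             * so hand them the word-sized copy, never the u32 itself */
            return bitmap_weight(&val, cbm_len);
    }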
2018-10-09Merge branch 'fixes' into nextMichael Ellerman
Merge our fixes branch. It has a few important fixes that are needed for further testing and also some commits that will conflict with content in next.
2018-10-09KVM: PPC: Book3S HV: Add NO_HASH flag to GET_SMMU_INFO ioctl resultPaul Mackerras
This adds a KVM_PPC_NO_HASH flag to the flags field of the kvm_ppc_smmu_info struct, and arranges for it to be set when running as a nested hypervisor, as an unambiguous indication to userspace that HPT guests are not supported. Reporting the KVM_CAP_PPC_MMU_HASH_V3 capability as false could be taken as indicating only that the new HPT features in ISA V3.0 are not supported, leaving it ambiguous whether pre-V3.0 HPT features are supported. Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2018-10-09KVM: PPC: Book3S HV: Add a VM capability to enable nested virtualizationPaul Mackerras
With this, userspace can enable a KVM-HV guest to run nested guests under it. The administrator can control whether any nested guests can be run; setting the "nested" module parameter to false prevents any guests becoming nested hypervisors (that is, any attempt to enable the nested capability on a guest will fail). Guests which are already nested hypervisors will continue to be so. Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2018-10-09Merge remote-tracking branch 'remotes/powerpc/topic/ppc-kvm' into kvm-ppc-nextPaul Mackerras
This merges in the "ppc-kvm" topic branch of the powerpc tree to get a series of commits that touch both general arch/powerpc code and KVM code. These commits will be merged both via the KVM tree and the powerpc tree. Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2018-10-09KVM: PPC: Book3S HV: Add nested shadow page tables to debugfsPaul Mackerras
This adds a list of valid shadow PTEs for each nested guest to the 'radix' file for the guest in debugfs. This can be useful for debugging. Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-10-09KVM: PPC: Book3S HV: Allow HV module to load without hypervisor modePaul Mackerras
With this, the KVM-HV module can be loaded in a guest running under KVM-HV, and if the hypervisor supports nested virtualization, this guest can now act as a nested hypervisor and run nested guests. This also adds some checks to inform userspace that HPT guests are not supported by nested hypervisors (by returning false for the KVM_CAP_PPC_MMU_HASH_V3 capability), and to prevent userspace from configuring a guest to use HPT mode. Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-10-09KVM: PPC: Book3S HV: Handle differing endianness for H_ENTER_NESTEDSuraj Jitindar Singh
The hcall H_ENTER_NESTED takes two parameters: the address in L1 guest memory of a hv_regs struct and the address of a pt_regs struct. The hcall requests the L0 hypervisor to use the register values in these structs to run a L2 guest and to return the exit state of the L2 guest in these structs. These are in the endianness of the L1 guest, rather than being always big-endian as is usually the case for PAPR hypercalls. This is convenient because it means that the L1 guest can pass the address of the regs field in its kvm_vcpu_arch struct. This also improves performance slightly by avoiding the need for two copies of the pt_regs struct. When reading/writing these structures, this patch handles the case where the endianness of the L1 guest differs from that of the L0 hypervisor, by byteswapping the structures after reading and before writing them back. Since all the fields of the pt_regs are of the same type, i.e., unsigned long, we treat it as an array of unsigned longs. The fields of struct hv_guest_state are not all the same, so its fields are byteswapped individually. Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
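A sketch of the byteswap step for pt_regs, treating the struct as an array of unsigned long exactly as described above (the helper name is an assumption):

    static void byteswap_pt_regs(struct pt_regs *regs)
    {
            unsigned long *addr = (unsigned long *) regs;

            /* every field is an unsigned long, so swap them one by one */
            for (; addr < (unsigned long *) (regs + 1); addr++)
                    *addr = swab64(*addr);
    }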
2018-10-09KVM: PPC: Book3S HV: Sanitise hv_regs on nested guest entrySuraj Jitindar Singh
restore_hv_regs() is used to copy the hv_regs that L1 wants to set, in order to run the nested (L2) guest, into the vcpu structure. We need to sanitise these values to ensure we don't let the L1 guest hypervisor do things we don't want it to. We don't let data address watchpoints or completed instruction address breakpoints be set to match in hypervisor state. We also don't let L1 enable features in the hypervisor facility status and control register (HFSCR) for L2 which we have disabled for L1. That is, L2 will get the subset of features which the L0 hypervisor has enabled for L1 and the features L1 wants to enable for L2. This could mean we give L1 a hypervisor facility unavailable interrupt for a facility it thinks it has enabled; however, it shouldn't have enabled a facility it itself doesn't have for the L2 guest. We sanitise the registers when copying in the L2 hv_regs. We don't need to sanitise when copying back the L1 hv_regs, since these shouldn't be able to contain invalid values as they're just what was copied out. Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-10-09KVM: PPC: Book3S HV: Add one-reg interface to virtual PTCR registerPaul Mackerras
This adds a one-reg register identifier which can be used to read and set the virtual PTCR for the guest. This register identifies the address and size of the virtual partition table for the guest, which contains information about the nested guests under this guest. Migrating this value is the only extra requirement for migrating a guest which has nested guests (assuming of course that the destination host supports nested virtualization in the kvm-hv module). Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-10-09KVM: PPC: Book3S HV: Don't access HFSCR, LPIDR or LPCR when running nestedPaul Mackerras
When running as a nested hypervisor, this avoids reading hypervisor privileged registers (specifically HFSCR, LPIDR and LPCR) at startup; instead reasonable default values are used. This also avoids writing LPIDR in the single-vcpu entry/exit path. Also, this removes the check for CPU_FTR_HVMODE in kvmppc_mmu_hv_init() since its only caller already checks this. Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-10-09KVM: PPC: Book3S HV: Invalidate TLB when nested vcpu moves physical cpuSuraj Jitindar Singh
This is only done at level 0, since only level 0 knows which physical CPU a vcpu is running on. This does for nested guests what L0 already did for its own guests, which is to flush the TLB on a pCPU when it goes to run a vCPU there, and there is another vCPU in the same VM which previously ran on this pCPU and has now started to run on another pCPU. This is to handle the situation where the other vCPU touched a mapping, moved to another pCPU and did a tlbiel (local-only tlbie) on that new pCPU and thus left behind a stale TLB entry on this pCPU. This introduces a limit on the vcpu_token values used in the H_ENTER_NESTED hcall -- they must now be less than NR_CPUS. [paulus@ozlabs.org - made prev_cpu array be short[] to reduce memory consumption.] Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-10-09KVM: PPC: Book3S HV: Use hypercalls for TLB invalidation when nestedPaul Mackerras
This adds code to call the H_TLB_INVALIDATE hypercall when running as a guest, in the cases where we need to invalidate TLBs (or other MMU caches) as part of managing the mappings for a nested guest. Calling H_TLB_INVALIDATE lets the nested hypervisor inform the parent hypervisor about changes to partition-scoped page tables or the partition table without needing to do hypervisor-privileged tlbie instructions. Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-10-09KVM: PPC: Book3S HV: Implement H_TLB_INVALIDATE hcallSuraj Jitindar Singh
When running a nested (L2) guest the guest (L1) hypervisor will use the H_TLB_INVALIDATE hcall when it needs to change the partition scoped page tables or the partition table which it manages. It will use this hcall in the situations where it would use a partition-scoped tlbie instruction if it were running in hypervisor mode.

The H_TLB_INVALIDATE hcall can invalidate different scopes:

Invalidate TLB for a given target address:
- This invalidates a single L2 -> L1 pte
- We need to invalidate any L2 -> L0 shadow_pgtable ptes which map the L2 address space which is being invalidated. This is because a single L2 -> L1 pte may have been mapped with more than one pte in the L2 -> L0 page tables.

Invalidate the entire TLB for a given LPID or for all LPIDs:
- Invalidate the entire shadow_pgtable for a given nested guest, or for all nested guests.

Invalidate the PWC (page walk cache) for a given LPID or for all LPIDs:
- We don't cache the PWC, so nothing to do.

Invalidate the entire TLB, PWC and partition table for a given/all LPIDs:
- Here we re-read the partition table entry and remove the nested state for any nested guest for which the first doubleword of the partition table entry is now zero.

The H_TLB_INVALIDATE hcall takes as parameters the tlbie instruction word (of which only the RIC, PRS and R fields are used), the rS value (giving the lpid, where required) and the rB value (giving the IS, AP and EPN values).

[paulus@ozlabs.org - adapted to having the partition table in guest memory, added the H_TLB_INVALIDATE implementation, removed tlbie instruction emulation, reworded the commit message.]

Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-10-09KVM: PPC: Book3S HV: Introduce rmap to track nested guest mappingsSuraj Jitindar Singh
When a host (L0) page which is mapped into a (L1) guest is in turn mapped through to a nested (L2) guest we keep a reverse mapping (rmap) so that these mappings can be retrieved later. Whenever we create an entry in a shadow_pgtable for a nested guest we create a corresponding rmap entry and add it to the list for the L1 guest memslot at the index of the L1 guest page it maps. This means at the L1 guest memslot we end up with lists of rmaps. When we are notified of a host page being invalidated which has been mapped through to a (L1) guest, we can then walk the rmap list for that guest page, and find and invalidate all of the corresponding shadow_pgtable entries. In order to reduce memory consumption, we compress the information for each rmap entry down to 52 bits -- 12 bits for the LPID and 40 bits for the guest real page frame number -- which will fit in a single unsigned long. To avoid a scenario where a guest can trigger unbounded memory allocations, we scan the list when adding an entry to see if there is already an entry with the contents we need. This can occur, because we don't ever remove entries from the middle of a list.

A struct nested guest rmap is a list pointer and an rmap entry:

    ----------------
    | next pointer |
    ----------------
    | rmap entry   |
    ----------------

Thus the rmap pointer for each guest frame number in the memslot can be either NULL, a single entry, or a pointer to a list of nested rmap entries.

    gfn     memslot rmap array
            -------------------------
     0      | NULL                  |   (no rmap entry)
            -------------------------
     1      | single rmap entry     |   (rmap entry with low bit set)
            -------------------------
     2      | list head pointer     |   (list of rmap entries)
            -------------------------

The final entry always has the lowest bit set and is stored in the next pointer of the last list entry, or as a single rmap entry. With a list of rmap entries looking like:

    -----------------      -----------------      -------------------------
    | list head ptr | ---> | next pointer  | ---> | single rmap entry     |
    -----------------      -----------------      -------------------------
    | rmap entry    |      | rmap entry    |
    -----------------      -----------------

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
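A sketch of the 52-bit packing described above (12-bit lpid plus 40-bit guest real page frame number; the shift, mask and helper names here are inferred from the description, not taken from the patch):

    #define RMAP_NESTED_GPA_MASK    ((1UL << 40) - 1)   /* 40-bit guest pfn */
    #define RMAP_NESTED_LPID_SHIFT  40                  /* 12-bit lpid above it */

    static inline unsigned long n_rmap_encode(unsigned int lpid, unsigned long gfn)
    {
            return ((unsigned long) lpid << RMAP_NESTED_LPID_SHIFT) |
                   (gfn & RMAP_NESTED_GPA_MASK);
    }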
2018-10-09KVM: PPC: Book3S HV: Handle page fault for a nested guestSuraj Jitindar Singh
Consider a normal (L1) guest running under the main hypervisor (L0), and then a nested guest (L2) running under the L1 guest which is acting as a nested hypervisor. L0 has page tables to map the address space for L1 providing the translation from L1 real address -> L0 real address;

    L1
    |
    | (L1 -> L0)
    |
    ----> L0

There are also page tables in L1 used to map the address space for L2 providing the translation from L2 real address -> L1 real address. Since the hardware can only walk a single level of page table, we need to maintain in L0 a "shadow_pgtable" for L2 which provides the translation from L2 real address -> L0 real address. Which looks like;

    L2                        L2
    |                         |
    | (L2 -> L1)              |
    |                         |
    ----> L1                  | (L2 -> L0)
          |                   |
          | (L1 -> L0)        |
          |                   |
          ----> L0            --------> L0

When a page fault occurs while running a nested (L2) guest we need to insert a pte into this "shadow_pgtable" for the L2 -> L0 mapping. To do this we need to:

1. Walk the pgtable in L1 memory to find the L2 -> L1 mapping, and provide a page fault to L1 if this mapping doesn't exist.
2. Use our L1 -> L0 pgtable to convert this L1 address to an L0 address, or try to insert a pte for that mapping if it doesn't exist.
3. Now we have a L2 -> L0 mapping, insert this into our shadow_pgtable

Once this mapping exists we can take rc faults when hardware is unable to automatically set the reference and change bits in the pte. On these we need to:

1. Check the rc bits on the L2 -> L1 pte match, and otherwise reflect the fault down to L1.
2. Set the rc bits in the L1 -> L0 pte which corresponds to the same host page.
3. Set the rc bits in the L2 -> L0 pte.

As we reuse a large number of functions in book3s_64_mmu_radix.c for this we also needed to refactor a number of these functions to take an lpid parameter so that the correct lpid is used for tlb invalidations. The functionality however has remained the same.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-10-09KVM: PPC: Book3S HV: Handle hypercalls correctly when nestedPaul Mackerras
When we are running as a nested hypervisor, we use a hypercall to enter the guest rather than code in book3s_hv_rmhandlers.S. This means that the hypercall handlers listed in hcall_real_table never get called. There are some hypercalls that are handled there and not in kvmppc_pseries_do_hcall(), which therefore won't get processed for a nested guest. To fix this, we add cases to kvmppc_pseries_do_hcall() to handle those hypercalls, with the following exceptions: - The HPT hypercalls (H_ENTER, H_REMOVE, etc.) are not handled because we only support radix mode for nested guests. - H_CEDE has to be handled specially because the cede logic in kvmhv_run_single_vcpu assumes that it has been processed by the time that kvmhv_p9_guest_entry() returns. Therefore we put a special case for H_CEDE in kvmhv_p9_guest_entry(). For the XICS hypercalls, if real-mode processing is enabled, then the virtual-mode handlers assume that they are being called only to finish up the operation. Therefore we turn off the real-mode flag in the XICS code when running as a nested hypervisor. Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-10-09KVM: PPC: Book3S HV: Use XICS hypercalls when running as a nested hypervisorPaul Mackerras
This adds code to call the H_IPI and H_EOI hypercalls when we are running as a nested hypervisor (i.e. without the CPU_FTR_HVMODE cpu feature) and we would otherwise access the XICS interrupt controller directly or via an OPAL call. Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>