path: root/arch/s390/include/asm/setup.h
Age  Commit message  Author
2025-03-04  s390: Convert MACHINE_IS_[LPAR|VM|KVM], etc, to machine_is_[lpar|vm|kvm]()  (Heiko Carstens)

Move machine type detection to the decompressor and use static branches to implement and use machine_is_[lpar|vm|kvm]() instead of a runtime check via MACHINE_IS_[LPAR|VM|KVM].

Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

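The pattern behind these machine_is_*() conversions can be sketched with the generic jump-label API. This is a minimal illustration; the key name and the init hook are assumptions, not the actual s390 identifiers:

    #include <linux/init.h>
    #include <linux/jump_label.h>

    /* Assumed key name; one key would exist per machine type. */
    static DEFINE_STATIC_KEY_FALSE(machine_is_lpar_key);

    /* Called once during early boot with the type the decompressor detected. */
    static void __init machine_type_init(bool is_lpar)
    {
        if (is_lpar)
            static_branch_enable(&machine_is_lpar_key);
    }

    /* Compiles to a patched nop/branch instead of a runtime flag test. */
    static inline bool machine_is_lpar(void)
    {
        return static_branch_likely(&machine_is_lpar_key);
    }
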
2025-03-04  s390/diag: Convert MACHINE_HAS_DIAG9C to machine_has_diag9c()  (Heiko Carstens)

Use static branch(es) to implement and use machine_has_diag9c() instead of a runtime check via MACHINE_HAS_DIAG9C.

Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

2025-03-04  s390/kvm: Convert MACHINE_HAS_ESOP to machine_has_esop()  (Heiko Carstens)

Use static branch(es) to implement and use machine_has_esop() instead of a runtime check via MACHINE_HAS_ESOP.

Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

2025-03-04  s390/tx: Convert MACHINE_HAS_TE to machine_has_tx()  (Heiko Carstens)

Use static branch(es) to implement and use machine_has_tx() instead of a runtime check via MACHINE_HAS_TE.

Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

2025-03-04  s390/tlb: Convert MACHINE_HAS_TLB_GUEST to machine_has_tlb_guest()  (Heiko Carstens)

Use static branch(es) to implement and use machine_has_tlb_guest() instead of a runtime check via MACHINE_HAS_TLB_GUEST. Also add sclp_early_detect_machine_features() in order to allow for feature detection from the decompressor.

Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

2025-03-04  s390/time: Convert MACHINE_HAS_SCC to machine_has_scc()  (Heiko Carstens)

Use static branch(es) to implement and use machine_has_scc() instead of a runtime check via MACHINE_HAS_SCC.

This comes with a cleanup of early time initialization: the initial tod_clock_base value is now passed via the bootdata mechanism, instead of using absolute lowcore as transport vehicle from the decompressor to the kernel. Also the early tod clock initialization is moved to the decompressor, which allows the use of a static branch with machine_has_scc() within the kernel.

Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

2025-03-04  s390/pci: Get rid of MACHINE_HAS_PCI_MIO  (Heiko Carstens)

Remove MACHINE_FLAG_PCI_MIO/MACHINE_HAS_PCI_MIO and implement the identical functionality with set_machine_feature(), clear_machine_feature() and test_machine_feature().

Acked-by: Niklas Schnelle <schnelle@linux.ibm.com>
Tested-by: Niklas Schnelle <schnelle@linux.ibm.com>
Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

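A minimal sketch of what such an accessor trio could look like, assuming the feature state is kept in a plain bitmap; the real implementation may well use static keys like the other conversions in this series:

    #include <linux/bitops.h>

    #define MAX_MACHINE_FEATURES    64      /* assumed capacity, for illustration */

    static unsigned long machine_features[BITS_TO_LONGS(MAX_MACHINE_FEATURES)];

    static inline void set_machine_feature(int nr)
    {
        __set_bit(nr, machine_features);
    }

    static inline void clear_machine_feature(int nr)
    {
        __clear_bit(nr, machine_features);
    }

    static inline bool test_machine_feature(int nr)
    {
        return test_bit(nr, machine_features);
    }
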
2025-03-04  s390/cpufeature: Convert MACHINE_HAS_IDTE to cpu_has_idte()  (Heiko Carstens)

Convert MACHINE_HAS_... to cpu_has_...(), which uses test_facility() instead of testing the machine_flags lowcore member to determine whether the feature is present. test_facility() generates better code, since it results in a static branch without accessing memory. The branch is patched via alternatives by the decompressor depending on the availability of the required facility.

Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

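The cpu_has_*() pattern these conversions introduce can be sketched as below; test_facility() is the existing s390 helper from <asm/facility.h>, and the facility bit number shown is illustrative:

    #include <asm/facility.h>

    /* Sketch: a thin, inlinable wrapper around the facility test. */
    static inline bool cpu_has_idte(void)
    {
        return test_facility(3);    /* DAT-enhancement facility bit, shown for illustration */
    }
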
2025-03-04  s390/cpufeature: Convert MACHINE_HAS_EDAT2 to cpu_has_edat2()  (Heiko Carstens)

Convert MACHINE_HAS_... to cpu_has_...(), which uses test_facility() instead of testing the machine_flags lowcore member to determine whether the feature is present. test_facility() generates better code, since it results in a static branch without accessing memory. The branch is patched via alternatives by the decompressor depending on the availability of the required facility.

Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

2025-03-04  s390/cpufeature: Convert MACHINE_HAS_EDAT1 to cpu_has_edat1()  (Heiko Carstens)

Convert MACHINE_HAS_... to cpu_has_...(), which uses test_facility() instead of testing the machine_flags lowcore member to determine whether the feature is present. test_facility() generates better code, since it results in a static branch without accessing memory. The branch is patched via alternatives by the decompressor depending on the availability of the required facility.

Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

2025-03-04  s390/cpufeature: Convert MACHINE_HAS_TOPOLOGY to cpu_has_topology()  (Heiko Carstens)

Convert MACHINE_HAS_... to cpu_has_...(), which uses test_facility() instead of testing the machine_flags lowcore member to determine whether the feature is present. test_facility() generates better code, since it results in a static branch without accessing memory. The branch is patched via alternatives by the decompressor depending on the availability of the required facility.

Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

2025-03-04  s390/cpufeature: Convert MACHINE_HAS_TLB_LC to cpu_has_tlb_lc()  (Heiko Carstens)

Convert MACHINE_HAS_... to cpu_has_...(), which uses test_facility() instead of testing the machine_flags lowcore member to determine whether the feature is present. test_facility() generates better code, since it results in a static branch without accessing memory. The branch is patched via alternatives by the decompressor depending on the availability of the required facility.

Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

2025-03-04  s390/cpufeature: Convert MACHINE_HAS_NX to cpu_has_nx()  (Heiko Carstens)

Convert MACHINE_HAS_... to cpu_has_...(), which uses test_facility() instead of testing the machine_flags lowcore member to determine whether the feature is present. test_facility() generates better code, since it results in a static branch without accessing memory. The branch is patched via alternatives by the decompressor depending on the availability of the required facility.

Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

2025-03-04  s390/cpufeature: Convert MACHINE_HAS_GS to cpu_has_gs()  (Heiko Carstens)

Convert MACHINE_HAS_... to cpu_has_...(), which uses test_facility() instead of testing the machine_flags lowcore member to determine whether the feature is present. test_facility() generates better code, since it results in a static branch without accessing memory. The branch is patched via alternatives by the decompressor depending on the availability of the required facility.

Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

2025-03-04  s390/cpufeature: Convert MACHINE_HAS_RDP to cpu_has_rdp()  (Heiko Carstens)

Convert MACHINE_HAS_... to cpu_has_...(), which uses test_facility() instead of testing the machine_flags lowcore member to determine whether the feature is present. test_facility() generates better code, since it results in a static branch without accessing memory. The branch is patched via alternatives by the decompressor depending on the availability of the required facility.

Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

2025-03-04  s390/cpufeature: Convert MACHINE_HAS_SEQ_INSN to cpu_has_seq_insn()  (Heiko Carstens)

Convert MACHINE_HAS_... to cpu_has_...(), which uses test_facility() instead of testing the machine_flags lowcore member to determine whether the feature is present. test_facility() generates better code, since it results in a static branch without accessing memory. The branch is patched via alternatives by the decompressor depending on the availability of the required facility.

Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

2024-08-29  s390/setup: Recognize sequential instruction fetching facility  (Vasily Gorbik)

When the sequential instruction fetching facility is present, certain guarantees are provided for code patching. In particular, atomic overwrites within 8 aligned bytes are safe from an instruction-fetching point of view.

Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

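As a sketch of what this guarantee permits, assuming the facility has been detected and insn_slot is a hypothetical pointer to an 8-byte-aligned code location:

    #include <linux/compiler.h>
    #include <linux/types.h>

    /*
     * With the sequential instruction fetching facility present, a single
     * aligned doubleword store over live code is safe from the
     * instruction-fetch point of view; hypothetical helper for illustration.
     */
    static void patch_doubleword(u64 *insn_slot, u64 new_insn)
    {
        WRITE_ONCE(*insn_slot, new_insn);
    }
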
2024-08-22  s390/early: Dump register contents and call trace for early crashes  (Heiko Carstens)

If the early program check handler cannot resolve a program check, dump register contents and a call trace to the console before loading a disabled wait psw. This makes debugging much easier. Emit an extra message with early_printk() for cases where regular printk() via the early console is not yet working, so that at least some information is available.

Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Acked-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

2024-06-18  s390: Replace S390_lowcore by get_lowcore()  (Sven Schnelle)

Replace all S390_lowcore usages in arch/s390/ by get_lowcore().

Acked-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

2024-04-17  s390/mm: Move KASLR related to <asm/page.h>  (Alexander Gordeev)

Move everything KASLR related to <asm/page.h>, similarly to many other architectures.

Acked-by: Heiko Carstens <hca@linux.ibm.com>
Suggested-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>

2023-12-11  s390/fpu: get rid of MACHINE_HAS_VX  (Heiko Carstens)

Get rid of MACHINE_HAS_VX and replace it with cpu_has_vx(), which is a short readable wrapper for "test_facility(129)". Facility bit 129 is set if the vector facility is present. test_facility() also returns true for all bits which are set in the architecture level set of the cpu that the kernel is compiled for. This means that test_facility(129) is a compile time constant which returns true for z13 and later, since the vector facility bit is part of the z13 kernel ALS. As a result the compiled code will have fewer runtime checks, and less code.

Reviewed-by: Hendrik Brueckner <brueckner@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>

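Per the description above, the wrapper amounts to little more than the facility test; a sketch:

    #include <asm/facility.h>

    static inline bool cpu_has_vx(void)
    {
        return test_facility(129);  /* vector facility, bit 129 per the commit */
    }
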
2023-11-05  s390/cmma: rework no-dat handling  (Heiko Carstens)

Rework the way physical pages are set no-dat / dat.

The old way is:

- Rely on all pages being initially marked "dat"
- Allocate page tables for the kernel mapping
- Enable dat
- Walk the whole kernel mapping and set the PG_arch_1 bit in all struct pages that belong to pages of kernel page tables
- Walk all struct pages and test and clear the PG_arch_1 bit. If the bit is not set, set the page state to no-dat
- For all subsequent page table allocations, set the page state to dat (remove the no-dat state) at allocation time

Change this rather complex logic to a simpler approach:

- Set the whole physical memory (all pages) to "no-dat"
- Explicitly set those page table pages to "dat" which are part of the kernel image (e.g. swapper_pg_dir)
- For all subsequent page table allocations, set the page state to dat (remove the no-dat state) at allocation time

As a result the code is simpler, and this also allows getting rid of one odd usage of the PG_arch_1 bit.

Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

2023-11-05  s390/cmma: move parsing of cmma kernel parameter to early boot code  (Heiko Carstens)

The "cmma=" kernel command line parameter needs to be parsed early for upcoming changes. Therefore move the parsing code. Note that EX_TABLE handling of cmma_test_essa() needs to be open-coded, since the early boot code doesn't have infrastructure for handling expected exceptions.

Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

2023-08-30  s390: remove "noexec" option  (Heiko Carstens)

Do the same as x86 did with commit 76ea0025a214 ("x86/cpu: Remove "noexec"") and remove the "noexec" kernel command line option.

Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>

2023-07-29  s390/mm: move pfault code to own C file  (Heiko Carstens)

The pfault code has nothing to do with regular fault handling. Therefore move it to its own C file. Also add its own pfault header file. This way changes to setup.h don't cause a recompile of the pfault code and vice versa.

Reviewed-by: Sven Schnelle <svens@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>

2023-07-24  s390/mm: rework arch_get_mappable_range() callback  (Alexander Gordeev)

As per the description in mm/memory_hotplug.c, platforms should define arch_get_mappable_range(), which provides the maximum possible addressable physical memory range for which the linear mapping could be created.

The current implementation uses the VMEM_MAX_PHYS macro as the maximum mappable physical address, and it is simply a cast to vmemmap. Since the address is in physical address space the natural upper limit of MAX_PHYSMEM_BITS is honoured:

    vmemmap_start = min(vmemmap_start, 1UL << MAX_PHYSMEM_BITS);

Further, to make sure the identity mapping does not overlap with vmemmap, the size of the identity mapping is stripped like this:

    ident_map_size = min(ident_map_size, vmemmap_start);

Similarly, any other memory that could be added (e.g. a DCSS segment) must not overlap with vmemmap either, and that is prevented by using vmemmap (the VMEM_MAX_PHYS macro) as the upper limit.

However, while the use of VMEM_MAX_PHYS brings the desired result, it actually poses two issues:

1. As described, vmemmap is handled as a physical address, although it is actually a pointer to struct page in virtual address space.

2. As vmemmap is a virtual address, it could have been located anywhere in the virtual address space. However, the desired necessity to honour the MAX_PHYSMEM_BITS limit prevents that.

Rework the arch_get_mappable_range() callback in a way that it does not use the VMEM_MAX_PHYS macro and does not confuse the notion of virtual vs physical address spaces as a result. That paves the way for moving vmemmap elsewhere and optimizing the virtual address space layout.

Introduce the max_mappable preserved boot variable and let the function setup_kernel_memory_layout() set it up. As a result, the rest of the code does not need to know the virtual memory layout specifics.

Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>

2023-04-13  s390/kaslr: provide kaslr_enabled() function  (Heiko Carstens)

Just like other architectures, provide a kaslr_enabled() function instead of directly accessing a global variable. Also pass the renamed __kaslr_enabled variable from the decompressor to the kernel, so that kaslr_enabled() is available there too. This will be used by a subsequent patch which randomizes the module base load address.

Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

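A sketch of the helper, assuming the renamed preserved boot variable mentioned above and the usual CONFIG_RANDOMIZE_BASE guard:

    #include <linux/kconfig.h>
    #include <linux/types.h>

    extern int __kaslr_enabled;     /* passed from the decompressor via bootdata */

    static inline bool kaslr_enabled(void)
    {
        if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
            return __kaslr_enabled;
        return false;
    }
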
2023-03-20  Merge branch 'decompressor-memory-tracking' into features  (Heiko Carstens)

Vasily Gorbik says:

===================
Combine and generalize all methods for finding unused memory in the decompressor, while decreasing complexity, adding memory holes support, and improving error handling (especially in low-memory conditions) and debuggability.
===================

Signed-off-by: Heiko Carstens <hca@linux.ibm.com>

2023-03-20  s390/boot: rework decompressor reserved tracking  (Vasily Gorbik)

Currently several approaches for finding unused memory in the decompressor are utilized. While "safe_addr" grows towards higher addresses, vmem code allocates paging structures top down. The former requires careful ordering. In addition to that, the ipl report handling code verifies potential intersections with secure boot certificates on its own. Neither of the two approaches is memory-hole aware or consistent with the other in low memory conditions.

To solve that, the existing approaches are generalized and combined together, and online memory ranges are now taken into consideration. physmem_info has been extended to contain reserved memory ranges. A new set of functions allows handling reserves and finding unused memory. All reserves and memory allocations are "typed". In case of an out-of-memory condition the decompressor fails with detailed info on current reserved ranges and usable online memory:

    Linux version 6.2.0 ...
    Kernel command line: ... mem=100M
    Out of memory allocating 100000 bytes 100000 aligned in range 0:5800000
    Reserved memory ranges:
    0000000000000000 0000000003e33000 DECOMPRESSOR
    0000000003f00000 00000000057648a3 INITRD
    00000000063e0000 00000000063e8000 VMEM
    00000000063eb000 00000000063f4000 VMEM
    00000000063f7800 0000000006400000 VMEM
    0000000005800000 0000000006300000 KASAN
    Usable online memory ranges (info source: sclp read info [3]):
    0000000000000000 0000000006400000
    Usable online memory total: 6400000 Reserved: 61b10a3 Free: 24ef5d
    Call Trace:
    (sp:000000000002bd58 [<0000000000012a70>] physmem_alloc_top_down+0x60/0x14c)
     sp:000000000002bdc8 [<0000000000013756>] _pa+0x56/0x6a
     sp:000000000002bdf0 [<0000000000013bcc>] pgtable_populate+0x45c/0x65e
     sp:000000000002be90 [<00000000000140aa>] setup_vmem+0x2da/0x424
     sp:000000000002bec8 [<0000000000011c20>] startup_kernel+0x428/0x8b4
     sp:000000000002bf60 [<00000000000100f4>] startup_normal+0xd4/0xd4

physmem_alloc_range() allows finding free memory in a specified range. It should be used for one-time allocations only, like finding a position for amode31 and vmlinux. physmem_alloc_top_down() can be used just like physmem_alloc_range(), but it also allows multiple allocations per type and tries to merge sequential allocations together, which is useful for paging structure allocations. If sequential allocations cannot be merged together they are "chained", allowing easy per-type reserved range enumeration and migration to memblock later. Extra "struct reserved_range" entries allocated for chaining are not tracked or reserved, but rely on the fact that both physmem_alloc_range() and physmem_alloc_top_down() search for free memory only below the current top-down allocator position.

All reserved ranges should be transferred to memblock before memblock allocations are enabled. The startup code has been reordered to delay any memory allocations until online memory ranges are detected and occupied memory ranges are marked as reserved, to be excluded from follow-up allocations. Ipl report certificates are a special case: the ipl report certificates list is checked together with other memory reserves until the certificates are saved elsewhere. Memory required by KASAN for shadow memory allocation and mapping is reserved as one large chunk which is later passed to the KASAN early initialization code.

Acked-by: Heiko Carstens <hca@linux.ibm.com>
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>

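How the two allocators are meant to be used can be sketched as follows; the prototypes and the RR_* type names are assumptions for illustration, the real API lives in the decompressor sources:

    /* Assumed prototypes and range types, for illustration only. */
    enum reserved_range_type { RR_AMODE31, RR_VMEM };

    unsigned long physmem_alloc_range(enum reserved_range_type type,
                                      unsigned long size, unsigned long align,
                                      unsigned long min, unsigned long max);
    unsigned long physmem_alloc_top_down(enum reserved_range_type type,
                                         unsigned long size, unsigned long align);

    static void example(void)
    {
        /* One-shot placement, e.g. finding a position for amode31. */
        unsigned long amode31 = physmem_alloc_range(RR_AMODE31, 0x3000, 0x1000,
                                                    0, 0x80000000);
        /* Repeated allocations of one type get merged or chained. */
        unsigned long pgtable = physmem_alloc_top_down(RR_VMEM, 4096, 4096);

        (void)amode31;
        (void)pgtable;
    }
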
2023-03-20  s390/boot: remove non-functioning image bootable check  (Vasily Gorbik)

check_image_bootable() has been introduced with commit 627c9b62058e ("s390/boot: block uncompressed vmlinux booting attempts") to make sure that users don't try to boot an uncompressed vmlinux ELF image in qemu. It used to be possible quite some time ago. That commit prevented confusion with an uncompressed vmlinux image starting to boot and even printing kernel messages until it crashed. Users might have tried to report the problem without realizing they were doing something which was not intended.

Since commit f1d3c5323772 ("s390/boot: move sclp early buffer from fixed address in asm to C") check_image_bootable() doesn't function properly anymore, and booting an uncompressed vmlinux image in qemu doesn't really produce any output and crashes. Moving forward it doesn't make sense to fix check_image_bootable() anymore, so simply remove it.

Acked-by: Alexander Gordeev <agordeev@linux.ibm.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>

2023-03-13  s390/setup: always inline gen_lpswe()  (Heiko Carstens)

gen_lpswe() contains a BUILD_BUG_ON() statement which depends on a function parameter. If the compiler decides to generate a non-inlined function, this will lead to a build error, even if all call sites pass a valid parameter. To avoid this, always inline gen_lpswe().

Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

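The underlying idea, sketched: with __always_inline the parameter is a compile-time constant at every call site, so BUILD_BUG_ON() can check it at build time. The opcode arithmetic follows the LPSWE encoding but is illustrative here:

    #include <linux/build_bug.h>
    #include <linux/compiler.h>
    #include <linux/types.h>

    static __always_inline u32 gen_lpswe(unsigned long addr)
    {
        BUILD_BUG_ON(addr > 0xfff);     /* must fit the 12-bit displacement */
        return 0xb2b20000 | addr;       /* LPSWE opcode plus displacement, sketch */
    }
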
2023-02-14  s390/mm: add support for RDP (Reset DAT-Protection)  (Gerald Schaefer)

The RDP instruction allows resetting the DAT-protection bit in a PTE, with less CPU synchronization overhead than the IPTE instruction. In particular, IPTE can cause machine-wide synchronization overhead, and excessive IPTE usage can negatively impact machine performance.

RDP can be used instead of IPTE, if the new PTE only differs in SW bits and the _PAGE_PROTECT HW bit, for PTE protection changes from RO to RW. SW PTE bit changes are allowed, e.g. for dirty and young tracking, but none of the other HW-defined parts of the PTE must change. This is because the architecture forbids such changes to an active and valid PTE, which is why invalidation with IPTE is always used first, before writing a new entry.

The RDP optimization helps mainly for fault-driven SW dirty-bit tracking. Writable PTEs are initially always mapped with the HW _PAGE_PROTECT bit set, to allow SW dirty-bit accounting on the first write protection fault, where the DAT-protection would then be reset. The reset is now done with RDP instead of IPTE, if the RDP instruction is available.

RDP cannot always guarantee that the DAT-protection reset is propagated to all CPUs immediately. This means that spurious TLB protection faults on other CPUs can now occur. For this, common code provides a flush_tlb_fix_spurious_fault() handler, which will now be used to do a CPU-local TLB flush. However, this will clear the whole TLB of a CPU, and not just the affected entry. For more fine-grained flushing, by simply doing a (local) RDP again, flush_tlb_fix_spurious_fault() would need to also provide the PTE pointer.

Note that spurious TLB protection faults cannot really be distinguished from racing pagetable updates, where another thread already installed the correct PTE. In such a case, the local TLB flush would be unnecessary overhead, but the overall reduction of CPU synchronization overhead by not using IPTE is still expected to be beneficial.

Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>

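The rule for when RDP may replace IPTE can be expressed as a predicate; a hypothetical sketch where the helper name, the SW-bit mask, and the bit values are assumptions, not the kernel's identifiers:

    #include <linux/types.h>

    #define _PAGE_PROTECT   0x200UL     /* HW DAT-protection bit, value illustrative */
    #define _PAGE_SW_BITS   0x0ffUL     /* assumed mask of all SW-defined PTE bits */

    /* RDP is allowed only if nothing outside SW bits and _PAGE_PROTECT changes. */
    static inline bool pte_rdp_applicable(unsigned long old_pte, unsigned long new_pte)
    {
        return ((old_pte ^ new_pte) & ~(_PAGE_SW_BITS | _PAGE_PROTECT)) == 0;
    }
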
2023-01-13  s390: move __amode31_base declaration to proper header file  (Heiko Carstens)

Move the __amode31_base declaration to the proper header file to get rid of:

    arch/s390/boot/startup.c:24:15: warning: symbol '__amode31_base' was not declared. Should it be static?

Signed-off-by: Heiko Carstens <hca@linux.ibm.com>

2023-01-13  s390/mm: start kernel with DAT enabled  (Alexander Gordeev)

The setup of the kernel virtual address space is spread throughout the sources, boot stages and config options like this:

1. The available physical memory regions are queried and stored as mem_detect information for later use in the decompressor.

2. Based on the physical memory availability the virtual memory layout is established in the decompressor.

3. If CONFIG_KASAN is disabled the kernel paging setup code populates kernel pgtables and turns DAT mode on. It uses the information stored at step [1].

4. If CONFIG_KASAN is enabled the kernel early boot kasan setup populates kernel pgtables and turns DAT mode on. It uses the information stored at step [1]. The kasan setup creates the early_pg_dir directory and directly overwrites swapper_pg_dir entries to make shadow memory pages available.

Move the kernel virtual memory setup to the decompressor and start the kernel with DAT turned on right from the very first instruction. That completely eliminates the boot phase when the kernel runs in DAT-off mode, simplifies the overall design and consolidates the pgtables setup.

The identity mapping is created in the decompressor, while kasan shadow mappings are still created by the early boot kernel code. Share the existing kasan memory allocator with the decompressor. It decreases the size of a newly requested memory block from pgalloc_pos and ensures that the kernel image is not overwritten. The pgalloc_low and pgalloc_pos pointers are made preserved boot variables for that.

Use the bootdata infrastructure to set up the swapper_pg_dir and invalid_pg_dir directories used by the kernel later. The interim early_pg_dir directory established by the kasan initialization code gets eliminated as a result.

As the kernel runs in DAT-on mode only, the PSW_KERNEL_BITS define gets the PSW_MASK_DAT bit by default. Additionally, the setup_lowcore_dat_off() and setup_lowcore_dat_on() routines get merged, since there is no DAT-off mode stage anymore.

The memory mappings are created with RW+X protection, which allows the early boot code to set up all necessary data and services for the kernel being booted. Just before paging is enabled the memory protection is changed to RO+X for text, RO+NX for read-only data and RW+NX for kernel data and the identity mapping.

Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>

2021-10-26  s390: make command line configurable  (Sven Schnelle)

Allow configuring the command line to an arbitrary length, with a default of 4096 bytes. Also remove COMMAND_LINE_SIZE from include/uapi/asm/setup.h, as this is dynamic now and doesn't say anything about the command line size limitations of a new kernel that might be loaded.

Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

2021-10-26  s390: support command lines longer than 896 bytes  (Sven Schnelle)

Currently s390 supports a fixed maximum command line length of 896 bytes. This isn't enough, as some installers are trying to pass all configuration data via the kernel command line, and even with zfcp alone it is easy to generate really long command lines. Therefore extend the command line to 4 kbytes.

In the parm area where the command line is stored there is no indication of the maximum allowed length, so a new field which contains the maximum length is added. The parm area has always been initialized to zero, so with old kernels this field would read zero. This is important because tools like zipl could read this field. If it contains a number larger than zero, zipl knows the maximum length that can be stored in the parm area; otherwise it must assume that it is booting a legacy kernel and that only 896 bytes are available.

The removal of trailing whitespace in head.S is also dropped, because code to do this is already present in setup_boot_command_line().

Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

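The parm area layout this implies can be sketched as follows; this mirrors the commit's description, with the traditional s390 parm area offsets shown as comments for orientation:

    #define COMMAND_LINE_SIZE   4096    /* new default maximum */

    struct parmarea {
        unsigned long ipl_device;               /* 0x10400 */
        unsigned long initrd_start;             /* 0x10408 */
        unsigned long initrd_size;              /* 0x10410 */
        unsigned long oldmem_base;              /* 0x10418 */
        unsigned long oldmem_size;              /* 0x10420 */
        unsigned long max_command_line_size;    /* 0x10428: 0 => legacy 896-byte limit */
        char pad1[0x10480 - 0x10430];           /* reserved */
        char command_line[COMMAND_LINE_SIZE];   /* 0x10480 */
    };
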
2021-10-26  s390/kexec_file: move kernel image size check  (Sven Schnelle)

In preparation for adding support for command lines with variable sizes on s390, the check whether the new kernel image is at least HEAD_END bytes long isn't correct anymore. Move the check to kexec_file_add_components() so we can get the size of the parm area and check the size there. The '.org HEAD_END' directive can now also be removed from head.S. This was used in the past to reserve space for the early sccb buffer, but with commit 9a5131b87cac1 ("s390/boot: move sclp early buffer from fixed address in asm to C") this is no longer required.

Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

2021-07-27  s390: make PCI mio support a machine flag  (Niklas Schnelle)

Kernel support for the newer PCI mio instructions can be toggled off with the pci=nomio command line option, which needs to integrate with common code PCI option parsing. However, this option then toggles static branches which can't be toggled yet in an early_param() call. Thus commit 9964f396f1d0 ("s390: fix setting of mio addressing control") moved toggling the static branches to the PCI init routine. With this setup, however, we can't check for mio support outside the PCI code during early boot, i.e. before switching the static branches, which we need in order to export this as an ELF HWCAP.

Improve on this by turning mio availability into a machine flag that gets initially set based on CONFIG_PCI and the facility bit, and gets toggled off if pci=nomio is found during PCI option parsing, allowing simple access to this machine flag after early init.

Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>

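A sketch of the detect-then-override flow described above; the facility bit number and the helper names are assumptions for illustration:

    #include <linux/init.h>
    #include <linux/kconfig.h>
    #include <linux/types.h>
    #include <asm/facility.h>

    static bool machine_has_pci_mio;    /* stand-in for the machine flag */

    /* Early detection: flag is set based on CONFIG_PCI and the facility bit. */
    static void __init detect_pci_mio(void)
    {
        if (IS_ENABLED(CONFIG_PCI) && test_facility(153))   /* bit number assumed */
            machine_has_pci_mio = true;
    }

    /* Called from PCI option parsing once "pci=nomio" has been seen. */
    void __init pci_mio_disable(void)
    {
        machine_has_pci_mio = false;
    }
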
2021-07-27  s390/boot: move EP_OFFSET and EP_STRING to head.S  (Alexander Egorenkov)

Both macros are used only in the decompressor's head.S, so it is unnecessary to put them in a global header like setup.h that is used in many places.

Signed-off-by: Alexander Egorenkov <egorenar@linux.ibm.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>

2021-07-27  s390/setup: generate asm offsets from struct parmarea  (Alexander Egorenkov)

To reduce duplication, replace error-prone and hard-coded parameter area offsets with auto-generated ones.

Signed-off-by: Alexander Egorenkov <egorenar@linux.ibm.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>

2021-07-27  s390/setup: drop _OFFSET macros  (Alexander Egorenkov)

The macros

* IPL_DEVICE_OFFSET
* INITRD_START_OFFSET
* INITRD_SIZE_OFFSET
* OLDMEM_BASE_OFFSET
* OLDMEM_SIZE_OFFSET
* KERNEL_VERSION_OFFSET
* COMMAND_LINE_OFFSET

are no longer necessary and are used only to define another set of macros with the same names but w/o the suffix _OFFSET. Therefore, drop this unnecessary indirection. Drop the macro KERNEL_VERSION_OFFSET w/o renaming it to KERNEL_VERSION because it is not used anywhere.

Signed-off-by: Alexander Egorenkov <egorenar@linux.ibm.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>

2021-07-27  s390/setup: remove unused symbolic constants for C code from setup.h  (Alexander Egorenkov)

These symbolic constants are used only by assembler code now:

* COMMAND_LINE
* IPL_DEVICE

C code of the decompressed kernel should use boot data passed by the decompressor instead.

Signed-off-by: Alexander Egorenkov <egorenar@linux.ibm.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>

2021-07-27  s390/dump: introduce boot data 'oldmem_data'  (Alexander Egorenkov)

The new boot data struct shall replace the global variables OLDMEM_BASE and OLDMEM_SIZE. It is initialized in the decompressor and passed to the decompressed kernel. In comparison to the old solution, this one doesn't access data at fixed physical addresses, which will become important when the decompressor becomes relocatable.

Signed-off-by: Alexander Egorenkov <egorenar@linux.ibm.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>

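A sketch of the boot data handover, following the commit's description; the __bootdata_preserved annotation is the s390 mechanism for passing such structs from the decompressor, and its use here is illustrative:

    #include <asm/sections.h>   /* __bootdata_preserved() */

    struct oldmem_data {
        unsigned long start;    /* replaces OLDMEM_BASE */
        unsigned long size;     /* replaces OLDMEM_SIZE */
    };

    /* Filled in by the decompressor, read by the decompressed kernel. */
    struct oldmem_data __bootdata_preserved(oldmem_data);
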
2021-07-27  s390/boot: introduce boot data 'initrd_data'  (Alexander Egorenkov)

The new boot data struct shall replace the global variables INITRD_START and INITRD_SIZE. It is initialized in the decompressor and passed to the decompressed kernel. In comparison to the old solution, this one doesn't access data at fixed physical addresses, which will become important when the decompressor becomes relocatable.

Signed-off-by: Alexander Egorenkov <egorenar@linux.ibm.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>

2021-07-27  s390/boot: move sclp early buffer from fixed address in asm to C  (Alexander Egorenkov)

To make the decompressor relocatable, the early SCLP buffer at a fixed address must be replaced with a relocatable C buffer of the size and alignment required by SCLP. Introduce a new function sclp_early_set_buffer() into the SCLP driver which enables the decompressor to change the SCLP early buffer at any time. This will be useful when the decompressor becomes fully relocatable and might need to change the SCLP early buffer to one with an address < 2G, as required by SCLP, because it was loaded at an address >= 2G.

Signed-off-by: Alexander Egorenkov <egorenar@linux.ibm.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>

2021-07-27  s390/sclp: use only one sclp early buffer to send commands  (Alexander Egorenkov)

A buffer that can be used for communication with SCLP is required to lie below the 2GB memory boundary. Therefore, both sclp_info_sccb and sclp_early_sccb must fulfill this requirement if passed directly to the sclp_early_cmd() function. Instead, use only sclp_early_sccb for communication with SCLP. This allows the buffer sclp_info_sccb to be placed anywhere in the memory address space and, therefore, simplifies the process of making the decompressor relocatable later on: one thing less to relocate. Also make sure that the length of the new unified early SCLP buffer is no less than the length of the removed sclp_info_sccb buffer, which might be larger than the length of the sclp_early_sccb buffer.

Signed-off-by: Alexander Egorenkov <egorenar@linux.ibm.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>

2021-07-05  s390/boot: replace magic string check with a bootdata flag  (Alexander Egorenkov)

The magic string "S390EP" at offset 0x10008 indicated to the decompressed kernel that it was booted by the decompressor. Introduce a new bootdata flag instead, which conveys the same information in an explicit and cleaner way. But keep the magic string because it is a kernel ABI.

Signed-off-by: Alexander Egorenkov <egorenar@linux.ibm.com>
Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

2021-06-18  s390: setup kernel memory layout early  (Vasily Gorbik)

Currently there are two separate places where the kernel memory layout has to be known and adjusted:

1. early kasan setup.
2. paging setup later.

Those two places had to be kept in sync and adjusted to reflect peculiar technical details of one another. With additional factors which influence the kernel memory layout, like the ultravisor secure storage limit, the complexity of keeping the two in sync grew even more.

Besides that, if we look forward towards creating the identity mapping and enabling DAT before jumping into the uncompressed kernel, that would also require full knowledge of and control over the kernel memory layout.

So, de-duplicate and move the kernel memory layout setup logic into the decompressor.

Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

2021-06-07  s390/ipl: make parameter area accessible via struct parmarea  (Heiko Carstens)

Since commit 9a965ea95135 ("s390/kexec_file: Simplify parmarea access") we have struct parmarea, which describes the layout of the kernel parameter area. Make the kernel parameter area available as global variable parmarea of type struct parmarea, which allows easy access to its members.

Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

2020-11-20  s390: unify identity mapping limits handling  (Vasily Gorbik)

Currently we have to consider too many different values which in the end only affect the identity mapping size. These are:

1. max_physmem_end - end of physical memory online or standby. Always <= end of the last online memory block (get_mem_detect_end()).
2. CONFIG_MAX_PHYSMEM_BITS - the maximum size of physical memory the kernel is able to support.
3. The "mem=" kernel command line option which limits physical memory usage.
4. OLDMEM_BASE which is a kdump memory limit when the kernel is executed as crash kernel.
5. The "hsa" size which is a memory limit when the kernel is executed during zfcp/nvme dump.

Throughout kernel startup and runtime we juggle all those values at once, but that does not bring any amusement, only confusion and complexity. Unify all those values into the single one we should really care about: our identity mapping size.

Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>

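The unification boils down to clamping one value by every limit in the list above; a sketch with assumed variable names standing in for the five sources:

    #include <linux/init.h>
    #include <linux/minmax.h>

    static unsigned long ident_map_size;

    static void __init setup_ident_map_size(unsigned long max_physmem_end,
                                            unsigned long mem_limit,    /* "mem=" */
                                            unsigned long oldmem_size,  /* kdump */
                                            unsigned long hsa_size)     /* zfcp/nvme dump */
    {
        ident_map_size = min(max_physmem_end, 1UL << CONFIG_MAX_PHYSMEM_BITS);
        if (mem_limit)
            ident_map_size = min(ident_map_size, mem_limit);
        if (oldmem_size)
            ident_map_size = min(ident_map_size, oldmem_size);
        if (hsa_size)
            ident_map_size = min(ident_map_size, hsa_size);
    }
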