2023-10-25  s390/mm: add missing arch_set_page_dat() call to gmap allocations  (Heiko Carstens)
If the cmma no-dat feature is available all pages that are not used for dynamic address translation are marked as "no-dat" with the ESSA instruction. This information is visible to the hypervisor, so that the hypervisor can optimize purging of guest TLB entries. This also means that pages which are used for dynamic address translation must not be marked as "no-dat", since the hypervisor may then incorrectly not purge guest TLB entries. Region, segment, and page tables allocated within the gmap code are incorrectly marked as "no-dat", since an explicit call to arch_set_page_dat() is missing, which would remove the "no-dat" mark. In order to fix this add a new gmap_alloc_crst() function which should be used to allocate region and segment tables, and which also calls arch_set_page_dat(). Also add the arch_set_page_dat() call to page_table_alloc_pgste(). Cc: <stable@vger.kernel.org> Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-23  s390/mm: add missing arch_set_page_dat() call to vmem_crst_alloc()  (Heiko Carstens)
If the cmma no-dat feature is available all pages that are not used for dynamic address translation are marked as "no-dat" with the ESSA instruction. This information is visible to the hypervisor, so that the hypervisor can optimize purging of guest TLB entries. This also means that pages which are used for dynamic address translation must not be marked as "no-dat", since the hypervisor may then incorrectly not purge guest TLB entries. Region and segment tables allocated via vmem_crst_alloc() are incorrectly marked as "no-dat", as soon as slab_is_available() returns true. Such tables are allocated e.g. when kernel page tables are split, memory is hotplugged, or a DCSS segment is loaded. Fix this by adding the missing arch_set_page_dat() call. Cc: <stable@vger.kernel.org> Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-23  s390/cmma: fix initial kernel address space page table walk  (Heiko Carstens)
If the cmma no-dat feature is available the kernel page tables are walked to identify and mark all pages which are used for address translation (all region, segment, and page tables). In a subsequent loop all other pages are marked as "no-dat" pages with the ESSA instruction. This information is visible to the hypervisor, so that the hypervisor can optimize purging of guest TLB entries. The initial loop however does not cover the complete kernel address space. This can result in pages being marked as not being used for dynamic address translation, even though they are. In turn guest TLB entries incorrectly may not be purged. Fix this by adjusting the end address of the kernel address range being walked. Cc: <stable@vger.kernel.org> Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-23  s390/diag: add missing virt_to_phys() translation to diag224()  (Heiko Carstens)
Diagnose 224 expects a physical address, but all users pass a virtual address. Translate the address to fix this. Reported-by: Mete Durlu <meted@linux.ibm.com> Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-23  s390/mm,fault: move VM_FAULT_ERROR handling to do_exception()  (Heiko Carstens)
Get rid of do_fault_error() and move its contents to do_exception(). With do_fault_error() removed it is also possible to get rid of the handle_fault_error_nolock() wrapper; instead rename do_no_context() to handle_fault_error_nolock(). As a result the whole fault handling looks much more like that on other architectures. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-23  s390/mm,fault: remove VM_FAULT_BADMAP and VM_FAULT_BADACCESS  (Heiko Carstens)
Remove the last two private vm_fault reasons: VM_FAULT_BADMAP and VM_FAULT_BADACCESS. In order to achieve this add an si_code parameter to do_no_context() and its wrappers and directly call the wrappers instead of relying on do_fault_error() handling. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-23  s390/mm,fault: remove VM_FAULT_SIGNAL  (Heiko Carstens)
Remove VM_FAULT_SIGNAL and open-code it at the only two locations where it is used. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-23  s390/mm,fault: remove VM_FAULT_BADCONTEXT  (Heiko Carstens)
Remove VM_FAULT_BADCONTEXT and instead call do_no_context() via wrappers. This adds two new wrappers similar to what x86 has: handle_fault_error() and handle_fault_error_nolock(). Both of them simply call do_no_context(), while handle_fault_error() also unlocks the mmap lock, which avoids adding lots of mmap_read_unlock() calls with this and subsequent patches. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
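A minimal sketch of the wrapper pair described above, assuming simplified parameter lists (the actual signatures and the body of do_no_context() are not shown in this log):

  static void handle_fault_error_nolock(struct pt_regs *regs)
  {
          do_no_context(regs);
  }

  static void handle_fault_error(struct pt_regs *regs)
  {
          struct mm_struct *mm = current->mm;

          mmap_read_unlock(mm);
          handle_fault_error_nolock(regs);
  }

Every error path that still holds the mmap lock can then funnel through a single call instead of pairing each do_no_context() invocation with its own mmap_read_unlock().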
2023-10-23  s390/mm,fault: simplify kfence fault handling  (Heiko Carstens)
do_no_context() can be simplified by removing its fault parameter, which is only used to decide if kfence_handle_page_fault() should be called. If the fault happened within the kernel space it is ok to always check if this happened on a page which was unmapped because of the kfence feature. Limiting the check to the VM_FAULT_BADCONTEXT case doesn't add any value. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
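A rough sketch of the simplified flow, assuming a simplified signature and surrounding helpers; only kfence_handle_page_fault() is the real KFENCE entry point:

  static void do_no_context(struct pt_regs *regs, bool is_write)
  {
          unsigned long address = get_fault_address(regs);

          /* Always give KFENCE a chance to claim an unresolved kernel fault. */
          if (kfence_handle_page_fault(address, is_write, regs))
                  return;
          die(regs, "Oops");      /* heavily simplified error path */
  }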
2023-10-23  s390/mm,fault: call do_fault_error() only from do_exception()  (Heiko Carstens)
Remove duplicated fault error handling and handle it only once within do_exception(). Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-23  s390/mm,fault: get rid of do_low_address()  (Heiko Carstens)
There is only one caller of do_low_address(). Given that this code is quite special, just get rid of do_low_address() and add it to do_protection_exception() in order to make the code a bit more readable. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-23  s390/mm,fault: remove VM_FAULT_PFAULT  (Heiko Carstens)
Handling of VM_FAULT_PFAULT and VM_FAULT_BADCONTEXT is nearly identical; the only difference is within do_no_context(), where the fault_type (KERNEL_FAULT vs GMAP_FAULT) makes sure that both types are handled differently. Therefore it is possible to get rid of VM_FAULT_PFAULT and use VM_FAULT_BADCONTEXT instead. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-23  s390/mm,fault: use get_kernel_nofault() to dereference in dump_pagetable()  (Heiko Carstens)
The page table dumper uses get_kernel_nofault() to test if dereferencing page table entries is possible. Use the result, which is the required page table entry, instead of throwing it away and dereferencing a second time without any safeguard. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
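The pattern boils down to fetching the entry once and reusing it; a sketch for a single table level with illustrative names, not the actual dump_pagetable() code:

  static void dump_level_sketch(pgd_t *pgd)
  {
          pgd_t entry;

          if (get_kernel_nofault(entry, pgd)) {
                  pr_cont(" <bad pointer>");
                  return;
          }
          /* Reuse the safely fetched value instead of dereferencing pgd again. */
          pr_cont(" PGD %016lx", (unsigned long)pgd_val(entry));
  }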
2023-10-23  s390/mm,fault: improve readability by using teid union  (Heiko Carstens)
Get rid of some magic numbers, and use the teid union and also some ptrace PSW defines to improve readability. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-23  s390/mm: move translation-exception identification structure to fault.h  (Heiko Carstens)
Move translation-exception identification structure to new fault.h header file, change it to a union, and change existing kvm code accordingly. The new union will be used by subsequent patches. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-23  s390/mm,fault: use static key for store indication  (Heiko Carstens)
Generate slightly better code by using a static key to implement store indication. This makes it possible to get rid of a memory access on the hot path. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
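A sketch of the static key mechanism; the key name, the feature check, and the TEID bit test below are assumptions, only the DEFINE_STATIC_KEY_FALSE()/static_branch_likely() API is real:

  static DEFINE_STATIC_KEY_FALSE(have_store_indication);

  static int __init store_indication_init(void)
  {
          if (machine_has_store_indication())     /* hypothetical helper */
                  static_branch_enable(&have_store_indication);
          return 0;
  }
  early_initcall(store_indication_init);

  static bool fault_is_write(union teid teid)
  {
          /* Hot path: a patched branch instead of loading a global variable. */
          if (static_branch_likely(&have_store_indication))
                  return (teid.val & 0xc00UL) == 0x400UL;  /* assumed encoding */
          return false;
  }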
2023-10-23  s390/mm,fault: use get_fault_address() everywhere  (Heiko Carstens)
Use the get_fault_address() helper function instead of open-coding it at many locations. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-23  s390/mm,fault: replace WARN_ON_ONCE() with unreachable()  (Heiko Carstens)
do_secure_storage_access() contains a switch statement which handles all possible return values from get_fault_type(). Therefore remove the pointless default case error handling and replace it with unreachable(). Reviewed-by: Janosch Frank <frankja@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-23  s390/mm,fault: remove noinline attribute from all functions  (Heiko Carstens)
Remove the noinline attribute from all functions and leave the inlining decisions up to the compiler. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-23  s390/mm,fault: remove line break  (Heiko Carstens)
checkpatch reports:

  CHECK: Alignment should match open parenthesis
  +       if (IS_ENABLED(CONFIG_PGSTE) && gmap &&
  +               (flags & FAULT_FLAG_RETRY_NOWAIT)) {

Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-23  s390/mm,fault: include linux/mmu_context.h  (Heiko Carstens)
Include linux/mmu_context.h instead of asm/mmu_context.h. checkpatch reports:

  CHECK: Consider using #include <linux/mmu_context.h> instead of <asm/mmu_context.h>
  +#include <asm/mmu_context.h>

Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-23  s390/mm,fault: have balanced braces, remove unnecessary blanks  (Heiko Carstens)
Remove unnecessary braces and also blanks after casts. Add braces to have balanced braces where missing. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-23  s390/mm,fault: use pr_warn(), pr_cont(), ... instead of open-coding  (Heiko Carstens)
Use pr_warn() and friends instead of open-coding with printk(). Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-23  s390/mm,fault: use pr_warn_ratelimited()  (Heiko Carstens)
Use pr_warn_ratelimited() instead of printk_ratelimited(). checkpatch reports:

  WARNING: Prefer ... pr_warn_ratelimited(... to printk_ratelimited(KERN_WARNING ...
  +       printk_ratelimited(KERN_WARNING

Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-23  s390/mm,fault: use __ratelimit() instead of printk_ratelimit()  (Heiko Carstens)
Just like other architectures make use of __ratelimit() instead of printk_ratelimit(). Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-23  s390/mm,fault: reverse x-mas tree coding style  (Heiko Carstens)
Have reverse x-mas tree coding style for variables everywhere. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-23  s390/mm,fault: remove and improve comments, adjust whitespace  (Heiko Carstens)
Remove wrong, outdated, and pointless comments. Adjust wording for some comments, and adjust whitespace at some places. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-19  s390/pai_crypto: dynamically allocate percpu pai crypto map data structure  (Thomas Richter)
Struct paicrypt_map is a data structure and is statically defined for each possible CPU. Rework this and replace it by dynamically allocated data structures created when a perf_event_open() system call is invoked. It is replaced by an array of pointers, one per possible CPU, plus reference counting. The array of pointers is allocated when the first event is created. For each online CPU an event is installed on, a struct paicrypt_map is allocated and a pointer to it is stored in the array:

  CPU                      0   1   2   3  ...  N
                         +---+---+---+---+---+---+
  paicrypt_root::mapptr->| * |   |   |   |...|   |
                         +-|-+---+---+---+---+---+
                           |
                           |
                          \|/
                    +--------------+
                    | paicrypt_map |
                    +--------------+

With this approach the large data structure is only allocated when an event is actually installed and used. Also implement proper reference counting for allocation and removal. PAI crypto counter events cannot be created when a CPU hot plug add is processed. This means a CPU hot plug add does not get the necessary PAI event to record PAI cryptography counter increments on the newly added CPU. There is no possibility to notify user space of a new CPU and the necessary event infrastructure associated with the file descriptor returned by the perf_event_open() system call. However, a perf_event_open() system call issued after the CPU hot plug add can use the newly added CPU. Kernel CPU hot plug remove deletes the CPU and stops the PAI counters on that CPU. When the process closes the file descriptor associated with that event, the event's destroy() function removes any allocated data structures and adjusts the reference counts. Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Acked-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Acked-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
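A simplified sketch of the lazy allocation scheme described above; field and function names are assumptions, and locking against concurrent perf_event_open() calls is omitted for brevity:

  #include <linux/cpumask.h>
  #include <linux/refcount.h>
  #include <linux/slab.h>

  struct paicrypt_map {
          refcount_t refcnt;                      /* events installed on this CPU */
  };

  static struct paicrypt_map **paicrypt_mapptr;   /* one slot per possible CPU */
  static refcount_t paicrypt_root_refcnt;         /* events using the array */

  static struct paicrypt_map *paicrypt_get(int cpu)
  {
          struct paicrypt_map *cpump;

          if (!paicrypt_mapptr) {                 /* first event: allocate the array */
                  paicrypt_mapptr = kcalloc(num_possible_cpus(),
                                            sizeof(*paicrypt_mapptr), GFP_KERNEL);
                  if (!paicrypt_mapptr)
                          return NULL;
                  refcount_set(&paicrypt_root_refcnt, 1);
          } else {
                  refcount_inc(&paicrypt_root_refcnt);
          }
          cpump = paicrypt_mapptr[cpu];
          if (!cpump) {                           /* first event on this CPU */
                  cpump = kzalloc(sizeof(*cpump), GFP_KERNEL);
                  if (!cpump)
                          return NULL;
                  refcount_set(&cpump->refcnt, 1);
                  paicrypt_mapptr[cpu] = cpump;
          } else {
                  refcount_inc(&cpump->refcnt);
          }
          return cpump;
  }

The event's destroy() callback would do the reverse: drop the per-CPU reference, free the paicrypt_map when it reaches zero, and free the pointer array once the last event is gone.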
2023-10-19  s390/mm: make vmemmap_free() only for CONFIG_MEMORY_HOTPLUG available  (Heiko Carstens)
Get rid of this W=1 compile warning:

  arch/s390/mm/vmem.c:502:6: warning: no previous prototype for ‘vmemmap_free’ [-Wmissing-prototypes]
    502 | void vmemmap_free(unsigned long start, unsigned long end,
        |      ^~~~~~~~~~~~

Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-19  s390/mm: remove __GFP_HIGHMEM masking  (Heiko Carstens)
Remove unnecessary __GFP_HIGHMEM masking, which was introduced with commit 6326c26c1514 ("s390: convert various pgalloc functions to use ptdescs"). Also remove a whitespace change which was introduced with the same commit. Link: https://lore.kernel.org/all/CAOzc2px-SFSnmjcPriiB3cm1fNj3+YC8S0VSp4t1QvDR0f4E2A@mail.gmail.com Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-16  s390/vmem: remove unused variable  (Vasily Gorbik)
Fix the following warning reported by sparse:

  arch/s390/boot/vmem.c:170:15: warning: unused variable ‘entry’ [-Wunused-variable]
    170 |         pte_t entry;
        |               ^~~~~

Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-16  s390: add support for DCACHE_WORD_ACCESS  (Heiko Carstens)
Implement load_unaligned_zeropad() and enable DCACHE_WORD_ACCESS to speed up string operations in fs/dcache.c and fs/namei.c. Reviewed-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-16  s390: provide word-at-a-time implementation  (Heiko Carstens)
Provide an s390 specific word-at-a-time implementation. Compared to the generic variant the generated code for has_zero() is slightly better. However find_zero() is much simpler since it reuses the result of __fls() aka flogr() and now comes without any conditional branches, while the generic variant has three of them. Reviewed-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
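For reference, a standalone sketch of the underlying zero-byte trick; this illustrates the idea only and is not the kernel's actual word-at-a-time interface (the __builtin_clzl() call stands in for what flogr()/__fls() provide on s390):

  #define WAAT_LOW_BITS   0x7f7f7f7f7f7f7f7fUL

  /* Exact zero-byte mask: 0x80 in every byte of x that is 0x00, nothing else. */
  static inline unsigned long waat_zero_bytemask(unsigned long x)
  {
          return ~(((x & WAAT_LOW_BITS) + WAAT_LOW_BITS) | x | WAAT_LOW_BITS);
  }

  /* Big-endian string order: the first zero byte is the most significant
   * flagged byte, so its index is simply count-leading-zeros divided by 8.
   */
  static inline unsigned long waat_find_zero(unsigned long mask)
  {
          return __builtin_clzl(mask) / 8;
  }

A word contains a zero byte exactly when waat_zero_bytemask() returns non-zero, and since the mask flags only the zero bytes, finding the first one needs no conditional branches once a leftmost-one instruction is available.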
2023-10-16  s390/extable: reduce number of extable macros  (Heiko Carstens)
Get rid of __EX_TABLE() macro, rename __EX_TABLE_UA() to __EX_TABLE() and convert users of old __EX_TABLE() macro so they pass more parameters to the changed __EX_TABLE() semantics. Reviewed-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-16  s390/crash: fix virtual vs physical address confusion  (Alexander Gordeev)
Fix virtual vs physical address confusion (which currently are the same). Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-16  s390/crash: remove unused parameter  (Alexander Gordeev)
Function loads_init() does not use the loads_offset parameter. Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-16  s390/ap: show APFS value on error reply 0x8B  (Harald Freudenberger)
With Secure Execution the error reply RX 0x8B now carries an APFS value indicating why the request has been filtered by a lower layer of the firmware. So display this value as a warning line in the s390 debug feature trace. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-10-16  s390/zcrypt: introduce new internal AP queue se_bound attribute  (Harald Freudenberger)
This patch introduces a new AP queue internal attribute, se_bound, which reflects the bound state of an APQN within a Secure Execution environment. With the introduction of Secure Execution guests, an AP firmware queue now needs to be bound to the guest before usage. The new attribute reflects this bound state, together with some glue code to handle the field during the lifetime of an AP queue device. Together with that the zcrypt scheduler now considers the state of the AP queues when a message is about to be distributed among the existing queues. There is a new function ap_queue_usable() which returns true only when all conditions for using this AP queue device are fulfilled. In detail this means: the AP queue needs to be configured, not checkstopped, and within an SE environment it needs to be bound. So the new function gives an indication whether the AP queue device is ready to serve requests or not. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
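A hedged sketch of the check described above; the field and helper names are assumptions, only the three conditions follow the commit message:

  static bool ap_queue_usable(struct ap_queue *aq)
  {
          if (!aq->config)                        /* must be configured */
                  return false;
          if (aq->chkstop)                        /* must not be checkstopped */
                  return false;
          if (ap_is_se_guest() && !aq->se_bound)  /* SE guest: must be bound */
                  return false;
          return true;
  }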
2023-10-16  s390/ap: re-init AP queues on config on  (Harald Freudenberger)
On a state toggle from config off to config on and on the state toggle from checkstop to not checkstop the queue's internal state was set but the state machine was not nudged. This did not matter, as on the first enqueue of a request the state machine kick ran. However, within a Secure Execution guest a queue is only chosen by the scheduler when it has been bound. But to bind a queue, it needs to run through the initial states (reset, enable interrupts, ...). So this is like a chicken-and-egg problem and the result was in fact that a queue was unusable after a config off/on toggle. With some slight rework of the handling of these states the new function _ap_queue_init_state() is now called, which is the core of the ap_queue_init_state() function but without locking handling. This has the benefit that it can be called in all the places where a (re-)init of the AP queue's state machine is needed. Fixes: 2d72eaf036d2 ("s390/ap: implement SE AP bind, unbind and associate") Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-09-19  s390: use control register bit defines  (Heiko Carstens)
Use control register bit defines instead of plain numbers where possible. Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-09-19  s390/ctlreg: add control register bits  (Heiko Carstens)
Instead of having only masks for specific bit locations within control registers, also define the bit numbers and use them to define the masks. The bit defines can be used to convert plain numbers to defines. Acked-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
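The pattern amounts to defining the bit positions first and deriving the masks from them; a sketch with hypothetical names (the real defines live in asm/ctlreg.h):

  /* Control register bits are numbered 0..63 starting at the most significant
   * bit, hence the 63 - n conversion when building a mask from a bit number.
   */
  #define CR0_SOME_FEATURE_BIT    53
  #define CR0_SOME_FEATURE        BIT(63 - CR0_SOME_FEATURE_BIT)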
2023-09-19  s390/irq: use CR0 defines to define CR0_IRQ_SUBCLASS_MASK  (Heiko Carstens)
Use existing CR0 defines to define CR0_IRQ_SUBCLASS_MASK instead of open-coding the defines again. Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-09-19  s390/ctlreg: add missing defines  (Heiko Carstens)
Add a couple of missing control register defines which would otherwise prevent converting other open-coded usages. Acked-by: Thomas Richter <tmricht@linux.ibm.com> Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-09-19  s390/setup: make use of system_ctl_load()  (Heiko Carstens)
Use system_ctl_load() instead of local_ctl_load() to reflect that control register changes are supposed to be global. Even though setup_cr() was ok, note that the usage of local_ctl_load() would have been wrong if it had happened after the control register save area was initialized: only local control register contents would have been changed, but would not have been used for new CPUs. With system_ctl_load() the caller doesn't need to take care. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-09-19  s390/ctlreg: add system_ctl_load()  (Heiko Carstens)
Add system_ctl_load() which can be used to load a value to a control register system wide. Refactor system_ctl_set_clear_bit() so it can handle all different request types, and also rename it to system_ctlreg_modify(). Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-09-19  s390/early: use system_ctl_set_bit() instead of local_ctl_set_bit()  (Heiko Carstens)
Use system_ctl_set_bit() instead of local_ctl_set_bit() to reflect that the control register changes are supposed to be global. This change is just for documentation purposes, since it still results only in local control register contents being changed. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-09-19  s390/ctlreg: allow to call system_ctl_set/clear_bit() early  (Heiko Carstens)
Allow to call system_ctl_set_bit() and system_ctl_clear_bit() early, so that users do not have to care about when the control register save area has been initialized. Users are supposed to use system_ctl_set_bit() and system_ctl_clear_bit() for all control register changes which are supposed to be seen globally. Depending on the system state such calls will change:

- local control register contents
- save area and local control register contents
- save area and global control register contents

Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
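A sketch of the resulting behaviour, following the list above; every helper and flag name below is an assumption, only the decision logic comes from the commit message:

  void system_ctl_set_bit(unsigned int cr, unsigned int bit)
  {
          /* Early on, only the local control register can be changed. */
          local_ctl_set_bit(cr, bit);
          /* Once the save area exists, keep it in sync as well. */
          if (ctlreg_save_area_initialized)               /* assumed flag */
                  ctlreg_update_save_area(cr, bit);       /* assumed helper */
          /* Once other CPUs are up, make the change visible globally. */
          if (system_smp_initialized)                     /* assumed flag */
                  ctlreg_broadcast_set_bit(cr, bit);      /* assumed helper */
  }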
2023-09-19  s390/ctlreg: make initialization of control register save area explicit  (Heiko Carstens)
Commit e1b9c2749af0 ("s390/smp: ensure global control register contents are in sync") made the control register save area contained within the lowcore at absolute address zero a resource which is used when initializing CPUs. However this is anything but obvious from the code. Add a ctlreg_init_save_area() function in order to make this explicit. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-09-19  s390/ctlreg: add struct ctlreg  (Heiko Carstens)
Add struct ctlreg to enforce strict type checking / usage for control register functions. Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
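The gist is a single-member wrapper struct, so a bare unsigned long no longer silently converts into a control register value; a sketch with an assumed accessor signature:

  struct ctlreg {
          unsigned long val;
  };

  void local_ctl_load(unsigned int cr, struct ctlreg *reg);       /* assumed signature */

  /* Passing a raw unsigned long pointer is now a compile error; callers go
   * through the wrapper explicitly, e.g.:
   *
   *      struct ctlreg cr0;
   *      local_ctl_store(0, &cr0);       // read CR0 into the wrapper
   *      cr0.val |= SOME_MASK;           // hypothetical mask
   *      local_ctl_load(0, &cr0);        // write it back
   */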
2023-09-19  s390/ctlreg: add type checking to __local_ctl_load() and __local_ctl_store()  (Heiko Carstens)
Add type checking to __local_ctl_load() and __local_ctl_store(). For both functions enforce that an array consisting of unsigned longs is passed. Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
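One way to get such checking at compile time is a static assertion on the array's element type and size; this is a sketch of the idea only, the real macros may use a different construct:

  #define CTL_ARRAY_CHECK(low, high, array)                                 \
          BUILD_BUG_ON(!__same_type((array)[0], unsigned long) ||           \
                       ARRAY_SIZE(array) != (high) - (low) + 1)

Anything other than a correctly sized unsigned long array then fails the build instead of being silently reinterpreted.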