path: root/arch
2019-06-11  x86/MCE: Make the number of MCA banks a per-CPU variable  (Yazen Ghannam)
The number of MCA banks is provided per logical CPU. Historically, this number has been the same across all CPUs, but this is not an architectural guarantee. Future AMD systems may have MCA bank counts that vary between logical CPUs in a system. This issue was partially addressed in 006c077041dc ("x86/mce: Handle varying MCA bank counts") by allocating structures using the maximum number of MCA banks and by saving the maximum MCA bank count in a system as the global count. This means that some extra structures are allocated. Also, this means that CPUs will spend more time in the #MC and other handlers checking extra MCA banks. Thus, define the number of MCA banks as a per-CPU variable. [ bp: Make mce_num_banks an unsigned int. ] Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: "linux-edac@vger.kernel.org" <linux-edac@vger.kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: "x86@kernel.org" <x86@kernel.org> Link: https://lkml.kernel.org/r/20190607201752.221446-5-Yazen.Ghannam@amd.com
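A rough sketch of the per-CPU pattern described above; only mce_num_banks and __mcheck_cpu_cap_init() are names taken from the patch, the rest is illustrative:

  static DEFINE_PER_CPU_READ_MOSTLY(unsigned int, mce_num_banks);

  static void __mcheck_cpu_cap_init(void)
  {
          u64 cap;

          rdmsrl(MSR_IA32_MCG_CAP, cap);
          /* the low 8 bits of MCG_CAP hold the bank count of this CPU */
          this_cpu_write(mce_num_banks, cap & MCG_BANKCNT_MASK);
  }

  /* readers in per-CPU context (e.g. the #MC handler) then use
   * this_cpu_read(mce_num_banks) instead of a global count */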
2019-06-11  x86/MCE/AMD: Don't cache block addresses on SMCA systems  (Yazen Ghannam)
On legacy systems, the addresses of the MCA_MISC* registers need to be recursively discovered based on a Block Pointer field in the registers. On Scalable MCA systems, the register space is fixed, and particular addresses can be derived by regular offsets for bank and register type. This fixed address space includes the MCA_MISC* registers. MCA_MISC0 is always available for each MCA bank. MCA_MISC1 through MCA_MISC4 are considered available if MCA_MISC0[BlkPtr]=1. Cache the value of MCA_MISC0[BlkPtr] for each bank and per CPU. This needs to be done only during init. The values should be saved per CPU to accommodate heterogeneous SMCA systems. Redo smca_get_block_address() to directly return the block addresses. Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: "linux-edac@vger.kernel.org" <linux-edac@vger.kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: "x86@kernel.org" <x86@kernel.org> Link: https://lkml.kernel.org/r/20190607201752.221446-4-Yazen.Ghannam@amd.com
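A sketch of the caching scheme; the helper names and the BlkPtr mask are illustrative, the per-CPU bitmask mirrors the idea of saving MCA_MISC0[BlkPtr] per bank and per CPU:

  static DEFINE_PER_CPU_READ_MOSTLY(u32, smca_misc_banks_map);

  static void smca_cache_blkptr(unsigned int bank, unsigned int cpu)
  {
          u32 low, high;

          /* read MCA_MISC0 for this bank; BlkPtr=1 means MISC1-MISC4 exist */
          if (rdmsr_safe(MSR_AMD64_SMCA_MCx_MISC(bank), &low, &high))
                  return;

          if (low & MASK_BLKPTR_LO)       /* assumed BlkPtr mask */
                  per_cpu(smca_misc_banks_map, cpu) |= BIT(bank);
  }

  static bool smca_bank_has_extra_misc(unsigned int bank, unsigned int cpu)
  {
          return per_cpu(smca_misc_banks_map, cpu) & BIT(bank);
  }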
2019-06-11  x86/MCE: Make mce_banks a per-CPU array  (Yazen Ghannam)
Current AMD systems have unique MCA banks per logical CPU even though the type of the banks may all align to the same bank number. Each CPU will have control of a set of MCA banks in the hardware and these are not shared with other CPUs. For example, bank 0 may be the Load-Store Unit on every logical CPU, but each bank 0 is a unique structure in the hardware. In other words, there isn't a *single* Load-Store Unit at MCA bank 0 that all logical CPUs share. This idea extends even to non-core MCA banks. For example, CPU0 and CPU4 may see a Unified Memory Controller at bank 15, but each CPU is actually seeing a unique hardware structure that is not shared with other CPUs. Because the MCA banks are all unique hardware structures, it would be good to control them in a more granular way. For example, if there is a known issue with the Floating Point Unit on CPU5 and a user wishes to disable an error type on the Floating Point Unit, then it would be good to do this only for CPU5 rather than all CPUs. Also, future AMD systems may have heterogeneous MCA banks. Meaning the bank numbers may not necessarily represent the same types between CPUs. For example, bank 20 visible to CPU0 may be a Unified Memory Controller and bank 20 visible to CPU4 may be a Coherent Slave. So granular control will be even more necessary should the user wish to control specific MCA banks. Split the device attributes from struct mce_bank leaving only the MCA bank control fields. Make struct mce_banks[] per_cpu in order to have more granular control over individual MCA banks in the hardware. Allocate the device attributes statically based on the maximum number of MCA banks supported. The sysfs interface will use as many as needed per CPU. Currently, this is set to mca_cfg.banks, but will be changed to a per_cpu bank count in a future patch. Allocate the MCA control bits statically. This is in order to avoid locking warnings when memory is allocated during secondary CPUs' init sequences. Also, remove the now unnecessary return values from __mcheck_cpu_mce_banks_init() and __mcheck_cpu_cap_init(). Redo the sysfs store/show functions to handle the per_cpu mce_banks[]. [ bp: s/mce_banks_percpu/mce_banks_array/g ] [ Locking issue reported by ] Reported-by: kernel test robot <rong.a.chen@intel.com> Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: "linux-edac@vger.kernel.org" <linux-edac@vger.kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: "x86@kernel.org" <x86@kernel.org> Link: https://lkml.kernel.org/r/20190607201752.221446-3-Yazen.Ghannam@amd.com
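A condensed sketch of the resulting split; the MAX_NR_BANKS value and the exact device-attribute wrapper layout are assumptions:

  #define MAX_NR_BANKS 32

  /* per-CPU control state: each CPU owns its banks in hardware */
  struct mce_bank {
          u64  ctl;       /* subevents to enable */
          bool init;      /* initialise this bank? */
  };
  static DEFINE_PER_CPU_READ_MOSTLY(struct mce_bank[MAX_NR_BANKS], mce_banks_array);

  /* sysfs attributes: allocated statically once, for the maximum bank count */
  struct mce_bank_dev {
          struct device_attribute attr;
          char                    attrname[16];
          u8                      bank;
  };
  static struct mce_bank_dev mce_bank_devs[MAX_NR_BANKS];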
2019-06-11  x86/MCE: Make struct mce_banks[] static  (Yazen Ghannam)
The struct mce_banks[] array is only used in mce/core.c so move its definition there and make it static. Also, change the "init" field to bool type. Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: linux-edac <linux-edac@vger.kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: "x86@kernel.org" <x86@kernel.org> Link: https://lkml.kernel.org/r/20190607201752.221446-2-Yazen.Ghannam@amd.com
2019-06-11  s390/kdump: get rid of compile warning  (Heiko Carstens)
Move the CONFIG_CRASH_DUMP ifdef to get rid of this: arch/s390/kernel/machine_kexec.c:146:22: warning: 'do_start_kdump' defined but not used [-Wunused-function] Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
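The shape of this kind of -Wunused-function fix, as a sketch with the bodies omitted: the helper simply moves inside the same #ifdef that guards its only caller.

  #ifdef CONFIG_CRASH_DUMP
  static int do_start_kdump(unsigned long addr)
  {
          /* kdump startup code, unchanged */
          return 0;
  }

  static void __machine_kdump_example(void *image)
  {
          /* the only caller of do_start_kdump() lives under the same ifdef */
  }
  #endif /* CONFIG_CRASH_DUMP */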
2019-06-11  s390: include/asm/debug.h add kerneldoc markups  (Mauro Carvalho Chehab)
Instead of keeping the documentation inside s390dbf.rst, move it to arch/s390/include/asm/debug.h, using standard kernel-doc markups. Keeping the documentation close to the code helps to keep it updated. It also makes it easier to document other stuff inside debug.h: all that is needed is to add kernel-doc markups there, as the file will already be included in the produced documentation. The comments were converted to kernel-doc using this script, specially designed to parse this file, and then manually edited:
<script>
use strict;
my $mode = "";
my $parameter = "";
my $ret = "";
my $descr = "";

sub add_var($) {
    my $ln = shift;
    $ln =~ s/^\s+//;
    $ln =~ s/\s+$//;
    return if ($ln eq "");
    $ln =~ s/^(\S+)\s+/$1\t/;
    print " * \@$ln\n";
}

sub add_return($) {
    my $ln = shift;
    print " *\n * Return:\n" if ($mode ne "Return Value:");
    $ln =~ s/^\s+//;
    $ln =~ s/\s+$//;
    return if ($ln eq "");
    print " * - $ln\n";
}

sub add_description($) {
    my $ln = shift;
    print " *\n * \n" if ($mode ne "Description:");
    $ln =~ s/^\s+//;
    $ln =~ s/\s+$//;
    return if ($ln eq "");
    print " * $ln\n";
}

sub flush_results() {
    print " */\n\n";
}

while (<>) {
    if (m/^[\-]+$/) {
        flush_results();
        $mode = "";
        $parameter = "";
        $ret = "";
        $descr = "";
        next;
    }
    if (m/(Parameter:)(.*)/) {
        print " *\n" if ($mode eq "func");
        add_var($2);
        $mode = $1;
        next;
    }
    if (m/(Return Value:)(.*)/) {
        add_return($2);
        $mode = $1;
        next;
    }
    if (m/(Description:)(.*)/) {
        add_description($2);
        $mode = $1;
        next;
    }
    if ($mode eq "Parameter:") {
        add_var($_);
        next;
    }
    if ($mode eq "Return Value:") {
        add_return($_);
        next;
    }
    if ($mode eq "Description:") {
        add_description($_);
        next;
    }
    next if (m/^\s*$/);
    if (m/^\S+.*\s\*?(\S+)\s*\(/) {
        if ($mode eq "") {
            print "/**\n * $1()\n";
        } else {
            print " * $1()\n";
        }
        $mode="func";
    }
}
flush_results();
</script>
Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-11  docs: s390: convert docs to ReST and rename to *.rst  (Mauro Carvalho Chehab)
Convert all text files with s390 documentation to ReST format. Tried to preserve the original document format as much as possible. Still, some of the files required some work in order to be readable both as plain text and after conversion to HTML. The conversion is actually:
 - add blank lines and indentation in order to identify paragraphs;
 - fix table markups;
 - add some list markups;
 - mark literal blocks;
 - adjust title markups.
At its new index.rst, let's add a :orphan: while this is not linked to the main index.rst file, in order to avoid build warnings.
Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-11  s390/ctl_reg: mark __ctl_set_bit and __ctl_clear_bit as __always_inline  (Guenter Roeck)
s390:tinyconfig fails to build with gcc 8.3.0. arch/s390/include/asm/ctl_reg.h:52:2: error: impossible constraint in 'asm' asm volatile( \ ^~~ arch/s390/include/asm/ctl_reg.h:62:2: note: in expansion of macro '__ctl_store' __ctl_store(reg, cr, cr); ^~~~~~~~~~~ s390/include/asm/ctl_reg.h:41:2: error: impossible constraint in 'asm' asm volatile( \ ^~~ arch/s390/include/asm/ctl_reg.h:64:2: note: in expansion of macro '__ctl_load' __ctl_load(reg, cr, cr); ^~~~~~~~~~ Marking __ctl_set_bit and __ctl_clear_bit as __always_inline fixes the problem. Fixes: 9012d011660e ("compiler: allow all arches to enable CONFIG_OPTIMIZE_INLINING") Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
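A sketch of the fix, assuming the function body implied by the error output above: forcing inlining keeps "cr" a compile-time constant for the asm constraints in __ctl_store()/__ctl_load().

  static __always_inline void __ctl_set_bit(unsigned int cr, unsigned int bit)
  {
          unsigned long reg;

          __ctl_store(reg, cr, cr);       /* needs cr as a compile-time constant */
          reg |= 1UL << bit;
          __ctl_load(reg, cr, cr);
  }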
2019-06-11  s390/boot: disable address-of-packed-member warning  (Heiko Carstens)
Get rid of gcc9 warnings like this: arch/s390/boot/ipl_report.c: In function 'find_bootdata_space': arch/s390/boot/ipl_report.c:42:26: warning: taking address of packed member of 'struct ipl_rb_components' may result in an unaligned pointer value [-Waddress-of-packed-member] 42 | for_each_rb_entry(comp, comps) | ^~~~~ This is effectively the s390 variant of commit 20c6c1890455 ("x86/boot: Disable the address-of-packed-member compiler warning"). Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-10  x86/resctrl: Use _ASM_BX to avoid ifdeffery  (Uros Bizjak)
Use the _ASM_BX macro which expands to either %rbx or %ebx, depending on the 32-bit or 64-bit config selected. Signed-off-by: Uros Bizjak <ubizjak@gmail.com> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Reinette Chatre <reinette.chatre@intel.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/20190606200044.5730-1-ubizjak@gmail.com
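An illustrative use of the macro (not the resctrl code itself); one asm template now serves both configs instead of an #ifdef CONFIG_X86_64 pair:

  #include <asm/asm.h>

  static inline void save_and_restore_bx(unsigned long val)
  {
          /* _ASM_BX expands to "rbx" on 64-bit and "ebx" on 32-bit */
          asm volatile("push %%" _ASM_BX "\n\t"
                       "mov  %0, %%" _ASM_BX "\n\t"
                       "pop  %%" _ASM_BX
                       :
                       : "r" (val));
  }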
2019-06-10  arm64: mm: avoid redundant READ_ONCE(*ptep)  (Mark Rutland)
In set_pte_at(), we read the old pte value so that it can be passed into checks for racy hw updates. These checks are only performed for CONFIG_DEBUG_VM, and the value is not used otherwise. Since we read the pte value with READ_ONCE(), the compiler cannot elide the redundant read for !CONFIG_DEBUG_VM kernels. Let's ameliorate matters by moving the read and the checks into a helper, __check_racy_pte_update(), which only performs the read when the value will be used. This also allows us to reformat the conditions for clarity. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
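A sketch of the helper's shape, with simplified checks (the real function inspects more pte fields than shown here):

  static void __check_racy_pte_update(struct mm_struct *mm, pte_t *ptep, pte_t pte)
  {
          pte_t old_pte;

          if (!IS_ENABLED(CONFIG_DEBUG_VM))
                  return;

          /* the read now happens only when the value is actually used */
          old_pte = READ_ONCE(*ptep);

          if (!pte_valid(old_pte) || !pte_valid(pte))
                  return;

          VM_WARN_ONCE(pte_pfn(old_pte) != pte_pfn(pte),
                       "racy pte update in set_pte_at()");
  }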
2019-06-10  ARM: dts: am335x phytec boards: Fix cd-gpios active level  (Teresa Remmet)
Active level of the mmc1 cd gpio needs to be low instead of high. Fix PCM-953 and phyBOARD-WEGA. Signed-off-by: Teresa Remmet <t.remmet@phytec.de> Signed-off-by: Tony Lindgren <tony@atomide.com>
2019-06-10  ARM: dts: dra72x: Disable usb4_tm target module  (Keerthy)
usb4_tm is unused on dra72, and accessing the module with ti,sysc causes a boot crash, hence disable its target module. Fixes: 549fce068a3112 ("ARM: dts: dra7: Add l4 interconnect hierarchy and ti-sysc data") Reported-by: Vignesh Raghavendra <vigneshr@ti.com> Signed-off-by: Keerthy <j-keerthy@ti.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
2019-06-08  Merge tag 's390-5.2-4' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux  (Linus Torvalds)
Pull s390 fixes from Heiko Carstens:
 - fix stack unwinder: the stack unwinder rework has an off-by-one bug which prevents following stack backchains over more than one context (e.g. irq -> process).
 - fix address space detection in exception handler: if user space switches to access register mode, which is not supported anymore, the exception handler may resolve to the wrong address space.
* tag 's390-5.2-4' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
  s390/unwind: correct stack switching during unwind
  s390/mm: fix address space detection in exception handling
2019-06-08  Merge tag 'mips_fixes_5.2_1' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux  (Linus Torvalds)
Pull MIPS fixes from Paul Burton:
 - Declare ginvt() __always_inline due to its use of an argument as an inline asm immediate.
 - A VDSO build fix following Kbuild changes made this cycle.
 - A fix for boot failures on txx9 systems following memory initialization changes made this cycle.
 - Bounds check virt_addr_valid() to prevent it spuriously indicating that bogus addresses are valid, in turn fixing hardened usercopy failures that have been present since v4.12.
 - Build uImage.gz for pistachio systems by default, since this is the image we need in order to actually boot on a board.
 - Remove an unused variable in our uprobes code.
* tag 'mips_fixes_5.2_1' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux:
  MIPS: uprobes: remove set but not used variable 'epc'
  MIPS: pistachio: Build uImage.gz by default
  MIPS: Make virt_addr_valid() return bool
  MIPS: Bounds check virt_addr_valid
  MIPS: TXx9: Fix boot crash in free_initmem()
  MIPS: remove a space after -I to cope with header search paths for VDSO
  MIPS: mark ginvt() as __always_inline
2019-06-08  Merge tag 'spdx-5.2-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core  (Linus Torvalds)
Pull yet more SPDX updates from Greg KH:
 "Another round of SPDX header file fixes for 5.2-rc4
  These are all more "GPL-2.0-or-later" or "GPL-2.0-only" tags being added, based on the text in the files. We are slowly chipping away at the 700+ different ways people tried to write the license text. All of these were reviewed on the spdx mailing list by a number of different people.
  We now have over 60% of the kernel files covered with SPDX tags:
    $ ./scripts/spdxcheck.py -v 2>&1 | grep Files
    Files checked: 64533
    Files with SPDX: 40392
    Files with errors: 0
  I think the majority of the "easy" fixups are now done, it's now the start of the longer-tail of crazy variants to wade through"
* tag 'spdx-5.2-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core: (159 commits)
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 450
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 449
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 448
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 446
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 445
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 444
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 443
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 442
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 441
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 440
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 438
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 437
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 436
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 435
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 434
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 433
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 432
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 431
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 430
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 429
  ...
2019-06-08  parisc: Fix module loading error with JUMP_LABEL feature  (Helge Deller)
Commit 62217beb394e ("parisc: Add static branch and JUMP_LABEL feature") missed to add code to handle PCREL64 relocations which are generated when creating a jump label on a 64-bit kernel. This patch fixes module load errors like this one: # modprobe -v ipv6 insmod /lib/modules/5.2.0-rc1-JeR/kernel/net/ipv6/ipv6.ko modprobe: ERROR: could not insert 'ipv6': Exec format error dmesg reports: module ipv6: Unknown relocation: 72 Reported-by: Jeroen Roovers <jer@gentoo.org> Tested-by: Jeroen Roovers <jer@gentoo.org> Fixes: 62217beb394e ("parisc: Add static branch and JUMP_LABEL feature") Signed-off-by: Helge Deller <deller@gmx.de>
2019-06-08  RAS/CEC: Add CONFIG_RAS_CEC_DEBUG and move CEC debug features there  (Tony Luck)
The pfn and array files in (debugfs)/ras/cec are intended for debugging the CEC code itself. They are not needed on production systems, so the default setting for this CONFIG option is "n". [ bp: Have it with less ifdeffery by using IS_ENABLED(). ] Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de>
2019-06-08  x86/fpu: Update kernel's FPU state before using for the fsave header  (Sebastian Andrzej Siewior)
In commit 39388e80f9b0c ("x86/fpu: Don't save fxregs for ia32 frames in copy_fpstate_to_sigframe()") I removed the statement | if (ia32_fxstate) | copy_fxregs_to_kernel(fpu); and argued that it was wrongly merged because the content was already saved in kernel's state. This was wrong: It is required to write it back because it is only saved on the user-stack and save_fsave_header() reads it from task's FPU-state. I missed that part… Save x87 FPU state unless thread's FPU registers are already up to date. Fixes: 39388e80f9b0c ("x86/fpu: Don't save fxregs for ia32 frames in copy_fpstate_to_sigframe()") Reported-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Tested-by: Eric Biggers <ebiggers@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Dave Hansen <dave.hansen@intel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Hugh Dickins <hughd@google.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jann Horn <jannh@google.com> Cc: "Jason A. Donenfeld" <Jason@zx2c4.com> Cc: kvm ML <kvm@vger.kernel.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Cc: Rik van Riel <riel@surriel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/20190607142915.y52mfmgk5lvhll7n@linutronix.de
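Conceptually, the restored write-back looks like this sketch; the wrapper name is hypothetical, and the fpregs_lock()/TIF_NEED_FPU_LOAD usage reflects the 5.2 FPU rework rather than the literal patch:

  static void update_fpstate_for_fsave_header(struct fpu *fpu, bool ia32_fxstate)
  {
          if (!ia32_fxstate)
                  return;

          fpregs_lock();
          /*
           * If the CPU still holds the task's live registers, write them back
           * so save_fsave_header() finds current x87 state in fpu->state.
           * If TIF_NEED_FPU_LOAD is set, the in-memory copy is already current.
           */
          if (!test_thread_flag(TIF_NEED_FPU_LOAD))
                  copy_fxregs_to_kernel(fpu);
          fpregs_unlock();
  }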
2019-06-07  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf  (David S. Miller)
Daniel Borkmann says: ==================== pull-request: bpf 2019-06-07 The following pull-request contains BPF updates for your *net* tree. The main changes are: 1) Fix several bugs in riscv64 JIT code emission which forgot to clear high 32-bits for alu32 ops, from Björn and Luke with selftests covering all relevant BPF alu ops from Björn and Jiong. 2) Two fixes for UDP BPF reuseport that avoid calling the program in case of __udp6_lib_err and UDP GRO which broke reuseport_select_sock() assumption that skb->data is pointing to transport header, from Martin. 3) Two fixes for BPF sockmap: a use-after-free from sleep in psock's backlog workqueue, and a missing restore of sk_write_space when psock gets dropped, from Jakub and John. 4) Fix unconnected UDP sendmsg hook API which is insufficient as-is since it breaks standard applications like DNS if reverse NAT is not performed upon receive, from Daniel. 5) Fix an out-of-bounds read in __bpf_skc_lookup which in case of AF_INET6 fails to verify that the length of the tuple is long enough, from Lorenz. 6) Fix libbpf's libbpf__probe_raw_btf to return an fd instead of 0/1 (for {un,}successful probe) as that is expected to be propagated as an fd to load_sk_storage_btf() and thus closing the wrong descriptor otherwise, from Michal. 7) Fix bpftool's JSON output for the case when a lookup fails, from Krzesimir. 8) Minor misc fixes in docs, samples and selftests, from various others. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-07  gpio: pass lookup and descriptor flags to request_own  (Linus Walleij)
When a gpio_chip wants to request a descriptor from itself using gpiochip_request_own_desc() it needs to be able to specify fully how to use the descriptor, notably line inversion semantics. The workaround in the gpiolib.c can be removed and cases (such as SPI CS) where we need at times to request a GPIO with line inversion semantics directly on a chip for workarounds, can be fully supported with this call. Fix up some users of the API that weren't really using the last flag to set up the line as input or output properly but instead just calling direction setting explicitly after requesting the line. Cc: Martin Sperl <kernel@martin.sperl.org> Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2019-06-07  x86/mm/KASLR: Compute the size of the vmemmap section properly  (Baoquan He)
The size of the vmemmap section is hardcoded to 1 TB to support the maximum amount of system RAM in 4-level paging mode - 64 TB. However, 1 TB is not enough for vmemmap in 5-level paging mode. Assuming the size of struct page is 64 Bytes, to support 4 PB system RAM in 5-level, 64 TB of vmemmap area is needed: 4 * 1000^5 PB / 4096 bytes page size * 64 bytes per page struct / 1000^4 TB = 62.5 TB. This hardcoding may cause vmemmap to corrupt the following cpu_entry_area section, if KASLR puts vmemmap very close to it and the actual vmemmap size is bigger than 1 TB. So calculate the actual size of the vmemmap region needed and then align it up to 1 TB boundary. In 4-level paging mode it is always 1 TB. In 5-level it's adjusted on demand. The current code reserves 0.5 PB for vmemmap on 5-level. With this change, the space can be saved and thus used to increase entropy for the randomization. [ bp: Spell out how the 64 TB needed for vmemmap is computed and massage commit message. ] Fixes: eedb92abb9bb ("x86/mm: Make virtual memory layout dynamic for CONFIG_X86_5LEVEL=y") Signed-off-by: Baoquan He <bhe@redhat.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Kees Cook <keescook@chromium.org> Acked-by: Kirill A. Shutemov <kirill@linux.intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: kirill.shutemov@linux.intel.com Cc: Peter Zijlstra <peterz@infradead.org> Cc: stable <stable@vger.kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/20190523025744.3756-1-bhe@redhat.com
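The sizing logic, sketched against the KASLR region table used by that file; the constant names are the usual x86-64 definitions and should be treated as assumptions:

  /* pages needed to cover the maximum physical address space */
  unsigned long nr_pages = 1UL << (MAX_PHYSMEM_BITS - PAGE_SHIFT);

  /* struct page array size, rounded up to whole terabytes */
  kaslr_regions[2].size_tb = DIV_ROUND_UP(nr_pages * sizeof(struct page),
                                          1UL << TB_SHIFT);
  /* 4-level: 46-bit physmem -> 1 TB; 5-level: 52-bit physmem -> 64 TB */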
2019-06-07  Merge tag 'xtensa-20190607' of git://github.com/jcmvbkbc/linux-xtensa  (Linus Torvalds)
Pull xtensa fix from Max Filippov: "Fix a section mismatch between memblock_reserve and mem_reserve. This fixes tinyconfig xtensa builds" * tag 'xtensa-20190607' of git://github.com/jcmvbkbc/linux-xtensa: xtensa: Fix section mismatch between memblock_reserve and mem_reserve
2019-06-07  Merge tag 'pm-5.2-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm  (Linus Torvalds)
Pull power management fixes from Rafael Wysocki:
 "These fix a crash during resume from hibernation introduced during the 4.19 cycle, cause the new Performance and Energy Bias Hint (EPB) code to be built only if CONFIG_PM is set and add a few missing kerneldoc comments.
  Specifics:
  - Fix a crash that occurs when a kernel with 'nosmt' in the command line is used to resume the system from hibernation (as the "restore" kernel), because memory mapping differences between the restore and image kernels cause SMT siblings to be woken up from idle states and subsequently they try to fetch instructions from incorrect memory locations (Jiri Kosina).
  - Cause the new Performance and Energy Bias Hint (EPB) code to be built only if CONFIG_PM is set, because that code is not really necessary otherwise (Rafael Wysocki).
  - Add kerneldoc comments to document some helper functions related to system-wide suspend to avoid possible confusion regarding their purpose (Rafael Wysocki)"
* tag 'pm-5.2-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
  x86/power: Fix 'nosmt' vs hibernation triple fault during resume
  PM: sleep: Add kerneldoc comments to some functions
  x86: intel_epb: Do not build when CONFIG_PM is unset
2019-06-07  x86/insn-eval: Fix use-after-free access to LDT entry  (Jann Horn)
get_desc() computes a pointer into the LDT while holding a lock that protects the LDT from being freed, but then drops the lock and returns the (now potentially dangling) pointer to its caller. Fix it by giving the caller a copy of the LDT entry instead. Fixes: 670f928ba09b ("x86/insn-eval: Add utility function to get segment descriptor") Cc: stable@vger.kernel.org Signed-off-by: Jann Horn <jannh@google.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
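A sketch of the safer interface, showing only the LDT branch; the locking and field names follow the x86 mm context as I recall it and should be treated as illustrative:

  static bool get_desc(struct desc_struct *out, unsigned short sel)
  {
          bool success = false;

          if ((sel & SEGMENT_TI_MASK) == SEGMENT_LDT) {
                  struct ldt_struct *ldt;

                  mutex_lock(&current->active_mm->context.lock);
                  ldt = current->active_mm->context.ldt;
                  if (ldt && (sel >> 3) < ldt->nr_entries) {
                          /* copy the entry out while the LDT cannot be freed */
                          *out = ldt->entries[sel >> 3];
                          success = true;
                  }
                  mutex_unlock(&current->active_mm->context.lock);

                  return success;
          }

          /* GDT branch omitted: it copies the entry out the same way */
          return success;
  }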
2019-06-07  Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds)
Pull arm64 fixes from Will Deacon:
 "Another round of mostly-benign fixes, the exception being a boot crash on SVE2-capable CPUs (although I don't know where you'd find such a thing, so maybe it's benign too). We're in the process of resolving some big-endian ptrace breakage, so I'll probably have some more for you next week.
  Summary:
  - Fix boot crash on platforms with SVE2 due to missing register encoding
  - Fix architected timer accessors when CONFIG_OPTIMIZE_INLINING=y
  - Move cpu_logical_map into smp.h for use by upcoming irqchip drivers
  - Trivial typo fix in comment
  - Disable some useless, noisy warnings from GCC 9"
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: Silence gcc warnings about arch ABI drift
  ARM64: trivial: s/TIF_SECOMP/TIF_SECCOMP/ comment typo fix
  arm64: arch_timer: mark functions as __always_inline
  arm64: smp: Moved cpu_logical_map[] to smp.h
  arm64: cpufeature: Fix missing ZFR0 in __read_sysreg_by_encoding()
2019-06-07  arm64/mm: Refactor __do_page_fault()  (Anshuman Khandual)
__do_page_fault() is overcomplicated with multiple goto statements. This cleans up the code flow and, while at it, drops the local vm_fault_t variable. Reviewed-by: Mark Rutland <Mark.rutland@arm.com> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Andrey Konovalov <andreyknvl@google.com> Cc: Christoph Hellwig <hch@infradead.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-06-07  arm64/mm: Document write abort detection from ESR  (Anshuman Khandual)
This patch adds an is_write_abort() wrapper and documents the detection of the abort type on cache maintenance operations. Cc: Will Deacon <will.deacon@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Andrey Konovalov <andreyknvl@google.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> [catalin.marinas@arm.com: only keep the is_write_abort() wrapper] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-06-07  s390/unwind: correct stack switching during unwind  (Vasily Gorbik)
Adjust conditions in on_stack function. That fixes backchain unwinder which was unable to read pt_regs at the very bottom of the stack and hence couldn't follow stacks (e.g. from async stack to a task stack). Fixes: 78c98f907413 ("s390/unwind: introduce stack unwind API") Reported-by: Julian Wiedmann <jwi@linux.ibm.com> Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
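The essence of such an adjustment, as a sketch with assumed field names for s390's struct stack_info:

  static inline bool on_stack(struct stack_info *info,
                              unsigned long addr, size_t len)
  {
          if (info->type == STACK_TYPE_UNKNOWN)
                  return false;

          /* an object ending exactly at the stack top (e.g. pt_regs at the
           * very bottom of the async stack) must still count as on-stack */
          return addr >= info->begin && addr + len <= info->end;
  }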
2019-06-07  arm64: Fix comment after #endif  (Odin Ugedal)
The config value used in the if was changed in b433dce056d3814dc4b33e5a8a533d6401ffcfb0, but the comment on the corresponding end was not changed. Signed-off-by: Odin Ugedal <odin@ugedal.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-06-07  powerpc/32s: fix booting with CONFIG_PPC_EARLY_DEBUG_BOOTX  (Christophe Leroy)
When booting through OF, setup_disp_bat() does nothing because disp_BAT are not set. By chance, it used to work because the BOOTX buffer is mapped 1:1 at address 0x81000000 by the bootloader, and btext_setup_display() sets the virt addr same as the phys addr. But since commit 215b823707ce ("powerpc/32s: set up an early static hash table for KASAN."), a temporary page table overrides the bootloader mapping. This 0x81000000 is also problematic with the newly implemented Kernel Userspace Access Protection (KUAP) because it is within user address space. This patch fixes those issues by properly setting disp_BAT through a call to btext_prepare_BAT(), allowing setup_disp_bat() to properly set up BAT3 for early bootx screen buffer access. Reported-by: Mathieu Malaterre <malat@debian.org> Fixes: 215b823707ce ("powerpc/32s: set up an early static hash table for KASAN.") Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Tested-by: Mathieu Malaterre <malat@debian.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-06-07  Merge branch 'pm-x86'  (Rafael J. Wysocki)
* pm-x86: x86/power: Fix 'nosmt' vs hibernation triple fault during resume x86: intel_epb: Do not build when CONFIG_PM is unset
2019-06-07  s390/jump_label: remove unused structure definition  (Heiko Carstens)
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-07  s390/boot: disable address-of-packed-member warning  (Heiko Carstens)
Get rid of gcc9 warnings like this: arch/s390/boot/ipl_report.c: In function 'find_bootdata_space': arch/s390/boot/ipl_report.c:42:26: warning: taking address of packed member of 'struct ipl_rb_components' may result in an unaligned pointer value [-Waddress-of-packed-member] 42 | for_each_rb_entry(comp, comps) | ^~~~~ This is effectively the s390 variant of commit 20c6c1890455 ("x86/boot: Disable the address-of-packed-member compiler warning"). Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-07  s390: fix unrecognized __aligned() in uapi header  (Masahiro Yamada)
__aligned() is a shorthand that is only available in the kernel space because it is defined in include/linux/compiler_attributes.h, which is not exported to the user space. Detected by compile-testing exported headers. ./usr/include/asm/runtime_instr.h:60:37: error: expected declaration specifiers or ‘...’ before numeric constant } __attribute__((packed)) __aligned(8); ^ Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
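For an exported header the attribute has to be spelled out in full; an illustrative struct (not the actual runtime_instr_cb layout):

  #include <linux/types.h>

  struct example_uapi_record {
          __u64 addr;
          __u32 flags;
  } __attribute__((packed)) __attribute__((aligned(8)));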
2019-06-07  s390/configs: remove useless UEVENT_HELPER_PATH  (Krzysztof Kozlowski)
Remove the CONFIG_UEVENT_HELPER_PATH because: 1. It is disabled since commit 1be01d4a5714 ("driver: base: Disable CONFIG_UEVENT_HELPER by default") as its dependency (UEVENT_HELPER) was made default to 'n', 2. It is not recommended (help message: "This should not be used today [...] creates a high system load") and was kept only for ancient userland, 3. Certain userland specifically requests it to be disabled (systemd README: "Legacy hotplug slows down the system and confuses udev"). Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Acked-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-07  s390: enforce CONFIG_HOTPLUG_CPU  (Heiko Carstens)
x86 and (partially) powerpc already enforce CONFIG_HOTPLUG_CPU. On s390 it has been enabled by default on all distributions for ages. The only exception is our zfcpdump kernel. However, to simplify testing, enforce HOTPLUG_CPU. This was suggested by Paul McKenney, since his rcutorture test environments for CONFIG_SMP=y only support HOTPLUG_CPU=y. Suggested-by: Paul E. McKenney <paulmck@linux.ibm.com> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-07  s390: enforce CONFIG_SMP  (Heiko Carstens)
There have never been distributions that shipped with CONFIG_SMP=n for s390. In addition, the kernel currently doesn't even compile with CONFIG_SMP=n for s390. Most likely it wouldn't even work, even if we fixed the compile error, since nobody tests it and there is no use case that I can think of. Therefore simply enforce CONFIG_SMP and get rid of some more or less unused code. Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-07  powerpc/64s: __find_linux_pte() synchronization vs pmdp_invalidate()  (Nicholas Piggin)
The change to pmdp_invalidate() to mark the pmd with _PAGE_INVALID broke the synchronisation against lock free lookups, __find_linux_pte()'s pmd_none() check no longer returns true for such cases. Fix this by adding a check for this condition as well. Fixes: da7ad366b497 ("powerpc/mm/book3s: Update pmd_present to look at _PAGE_PRESENT bit") Cc: stable@vger.kernel.org # v4.20+ Suggested-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-06-07  powerpc/64s: Fix THP PMD collapse serialisation  (Nicholas Piggin)
Commit 1b2443a547f9 ("powerpc/book3s64: Avoid multiple endian conversion in pte helpers") changed the actual bitwise tests in pte_access_permitted by using pte_write() and pte_present() helpers rather than raw bitwise testing _PAGE_WRITE and _PAGE_PRESENT bits. The pte_present() change now returns true for PTEs which are !_PAGE_PRESENT and _PAGE_INVALID, which is the combination used by pmdp_invalidate() to synchronize access from lock-free lookups. pte_access_permitted() is used by pmd_access_permitted(), so allowing GUP lock free access to proceed with such PTEs breaks this synchronisation. This bug has been observed on a host using the hash page table MMU, with random crashes and corruption in guests, usually together with bad PMD messages in the host. Fix this by adding an explicit check in pmd_access_permitted(), and documenting the condition explicitly. The pte_write() change should be okay, and would prevent GUP from falling back to the slow path when encountering savedwrite PTEs, which matches what x86 (that does not implement savedwrite) does. Fixes: 1b2443a547f9 ("powerpc/book3s64: Avoid multiple endian conversion in pte helpers") Cc: stable@vger.kernel.org # v4.20+ Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-06-07  powerpc: Fix kexec failure on book3s/32  (Christophe Leroy)
In the old days, _PAGE_EXEC didn't exist on 6xx aka book3s/32. Therefore, although __mapin_ram_chunk() was already mapping kernel text with PAGE_KERNEL_TEXT and the rest with PAGE_KERNEL, the entire memory was executable. Part of the memory (the first 512 kbytes) was mapped with BATs instead of page tables, but it was also entirely mapped as executable. In commit 385e89d5b20f ("powerpc/mm: add exec protection on powerpc 603"), we started adding exec protection to some 6xx, namely the 603, for pages mapped via pagetables. Then, in commit 63b2bc619565 ("powerpc/mm/32s: Use BATs for STRICT_KERNEL_RWX"), the exec protection was extended to BAT mapped memory, so that really only the kernel text could be executed. The problem here is that kexec is based on copying some code into the upper part of memory and then executing it from there in order to install a fresh new kernel at its definitive location. However, the code is position independent and the first part of it is just there to deactivate the MMU and jump to the second part. So it is possible to run this first part in place instead of running the copy. Once the MMU is off, there is no protection anymore and the second part of the code will just run as before. Reported-by: Aaro Koskinen <aaro.koskinen@iki.fi> Fixes: 63b2bc619565 ("powerpc/mm/32s: Use BATs for STRICT_KERNEL_RWX") Cc: stable@vger.kernel.org # v5.1+ Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Tested-by: Aaro Koskinen <aaro.koskinen@iki.fi> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-06-06  Merge branch 'parisc-5.2-3' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux  (Linus Torvalds)
Pull parisc fixes from Helge Deller:
 - Fix crashes when accessing PCI devices on some machines like C240 and J5000. The crashes were triggered because we replaced cache flushes by nops in the alternative coding where we shouldn't for some machines.
 - Dave fixed a race in the usage of the sr1 space register when used to load the coherence index.
 - Use the hardware lpa instruction to load the physical address of kernel virtual addresses in the iommu driver code.
 - The kernel may fail to link when CONFIG_MLONGCALLS isn't set. Solve that by rearranging functions in the final vmlinux executable.
 - Some defconfig cleanups and removal of compiler warnings.
* 'parisc-5.2-3' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux:
  parisc: Fix crash due alternative coding for NP iopdir_fdc bit
  parisc: Use lpa instruction to load physical addresses in driver code
  parisc: configs: Remove useless UEVENT_HELPER_PATH
  parisc: Use implicit space register selection for loading the coherence index of I/O pdirs
  parisc: Fix compiler warnings in float emulation code
  parisc/slab: cleanup after /proc/slab_allocators removal
  parisc: Allow building 64-bit kernel without -mlong-calls compiler option
  parisc: Kconfig: remove ARCH_DISCARD_MEMBLOCK
2019-06-06  x86/fpu: Use fault_in_pages_writeable() for pre-faulting  (Hugh Dickins)
Since commit d9c9ce34ed5c8 ("x86/fpu: Fault-in user stack if copy_fpstate_to_sigframe() fails") get_user_pages_unlocked() pre-faults user's memory if a write generates a page fault while the handler is disabled. This works in general and uncovered a bug as reported by Mike Rapoport¹. It has been pointed out that this function may be fragile and a simple pre-fault as in fault_in_pages_writeable() would be a better solution. Better as in taste and simplicity: that write (as performed by the alternative function) performs exactly the same faulting of memory as before. This was suggested by Hugh Dickins and Andrew Morton. Use fault_in_pages_writeable() for pre-faulting user's stack. [ bigeasy: Write commit message. ] [ bp: Massage some. ] ¹ https://lkml.kernel.org/r/1557844195-18882-1-git-send-email-rppt@linux.ibm.com Fixes: d9c9ce34ed5c8 ("x86/fpu: Fault-in user stack if copy_fpstate_to_sigframe() fails") Suggested-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Hugh Dickins <hughd@google.com> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Tested-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jann Horn <jannh@google.com> Cc: linux-mm <linux-mm@kvack.org> Cc: Mike Rapoport <rppt@linux.ibm.com> Cc: Pavel Machek <pavel@ucw.cz> Cc: Rik van Riel <riel@surriel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/20190529072540.g46j4kfeae37a3iu@linutronix.de Link: https://lkml.kernel.org/r/1557844195-18882-1-git-send-email-rppt@linux.ibm.com
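The retry path then reduces to a simple write probe, roughly as below; the buffer and size names are taken from the surrounding signal code and used here as assumptions:

  if (ret) {
          if (!fault_in_pages_writeable((char __user *)buf_fx,
                                        fpu_user_xstate_size))
                  goto retry;      /* pages are present now, redo the copy */
          return -EFAULT;
  }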
2019-06-06  arm64: Silence gcc warnings about arch ABI drift  (Dave Martin)
Since GCC 9, the compiler warns about evolution of the platform-specific ABI, in particular relating to the marshaling of certain structures involving bitfields. The kernel is a standalone binary, and of course nobody would be so stupid as to expose structs containing bitfields as function arguments in ABI. (Passing a pointer to such a struct, however inadvisable, should be unaffected by this change. perf and various drivers rely on that.) So these warnings do more harm than good: turn them off. We may miss warnings about future ABI drift, but that's too bad. Future ABI breaks of this class will have to be debugged and fixed the traditional way unless the compiler evolves finer-grained diagnostics. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2019-06-06  parisc: Fix crash due alternative coding for NP iopdir_fdc bit  (Helge Deller)
According to the found documentation, data cache flushes and sync instructions are needed on the PCX-U+ (PA8200, e.g. C200/C240) platforms, while PCX-W (PA8500, e.g. C360) platforms apparently don't need those flushes when changing the IO PDIR data structures. We have no documentation for PCX-W+ (PA8600) and PCX-W2 (PA8700) CPUs, but Carlo Pisani reported that his C3600 machine (PA8600, PCX-W+) fails when the fdc instructions were removed. His firmware didn't set the NIOP bit, so one may assume it's a firmware bug since other C3750 machines had the bit set. Even if documentation (as mentioned above) states that PCX-W (PA8500, e.g. J5000) does not need fdc flushes, Sven could show that an Adaptec 29320A PCI-X SCSI controller reliably failed on a dd command during the first five minutes in his J5000 when fdc flushes were missing. Going forward, we will now NOT replace the fdc and sync assembler instructions by NOPS if: a) the NP iopdir_fdc bit was set by firmware, or b) we find a CPU up to and including a PCX-W+ (PA8600). This fixes the HPMC crashes on C240 and C36XX machines. For other machines we rely on the firmware to set the bit when needed. In case one finds HPMC issues, people could try to boot their machines with the "no-alternatives" kernel option to turn off any alternative patching. Reported-by: Sven Schnelle <svens@stackframe.org> Reported-by: Carlo Pisani <carlojpisani@gmail.com> Tested-by: Sven Schnelle <svens@stackframe.org> Fixes: 3847dab77421 ("parisc: Add alternative coding infrastructure") Signed-off-by: Helge Deller <deller@gmx.de> Cc: stable@vger.kernel.org # 5.0+
2019-06-06  parisc: Use lpa instruction to load physical addresses in driver code  (John David Anglin)
Most I/O in the kernel is done using the kernel offset mapping. However, there is one API that uses aliased kernel address ranges: > The final category of APIs is for I/O to deliberately aliased address > ranges inside the kernel. Such aliases are set up by use of the > vmap/vmalloc API. Since kernel I/O goes via physical pages, the I/O > subsystem assumes that the user mapping and kernel offset mapping are > the only aliases. This isn't true for vmap aliases, so anything in > the kernel trying to do I/O to vmap areas must manually manage > coherency. It must do this by flushing the vmap range before doing > I/O and invalidating it after the I/O returns. For this reason, we should use the hardware lpa instruction to load the physical address of kernel virtual addresses in the driver code. I believe we only use the vmap/vmalloc API with old PA 1.x processors which don't have a sba, so we don't hit this problem. Tested on c3750, c8000 and rp3440. Signed-off-by: John David Anglin <dave.anglin@bell.net> Signed-off-by: Helge Deller <deller@gmx.de>
2019-06-06  parisc: configs: Remove useless UEVENT_HELPER_PATH  (Krzysztof Kozlowski)
Remove the CONFIG_UEVENT_HELPER_PATH because: 1. It is disabled since commit 1be01d4a5714 ("driver: base: Disable CONFIG_UEVENT_HELPER by default") as its dependency (UEVENT_HELPER) was made default to 'n', 2. It is not recommended (help message: "This should not be used today [...] creates a high system load") and was kept only for ancient userland, 3. Certain userland specifically requests it to be disabled (systemd README: "Legacy hotplug slows down the system and confuses udev"). Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Acked-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Helge Deller <deller@gmx.de>
2019-06-06  ARM64: trivial: s/TIF_SECOMP/TIF_SECCOMP/ comment typo fix  (George G. Davis)
Fix a s/TIF_SECOMP/TIF_SECCOMP/ comment typo. Cc: Jiri Kosina <trivial@kernel.org> Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: George G. Davis <george_davis@mentor.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2019-06-06  x86/CPU: Add more Icelake model numbers  (Kan Liang)
Add the CPUID model numbers of Icelake (ICL) desktop and server processors to the Intel family list. [ Qiuxu: Sort the macros by model number. ] Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Qiuxu Zhuo <qiuxu.zhuo@intel.com> Cc: Rajneesh Bhardwaj <rajneesh.bhardwaj@linux.intel.com> Cc: rui.zhang@intel.com Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/20190603134122.13853-1-kan.liang@linux.intel.com
2019-06-06  crypto: arm64/sha2-ce - correct digest for empty data in finup  (Elena Petrova)
The sha256-ce finup implementation for ARM64 produces wrong digest for empty input (len=0). Expected: the actual digest, result: initial value of SHA internal state. The error is in sha256_ce_finup: for empty data `finalize` will be 1, so the code is relying on sha2_ce_transform to make the final round. However, in sha256_base_do_update, the block function will not be called when len == 0. Fix it by setting finalize to 0 if data is empty. Fixes: 03802f6a80b3a ("crypto: arm64/sha2-ce - move SHA-224/256 ARMv8 implementation to base layer") Cc: stable@vger.kernel.org Signed-off-by: Elena Petrova <lenaptr@google.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
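The fix boils down to one expression in sha256_ce_finup(); the trailing "&& len" keeps finalize false for empty input (field spelling per my reading of the driver, treat as illustrative):

  bool finalize = !sctx->sst.count && !(len % SHA256_BLOCK_SIZE) && len;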