path: root/arch/x86
2020-08-02  KVM: SVM: Fix sev_pin_memory() error handling  (Dan Carpenter)
The sev_pin_memory() function was modified to return error pointers instead of NULL but there are two problems. The first problem is that if "npages" is zero then it still returns NULL. Secondly, several of the callers were not updated to check for error pointers instead of NULL. Either one of these issues will lead to an Oops. Fixes: a8d908b5873c ("KVM: x86: report sev_pin_memory errors with PTR_ERR") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Message-Id: <20200714142351.GA315374@mwanda> Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
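A minimal sketch of the caller-side pattern the fix implies, assuming a call site similar to those in arch/x86/kvm/svm/sev.c (variable names here are illustrative, not the exact upstream code):

    struct page **pages;
    unsigned long npages;

    pages = sev_pin_memory(kvm, uaddr, size, &npages, 1);
    if (IS_ERR(pages))
            return PTR_ERR(pages);  /* was: if (!pages) return -ENOMEM; */

With the fix, failures are reported as ERR_PTR() values, so callers use the IS_ERR()/PTR_ERR() pattern instead of checking for NULL.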
2020-08-01  Merge branch 'kcsan' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu into locking/core  (Ingo Molnar)
Pull v5.9 KCSAN bits from Paul E. McKenney. Perhaps the most important change is that GCC 11 now has all fixes in place to support KCSAN, so GCC support can be enabled again. Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-07-31  Merge branch 'linus' into locking/core, to resolve conflict  (Ingo Molnar)
Conflicts:
	arch/arm/include/asm/percpu.h

As Stephen Rothwell noted, there's a conflict between this commit in locking/core: a21ee6055c30 ("lockdep: Change hardirq{s_enabled,_context} to per-cpu variables") and this fresh upstream commit: aa54ea903abb ("ARM: percpu.h: fix build error"). a21ee6055c30 is a simpler solution to the dependency problem and doesn't further increase header hell, so this conflict resolution effectively reverts aa54ea903abb and uses the a21ee6055c30 solution. Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-07-31  Merge branch 'WIP.x86/entry' into x86/entry, to merge the latest generic code and resolve conflicts  (Ingo Molnar)
Pick up and resolve the NMI entry code changes from the locking tree, and also pick up the latest two fixes from tip:core/entry. Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-07-31  x86: Add support for ZSTD compressed kernel  (Nick Terrell)
- Add support for zstd compressed kernel
- Define __DISABLE_EXPORTS in Makefile
- Remove __DISABLE_EXPORTS definition from kaslr.c
- Bump the heap size for zstd.
- Update the documentation.

Integrates the ZSTD decompression code into the x86 pre-boot code. Zstandard requires slightly more memory during kernel decompression on x86 (192 KB vs 64 KB), and the memory usage is independent of the window size. __DISABLE_EXPORTS is now defined in the Makefile, which covers both the existing use in kaslr.c and the use needed by the zstd decompressor in misc.c. This patch has been boot tested with both a zstd and a gzip compressed kernel on i386 and x86_64 using buildroot and QEMU. Additionally, this has been tested in production on x86_64 devices. We saw a 2 second boot time reduction by switching kernel compression from xz to zstd. Signed-off-by: Nick Terrell <terrelln@fb.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Tested-by: Sedat Dilek <sedat.dilek@gmail.com> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20200730190841.2071656-7-nickrterrell@gmail.com
2020-07-31  x86: Bump ZO_z_extra_bytes margin for zstd  (Nick Terrell)
Bump the ZO_z_extra_bytes margin for zstd. Zstd needs 3 bytes per 128 KB, and has a 22 byte fixed overhead. Zstd also needs to maintain 128 KB of space at all times, since that is the maximum block size. See the comments regarding in-place decompression added in lib/decompress_unzstd.c for details. The existing code is written so that all the compression algorithms use the same ZO_z_extra_bytes; it is taken to be the maximum of the growth rate plus the maximum fixed overhead, as described in the comments just above this change. Signed-off-by: Nick Terrell <terrelln@fb.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Tested-by: Sedat Dilek <sedat.dilek@gmail.com> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20200730190841.2071656-6-nickrterrell@gmail.com
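A hedged sketch of that margin arithmetic, using only the numbers quoted in this message (the macro name is illustrative; the real definition lives in the boot code):

    #define ZSTD_BLOCK_SIZE         (128 * 1024)
    /* 3 bytes of growth per 128 KB block, 22 bytes fixed overhead, plus one
     * full 128 KB block kept free for in-place decompression. */
    #define ZSTD_WORST_CASE_MARGIN(out_len) \
            (DIV_ROUND_UP(out_len, ZSTD_BLOCK_SIZE) * 3 + 22 + ZSTD_BLOCK_SIZE)

For a 40 MB decompressed kernel this works out to 320 * 3 + 22 + 131072 bytes, i.e. roughly 129 KB of extra margin.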
2020-07-31  x86/kaslr: Add a check that the random address is in range  (Arvind Sankar)
Check in find_random_phys_addr() that the chosen address is inside the range that was required. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200728225722.67457-22-nivedita@alum.mit.edu
2020-07-31  x86/kaslr: Make local variables 64-bit  (Arvind Sankar)
Change the type of local variables/fields that store mem_vector addresses to u64 to make it less likely that 32-bit overflow will cause issues on 32-bit. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200728225722.67457-21-nivedita@alum.mit.edu
2020-07-31  x86/kaslr: Replace 'unsigned long long' with 'u64'  (Arvind Sankar)
No functional change. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200728225722.67457-20-nivedita@alum.mit.edu
2020-07-31  x86/kaslr: Make minimum/image_size 'unsigned long'  (Arvind Sankar)
Change type of minimum/image_size arguments in process_mem_region to 'unsigned long'. These actually can never be above 4G (even on x86_64), and they're 'unsigned long' in every other function except this one. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200728225722.67457-19-nivedita@alum.mit.edu
2020-07-31  x86/kaslr: Small cleanup of find_random_phys_addr()  (Arvind Sankar)
Just a trivial rearrangement to do all the processing together, and only have one call to slots_fetch_random() in the source. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200728225722.67457-18-nivedita@alum.mit.edu
2020-07-31  x86/kaslr: Drop unnecessary alignment in find_random_virt_addr()  (Arvind Sankar)
Drop unnecessary alignment of image_size to CONFIG_PHYSICAL_ALIGN in find_random_virt_addr, it cannot change the result: the largest valid slot is the largest n that satisfies minimum + n * CONFIG_PHYSICAL_ALIGN + image_size <= KERNEL_IMAGE_SIZE (since minimum is already aligned) and so n is equal to (KERNEL_IMAGE_SIZE - minimum - image_size) / CONFIG_PHYSICAL_ALIGN even if image_size is not aligned to CONFIG_PHYSICAL_ALIGN. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200728225722.67457-17-nivedita@alum.mit.edu
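Written out as code, the slot count the argument relies on is (a sketch using the identifiers above; not necessarily the verbatim find_random_virt_addr() code):

    slots = (KERNEL_IMAGE_SIZE - minimum - image_size) /
            CONFIG_PHYSICAL_ALIGN + 1;

Because minimum is already aligned (and KERNEL_IMAGE_SIZE is a multiple of CONFIG_PHYSICAL_ALIGN), the integer division gives the same result whether or not image_size is first rounded up to CONFIG_PHYSICAL_ALIGN.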
2020-07-31  x86/kaslr: Drop redundant check in store_slot_info()  (Arvind Sankar)
Drop unnecessary check that number of slots is not zero in store_slot_info, it's guaranteed to be at least 1 by the calculation. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200728225722.67457-16-nivedita@alum.mit.edu
2020-07-31  x86/kaslr: Make the type of number of slots/slot areas consistent  (Arvind Sankar)
The number of slots can be 'unsigned int', since on 64-bit, the maximum amount of memory is 2^52, the minimum alignment is 2^21, so the slot number cannot be greater than 2^31. But in case future processors have more than 52 physical address bits, make it 'unsigned long'. The slot areas are limited by MAX_SLOT_AREA, currently 100. It is indexed by an int, but the number of areas is stored as 'unsigned long'. Change both to 'unsigned int' for consistency. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200728225722.67457-15-nivedita@alum.mit.edu
2020-07-31  x86/kaslr: Drop test for command-line parameters before parsing  (Arvind Sankar)
This check doesn't save anything. In the case when none of the parameters are present, each strstr will scan args twice (once to find the length and then for searching), six scans in total. Just going ahead and parsing the arguments only requires three scans: strlen, memcpy, and parsing. This will be the first malloc, so free will actually free up the memory, so the check doesn't save heap space either. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200728225722.67457-14-nivedita@alum.mit.edu
2020-07-31  x86/kaslr: Simplify process_gb_huge_pages()  (Arvind Sankar)
Replace the loop to determine the number of 1Gb pages with arithmetic. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200728225722.67457-13-nivedita@alum.mit.edu
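A hedged sketch of what "with arithmetic" can look like here (variable names are illustrative, not the exact upstream diff):

    unsigned long pud_start, pud_end, nr_gb_pages;

    pud_start = ALIGN(region->start, PUD_SIZE);
    pud_end   = ALIGN_DOWN(region->start + region->size, PUD_SIZE);
    nr_gb_pages = pud_start < pud_end ? (pud_end - pud_start) / PUD_SIZE : 0;

That is, align the start up and the end down to a 1 GB boundary and divide, instead of stepping through the region one gigabyte at a time.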
2020-07-31  x86/kaslr: Short-circuit gb_huge_pages on x86-32  (Arvind Sankar)
32-bit does not have GB pages, so don't bother checking for them. Using the IS_ENABLED() macro allows the compiler to completely remove the gb_huge_pages code. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200728225722.67457-12-nivedita@alum.mit.edu
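A sketch of the pattern being described (close to, but not guaranteed to be, the exact upstream diff):

    static void process_gb_huge_pages(struct mem_vector *region,
                                      unsigned long image_size)
    {
            if (!IS_ENABLED(CONFIG_X86_64) || !max_gb_huge_pages) {
                    store_slot_info(region, image_size);
                    return;
            }
            /* ... GB-hugepage splitting continues only on 64-bit builds ... */
    }

Because IS_ENABLED(CONFIG_X86_64) is a compile-time constant, the compiler can discard the rest of the function body on 32-bit builds.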
2020-07-31  x86/kaslr: Fix off-by-one error in process_gb_huge_pages()  (Arvind Sankar)
If the remaining size of the region is exactly 1Gb, there is still one hugepage that can be reserved. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200728225722.67457-11-nivedita@alum.mit.edu
2020-07-31  x86/kaslr: Drop some redundant checks from __process_mem_region()  (Arvind Sankar)
Clip the start and end of the region to minimum and mem_limit prior to the loop. region.start can only increase during the loop, so raising it to minimum before the loop is enough. A region that becomes empty due to this will get checked in the first iteration of the loop. Drop the check for overlap extending beyond the end of the region. This will get checked in the next loop iteration anyway. Rename end to region_end for symmetry with region.start. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200728225722.67457-10-nivedita@alum.mit.edu
2020-07-31  x86/kaslr: Drop redundant variable in __process_mem_region()  (Arvind Sankar)
region.size can be trimmed to store the portion of the region before the overlap, instead of a separate mem_vector variable. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200728225722.67457-9-nivedita@alum.mit.edu
2020-07-31  x86/kaslr: Eliminate 'start_orig' local variable from __process_mem_region()  (Arvind Sankar)
Set the region.size within the loop, which removes the need for start_orig. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200728225722.67457-8-nivedita@alum.mit.edu
2020-07-31  x86/kaslr: Drop redundant cur_entry from __process_mem_region()  (Arvind Sankar)
cur_entry is only used as cur_entry.start + cur_entry.size, which is always equal to end. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200728225722.67457-7-nivedita@alum.mit.edu
2020-07-31  x86/kaslr: Fix off-by-one error in __process_mem_region()  (Arvind Sankar)
In case of an overlap, the beginning of the region should be used even if it is exactly image_size, not just strictly larger. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200728225722.67457-6-nivedita@alum.mit.edu
2020-07-31  x86/kaslr: Initialize mem_limit to the real maximum address  (Arvind Sankar)
On 64-bit, the kernel must be placed below MAXMEM (64TiB with 4-level paging or 4PiB with 5-level paging). This is currently not enforced by KASLR, which thus implicitly relies on physical memory being limited to less than 64TiB. On 32-bit, the limit is KERNEL_IMAGE_SIZE (512MiB). This is enforced by special checks in __process_mem_region(). Initialize mem_limit to the maximum (depending on architecture), instead of ULLONG_MAX, and make sure the command-line arguments can only decrease it. This makes the enforcement explicit on 64-bit, and eliminates the 32-bit specific checks to keep the kernel below 512M. Check upfront to make sure the minimum address is below the limit before doing any work. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20200727230801.3468620-5-nivedita@alum.mit.edu
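A sketch of the initialization described above (illustrative; the exact placement within kaslr.c may differ):

    /* 64-bit: the kernel must sit below MAXMEM; 32-bit: below KERNEL_IMAGE_SIZE. */
    if (IS_ENABLED(CONFIG_X86_32))
            mem_limit = KERNEL_IMAGE_SIZE;
    else
            mem_limit = MAXMEM;

Command-line options such as mem= and memmap= may then only lower mem_limit, never raise it above the architectural maximum.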
2020-07-31  x86/kaslr: Fix process_efi_entries comment  (Arvind Sankar)
Since commit: 0982adc74673 ("x86/boot/KASLR: Work around firmware bugs by excluding EFI_BOOT_SERVICES_* and EFI_LOADER_* from KASLR's choice") process_efi_entries() will return true if we have an EFI memmap, not just if it contained EFI_MEMORY_MORE_RELIABLE regions. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20200727230801.3468620-4-nivedita@alum.mit.edu
2020-07-31  x86/kaslr: Remove bogus warning and unnecessary goto  (Arvind Sankar)
Drop the warning on seeing "--" in handle_mem_options(). This will trigger whenever one of the memory options is present in the command line together with "--", but there's no problem if that is the case. Replace goto with break. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20200727230801.3468620-3-nivedita@alum.mit.edu
2020-07-31  x86/kaslr: Make command line handling safer  (Arvind Sankar)
Handle the possibility that the command line is NULL. Replace open-coded strlen with a function call. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20200727230801.3468620-2-nivedita@alum.mit.edu
2020-07-31  crypto: x86/curve25519 - Remove unused carry variables  (Herbert Xu)
The carry variables are assigned but never used, which upsets the compiler. This patch removes them. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Karthikeyan Bhargavan <karthik.bhargavan@gmail.com> Acked-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-07-31  KVM: LAPIC: Set the TDCR settable bits  (Wanpeng Li)
APIC_TDCR (the timer divide configuration register) differs slightly between Intel and AMD: on Intel, bit 2 is 0, while on AMD it is reserved. On bare metal, Intel refuses to set APIC_TDCR if any bits other than 0, 1 and 3 are set, whereas AMD accepts bits 0, 1 and 3 and ignores the other bits, which is the behaviour this patch adopts. Before this patch, a guest could read back anything it wrote to APIC_TDCR; this patch improves on that. Signed-off-by: Wanpeng Li <wanpengli@tencent.com> Message-Id: <1596165141-28874-2-git-send-email-wanpengli@tencent.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
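In code, the described behaviour amounts to masking the written value down to the settable bits before storing it; a sketch only, with the mask written out by hand rather than using any particular macro:

    case APIC_TDCR:
            /* Only bits 0, 1 and 3 of the divide configuration register are
             * settable; other bits are ignored rather than stored. */
            kvm_lapic_set_reg(apic, APIC_TDCR, val & 0xb);
            break;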
2020-07-31  KVM: SVM: Fix disable pause loop exit/pause filtering capability on SVM  (Wanpeng Li)
Commit 8566ac8b8e7c ("KVM: SVM: Implement pause loop exit logic in SVM") dropped the disable pause loop exit/pause filtering capability completely; I guess this was a merge fault by Radim, since the disable vmexits capabilities and pause loop exit for SVM patchsets were merged at the same time. This patch reintroduces support for the disable pause loop exit/pause filtering capability. Reported-by: Haiwei Li <lihaiwei@tencent.com> Tested-by: Haiwei Li <lihaiwei@tencent.com> Fixes: 8566ac8b ("KVM: SVM: Implement pause loop exit logic in SVM") Signed-off-by: Wanpeng Li <wanpengli@tencent.com> Message-Id: <1596165141-28874-3-git-send-email-wanpengli@tencent.com> Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-31  KVM: LAPIC: Prevent setting the tscdeadline timer if the lapic is hw disabled  (Wanpeng Li)
Prevent setting the tscdeadline timer if the lapic is hw disabled. Fixes: bce87cce88 (KVM: x86: consolidate different ways to test for in-kernel LAPIC) Cc: <stable@vger.kernel.org> Signed-off-by: Wanpeng Li <wanpengli@tencent.com> Message-Id: <1596165141-28874-1-git-send-email-wanpengli@tencent.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-30  KVM: x86: Specify max TDP level via kvm_configure_mmu()  (Sean Christopherson)
Capture the max TDP level during kvm_configure_mmu() instead of using a kvm_x86_ops hook to do it at every vCPU creation. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200716034122.5998-10-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-30  KVM: x86/mmu: Rename max_page_level to max_huge_page_level  (Sean Christopherson)
Rename max_page_level to explicitly call out that it tracks the max huge page level so as to avoid confusion when a future patch moves the max TDP level, i.e. max root level, into the MMU and kvm_configure_mmu(). Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200716034122.5998-9-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-30  KVM: x86: Dynamically calculate TDP level from max level and MAXPHYADDR  (Sean Christopherson)
Calculate the desired TDP level on the fly using the max TDP level and MAXPHYADDR instead of doing the same when CPUID is updated. This avoids the hidden dependency on cpuid_maxphyaddr() in vmx_get_tdp_level() and also standardizes the "use 5-level paging iff MAXPHYADDR > 48" behavior across x86. Suggested-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200716034122.5998-8-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
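The rule stated above ("use 5-level paging iff MAXPHYADDR > 48") can be sketched as follows; the helper name and body are illustrative, not the exact upstream function:

    static int kvm_mmu_tdp_level(struct kvm_vcpu *vcpu, int max_tdp_level)
    {
            /* 5-level TDP is only needed when the vCPU can address more
             * than 48 bits of physical memory. */
            if (max_tdp_level == 5 && cpuid_maxphyaddr(vcpu) <= 48)
                    return 4;
            return max_tdp_level;
    }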
2020-07-30  KVM: VMX: Remove temporary WARN on expected vs. actual EPTP level mismatch  (Sean Christopherson)
Remove the WARN in vmx_load_mmu_pgd() that was temporarily added to aid bisection/debug in the event the current MMU's shadow root level didn't match VMX's computed EPTP level. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200716034122.5998-7-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-30  KVM: x86: Pull the PGD's level from the MMU instead of recalculating it  (Sean Christopherson)
Use the shadow_root_level from the current MMU as the root level for the PGD, i.e. for VMX's EPTP. This eliminates the weird dependency between VMX and the MMU where both must independently calculate the same root level for things to work correctly. Temporarily keep VMX's calculation of the level and use it to WARN if the incoming level diverges. Opportunistically refactor kvm_mmu_load_pgd() to avoid indentation hell, and rename a 'cr3' param in the load_mmu_pgd prototype that managed to survive the cr3 purge. No functional change intended. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200716034122.5998-6-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-30  KVM: VMX: Make vmx_load_mmu_pgd() static  (Sean Christopherson)
Make vmx_load_mmu_pgd() static as it is no longer invoked directly by nested VMX (or any code for that matter). No functional change intended. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200716034122.5998-5-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-30  KVM: x86/mmu: Add separate helper for shadow NPT root page role calc  (Sean Christopherson)
Refactor the shadow NPT role calculation into a separate helper to better differentiate it from the non-nested shadow MMU, e.g. the NPT variant is never direct and derives its root level from the TDP level. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200716034122.5998-3-sean.j.christopherson@intel.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-30  KVM: VMX: Drop a duplicate declaration of construct_eptp()  (Sean Christopherson)
Remove an extra declaration of construct_eptp() from vmx.h. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200716034122.5998-4-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-30  KVM: nSVM: Correctly set the shadow NPT root level in its MMU role  (Sean Christopherson)
Move the initialization of shadow NPT MMU's shadow_root_level into kvm_init_shadow_npt_mmu() and explicitly set the level in the shadow NPT MMU's role to be the TDP level. This ensures the role and MMU levels are synchronized and also initialized before __kvm_mmu_new_pgd(), which consumes the level when attempting a fast PGD switch. Cc: Vitaly Kuznetsov <vkuznets@redhat.com> Fixes: 9fa72119b24db ("kvm: x86: Introduce kvm_mmu_calc_root_page_role()") Fixes: a506fdd223426 ("KVM: nSVM: implement nested_svm_load_cr3() and use it for host->guest switch") Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200716034122.5998-2-sean.j.christopherson@intel.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Tested-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-30  x86/kvm: Use __xfer_to_guest_mode_work_pending() in kvm_run_vcpu()  (Thomas Gleixner)
The comments explicitly explain that the work flags check and handling in kvm_run_vcpu() is done with preemption and interrupts enabled, as KVM invokes the check again right before entering guest mode with interrupts disabled, which guarantees that the work flags are observed and handled before VMENTER. Nevertheless, the flag pending check in kvm_run_vcpu() uses the helper variant which requires interrupts to be disabled, triggering an instant lockdep splat. This was caught in testing before and then not fixed up in the patch before applying. :( Use the relaxed and intentionally racy __xfer_to_guest_mode_work_pending() instead. Fixes: 72c3c0fe54a3 ("x86/kvm: Use generic xfer to guest work function") Reported-by: Qian Cai <cai@lca.pw> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/87bljxa2sa.fsf@nanos.tec.linutronix.de
2020-07-30  initrd: remove support for multiple floppies  (Christoph Hellwig)
Remove the special handling for multiple floppies in the initrd code. No one should be using floppies for booting these days. (famous last words..) Includes a spelling fix from Colin Ian King <colin.king@canonical.com>. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-07-29  x86/i8259: Use printk_deferred() to prevent deadlock  (Thomas Gleixner)
0day reported a possible circular locking dependency:

  Chain exists of:
    &irq_desc_lock_class --> console_owner --> &port_lock_key

  Possible unsafe locking scenario:

         CPU0                    CPU1
         ----                    ----
    lock(&port_lock_key);
                                 lock(console_owner);
                                 lock(&port_lock_key);
    lock(&irq_desc_lock_class);

The reason for this is a printk() in the i8259 interrupt chip driver which is invoked with the irq descriptor lock held, which reverses the lock operations vs. printk() from arbitrary contexts. Switch the printk() to printk_deferred() to avoid that. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/87365abt2v.fsf@nanos.tec.linutronix.de
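The change itself is a one-liner; assuming the printk() in question is the spurious-interrupt message in the i8259 driver, it looks roughly like:

    -	printk(KERN_DEBUG "spurious 8259A interrupt: IRQ%d.\n", irq);
    +	printk_deferred(KERN_DEBUG "spurious 8259A interrupt: IRQ%d.\n", irq);

printk_deferred() queues the message and emits it later from a safe context, so the console/port locks are never taken while the irq descriptor lock is held.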
2020-07-29  Merge branch 'locking/header'  (Peter Zijlstra)
2020-07-29  locking/atomic: Move ATOMIC_INIT into linux/types.h  (Herbert Xu)
This patch moves ATOMIC_INIT from asm/atomic.h into linux/types.h. This allows users of atomic_t to use ATOMIC_INIT without having to include atomic.h as that way may lead to header loops. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Waiman Long <longman@redhat.com> Link: https://lkml.kernel.org/r/20200729123105.GB7047@gondor.apana.org.au
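As a usage example (the variable is hypothetical), a static atomic_t can now be defined with only <linux/types.h> included:

    #include <linux/types.h>

    static atomic_t nr_active_users = ATOMIC_INIT(0);

Previously this required <asm/atomic.h> (usually via <linux/atomic.h>), which could create header loops when used from low-level headers.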
2020-07-29  Merge branches 'arm/renesas', 'arm/qcom', 'arm/mediatek', 'arm/omap', 'arm/exynos', 'arm/smmu', 'ppc/pamu', 'x86/vt-d', 'x86/amd' and 'core' into next  (Joerg Roedel)
2020-07-28  perf/x86/rapl: Add Hygon Fam18h RAPL support  (Pu Wen)
Hygon Family 18h (Dhyana) supports RAPL in bit 14 of CPUID 0x80000007 EDX, and has the MSRs RAPL_PWR_UNIT/CORE_ENERGY_STAT/PKG_ENERGY_STAT. So add Hygon Dhyana Family 18h support for RAPL. The output is available via the energy-pkg pseudo event: $ perf stat -a -I 1000 --per-socket -e power/energy-pkg/ [ mingo: Tidied up the initializers. ] Signed-off-by: Pu Wen <puwen@hygon.cn> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200720082205.1307-1-puwen@hygon.cn
2020-07-28  Merge tag 'v5.8-rc7' into perf/core, to pick up fixes  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-07-27  x86: switch to ->regset_get()  (Al Viro)
All instances of ->get() in arch/x86 switched; that might or might not be worth splitting up. Notes:

* for xstateregs_get() the amount we want to store is determined at boot time; see init_xstate_size() and update_regset_xstate_info() for details. task->thread.fpu.state.xsave ends with a flexible array member and the amount of data in it depends upon the FPU features supported/enabled.

* fpregs_get() writes slightly less than the full ->thread.fpu.state.fsave (the last word is not copied); we pass the full size of state.fsave and let membuf_write() trim to the amount declared by the regset - __regset_get() will make sure that the space in the buffer is no more than that.

* copy_xstate_to_user() and its helpers are gone now.

* fpregs_soft_get() was getting user_regset_copyout() arguments wrong. Since "x86: x86 user_regset math_emu" back in 2008... I really doubt that it's worth splitting out for -stable, though - you need a 486SX box for that to trigger...

[Kevin's braino fix for copy_xstate_to_kernel() essentially duplicated here]

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-07-27  kill elf_fpxregs_t  (Al Viro)
All uses are conditional upon ELF_CORE_COPY_XFPREGS, which has not been defined on any architecture since 2010. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>