path: root/arch/x86
Age  Commit message  Author
2018-11-19  x86/microcode/AMD: Clean up per-family patch size checks  (Borislav Petkov)
Starting with family 0x15, the patch size verification is not needed anymore. Thus get rid of the need to update this checking function with each new family. Keep the check for older families. Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: x86@kernel.org Link: https://lkml.kernel.org/r/20181107170218.7596-5-bp@alien8.de
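A minimal sketch of the check this leaves behind, with assumed per-family cap values and an illustrative function name (the in-tree verify_patch_size() differs in detail):

    /* Sketch: cap patch sizes on pre-0x15 families, trust the container's
     * size field (bounded by the remaining buffer) on newer ones.
     * The cap values below are assumptions for illustration. */
    #include <stdbool.h>
    #include <stddef.h>

    #define F1XH_MPB_MAX_SIZE 2048
    #define F14H_MPB_MAX_SIZE 1824

    static bool patch_size_ok(unsigned int family, size_t patch_size,
                              size_t leftover)
    {
        if (patch_size > leftover)      /* never read past the container */
            return false;

        if (family >= 0x15)             /* no fixed per-family cap anymore */
            return true;

        switch (family) {
        case 0x14:
            return patch_size <= F14H_MPB_MAX_SIZE;
        default:
            return patch_size <= F1XH_MPB_MAX_SIZE;
        }
    }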
2018-11-19  x86/microcode/AMD: Move verify_patch_size() up in the file  (Borislav Petkov)
... to enable later improvements. No functional changes. Signed-off-by: Borislav Petkov <bp@suse.de> Cc: x86@kernel.org Link: https://lkml.kernel.org/r/20181107170218.7596-4-bp@alien8.de
2018-11-19  x86/microcode/AMD: Add microcode container verification  (Maciej S. Szmigiero)
Add container and patch verification functions to the AMD microcode update driver. These functions check whether a passed buffer contains the relevant structure, whether it isn't truncated and (for actual microcode patches) whether the size of a patch is not too large for a particular CPU family. By adding these checks as separate functions the actual microcode loading code won't get interspersed with a lot of checks and so will be more readable. [ bp: Make all pr_err() calls into pr_debug() and drop the verify_patch() bits. ] Signed-off-by: Maciej S. Szmigiero <mail@maciej.szmigiero.name> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: x86-ml <x86@kernel.org> Link: http://lkml.kernel.org/r/3014e96c82cd90761b4601bd2cfe59c4119e46a7.1529424596.git.mail@maciej.szmigiero.name
2018-11-19  x86/microcode/AMD: Subtract SECTION_HDR_SIZE from file leftover length  (Maciej S. Szmigiero)
verify_patch_size() verifies whether the remaining size of the microcode container file is large enough to contain a patch of the indicated size. However, the section header length is not included in this indicated size but it is present in the leftover file length so it should be subtracted from the leftover file length before passing this value to verify_patch_size(). [ bp: Split comment. ] Signed-off-by: Maciej S. Szmigiero <mail@maciej.szmigiero.name> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: x86-ml <x86@kernel.org> Link: http://lkml.kernel.org/r/6df43f4f6a28186a13a66e8d7e61143c5e1a2324.1529424596.git.mail@maciej.szmigiero.name
2018-11-17  x86/platform/olpc: Do not call of_platform_bus_probe()  (Rob Herring)
The DT core will probe the DT by default now, so the OLPC platform code calling of_platform_bus_probe() is not necessary. The algorithm for what nodes are probed is a little different in how compatible is handled, but since OLPC uses compatible strings for matching it is not affected by this difference. Also, only the battery node located at the root level gets a device created as the dcon is a PCI device and the RTC device is created in olpc-xo1-rtc.c. Signed-off-by: Rob Herring <robh@kernel.org> Signed-off-by: Borislav Petkov <bp@suse.de> CC: "H. Peter Anvin" <hpa@zytor.com> CC: Ingo Molnar <mingo@redhat.com> CC: Lubomir Rintel <lkundrak@v3.sk> Cc: Thomas Gleixner <tglx@linutronix.de> CC: devicetree@vger.kernel.org CC: x86-ml <x86@kernel.org> Link: http://lkml.kernel.org/r/20181116201820.10065-1-robh@kernel.org
2018-11-16  crypto: x86/chacha20 - Add a 4-block AVX2 variant  (Martin Willi)
This variant builds upon the idea of the 2-block AVX2 variant that shuffles words after each round. The shuffling has a rather high latency, so the arithmetic units are not optimally used. Given that we have plenty of registers in AVX, this version parallelizes the 2-block variant to do four blocks. While the first two blocks are shuffling, the CPU can do the XORing on the second two blocks and vice-versa, which makes this version much faster than the SSSE3 variant for four blocks. The latter is now mostly for systems that do not have AVX2, but there it is the work-horse, so we keep it in place. The partial XORing function trailer is very similar to the AVX2 2-block variant. While it could be shared, that code segment is rather short; profiling is also easier with the trailer integrated, so we keep it per function. Signed-off-by: Martin Willi <martin@strongswan.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-16  crypto: x86/chacha20 - Add a 2-block AVX2 variant  (Martin Willi)
This variant uses the same principle as the single block SSSE3 variant by shuffling the state matrix after each round. With the wider AVX registers, we can do two blocks in parallel, though. This function can increase performance and efficiency significantly for lengths that would otherwise require a 4-block function. Signed-off-by: Martin Willi <martin@strongswan.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-16  crypto: x86/chacha20 - Use larger block functions more aggressively  (Martin Willi)
Now that all block functions support partial lengths, engage the wider block sizes more aggressively. This prevents using smaller block functions multiple times, where the next larger block function would have been faster. Signed-off-by: Martin Willi <martin@strongswan.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
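The idea can be pictured with a glue-code sketch (function names assumed, not the kernel's exact routine): once every variant takes a length, a tail of, say, five blocks goes through the 8-block function once instead of a 4-block call plus single-block calls.

    #include <linux/types.h>

    #define CHACHA_BLOCK_SIZE 64

    void chacha_8block_xor(u32 *state, u8 *dst, const u8 *src, unsigned int len);
    void chacha_4block_xor(u32 *state, u8 *dst, const u8 *src, unsigned int len);
    void chacha_block_xor(u32 *state, u8 *dst, const u8 *src, unsigned int len);

    static void chacha_doblocks(u32 *state, u8 *dst, const u8 *src,
                                unsigned int bytes)
    {
        while (bytes >= CHACHA_BLOCK_SIZE * 8) {    /* full 8-block runs */
            chacha_8block_xor(state, dst, src, CHACHA_BLOCK_SIZE * 8);
            bytes -= CHACHA_BLOCK_SIZE * 8;
            src   += CHACHA_BLOCK_SIZE * 8;
            dst   += CHACHA_BLOCK_SIZE * 8;
            state[12] += 8;                         /* ChaCha block counter */
        }
        if (bytes > CHACHA_BLOCK_SIZE * 4)          /* 5..8 blocks: one wide call */
            chacha_8block_xor(state, dst, src, bytes);
        else if (bytes > CHACHA_BLOCK_SIZE)         /* 2..4 blocks */
            chacha_4block_xor(state, dst, src, bytes);
        else if (bytes)                             /* final partial block */
            chacha_block_xor(state, dst, src, bytes);
    }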
2018-11-16  crypto: x86/chacha20 - Support partial lengths in 8-block AVX2 variant  (Martin Willi)
Add a length argument to the eight block function for AVX2, so the block function may XOR only a partial length of eight blocks. To avoid unnecessary operations, we integrate XORing of the first four blocks in the final lane interleaving; this also avoids some work in the partial lengths path. Signed-off-by: Martin Willi <martin@strongswan.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-16  crypto: x86/chacha20 - Support partial lengths in 4-block SSSE3 variant  (Martin Willi)
Add a length argument to the quad block function for SSSE3, so the block function may XOR only a partial length of four blocks. As we already have the stack set up, the partial XORing does not need to. This gives a slightly different function trailer, so we keep that separate from the 1-block function. Signed-off-by: Martin Willi <martin@strongswan.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-16  crypto: x86/chacha20 - Support partial lengths in 1-block SSSE3 variant  (Martin Willi)
Add a length argument to the single block function for SSSE3, so the block function may XOR only a partial length of the full block. Given that the setup code is rather cheap, the function does not process more than one block; this allows us to keep the block function selection in the C glue code. The required branching does not negatively affect performance for full block sizes. The partial XORing uses simple "rep movsb" to copy the data before and after doing XOR in SSE. This is rather efficient on modern processors; movsw can be slightly faster, but the additional complexity is probably not worth it. Signed-off-by: Martin Willi <martin@strongswan.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-14  x86/traps: Complete prototype declarations  (Borislav Petkov)
... with proper variable names. No functional changes. Signed-off-by: Borislav Petkov <bp@suse.de> Cc: <x86@kernel.org> Link: https://lkml.kernel.org/r/20181110141647.GA20073@zn.tnic
2018-11-14  x86/mce: Fix -Wmissing-prototypes warnings  (Borislav Petkov)
Add the proper includes and make smca_get_name() static. Fix an actual bug too which the warning triggered: arch/x86/kernel/cpu/mcheck/therm_throt.c:395:39: error: conflicting \ types for ‘smp_thermal_interrupt’ asmlinkage __visible void __irq_entry smp_thermal_interrupt(struct pt_regs *r) ^~~~~~~~~~~~~~~~~~~~~ In file included from arch/x86/kernel/cpu/mcheck/therm_throt.c:29: ./arch/x86/include/asm/traps.h:107:17: note: previous declaration of \ ‘smp_thermal_interrupt’ was here asmlinkage void smp_thermal_interrupt(void); Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Yi Wang <wang.yi59@zte.com.cn> Cc: Michael Matz <matz@suse.de> Cc: x86@kernel.org Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1811081633160.1549@nanos.tec.linutronix.de
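One way to reconcile the conflicting declarations, matching the pt_regs-taking definition shown in the error output (sketch, not necessarily the exact merged hunk):

    /* declaration kept in sync with the definition in therm_throt.c */
    asmlinkage void smp_thermal_interrupt(struct pt_regs *regs);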
2018-11-13  x86/ima: define arch_ima_get_secureboot  (Nayna Jain)
Distros are concerned about totally disabling the kexec_load syscall. As a compromise, the kexec_load syscall will only be disabled when CONFIG_KEXEC_VERIFY_SIG is configured and the system is booted with secureboot enabled. This patch defines the new arch specific function called arch_ima_get_secureboot() to retrieve the secureboot state of the system. Signed-off-by: Nayna Jain <nayna@linux.ibm.com> Suggested-by: Seth Forshee <seth.forshee@canonical.com> Cc: David Howells <dhowells@redhat.com> Cc: Eric Biederman <ebiederm@xmission.com> Cc: Peter Jones <pjones@redhat.com> Cc: Vivek Goyal <vgoyal@redhat.com> Cc: Dave Young <dyoung@redhat.com> Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
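A plausible x86 implementation (sketch, not necessarily the merged code) reads the secure-boot flag the EFI stub leaves in boot_params:

    #include <linux/efi.h>
    #include <asm/setup.h>          /* boot_params */

    bool arch_ima_get_secureboot(void)
    {
        /* Only meaningful when the kernel was booted via EFI. */
        return efi_enabled(EFI_BOOT) &&
               boot_params.secure_boot == efi_secureboot_mode_enabled;
    }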
2018-11-12  iommu/vtd: Cleanup dma_remapping.h header  (Lu Baolu)
Commit e61d98d8dad00 ("x64, x2apic/intr-remap: Intel vt-d, IOMMU code reorganization") moved dma_remapping.h from drivers/pci/ to current place. It is entirely VT-d specific, but uses a generic name. This merges dma_remapping.h with include/linux/intel-iommu.h and removes dma_remapping.h as the result. Cc: Ashok Raj <ashok.raj@intel.com> Cc: Jacob Pan <jacob.jun.pan@linux.intel.com> Cc: Sohil Mehta <sohil.mehta@intel.com> Suggested-by: Christoph Hellwig <hch@infradead.org> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Liu, Yi L <yi.l.liu@intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2018-11-12  x86/mm/fault: Allow stack access below %rsp  (Waiman Long)
The current x86 page fault handler allows stack access below the stack pointer if it is no more than 64k+256 bytes. Any access beyond the 64k+ limit will cause a segmentation fault. The gcc -fstack-check option generates code to probe the stack for large stack allocation to see if the stack is accessible. The newer gcc does that while updating the %rsp simultaneously. Older gcc's like gcc4 doesn't do that. As a result, an application compiled with an old gcc and the -fstack-check option may fail to start at all: $ cat test.c int main() { char tmp[1024*128]; printf("### ok\n"); return 0; } $ gcc -fstack-check -g -o test test.c $ ./test Segmentation fault The old binary was working in older kernels where expand_stack() was somehow called before the check. But it is not working in newer kernels. Besides, the 64k+ limit check is kind of crude and will not catch a lot of mistakes that userspace applications may be misbehaving anyway. I think the kernel isn't the right place for this kind of tests. We should leave it to userspace instrumentation tools to perform them. The 64k+ limit check is now removed to just let expand_stack() decide if a segmentation fault should happen, when the RLIMIT_STACK limit is exceeded, for example. Signed-off-by: Waiman Long <longman@redhat.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@surriel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1541535149-31963-1-git-send-email-longman@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-11-12  perf/x86/intel/uncore: Support CoffeeLake 8th CBOX  (Kan Liang)
Coffee Lake has 8 core products which has 8 Cboxes. The 8th CBOX is mapped into different MSR space. Increase the num_boxes to 8 to handle the new products. It will not impact the previous platforms, SkyLake, KabyLake and earlier CoffeeLake. Because the num_boxes will be recalculated in uncore_cpu_init and doesn't exceed the x86_max_cores. Introduce a new box flag bit to indicate the 8th CBOX. Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Link: http://lkml.kernel.org/r/20181019170419.378-2-kan.liang@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-11-12  perf/x86/intel/uncore: Add more IMC PCI IDs for KabyLake and CoffeeLake CPUs  (Kan Liang)
KabyLake and CoffeeLake CPUs have the same client uncore events as SkyLake. Add the PCI IDs for the KabyLake Y, U, S processor lines and CoffeeLake U, H, S processor lines. Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Link: http://lkml.kernel.org/r/20181019170419.378-1-kan.liang@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-11-11  Merge branch 'x86-urgent-for-linus' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 fixes from Thomas Gleixner: "A set of x86 fixes: - Cure the LDT remapping to user space on 5 level paging which ended up in the KASLR space - Remove LDT mapping before freeing the LDT pages - Make NFIT MCE handling more robust - Unbreak the VSMP build by removing the dependency on paravirt ops - Support broken PIT emulation on Microsoft hyperV - Don't trace vmware_sched_clock() to avoid tracer recursion - Remove -pipe from KBUILD CFLAGS which breaks clang and is also slower on GCC - Trivial coding style and typo fixes" * 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/cpu/vmware: Do not trace vmware_sched_clock() x86/vsmp: Remove dependency on pv_irq_ops x86/ldt: Remove unused variable in map_ldt_struct() x86/ldt: Unmap PTEs for the slot before freeing LDT pages x86/mm: Move LDT remap out of KASLR region on 5-level paging acpi/nfit, x86/mce: Validate a MCE's address before using it acpi/nfit, x86/mce: Handle only uncorrectable machine checks x86/build: Remove -pipe from KBUILD_CFLAGS x86/hyper-v: Fix indentation in hv_do_fast_hypercall16() Documentation/x86: Fix typo in zero-page.txt x86/hyper-v: Enable PIT shutdown quirk clockevents/drivers/i8253: Add support for PIT shutdown quirk
2018-11-11  Merge branch 'locking-urgent-for-linus' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull locking build fix from Thomas Gleixner: "A single fix for a build fail with CONFIG_PROFILE_ALL_BRANCHES=y in the qspinlock code" * 'locking-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/qspinlock: Fix compile error
2018-11-10  Merge tag 'for-linus-4.20a-rc2-tag' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip Pull xen fixes from Juergen Gross: "Several fixes, mostly for rather recent regressions when running under Xen" * tag 'for-linus-4.20a-rc2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip: xen: remove size limit of privcmd-buf mapping interface xen: fix xen_qlock_wait() x86/xen: fix pv boot xen-blkfront: fix kernel panic with negotiate_mq error path xen/grant-table: Fix incorrect gnttab_dma_free_pages() pr_debug message CONFIG_XEN_PV breaks xen_create_contiguous_region on ARM
2018-11-09  x86/cpu/vmware: Do not trace vmware_sched_clock()  (Steven Rostedt (VMware))
When running function tracing on a Linux guest running on VMware Workstation, the guest would crash. This is due to tracing of the sched_clock internal call of the VMware vmware_sched_clock(), which causes an infinite recursion within the tracing code (clock calls must not be traced). Make vmware_sched_clock() not traced by ftrace. Fixes: 80e9a4f21fd7c ("x86/vmware: Add paravirt sched clock") Reported-by: GwanYeong Kim <gy741.kim@gmail.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Borislav Petkov <bp@suse.de> CC: Alok Kataria <akataria@vmware.com> CC: GwanYeong Kim <gy741.kim@gmail.com> CC: "H. Peter Anvin" <hpa@zytor.com> CC: Ingo Molnar <mingo@kernel.org> Cc: stable@vger.kernel.org CC: Thomas Gleixner <tglx@linutronix.de> CC: virtualization@lists.linux-foundation.org CC: x86-ml <x86@kernel.org> Link: http://lkml.kernel.org/r/20181109152207.4d3e7d70@gandalf.local.home
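The shape of such a fix is simply the notrace annotation; a placeholder sketch (the body here is illustrative, not VMware's scaling code):

    #include <linux/types.h>
    #include <linux/compiler.h>     /* notrace */
    #include <asm/msr.h>            /* rdtsc() */

    /* notrace keeps ftrace from inserting entry/exit hooks here, so the
     * tracer can call the clock without recursing into itself. */
    static u64 notrace example_sched_clock(void)
    {
        return rdtsc();             /* the real code scales this to ns */
    }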
2018-11-09  xen: fix xen_qlock_wait()  (Juergen Gross)
Commit a856531951dc80 ("xen: make xen_qlock_wait() nestable") introduced a regression for Xen guests running fully virtualized (HVM or PVH mode). The Xen hypervisor wouldn't return from the poll hypercall with interrupts disabled in case of an interrupt (for PV guests it does). So instead of disabling interrupts in xen_qlock_wait() use a nesting counter to avoid calling xen_clear_irq_pending() in case xen_qlock_wait() is nested. Fixes: a856531951dc80 ("xen: make xen_qlock_wait() nestable") Cc: stable@vger.kernel.org Reported-by: Sander Eikelenboom <linux@eikelenboom.it> Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Tested-by: Sander Eikelenboom <linux@eikelenboom.it> Signed-off-by: Juergen Gross <jgross@suse.com>
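Condensed sketch of the approach (not the exact upstream code; event helpers are the ones from xen/events.h): a per-CPU nesting count so only the outermost waiter clears the pending event before polling.

    #include <linux/atomic.h>
    #include <linux/percpu.h>
    #include <xen/events.h>

    static DEFINE_PER_CPU(atomic_t, qlock_wait_nest);

    static void qlock_wait_sketch(int irq, u8 *byte, u8 val)
    {
        atomic_t *nest = this_cpu_ptr(&qlock_wait_nest);

        atomic_inc(nest);
        if (atomic_read(nest) == 1 && xen_test_irq_pending(irq))
            xen_clear_irq_pending(irq);     /* outermost call only */
        else if (READ_ONCE(*byte) == val)
            xen_poll_irq(irq);              /* block until kicked */
        atomic_dec(nest);
    }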
2018-11-09  x86/xen: fix pv boot  (Juergen Gross)
Commit 9da3f2b7405440 ("x86/fault: BUG() when uaccess helpers fault on kernel addresses") introduced a regression for booting Xen PV guests. Xen PV guests are using __put_user() and __get_user() for accessing the p2m map (physical to machine frame number map) as accesses might fail in case of not populated areas of the map. With above commit using __put_user() and __get_user() for accessing kernel pages is no longer valid. So replace the Xen hack by adding appropriate p2m access functions using the default fixup handler. Fixes: 9da3f2b7405440 ("x86/fault: BUG() when uaccess helpers fault on kernel addresses") Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Juergen Gross <jgross@suse.com>
2018-11-08  x86/PCI: Replace spin_is_locked() with lockdep  (Lance Roy)
lockdep_assert_held() is better suited to checking locking requirements, since it only checks if the current thread holds the lock regardless of whether someone else does. This is also a step towards possibly removing spin_is_locked(). Signed-off-by: Lance Roy <ldr709@gmail.com> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: <x86@kernel.org> Cc: <linux-pci@vger.kernel.org> Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
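The pattern in a nutshell (the lock here is a stand-in, not necessarily the one touched by the patch):

    #include <linux/lockdep.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(example_lock);

    static void lookup_locked(void)
    {
        /* old: only proves that *some* context holds the lock
         *      WARN_ON(!spin_is_locked(&example_lock));          */

        /* new: proves the *current* context holds it, and compiles
         * away entirely when lockdep is disabled */
        lockdep_assert_held(&example_lock);
    }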
2018-11-08  x86/PCI: Fix Broadcom CNB20LE unintended sign extension (redux)  (Colin Ian King)
In the expression "word1 << 16", word1 starts as u16, but is promoted to a signed int, then sign-extended to resource_size_t, which is probably not what was intended. Cast to resource_size_t to avoid the sign extension. This fixes an identical issue as fixed by commit 0b2d70764bb3 ("x86/PCI: Fix Broadcom CNB20LE unintended sign extension") back in 2014. Detected by CoverityScan, CID#138749, 138750 ("Unintended sign extension") Fixes: 3f6ea84a3035 ("PCI: read memory ranges out of Broadcom CNB20LE host bridge") Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Bjorn Helgaas <helgaas@kernel.org>
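The fix pattern, roughly (word1/word2 stand for the 16-bit register reads; not a verbatim copy of the driver):

    #include <linux/ioport.h>       /* struct resource, resource_size_t */
    #include <linux/types.h>

    static void set_range(struct resource *res, u16 word1, u16 word2)
    {
        /* Without the casts, "word1 << 16" is evaluated as signed int and
         * then sign-extended into resource_size_t whenever bit 15 is set. */
        res->start = ((resource_size_t)word1 << 16) | 0x0000;
        res->end   = ((resource_size_t)word2 << 16) | 0xffff;
    }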
2018-11-07  x86/cpufeatures: Add WBNOINVD feature definition  (Janakarajan Natarajan)
Add a new cpufeature definition for the WBNOINVD instruction. The WBNOINVD instruction writes all modified cache lines in all levels of the cache associated with a processor to main memory while retaining the cached values. Both AMD and Intel support this instruction. Signed-off-by: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com> Signed-off-by: Borislav Petkov <bp@suse.de> CC: David Woodhouse <dwmw@amazon.co.uk> CC: Fenghua Yu <fenghua.yu@intel.com> CC: "H. Peter Anvin" <hpa@zytor.com> CC: Ingo Molnar <mingo@redhat.com> CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> CC: Rudolf Marek <r.marek@assembler.cz> CC: Thomas Gleixner <tglx@linutronix.de> CC: x86-ml <x86@kernel.org> Link: http://lkml.kernel.org/r/1541624211-32196-1-git-send-email-Janakarajan.Natarajan@amd.com
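Such a definition is a one-liner in cpufeatures.h plus a feature test at the call sites; the word/bit position below is an assumption for illustration, so trust the tree over this snippet:

    /* CPUID 0x80000008 EBX is cpufeatures "word 13"; bit number assumed */
    #define X86_FEATURE_WBNOINVD    (13*32 + 9)   /* WBNOINVD instruction */

    static inline bool cpu_has_wbnoinvd(void)
    {
        return boot_cpu_has(X86_FEATURE_WBNOINVD);
    }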
2018-11-07  x86/amd_nb: Add PCI device IDs for family 17h, model 30h  (Woods, Brian)
Add the PCI device IDs for family 17h model 30h, since they are needed for accessing various registers via the data fabric/SMN interface. Signed-off-by: Brian Woods <brian.woods@amd.com> Signed-off-by: Borislav Petkov <bp@suse.de> CC: Bjorn Helgaas <bhelgaas@google.com> CC: Clemens Ladisch <clemens@ladisch.de> CC: Guenter Roeck <linux@roeck-us.net> CC: "H. Peter Anvin" <hpa@zytor.com> CC: Ingo Molnar <mingo@redhat.com> CC: Jean Delvare <jdelvare@suse.com> CC: Jia Zhang <qianyue.zj@alibaba-inc.com> CC: <linux-hwmon@vger.kernel.org> CC: <linux-pci@vger.kernel.org> CC: Pu Wen <puwen@hygon.cn> CC: Thomas Gleixner <tglx@linutronix.de> CC: x86-ml <x86@kernel.org> Link: http://lkml.kernel.org/r/20181106200754.60722-4-brian.woods@amd.com
2018-11-07  x86/amd_nb: Add support for newer PCI topologies  (Woods, Brian)
Add support for new processors which have multiple PCI root complexes per data fabric/system management network interface. If there are (N) multiple PCI roots per DF/SMN interface, then the PCI roots are redundant (as far as SMN/DF access goes). For each DF/SMN interface: map to the first available PCI root and skip the next N-1 PCI roots so the following DF/SMN interface get mapped to a correct PCI root. Ex: DF/SMN 0 -> 60 40 20 00 DF/SMN 1 -> e0 c0 a0 80 Signed-off-by: Brian Woods <brian.woods@amd.com> Signed-off-by: Borislav Petkov <bp@suse.de> CC: Bjorn Helgaas <bhelgaas@google.com> CC: Clemens Ladisch <clemens@ladisch.de> CC: Guenter Roeck <linux@roeck-us.net> CC: "H. Peter Anvin" <hpa@zytor.com> CC: Ingo Molnar <mingo@redhat.com> CC: Jean Delvare <jdelvare@suse.com> CC: Jia Zhang <qianyue.zj@alibaba-inc.com> CC: <linux-hwmon@vger.kernel.org> CC: <linux-pci@vger.kernel.org> CC: Pu Wen <puwen@hygon.cn> CC: Thomas Gleixner <tglx@linutronix.de> CC: x86-ml <x86@kernel.org> Link: http://lkml.kernel.org/r/20181106200754.60722-3-brian.woods@amd.com
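The mapping rule can be sketched like this (names are illustrative; the real code walks the PCI bus rather than indexing an array):

    #include <linux/pci.h>

    /* With N redundant roots per DF/SMN interface, interface i uses the
     * first of its N roots and the remaining N-1 are skipped. */
    static struct pci_dev *root_for_df(struct pci_dev **roots,
                                       unsigned int roots_per_df,
                                       unsigned int df_index)
    {
        /* e.g. roots_per_df == 4: DF 0 -> bus 0x00, DF 1 -> bus 0x80, ... */
        return roots[df_index * roots_per_df];
    }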
2018-11-07  hwmon/k10temp, x86/amd_nb: Consolidate shared device IDs  (Woods, Brian)
Consolidate shared PCI_DEVICE_IDs that were scattered through k10temp and amd_nb, and move them into pci_ids. Signed-off-by: Brian Woods <brian.woods@amd.com> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Guenter Roeck <linux@roeck-us.net> CC: Bjorn Helgaas <bhelgaas@google.com> CC: Clemens Ladisch <clemens@ladisch.de> CC: "H. Peter Anvin" <hpa@zytor.com> CC: Ingo Molnar <mingo@redhat.com> CC: Jean Delvare <jdelvare@suse.com> CC: Jia Zhang <qianyue.zj@alibaba-inc.com> CC: <linux-hwmon@vger.kernel.org> CC: <linux-pci@vger.kernel.org> CC: Pu Wen <puwen@hygon.cn> CC: Thomas Gleixner <tglx@linutronix.de> CC: x86-ml <x86@kernel.org> Link: http://lkml.kernel.org/r/20181106200754.60722-2-brian.woods@amd.com
2018-11-06  x86/tsc: Make calibration refinement more robust  (Daniel Vacek)
The threshold in tsc_read_refs() is constant which may favor slower CPUs but may not be optimal for simple reading of reference on faster ones. Hence make it proportional to tsc_khz when available to compensate for this. The threshold guards against any disturbance like IRQs, NMIs, SMIs or CPU stealing by host on guest systems so rename it accordingly and fix comments as well. Also on some systems there is noticeable DMI bus contention at some point during boot keeping the readout failing (observed with about one in ~300 boots when testing). In that case retry also the second readout instead of simply bailing out unrefined. Usually the next second the readout returns fast just fine without any issues. Signed-off-by: Daniel Vacek <neelx@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Borislav Petkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Link: https://lkml.kernel.org/r/1541437840-29293-1-git-send-email-neelx@redhat.com
2018-11-06  x86/vsmp: Remove dependency on pv_irq_ops  (Eial Czerwacki)
vSMP dependency on pv_irq_ops has been removed some years ago, but the code still deals with pv_irq_ops. In short, "cap & ctl & (1 << 4)" is always returning 0, so all PARAVIRT/PARAVIRT_XXL code related to that can be removed. However, the rest of the code depends on CONFIG_PCI, so fix it accordingly. Rename set_vsmp_pv_ops to set_vsmp_ctl as the original name does not make sense anymore. Signed-off-by: Eial Czerwacki <eial@scalemp.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Shai Fultheim <shai@scalemp.com> Cc: Juergen Gross <jgross@suse.com> Link: https://lkml.kernel.org/r/1541439114-28297-1-git-send-email-eial@scalemp.com
2018-11-06  x86/ldt: Remove unused variable in map_ldt_struct()  (Kirill A. Shutemov)
Splitting out the sanity check in map_ldt_struct() moved page table syncing into a separate function, which made the pgd variable unused. Remove it. [ tglx: Massaged changelog ] Fixes: 9bae3197e15d ("x86/ldt: Split out sanity check in map_ldt_struct()") Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Andy Lutomirski <luto@kernel.org> Cc: bp@alien8.de Cc: hpa@zytor.com Cc: dave.hansen@linux.intel.com Cc: peterz@infradead.org Cc: boris.ostrovsky@oracle.com Cc: jgross@suse.com Cc: bhe@redhat.com Cc: willy@infradead.org Cc: linux-mm@kvack.org Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20181026122856.66224-4-kirill.shutemov@linux.intel.com
2018-11-06  x86/ldt: Unmap PTEs for the slot before freeing LDT pages  (Kirill A. Shutemov)
modify_ldt(2) leaves the old LDT mapped after switching over to the new one. The old LDT gets freed and the pages can be re-used. Leaving the mapping in place can have security implications. The mapping is present in the userspace page tables and Meltdown-like attacks can read these freed and possibly reused pages. It's relatively simple to fix: unmap the old LDT and flush TLB before freeing the old LDT memory. This further allows to avoid flushing the TLB in map_ldt_struct() as the slot is unmapped and flushed by unmap_ldt_struct() or has never been mapped at all. [ tglx: Massaged changelog and removed the needless line breaks ] Fixes: f55f0501cbf6 ("x86/pti: Put the LDT in its own PGD if PTI is on") Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: bp@alien8.de Cc: hpa@zytor.com Cc: dave.hansen@linux.intel.com Cc: luto@kernel.org Cc: peterz@infradead.org Cc: boris.ostrovsky@oracle.com Cc: jgross@suse.com Cc: bhe@redhat.com Cc: willy@infradead.org Cc: linux-mm@kvack.org Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20181026122856.66224-3-kirill.shutemov@linux.intel.com
2018-11-06  x86/mm: Move LDT remap out of KASLR region on 5-level paging  (Kirill A. Shutemov)
On 5-level paging the LDT remap area is placed in the middle of the KASLR randomization region and it can overlap with the direct mapping, the vmalloc or the vmap area. The LDT mapping is per mm, so it cannot be moved into the P4D page table next to the CPU_ENTRY_AREA without complicating PGD table allocation for 5-level paging. The 4 PGD slot gap just before the direct mapping is reserved for hypervisors, so it cannot be used. Move the direct mapping one slot deeper and use the resulting gap for the LDT remap area. The resulting layout is the same for 4 and 5 level paging. [ tglx: Massaged changelog ] Fixes: f55f0501cbf6 ("x86/pti: Put the LDT in its own PGD if PTI is on") Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Andy Lutomirski <luto@kernel.org> Cc: bp@alien8.de Cc: hpa@zytor.com Cc: dave.hansen@linux.intel.com Cc: peterz@infradead.org Cc: boris.ostrovsky@oracle.com Cc: jgross@suse.com Cc: bhe@redhat.com Cc: willy@infradead.org Cc: linux-mm@kvack.org Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20181026122856.66224-2-kirill.shutemov@linux.intel.com
2018-11-06  x86/boot: Simplify the detect_memory*() control flow  (Jordan Borgner)
The return values of these functions are not used - so simplify the functions. No change in functionality. [ mingo: Simplified the changelog. ] Suggested-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Jordan Borgner <mail@jordan-borgner.de> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20181102145622.zjx2t3mdu3rv6sgy@JordanDesktop Signed-off-by: Ingo Molnar <mingo@kernel.org>
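Roughly what the simplified flow looks like once the unused return codes are gone (paraphrased, not a verbatim copy of boot/memory.c):

    void detect_memory(void)
    {
        /* each probe records what it finds in boot_params; nothing to return */
        detect_memory_e820();
        detect_memory_e801();
        detect_memory_88();
    }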
2018-11-06  acpi/nfit, x86/mce: Validate a MCE's address before using it  (Vishal Verma)
The NFIT machine check handler uses the physical address from the mce structure, and compares it against information in the ACPI NFIT table to determine whether that location lies on an NVDIMM. The mce->addr field however may not always be valid, and this is indicated by the MCI_STATUS_ADDRV bit in the status field. Export mce_usable_address() which already performs validation for the address, and use it in the NFIT handler. Fixes: 6839a6d96f4e ("nfit: do an ARS scrub on hitting a latent media error") Reported-by: Robert Elliott <elliott@hpe.com> Signed-off-by: Vishal Verma <vishal.l.verma@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> CC: Arnd Bergmann <arnd@arndb.de> Cc: Dan Williams <dan.j.williams@intel.com> CC: Dave Jiang <dave.jiang@intel.com> CC: elliott@hpe.com CC: "H. Peter Anvin" <hpa@zytor.com> CC: Ingo Molnar <mingo@redhat.com> CC: Len Brown <lenb@kernel.org> CC: linux-acpi@vger.kernel.org CC: linux-edac <linux-edac@vger.kernel.org> CC: linux-nvdimm@lists.01.org CC: Qiuxu Zhuo <qiuxu.zhuo@intel.com> CC: "Rafael J. Wysocki" <rjw@rjwysocki.net> CC: Ross Zwisler <zwisler@kernel.org> CC: stable <stable@vger.kernel.org> CC: Thomas Gleixner <tglx@linutronix.de> CC: Tony Luck <tony.luck@intel.com> CC: x86-ml <x86@kernel.org> CC: Yazen Ghannam <yazen.ghannam@amd.com> Link: http://lkml.kernel.org/r/20181026003729.8420-2-vishal.l.verma@intel.com
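Sketch of the guard described above (the handler shape is illustrative; mce_usable_address() is the helper being exported):

    #include <linux/notifier.h>
    #include <asm/mce.h>

    static int nfit_mce_sketch(struct notifier_block *nb, unsigned long val,
                               void *data)
    {
        struct mce *mce = data;

        /* Only trust mce->addr when the hardware flagged it valid (ADDRV). */
        if (!mce_usable_address(mce))
            return NOTIFY_DONE;

        /* ... compare mce->addr against the NFIT ranges ... */
        return NOTIFY_OK;
    }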
2018-11-06  acpi/nfit, x86/mce: Handle only uncorrectable machine checks  (Vishal Verma)
The MCE handler for nfit devices is called for memory errors on a Non-Volatile DIMM and adds the error location to a 'badblocks' list. This list is used by the various NVDIMM drivers to avoid consuming known poison locations during IO. The MCE handler gets called for both corrected and uncorrectable errors. Until now, both kinds of errors have been added to the badblocks list. However, corrected memory errors indicate that the problem has already been fixed by hardware, and the resulting interrupt is merely a notification to Linux. As far as future accesses to that location are concerned, it is perfectly fine to use, and thus doesn't need to be included in the above badblocks list. Add a check in the nfit MCE handler to filter out corrected mce events, and only process uncorrectable errors. Fixes: 6839a6d96f4e ("nfit: do an ARS scrub on hitting a latent media error") Reported-by: Omar Avelar <omar.avelar@intel.com> Signed-off-by: Vishal Verma <vishal.l.verma@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> CC: Arnd Bergmann <arnd@arndb.de> CC: Dan Williams <dan.j.williams@intel.com> CC: Dave Jiang <dave.jiang@intel.com> CC: elliott@hpe.com CC: "H. Peter Anvin" <hpa@zytor.com> CC: Ingo Molnar <mingo@redhat.com> CC: Len Brown <lenb@kernel.org> CC: linux-acpi@vger.kernel.org CC: linux-edac <linux-edac@vger.kernel.org> CC: linux-nvdimm@lists.01.org CC: Qiuxu Zhuo <qiuxu.zhuo@intel.com> CC: "Rafael J. Wysocki" <rjw@rjwysocki.net> CC: Ross Zwisler <zwisler@kernel.org> CC: stable <stable@vger.kernel.org> CC: Thomas Gleixner <tglx@linutronix.de> CC: Tony Luck <tony.luck@intel.com> CC: x86-ml <x86@kernel.org> CC: Yazen Ghannam <yazen.ghannam@amd.com> Link: http://lkml.kernel.org/r/20181026003729.8420-1-vishal.l.verma@intel.com
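One way to express the filter, testing the uncorrected bit directly (the merged code may use a helper instead):

    #include <linux/types.h>
    #include <asm/mce.h>

    static bool worth_recording(const struct mce *mce)
    {
        /* corrected errors were already fixed by hardware: skip them */
        return mce->status & MCI_STATUS_UC;
    }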
2018-11-05  x86/gart: Rewrite early_gart_iommu_check() comment  (Borislav Petkov)
... to actually explain what the function is trying to do. Reported-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Cc: <x86@kernel.org> Link: http://lkml.kernel.org/r/20181101155314.30690-1-bp@alien8.de
2018-11-05  x86/cpufeatures: Remove get_scattered_cpuid_leaf()  (Sean Christopherson)
get_scattered_cpuid_leaf() was added[1] to help KVM rebuild hardware- defined leafs that are rearranged by Linux to avoid bloating the x86_capability array. Eventually, the last consumer of the function was removed[2], but the function itself was kept, perhaps even intentionally as a form of documentation. Remove get_scattered_cpuid_leaf() as it is currently not used by KVM. Furthermore, simply rebuilding the "real" leaf does not resolve all of KVM's woes when it comes to exposing a scattered CPUID feature, i.e. keeping the function as documentation may be counter-productive in some scenarios, e.g. when KVM needs to do more than simply expose the leaf. [1] 47bdf3378d62 ("x86/cpuid: Provide get_scattered_cpuid_leaf()") [2] b7b27aa011a1 ("KVM/x86: Update the reverse_cpuid list to include CPUID_7_EDX") Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> CC: "H. Peter Anvin" <hpa@zytor.com> CC: Ingo Molnar <mingo@redhat.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> CC: Thomas Gleixner <tglx@linutronix.de> CC: x86-ml <x86@kernel.org> Link: http://lkml.kernel.org/r/20181105185725.18679-1-sean.j.christopherson@intel.com
2018-11-05  x86/build: Remove -pipe from KBUILD_CFLAGS  (Nathan Chancellor)
Commit 77b0bf55bc67 ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs") added -Wa,- to KBUILD_CFLAGS, which breaks compiling with Clang (hangs indefinitely at compiling init/main.o). This happens because while Clang accepts -pipe (and has it documented in its list of supported flags), it silently ignores it after this 2010 commit (thanks to Nick Desaulniers for tracking this down), meaning that gas just infinitely waits for stdin and never receives it. https://github.com/llvm-mirror/clang/commit/c19a12dc3d441bec62eed55e312b76c12d6d9022 Initially, I had suggested just add -Wa,- to KBUILD_CFLAGS when GCC was being used but that was before realizing it is because Clang doesn't do anything with -pipe. H. Peter Anvin suggested checking to see if -pipe gives us any gains out of GCC. Turns out it might actually be hurting: With -pipe: real 3m40.813s real 3m44.449s real 3m39.648s Without -pipe: real 3m38.492s real 3m38.335s real 3m38.975s The issue of -Wa,- being passed along to gas without -pipe being supported should still probably be fixed on the LLVM side (open issue: https://bugs.llvm.org/show_bug.cgi?id=39410) but this is not as much of a workaround anymore since it helps both GCC and Clang. Suggested-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Nadav Amit <namit@vmware.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Kees Cook <keescook@chromium.org> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Link: https://github.com/ClangBuiltLinux/linux/issues/213 Link: https://lkml.kernel.org/r/20181023231125.27976-1-natechancellor@gmail.com
2018-11-05  x86/hyper-v: Fix indentation in hv_do_fast_hypercall16()  (Yi Wang)
Remove the surplus TAB in hv_do_fast_hypercall16(). Signed-off-by: Yi Wang <wang.yi59@zte.com.cn> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: kys@microsoft.com Cc: haiyangz@microsoft.com Cc: sthemmin@microsoft.com Cc: bp@alien8.de Cc: hpa@zytor.com Cc: devel@linuxdriverproject.org Cc: zhong.weidong@zte.com.cn Link: https://lkml.kernel.org/r/1540797451-2792-1-git-send-email-wang.yi59@zte.com.cn
2018-11-05  x86: Use POPCNT mnemonics in arch_hweight.h  (Uros Bizjak)
Recently, the minimum required version of binutils was changed to 2.20, which supports POPCNT instruction mnemonics. Replace the byte-wise specification of POPCNT with those proper mnemonics. [ bp: massage commit message and remove line breaks. ] Signed-off-by: Uros Bizjak <ubizjak@gmail.com> Signed-off-by: Borislav Petkov <bp@suse.de> CC: "H. Peter Anvin" <hpa@zytor.com> CC: Ingo Molnar <mingo@redhat.com> CC: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20181014202354.21281-1-ubizjak@gmail.com
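Illustrative before/after (simplified; the real header wires this through an ALTERNATIVE so CPUs without POPCNT fall back to the software hweight):

    static inline unsigned int hweight32_popcnt(unsigned int w)
    {
        unsigned int res;

        /* old style: hand-encoded because ancient binutils lacked the
         * mnemonic, e.g. ".byte 0xf3,0x0f,0xb8,0xc7" is popcnt %edi,%eax */

        /* new style: binutils >= 2.20 accepts the mnemonic directly */
        asm ("popcntl %1, %0" : "=r" (res) : "r" (w));

        return res;
    }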
2018-11-04  x86/hyper-v: Enable PIT shutdown quirk  (Michael Kelley)
Hyper-V emulation of the PIT has a quirk such that the normal PIT shutdown path doesn't work, because clearing the counter register restarts the timer. Disable the counter clearing on PIT shutdown. Signed-off-by: Michael Kelley <mikelley@microsoft.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: "gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org> Cc: "devel@linuxdriverproject.org" <devel@linuxdriverproject.org> Cc: "daniel.lezcano@linaro.org" <daniel.lezcano@linaro.org> Cc: "virtualization@lists.linux-foundation.org" <virtualization@lists.linux-foundation.org> Cc: "jgross@suse.com" <jgross@suse.com> Cc: "akataria@vmware.com" <akataria@vmware.com> Cc: "olaf@aepfle.de" <olaf@aepfle.de> Cc: "apw@canonical.com" <apw@canonical.com> Cc: vkuznets <vkuznets@redhat.com> Cc: "jasowang@redhat.com" <jasowang@redhat.com> Cc: "marcelo.cerri@canonical.com" <marcelo.cerri@canonical.com> Cc: KY Srinivasan <kys@microsoft.com> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/1541303219-11142-3-git-send-email-mikelley@microsoft.com
2018-11-03  Merge branch 'x86-urgent-for-linus' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 fixes from Ingo Molnar: "A number of fixes and some late updates: - make in_compat_syscall() behavior on x86-32 similar to other platforms, this touches a number of generic files but is not intended to impact non-x86 platforms. - objtool fixes - PAT preemption fix - paravirt fixes/cleanups - cpufeatures updates for new instructions - earlyprintk quirk - make microcode version in sysfs world-readable (it is already world-readable in procfs) - minor cleanups and fixes" * 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: compat: Cleanup in_compat_syscall() callers x86/compat: Adjust in_compat_syscall() to generic code under !COMPAT objtool: Support GCC 9 cold subfunction naming scheme x86/numa_emulation: Fix uniform-split numa emulation x86/paravirt: Remove unused _paravirt_ident_32 x86/mm/pat: Disable preemption around __flush_tlb_all() x86/paravirt: Remove GPL from pv_ops export x86/traps: Use format string with panic() call x86: Clean up 'sizeof x' => 'sizeof(x)' x86/cpufeatures: Enumerate MOVDIR64B instruction x86/cpufeatures: Enumerate MOVDIRI instruction x86/earlyprintk: Add a force option for pciserial device objtool: Support per-function rodata sections x86/microcode: Make revision and processor flags world-readable
2018-11-04  x86/qspinlock: Fix compile error  (Peter Zijlstra)
With a compiler that has asm-goto but not asm-cc-output and CONFIG_PROFILE_ALL_BRANCHES=y we get a compiler error: arch/x86/include/asm/rmwcc.h:23:17: error: jump into statement expression Fix this by writing the if() as a boolean multiplication instead. Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-kernel@vger.kernel.org Fixes: 7aa54be29765 ("locking/qspinlock, x86: Provide liveness guarantee") Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-11-03  Merge branch 'core/urgent' into x86/urgent, to pick up objtool fix  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-11-01  Merge tag 'stackleak-v4.20-rc1' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux Pull stackleak gcc plugin from Kees Cook: "Please pull this new GCC plugin, stackleak, for v4.20-rc1. This plugin was ported from grsecurity by Alexander Popov. It provides efficient stack content poisoning at syscall exit. This creates a defense against at least two classes of flaws: - Uninitialized stack usage. (We continue to work on improving the compiler to do this in other ways: e.g. unconditional zero init was proposed to GCC and Clang, and more plugin work has started too). - Stack content exposure. By greatly reducing the lifetime of valid stack contents, exposures via either direct read bugs or unknown cache side-channels become much more difficult to exploit. This complements the existing buddy and heap poisoning options, but provides the coverage for stacks. The x86 hooks are included in this series (which have been reviewed by Ingo, Dave Hansen, and Thomas Gleixner). The arm64 hooks have already been merged through the arm64 tree (written by Laura Abbott and reviewed by Mark Rutland and Will Deacon). With VLAs having been removed this release, there is no need for alloca() protection, so it has been removed from the plugin" * tag 'stackleak-v4.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux: arm64: Drop unneeded stackleak_check_alloca() stackleak: Allow runtime disabling of kernel stack erasing doc: self-protection: Add information about STACKLEAK feature fs/proc: Show STACKLEAK metrics in the /proc file system lkdtm: Add a test for STACKLEAK gcc-plugins: Add STACKLEAK plugin for tracking the kernel stack x86/entry: Add STACKLEAK erasing the kernel stack at the end of syscalls
2018-11-01  x86/compat: Adjust in_compat_syscall() to generic code under !COMPAT  (Dmitry Safonov)
The result of in_compat_syscall() can be pictured as: x86 platform: --------------------------------------------------- | Arch\syscall | 64-bit | ia32 | x32 | |-------------------------------------------------| | x86_64 | false | true | true | |-------------------------------------------------| | i686 | | <true> | | --------------------------------------------------- Other platforms: ------------------------------------------- | Arch\syscall | 64-bit | compat | |-----------------------------------------| | 64-bit | false | true | |-----------------------------------------| | 32-bit(?) | | <false> | ------------------------------------------- As seen, the result of in_compat_syscall() on generic 32-bit platform differs from i686. There is no reason for in_compat_syscall() == true on native i686. It also easy to misread code if the result on native 32-bit platform differs between arches. Because of that non arch-specific code has many places with: if (IS_ENABLED(CONFIG_COMPAT) && in_compat_syscall()) in different variations. It looks-like the only non-x86 code which uses in_compat_syscall() not under CONFIG_COMPAT guard is in amd/amdkfd. But according to the commit a18069c132cb ("amdkfd: Disable support for 32-bit user processes"), it actually should be disabled on native i686. Rename in_compat_syscall() to in_32bit_syscall() for x86-specific code and make in_compat_syscall() false under !CONFIG_COMPAT. A follow on patch will clean up generic users which were forced to check IS_ENABLED(CONFIG_COMPAT) with in_compat_syscall(). Signed-off-by: Dmitry Safonov <dima@arista.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Andy Lutomirski <luto@kernel.org> Cc: Dmitry Safonov <0x7f454c46@gmail.com> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: "David S. Miller" <davem@davemloft.net> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: John Stultz <john.stultz@linaro.org> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Steffen Klassert <steffen.klassert@secunet.com> Cc: Stephen Boyd <sboyd@kernel.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: linux-efi@vger.kernel.org Cc: netdev@vger.kernel.org Link: https://lkml.kernel.org/r/20181012134253.23266-2-dima@arista.com
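The resulting split, roughly (close to, though not guaranteed to be, the merged definitions):

    /* x86-specific question: "is this an ia32 or x32 syscall?" */
    static inline bool in_32bit_syscall(void)
    {
        return in_ia32_syscall() || in_x32_syscall();
    }

    /* generic fallback: false on kernels built without CONFIG_COMPAT,
     * which now includes native i686 */
    #ifndef CONFIG_COMPAT
    static inline bool in_compat_syscall(void) { return false; }
    #endif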
2018-10-31  Merge branch 'for-linus-4.20-rc1' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/rw/uml Pull UML updates from Richard Weinberger: - removal of old and dead code - a bug fix for our tty driver - other minor cleanups across the code base * 'for-linus-4.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/uml: um: Make line/tty semantics use true write IRQ um: trap: fix spelling mistake, EACCESS -> EACCES um: Don't hardcode path as it is architecture dependent um: NULL check before kfree is not needed um: remove unused AIO code um: Give start_idle_thread() a return code um: Remove update_debugregs() um: Drop own definition of PTRACE_SYSEMU/_SINGLESTEP