path: root/arch/x86

2020-10-30  crypto: hash - Use memzero_explicit() for clearing state  (Arvind Sankar)

Without the barrier_data() inside memzero_explicit(), the compiler may optimize away the state-clearing if it can tell that the state is not used afterwards.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
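
For context, the pattern relies on a data barrier placed after the clear; a minimal sketch of the idea (not necessarily the exact upstream lib/string.c code):

    void memzero_explicit(void *s, size_t count)
    {
            memset(s, 0, count);
            barrier_data(s);   /* keeps the compiler from dropping the clear as a dead store */
    }

Without barrier_data(), a compiler that can prove the buffer is never read again is allowed to elide the memset() entirely, leaving the sensitive state in memory.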

2020-10-30  crypto: x86/aes - remove unused file aes_glue.c  (Eric Biggers)

Commit 1d2c3279311e ("crypto: x86/aes - drop scalar assembler implementations") was meant to remove aes_glue.c, but it actually left it as an unused one-line file. Remove this unused file.

Cc: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-10-29  x86/build: Fix vmlinux size check on 64-bit  (Arvind Sankar)

Commit b4e0409a36f4 ("x86: check vmlinux limits, 64-bit") added a check that the size of the 64-bit kernel is less than KERNEL_IMAGE_SIZE. The check uses (_end - _text), but this is not enough. The initial PMD used in startup_64() (level2_kernel_pgt) can only map up to KERNEL_IMAGE_SIZE from __START_KERNEL_map, not from _text, and the modules area (MODULES_VADDR) starts at KERNEL_IMAGE_SIZE.

The correct check is what is currently done for 32-bit, since LOAD_OFFSET is defined appropriately for the two architectures. Just check (_end - LOAD_OFFSET) against KERNEL_IMAGE_SIZE unconditionally.

Note that on 32-bit, the limit is not strict: KERNEL_IMAGE_SIZE is not really used by the main kernel. The higher the kernel is located, the less space is available for the vmalloc area. However, it is used by KASLR in the compressed stub to limit the maximum address of the kernel to a safe value.

Clean up various comments to clarify that despite the name, KERNEL_IMAGE_SIZE is not a limit on the size of the kernel image, but a limit on the maximum virtual address that the image can occupy.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20201029161903.2553528-1-nivedita@alum.mit.edu

2020-10-29  x86/sev-es: Do not support MMIO to/from encrypted memory  (Joerg Roedel)

MMIO memory is usually not mapped encrypted, so there is no reason to support emulated MMIO when it is mapped encrypted.

Prevent a possible hypervisor attack where a RAM page is mapped as an MMIO page in the nested page-table, so that any guest access to it will trigger a #VC exception and leak the data on that page to the hypervisor via the GHCB (like with valid MMIO). On the read side this attack would allow the HV to inject data into the guest.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lkml.kernel.org/r/20201028164659.27002-6-joro@8bytes.org

2020-10-29  x86/head/64: Check SEV encryption before switching to kernel page-table  (Joerg Roedel)

When SEV is enabled, the kernel requests the C-bit position again from the hypervisor to build its own page-table. Since the hypervisor is an untrusted source, the C-bit position needs to be verified before the kernel page-table is used.

Call sev_verify_cbit() before writing the CR3.

[ bp: Massage. ]
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lkml.kernel.org/r/20201028164659.27002-5-joro@8bytes.org

2020-10-29  x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path  (Joerg Roedel)

Check whether the hypervisor reported the correct C-bit when running as an SEV guest. Using a wrong C-bit position could be used to leak sensitive data from the guest to the hypervisor.

The check function is in a separate file, arch/x86/kernel/sev_verify_cbit.S, so that it can be re-used in the running kernel image.

[ bp: Massage. ]
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lkml.kernel.org/r/20201028164659.27002-4-joro@8bytes.org

2020-10-29  x86/boot/compressed/64: Sanity-check CPUID results in the early #VC handler  (Joerg Roedel)

The early #VC handler which doesn't have a GHCB can only handle CPUID exit codes. It is needed by the early boot code to handle #VC exceptions raised in verify_cpu() and to get the position of the C-bit.

But the CPUID information comes from the hypervisor, which is untrusted and might return results which trick the guest into the no-SEV boot path with no C-bit set in the page-tables. All data written to memory would then be unencrypted and could leak sensitive data to the hypervisor.

Add sanity checks to the early #VC handler to make sure the hypervisor can not pretend that SEV is disabled.

[ bp: Massage a bit. ]
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lkml.kernel.org/r/20201028164659.27002-3-joro@8bytes.org

2020-10-29  x86: Wire up TIF_NOTIFY_SIGNAL  (Jens Axboe)

The generic entry code has support for TIF_NOTIFY_SIGNAL already. Just provide the TIF bit.

[ tglx: Adopted to other TIF changes in x86 ]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201026203230.386348-4-axboe@kernel.dk

2020-10-29  perf/x86/intel: Add event constraint for CYCLE_ACTIVITY.STALLS_MEM_ANY  (Kan Liang)

The event CYCLE_ACTIVITY.STALLS_MEM_ANY (0x14a3) should be available on all 8 GP counters on ICL, but it's only scheduled on the first four counters due to the current ICL constraint table.

Add a line for the CYCLE_ACTIVITY.STALLS_MEM_ANY event in the ICL constraint table. Correct the comments for the CYCLE_ACTIVITY.CYCLES_MEM_ANY event.

Fixes: 6017608936c1 ("perf/x86/intel: Add Icelake support")
Reported-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20201019164529.32154-1-kan.liang@linux.intel.com

2020-10-29  perf/x86/intel/uncore: Add Rocket Lake support  (Kan Liang)

For Rocket Lake, the MSR uncore units, e.g. CBOX, ARB and CLOCKBOX, are the same as on Tiger Lake. Share the perf code with it.

For Rocket Lake and Tiger Lake, the 8th CBOX is not mapped into a different MSR space anymore. Add rkl_uncore_msr_init_box() to replace skl_uncore_msr_init_box().

The IMC uncore is similar to Ice Lake. Add new PCI IDs of IMC for Rocket Lake.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20201019153528.13850-4-kan.liang@linux.intel.com

2020-10-29  perf/x86/msr: Add Rocket Lake CPU support  (Kan Liang)

Like Ice Lake and Tiger Lake, PPERF and SMI_COUNT MSRs are also supported by Rocket Lake.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20201019153528.13850-3-kan.liang@linux.intel.com

2020-10-29  perf/x86/cstate: Add Rocket Lake CPU support  (Kan Liang)

From the perspective of Intel cstate residency counters, Rocket Lake is the same as Ice Lake and Tiger Lake. Share the code with them. Update the comments for Rocket Lake.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20201019153528.13850-2-kan.liang@linux.intel.com

2020-10-29  perf/x86/intel: Add Rocket Lake CPU support  (Kan Liang)

From the perspective of the Intel PMU, Rocket Lake is the same as Ice Lake and Tiger Lake. Share the perf code with them.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20201019153528.13850-1-kan.liang@linux.intel.com

2020-10-29  perf/core: Add support for PERF_SAMPLE_CODE_PAGE_SIZE  (Stephane Eranian)

When studying code layout, it is useful to capture the page size of the sampled code address. Add a new sample type for code page size.

The new sample type requires collecting the ip. The code page size can be calculated from the NMI-safe perf_get_page_size().

For large PEBS, it's very unlikely that the mapping is gone for the earlier PEBS records. Enable the feature for large PEBS. The worst case is that page-size '0' is returned.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20201001135749.2804-5-kan.liang@linux.intel.com

2020-10-29  perf/x86/intel: Support PERF_SAMPLE_DATA_PAGE_SIZE  (Kan Liang)

The new sample type, PERF_SAMPLE_DATA_PAGE_SIZE, requires the virtual address. Update data->addr if the sample type is set.

Large PEBS is disabled with this sample type because perf doesn't support munmap tracking yet: the PEBS buffer for large PEBS cannot be flushed for each munmap, so a wrong page size may be calculated. Large PEBS can be enabled separately later, once munmap tracking is supported.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20201001135749.2804-3-kan.liang@linux.intel.com

2020-10-29  x86/boot/compressed/64: Introduce sev_status  (Joerg Roedel)

Introduce sev_status and initialize it together with sme_me_mask to have an indicator of which SEV features are enabled.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lkml.kernel.org/r/20201028164659.27002-2-joro@8bytes.org
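
A rough sketch of the idea (simplified, and the helper name is illustrative; the actual boot code reads the MSR in its own early path): read the SEV status MSR only when CPUID advertises SEV, and cache the result next to sme_me_mask so later code can test feature bits.

    /* MSR and bit names/values as in asm/msr-index.h. */
    #define MSR_AMD64_SEV                   0xc0010131
    #define MSR_AMD64_SEV_ENABLED           BIT_ULL(0)
    #define MSR_AMD64_SEV_ES_ENABLED        BIT_ULL(1)

    u64 sev_status;

    static void sev_status_init(void)           /* illustrative helper name */
    {
            /* CPUID 0x8000001f EAX bit 1 advertises SEV support */
            if (cpuid_eax(0x8000001f) & BIT(1))
                    sev_status = __rdmsr(MSR_AMD64_SEV);
    }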

2020-10-29  entry: Add support for TIF_NOTIFY_SIGNAL  (Jens Axboe)

Add TIF_NOTIFY_SIGNAL handling in the generic entry code: when the flag is set, signal_pending() returns true in wait loops, which causes the loop to exit so that notify-signal tracehooks can be run. If the wait loop is currently inside a system call, the system call is restarted once task_work has been processed.

In preparation for only having arch_do_signal() handle syscall restarts if _TIF_SIGPENDING isn't set, rename it to arch_do_signal_or_restart(). Pass in a boolean that tells the architecture-specific signal handler if it should attempt to get a signal, or just process a potential syscall restart.

For !CONFIG_GENERIC_ENTRY archs, add the TIF_NOTIFY_SIGNAL handling to get_signal(). This is done to minimize the needed architecture changes to support this feature.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Link: https://lore.kernel.org/r/20201026203230.386348-3-axboe@kernel.dk
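
In rough terms, the exit-to-user signal work ends up looking something like the sketch below (simplified; not the literal kernel/entry/common.c code, but grounded in the description above):

    static void handle_signal_work(struct pt_regs *regs, unsigned long ti_work)
    {
            if (ti_work & _TIF_NOTIFY_SIGNAL)
                    tracehook_notify_signal();      /* clear the flag and run queued task_work */

            /* The boolean tells the arch handler whether a real signal may be pending,
             * or whether it should only deal with a potential syscall restart. */
            arch_do_signal_or_restart(regs, ti_work & _TIF_SIGPENDING);
    }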

2020-10-28  x86/kvm: Enable 15-bit extension when KVM_FEATURE_MSI_EXT_DEST_ID detected  (David Woodhouse)

This allows the host to indicate that MSI emulation supports 15-bit destination IDs, allowing up to 32768 CPUs without interrupt remapping.

cf. https://patchwork.kernel.org/patch/11816693/ for qemu

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Link: https://lore.kernel.org/r/20201024213535.443185-36-dwmw2@infradead.org

2020-10-28  x86/apic: Support 15 bits of APIC ID in MSI where available  (David Woodhouse)

Some hypervisors can allow the guest to use the Extended Destination ID field in the MSI address to address up to 32768 CPUs.

This applies to all downstream devices which generate MSI cycles, including HPET, I/O-APIC and PCI MSI. HPET and PCI MSI use the same __irq_msi_compose_msg() function, while the I/O-APIC generates its own and had support for the extended bits added in a previous commit.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201024213535.443185-33-dwmw2@infradead.org

2020-10-28  x86/ioapic: Handle Extended Destination ID field in RTE  (David Woodhouse)

Bits 63-48 of the I/OAPIC Redirection Table Entry map directly to bits 19-4 of the address used in the resulting MSI cycle.

Historically, the x86 MSI format only used the top 8 of those 16 bits as the destination APIC ID, and the "Extended Destination ID" in the lower 8 bits was unused.

With interrupt remapping, the lowest bit of the Extended Destination ID (bit 48 of the RTE, bit 4 of the MSI address) is now used to indicate a remappable-format MSI.

A hypervisor can use the other 7 bits of the Extended Destination ID to permit guests to address up to 15 bits of APIC IDs, thus allowing 32768 vCPUs before having to expose a vIOMMU and interrupt remapping to the guest.

No behavioural change in this patch, since nothing yet permits APIC IDs above 255 to be used with the non-IR I/OAPIC domain.

[ tglx: Converted it to the cleaned up entry/msi_msg format and added commentary ]
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201024213535.443185-32-dwmw2@infradead.org
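
As an illustration of the layout described above, a hypothetical helper (compat MSI format only, no interrupt remapping) packing a 15-bit APIC ID into the low 32 bits of an MSI address could look like this:

    /* Bits 19-12 carry APIC ID bits 0-7 as before; the upper seven Extended
     * Destination ID bits (address bits 11-5) carry APIC ID bits 8-14.
     * Address bit 4 stays clear - a set bit 4 marks the remappable (IR) format. */
    static u32 compose_msi_addr_lo(u32 apicid)
    {
            return 0xfee00000u |                    /* MSI base address */
                   ((apicid & 0xff) << 12) |        /* classic 8-bit destination ID */
                   (((apicid >> 8) & 0x7f) << 5);   /* extended destination ID bits */
    }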

2020-10-28  x86: Kill all traces of irq_remapping_get_irq_domain()  (David Woodhouse)

All users are converted to use the fwspec-based parent domain lookup.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201024213535.443185-30-dwmw2@infradead.org

2020-10-28  x86/ioapic: Use irq_find_matching_fwspec() to find remapping irqdomain  (David Woodhouse)

All possible parent domains have a select method now. Make use of it.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201024213535.443185-29-dwmw2@infradead.org

2020-10-28  x86/hpet: Use irq_find_matching_fwspec() to find remapping irqdomain  (David Woodhouse)

All possible parent domains have a select method now. Make use of it.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201024213535.443185-28-dwmw2@infradead.org

2020-10-28  x86/apic: Add select() method on vector irqdomain  (David Woodhouse)

This will be used to select the irqdomain for the I/O-APIC and HPET.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201024213535.443185-24-dwmw2@infradead.org

2020-10-28  x86/ioapic: Generate RTE directly from parent irqchip's MSI message  (David Woodhouse)

The I/O-APIC generates an MSI cycle with address/data bits taken from its Redirection Table Entry in some combination which used to make sense, but now is just a bunch of bits which get passed through in some seemingly arbitrary order.

Instead of making IRQ remapping drivers directly frob the I/O-APIC RTE, let them just do their job and generate an MSI message. The bit swizzling to turn that MSI message into the I/O-APIC's RTE is the same in all cases, since it's a function of the I/O-APIC hardware. The IRQ remappers have no real need to get involved with that.

The only slight caveat is that the I/O-APIC is interpreting some of those fields too, and it does want the 'vector' field to be unique to make EOI work. The AMD IOMMU happens to put its IRTE index in the bits that the I/O-APIC thinks are the vector field, and accommodates this requirement by reserving the first 32 indices for the I/O-APIC. The Intel IOMMU doesn't actually use the bits that the I/O-APIC thinks are the vector field, so it fills in the 'pin' value there instead.

[ tglx: Replaced the unreadable macro maze with the cleaned up RTE/msi_msg bitfields and added commentary to explain the mapping magic ]
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201024213535.443185-22-dwmw2@infradead.org

2020-10-28  x86/ioapic: Cleanup IO/APIC route entry structs  (Thomas Gleixner)

Having two separate structs for the I/O-APIC RTE entries (non-remapped and DMAR remapped) requires type casts and makes it hard to map.

Combine them in IO_APIC_routing_entry by defining a union of two 64bit bitfields. Use naming which reflects which bits are shared and which bits are actually different for the operating modes.

[dwmw2: Fix it up and finish the job, pulling the 32-bit w1,w2 words for register access into the same union and eliminating a few more places where bits were accessed through masks and shifts.]
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201024213535.443185-21-dwmw2@infradead.org

2020-10-28  x86/io_apic: Cleanup trigger/polarity helpers  (Thomas Gleixner)

'trigger' and 'polarity' are used throughout the I/O-APIC code for handling the trigger type (edge/level) and the active low/high configuration. While there are defines for initializing these variables and struct members, they are not used consistently and the meaning of 'trigger' and 'polarity' is opaque and confusing at best.

Rename them to 'is_level' and 'active_low' and make them boolean in various structs so it's entirely clear what the meaning is.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201024213535.443185-20-dwmw2@infradead.org

2020-10-28  x86/msi: Remove msidef.h  (Thomas Gleixner)

Nothing uses the macro maze anymore.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201024213535.443185-19-dwmw2@infradead.org

2020-10-28  x86/pci/xen: Use msi_msg shadow structs  (Thomas Gleixner)

Use the msi_msg shadow structs and compose the message with named bitfields instead of the unreadable macro maze.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201024213535.443185-18-dwmw2@infradead.org

2020-10-28  x86/kvm: Use msi_msg shadow structs  (Thomas Gleixner)

Use the bitfields in the x86 shadow structs instead of decomposing the 32bit value with macros.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201024213535.443185-17-dwmw2@infradead.org

2020-10-28  x86/msi: Provide msi message shadow structs  (Thomas Gleixner)

Create shadow structs with named bitfields for msi_msg data, address_lo and address_hi and use them in the MSI message composer.

Provide a function to retrieve the destination ID. This could be inline, but that'd create a circular header dependency.

[dwmw2: fix bitfields not all to be a union]
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201024213535.443185-13-dwmw2@infradead.org
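
The shape of such a shadow struct, sketched here with illustrative field names (the upstream names and exact layout may differ), replaces open-coded mask/shift macros with self-describing bitfields over the 32-bit address_lo word:

    /* Illustrative sketch only - bit widths follow the compat MSI address layout. */
    union msi_addr_lo_sketch {
            u32 raw;
            struct {
                    u32 reserved_0          :  2,
                        dest_mode_logical   :  1,   /* 0 = physical, 1 = logical */
                        redirect_hint       :  1,
                        reserved_1          :  1,
                        virt_destid_8_14    :  7,   /* extended destination ID */
                        destid_0_7          :  8,   /* classic 8-bit destination */
                        base_address        : 12;   /* 0xfee */
            };
    };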

2020-10-28  x86/hpet: Move MSI support into hpet.c  (David Woodhouse)

This isn't really dependent on PCI MSI; it's just generic MSI which is now supported by the generic x86_vector_domain. Move the HPET MSI support back into hpet.c with the rest of the HPET support.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201024213535.443185-11-dwmw2@infradead.org

2020-10-28  x86/apic: Always provide irq_compose_msi_msg() method for vector domain  (David Woodhouse)

This shouldn't be dependent on PCI_MSI. HPET and the I/O-APIC can deliver interrupts through MSI without having any PCI in the system at all.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201024213535.443185-10-dwmw2@infradead.org

2020-10-28  x86/apic: Cleanup destination mode  (Thomas Gleixner)

apic::irq_dest_mode is actually a boolean, but defined as u32 and named in a way which does not explain what it means.

Make it a boolean and rename it to 'dest_mode_logical'.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201024213535.443185-9-dwmw2@infradead.org

2020-10-28  x86/apic: Get rid of apic::dest_logical  (Thomas Gleixner)

struct apic has two members which store information about the destination mode: dest_logical and irq_dest_mode.

dest_logical contains a mask which was historically used to set the destination mode in IPI messages. Over time the usage was reduced and the logical/physical functions were separated. There are only a few places which still use 'dest_logical', but they can use 'irq_dest_mode' instead.

irq_dest_mode is actually a boolean where 0 means physical destination mode and 1 means logical destination mode. Of course the name does not reflect the functionality. This will be cleaned up in a subsequent change.

Remove apic::dest_logical and fix up the remaining users.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201024213535.443185-8-dwmw2@infradead.org

2020-10-28  x86/apic: Replace pointless apic::dest_logical usage  (Thomas Gleixner)

All these functions are only used for logical destination mode. So reading the destination mode mask from the apic structure is a pointless exercise. Just hand in the proper constant: APIC_DEST_LOGICAL.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201024213535.443185-7-dwmw2@infradead.org

2020-10-28  x86/apic: Cleanup delivery mode defines  (Thomas Gleixner)

The enum ioapic_irq_destination_types and the enumerated constants starting with 'dest_' are gross misnomers because they describe the delivery mode.

Rename the enum and the constants so they actually make sense.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201024213535.443185-6-dwmw2@infradead.org

2020-10-28  x86/devicetree: Fix the ioapic interrupt type table  (Thomas Gleixner)

The ioapic interrupt type table is wrong, as it assumes that polarity in IO/APIC context means active high when set. But the IO/APIC polarity works the other way round. This happens to work because the ordering of the entries is consistent with the device tree and the type information is not used by the IO/APIC interrupt chip.

The whole trigger and polarity business of the IO/APIC is misleading, and the corresponding constants, which are defined as 0/1, are not used consistently and are going to be removed.

Rename the type table members to 'is_level' and 'active_low' and adjust the type information for consistency's sake. No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201024213535.443185-5-dwmw2@infradead.org

2020-10-28  x86/apic/uv: Fix inconsistent destination mode  (Thomas Gleixner)

The UV x2apic is strictly using physical destination mode, but apic::dest_logical is initialized with APIC_DEST_LOGICAL.

This does not matter much because UV does not use any of the generic functions which use apic::dest_logical, but it is still inconsistent.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201024213535.443185-4-dwmw2@infradead.org

2020-10-28  x86/msi: Only use high bits of MSI address for DMAR unit  (David Woodhouse)

The Intel IOMMU has an MSI-like configuration for its interrupt, but it isn't really MSI. So it gets to abuse the high 32 bits of the address, and puts the high 24 bits of the extended APIC ID there.

This isn't something that can be used in the general case for real MSIs, since external devices using the high bits of the address would be performing writes to actual memory space above 4GiB, not targeted at the APIC.

Factor the hack out and allow it only to be used when appropriate, adding a WARN_ON_ONCE() if other MSIs are targeted at an unreachable APIC ID. That should never happen since the compatibility MSI messages are not used when Interrupt Remapping is enabled.

The x2apic_enabled() check isn't needed because Linux won't bring up CPUs with higher APIC IDs unless IR and x2apic are enabled anyway.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201024213535.443185-3-dwmw2@infradead.org

2020-10-28  x86/apic: Fix x2apic enablement without interrupt remapping  (David Woodhouse)

Currently, Linux as a hypervisor guest will enable x2apic only if there are no CPUs present at boot time with an APIC ID above 255. Hotplugging a CPU later with a higher APIC ID would result in a CPU which cannot be targeted by external interrupts.

Add a filter in x2apic_apic_id_valid() which can be used to prevent such CPUs from coming online, and allow x2apic to be enabled even if they are present at boot time.

Fixes: ce69a784504 ("x86/apic: Enable x2APIC without interrupt remapping under KVM")
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201024213535.443185-2-dwmw2@infradead.org

2020-10-28  x86/kvm: Reserve KVM_FEATURE_MSI_EXT_DEST_ID  (David Woodhouse)

No functional change; just reserve the feature bit for now so that VMMs can start to implement it.

This will allow the host to indicate that MSI emulation supports 15-bit destination IDs, allowing up to 32768 CPUs without interrupt remapping.

cf. https://patchwork.kernel.org/patch/11816693/ for qemu

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <4cd59bed05f4b7410d3d1ffd1e997ab53683874d.camel@infradead.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2020-10-28  x86/setup: Remove unused MCA variables  (Borislav Petkov)

Commit bb8187d35f82 ("MCA: delete all remaining traces of microchannel bus support.") removed the remaining traces of Micro Channel Architecture support, but one trace remained: three variables in setup.c which have been unused since 2012 at least. Drop them finally.

Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20201021165614.23023-1-bp@alien8.de

2020-10-28  x86/mm/ident_map: Check for errors from ident_pud_init()  (Arvind Sankar)

Commit ea3b5e60ce80 ("x86/mm/ident_map: Add 5-level paging support") added ident_p4d_init() to support 5-level paging, but this function doesn't check and return errors from ident_pud_init().

For example, the decompressor stub uses this code to create an identity mapping. If it runs out of pages while trying to allocate a PMD pagetable, the error is currently ignored.

Fix this to propagate errors.

[ bp: Space out statements for better readability. ]
Fixes: ea3b5e60ce80 ("x86/mm/ident_map: Add 5-level paging support")
Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Joerg Roedel <jroedel@suse.de>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Link: https://lkml.kernel.org/r/20201027230648.1885111-1-nivedita@alum.mit.edu
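
The core of the fix, sketched (simplified; see the actual patch for the full context inside ident_p4d_init()'s mapping loop):

    /* Propagate the error instead of discarding it. */
    result = ident_pud_init(info, pud, addr, next);
    if (result)
            return result;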

2020-10-27  x86/debug: Fix DR_STEP vs ptrace_get_debugreg(6)  (Peter Zijlstra)

Commit d53d9bc0cf78 ("x86/debug: Change thread.debugreg6 to thread.virtual_dr6") changed the semantics of the variable from a random collection of bits to exactly only those bits that ptrace() needs.

Unfortunately this lost DR_STEP for PTRACE_{BLOCK,SINGLE}STEP.

Furthermore, it turns out that userspace expects DR_STEP to be unconditionally available, even for manual TF usage outside of PTRACE_{BLOCK,SINGLE}_STEP.

Fixes: d53d9bc0cf78 ("x86/debug: Change thread.debugreg6 to thread.virtual_dr6")
Reported-by: Kyle Huey <me@kylehuey.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Kyle Huey <me@kylehuey.com>
Link: https://lore.kernel.org/r/20201027183330.GM2628@hirez.programming.kicks-ass.net

2020-10-27  x86/debug: Only clear/set ->virtual_dr6 for userspace #DB  (Peter Zijlstra)

The ->virtual_dr6 is the value used by ptrace_{get,set}_debugreg(6). A kernel #DB clearing it could mean spurious malfunction of ptrace() expectations.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Kyle Huey <me@kylehuey.com>
Link: https://lore.kernel.org/r/20201027093608.028952500@infradead.org

2020-10-27  x86/debug: Fix BTF handling  (Peter Zijlstra)

The SDM states that #DB clears DEBUGCTLMSR_BTF; this means that when the bit is set for userspace (TIF_BLOCKSTEP) and a kernel #DB happens first, the BTF bit meant for userspace execution is lost.

Have the kernel #DB handler restore the BTF bit when it was requested for userspace.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Kyle Huey <me@kylehuey.com>
Link: https://lore.kernel.org/r/20201027093607.956147736@infradead.org
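
The gist of the fix, as a simplified sketch of what the kernel #DB path has to do (not the verbatim handler code):

    /* If userspace asked for block-stepping, put DEBUGCTLMSR_BTF back,
     * since delivery of the #DB itself cleared it. */
    if (test_thread_flag(TIF_BLOCKSTEP)) {
            unsigned long debugctl;

            rdmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
            debugctl |= DEBUGCTLMSR_BTF;
            wrmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
    }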

2020-10-27  Merge tag 'x86-urgent-2020-10-27' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)

Pull x86 fixes from Thomas Gleixner:
"A couple of x86 fixes which missed rc1 due to my stupidity:

 - Drop lazy TLB mode before switching to the temporary address space for text patching. text_poke() switches to the temporary mm which clears the lazy mode and restores the original mm afterwards. Due to clearing lazy mode this might restore an already dead mm if exit_mmap() runs in parallel on another CPU.

 - Document the x32 syscall design fail vs. syscall numbers 512-547 properly.

 - Fix the ORC unwinder to handle the inactive task frame correctly. This was unearthed due to the slightly different code generation of gcc-10.

 - Use an up to date screen_info for the boot params of kexec instead of the possibly stale and invalid version which happened to be valid when the kexec kernel was loaded"

* tag 'x86-urgent-2020-10-27' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/alternative: Don't call text_poke() in lazy TLB mode
  x86/syscalls: Document the fact that syscalls 512-547 are a legacy mistake
  x86/unwind/orc: Fix inactive tasks with stack pointer in %sp on GCC 10 compiled kernels
  hyperv_fb: Update screen_info after removing old framebuffer
  x86/kexec: Use up-to-dated screen_info copy to fill boot params

2020-10-27  x86/resctrl: Correct MBM total and local values  (Fenghua Yu)

Intel Memory Bandwidth Monitoring (MBM) counters may report system memory bandwidth incorrectly on some Intel processors. The errata SKX99 for Skylake server, BDF102 for Broadwell server, and the correction factor table are documented in Documentation/x86/resctrl.rst.

Intel MBM counters track metrics according to the assigned Resource Monitor ID (RMID) for that logical core. The IA32_QM_CTR register (MSR 0xC8E), used to report these metrics, may report incorrect system bandwidth for certain RMID values. Due to the errata, system memory bandwidth may not match what is reported.

To work around the errata, correct MBM total and local readings using a correction factor table. If rmid > rmid threshold, MBM total and local values should be multiplied by the correction factor.

[ bp: Mark mbm_cf_table[] __initdata. ]
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/20201014004927.1839452-3-fenghua.yu@intel.com
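
The workaround amounts to a fixed-point multiply on the raw counter value; a hedged sketch with illustrative names (the threshold, factor and fixed-point scale are assumptions here, taken per CPU model from the table described in Documentation/x86/resctrl.rst):

    /* Illustrative only - names and the fixed-point shift are assumptions. */
    static u32 mbm_cf_rmidthreshold;    /* per-model RMID threshold */
    static u64 mbm_cf;                  /* per-model correction factor, fixed point */
    #define MBM_CF_SHIFT 20             /* assumed fixed-point scale */

    static u64 mbm_apply_correction(u64 chunks, u32 rmid)
    {
            if (rmid > mbm_cf_rmidthreshold)
                    chunks = (chunks * mbm_cf) >> MBM_CF_SHIFT;
            return chunks;
    }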

2020-10-27  x86: use asm-generic/mmu_context.h for no-op implementations  (Nicholas Piggin)

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: x86@kernel.org
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>