path: root/arch/x86
2020-07-27x86/cpufeatures: Add enumeration for SERIALIZE instructionRicardo Neri
The Intel architecture defines a set of Serializing Instructions (a detailed definition can be found in Vol.3 Section 8.3 of the Intel "main" manual, SDM). However, these instructions do more than what is required, have side effects and/or may be rather invasive. Furthermore, some of these instructions are only available in kernel mode or may cause VMExits. Thus, software using these instructions only to serialize execution (as defined in the manual) must handle the undesired side effects. As indicated in the name, SERIALIZE is a new Intel architecture Serializing Instruction. Crucially, it does not have any of the mentioned side effects. Also, it does not cause VMExit and can be used in user mode. This new instruction is currently documented in the latest "extensions" manual (ISE). It will appear in the "main" manual in the future. Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Tony Luck <tony.luck@intel.com> Acked-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/r/20200727043132.15082-2-ricardo.neri-calderon@linux.intel.com
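For context, this commit only adds the CPUID enumeration, but a kernel wrapper for the instruction itself might look like the sketch below, assuming the 0F 01 E8 opcode bytes documented in the ISE manual (the helper name and the use of raw opcode bytes are illustrative and not part of this patch):

    /* Sketch: emit SERIALIZE via raw opcode bytes so assemblers that do
     * not yet know the mnemonic can still build the code. */
    static __always_inline void serialize(void)
    {
            /* 0F 01 E8 is the SERIALIZE encoding (Intel ISE manual). */
            asm volatile(".byte 0x0f, 0x01, 0xe8" ::: "memory");
    }

Since the instruction has no architectural side effects, does not cause VMExits and is usable in user mode, such a wrapper could replace heavier serializing sequences (e.g. a dummy CPUID) where only serialization is wanted.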
2020-07-27Merge tag 'v5.8-rc7' into x86/cpu, to pick up fixesIngo Molnar
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-07-27Merge back cpufreq material for v5.9.Rafael J. Wysocki
2020-07-27x86/mm/64: Make sync_global_pgds() staticJoerg Roedel
The function is only called from within init_64.c and can be static. Also remove it from pgtable_64.h. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Mike Rapoport <rppt@linux.ibm.com> Link: https://lore.kernel.org/r/20200721095953.6218-4-joro@8bytes.org
2020-07-27x86/mm/64: Do not sync vmalloc/ioremap mappingsJoerg Roedel
Remove the code to sync the vmalloc and ioremap ranges for x86-64. The page-table pages are all pre-allocated now so that synchronization is no longer necessary. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Mike Rapoport <rppt@linux.ibm.com> Link: https://lore.kernel.org/r/20200721095953.6218-3-joro@8bytes.org
2020-07-27x86/mm: Pre-allocate P4D/PUD pages for vmalloc areaJoerg Roedel
Pre-allocate the page-table pages for the vmalloc area at the level which needs synchronization on x86-64, which is P4D for 5-level and PUD for 4-level paging. Doing this at boot makes sure no synchronization of that area is necessary at runtime. The synchronization takes the pgd_lock and iterates over all page-tables in the system, so it can take quite long and is better avoided. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Mike Rapoport <rppt@linux.ibm.com> Link: https://lore.kernel.org/r/20200721095953.6218-2-joro@8bytes.org
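The shape of such a boot-time pre-allocation is roughly the following, under the assumption that it walks the PGD entries covering the vmalloc range and populates the level below (a sketch with illustrative names, not the verbatim patch):

    /* Sketch: make every PGD entry covering the vmalloc area point to an
     * allocated lower-level table at boot, so vmalloc faults never have to
     * propagate new top-level entries to other page tables at runtime. */
    static void __init preallocate_vmalloc_pages(void)
    {
            unsigned long addr;

            for (addr = VMALLOC_START; addr <= VMALLOC_END;
                 addr = ALIGN(addr + 1, PGDIR_SIZE)) {
                    pgd_t *pgd = pgd_offset_k(addr);
                    p4d_t *p4d;
                    pud_t *pud;

                    p4d = p4d_alloc(&init_mm, pgd, addr);   /* fills the PGD entry */
                    if (!p4d)
                            panic("Failed to pre-allocate p4d pages for vmalloc area\n");

                    if (pgtable_l5_enabled())
                            continue;       /* P4D is the synchronized level with 5-level paging */

                    pud = pud_alloc(&init_mm, p4d, addr);   /* fills the P4D entry on 4-level */
                    if (!pud)
                            panic("Failed to pre-allocate pud pages for vmalloc area\n");
            }
    }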
2020-07-26x86/ioperm: Initialize pointer bitmap with NULL rather than 0Colin Ian King
The pointer bitmap is being initialized with a plain integer 0, fix this by initializing it with a NULL instead. Cleans up sparse warning: arch/x86/xen/enlighten_pv.c:876:27: warning: Using plain integer as NULL pointer Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Juergen Gross <jgross@suse.com> Link: https://lore.kernel.org/r/20200721100217.407975-1-colin.king@canonical.com
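For illustration only, the kind of one-line substitution this amounts to (the variable and its context here are made up; only the 0 -> NULL change matters):

    /* before - sparse warns "Using plain integer as NULL pointer": */
    void *bitmap = 0;
    /* after - NULL makes the pointer intent explicit: */
    void *bitmap = NULL;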
2020-07-26Merge branch 'x86/urgent' into x86/cleanupsIngo Molnar
Refresh the branch for a dependent commit. Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-07-26Merge branch 'locking/nmi' into x86/entryIngo Molnar
Resolve conflicts with ongoing lockdep work that fixed the NMI entry code. Conflicts: arch/x86/entry/common.c arch/x86/include/asm/idtentry.h Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-07-26x86: uv: uv_hub.h: Delete duplicated wordRandy Dunlap
Delete the repeated word "the". [ mingo: While at it, also capitalize CPU properly. ] Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200726004124.20618-4-rdunlap@infradead.org
2020-07-26x86: cmpxchg_32.h: Delete duplicated wordRandy Dunlap
Delete the repeated word "you". Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200726004124.20618-3-rdunlap@infradead.org
2020-07-26x86: bootparam.h: Delete duplicated wordRandy Dunlap
Delete the repeated word "for". Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200726004124.20618-2-rdunlap@infradead.org
2020-07-25Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/netDavid S. Miller
The UDP reuseport conflict was a little bit tricky. The net-next code, via bpf-next, extracted the reuseport handling into a helper so that the BPF sk lookup code could invoke it. At the same time, the logic for reuseport handling of unconnected sockets changed via commit efc6b6f6c3113e8b203b9debfb72d81e0f3dcace which changed the logic to carry on the reuseport result into the rest of the lookup loop if we do not return immediately. This requires moving the reuseport_has_conns() logic into the callers. While we are here, get rid of inline directives as they do not belong in foo.c files. The other changes were cases of more straightforward overlapping modifications. Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-25Merge tag 'x86-urgent-2020-07-25' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into masterLinus Torvalds
Pull x86 fixes from Ingo Molnar: "Misc fixes:
 - Fix a section end page alignment assumption that was causing crashes
 - Fix ORC unwinding on freshly forked tasks which haven't executed yet and which have empty user task stacks
 - Fix the debug.exception-trace=1 sysctl dumping of user stacks, which was broken by recent maccess changes"
* tag 'x86-urgent-2020-07-25' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/dumpstack: Dump user space code correctly again
  x86/stacktrace: Fix reliable check for empty user task stacks
  x86/unwind/orc: Fix ORC for newly forked tasks
  x86, vmlinux.lds: Page-align end of ..page_aligned sections
2020-07-25Merge tag 'v5.8-rc6' into locking/core, to pick up fixesIngo Molnar
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-07-25x86/xen/time: Set the X86_FEATURE_TSC_KNOWN_FREQ flag in xen_tsc_khz()Hayato Ohhashi
If the TSC frequency is known from the pvclock page, the TSC frequency does not need to be recalibrated. We can avoid recalibration by setting X86_FEATURE_TSC_KNOWN_FREQ. Signed-off-by: Hayato Ohhashi <o8@vmm.dev> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Link: https://lore.kernel.org/r/20200721161231.6019-1-o8@vmm.dev
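The change boils down to setting the feature flag whenever the pvclock page advertises a frequency; roughly (a sketch of the resulting function, assuming the existing pvclock_tsc_khz() helper, not the exact diff):

    /* Sketch: if Xen's pvclock page already reports the TSC frequency,
     * mark it as known so tsc_init() skips recalibration. */
    static unsigned long __init xen_tsc_khz(void)
    {
            struct pvclock_vcpu_time_info *info =
                    &HYPERVISOR_shared_info->vcpu_info[0].time;
            unsigned long tsc_khz = pvclock_tsc_khz(info);

            if (tsc_khz)
                    setup_force_cpu_cap(X86_FEATURE_TSC_KNOWN_FREQ);

            return tsc_khz;
    }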
2020-07-25x86/split_lock: Enable the split lock feature on Sapphire Rapids and Alder Lake CPUsFenghua Yu
Add Sapphire Rapids and Alder Lake processors to the CPU list to enumerate and enable the split lock feature. Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Tony Luck <tony.luck@intel.com> Link: https://lore.kernel.org/r/1595634320-79689-1-git-send-email-fenghua.yu@intel.com
2020-07-25x86/cpu: Add Lakefield, Alder Lake and Rocket Lake models to the Intel CPU familyTony Luck
Add three new Intel CPU models. Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200721043749.31567-1-tony.luck@intel.com
2020-07-25Merge tag 'v5.8-rc6' into x86/cpu, to refresh the branch before adding new commitsIngo Molnar
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-07-25x86/defconfigs: Refresh defconfig filesIngo Molnar
Perform a 'make savedefconfig' pass over our main defconfig files, which keeps the defconfig result the same, but compresses the file where defaults were changed, options removed or reordered. Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200724130638.645844-2-mingo@kernel.org
2020-07-25x86/mm: Remove the unused mk_kernel_pgd() #defineIngo Molnar
AFAICS the last uses of directly 'making' kernel PGDs were removed 7 years ago: 8b78c21d72d9: ("x86, 64bit, mm: hibernate use generic mapping_init") Where the explicit PGD walking loop was replaced with kernel_ident_mapping_init() calls. This was then (unnecessarily) carried over through the 5-level paging conversion. Also clean up the 'level' comments a bit, to convey the original - meanwhile somewhat bit-rotten - notion that these are empty comment blocks with no methods to handle any of the levels except the PTE level. Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200724114418.629021-4-mingo@kernel.org
2020-07-25x86/tsc: Remove unused "US_SCALE" and "NS_SCALE" leftover macrosIngo Molnar
Last use of them was removed 13 years ago, when the code was converted to use CYC2NS_SCALE_FACTOR: 53d517cdbaac: ("x86: scale cyc_2_nsec according to CPU frequency") The current TSC code uses the 'struct cyc2ns_data' scaling abstraction, the old fixed scaling approach is long gone. This cleanup also removes the 'arbitralrily' typo from the comment, so win-win. ;-) Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200724114418.629021-3-mingo@kernel.org
2020-07-25x86/ioapic: Remove unused "IOAPIC_AUTO" defineIngo Molnar
Last use was removed more than 5 years ago, in: 5ad274d41c1b: ("x86/irq: Remove unused old IOAPIC irqdomain interfaces") Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200724114418.629021-2-mingo@kernel.org
2020-07-24xen: Remove redundant initialization of irqColin Ian King
The variable irq is being initialized with a value that is never read and it is being updated later with a new value. The initialization is redundant and can be removed. Addresses-Coverity: ("Unused value") Link: https://lore.kernel.org/r/20200611123134.922395-1-colin.king@canonical.com Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
2020-07-24x86/build: Move max-page-size option to LDFLAGS_vmlinuxArvind Sankar
This option is only required for vmlinux on 64-bit, to enforce 2MiB alignment, so set it in LDFLAGS_vmlinux instead of KBUILD_LDFLAGS. Also drop the ld-option check: this option was added in binutils-2.18 and all the other places that use it already don't have the check. This reduces the size of the intermediate ELF files arch/x86/boot/setup.elf and arch/x86/realmode/rm/realmode.elf by about 2MiB each. The binary versions are unchanged. Move the LDFLAGS settings to all be together and just after CFLAGS settings are done. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Tested-by: Sedat Dilek <sedat.dilek@gmail.com> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Michal Marek <michal.lkml@markovi.net> Link: https://lore.kernel.org/r/20200722184334.3785418-1-nivedita@alum.mit.edu
2020-07-24x86/kvm: Use generic xfer to guest work functionThomas Gleixner
Use the generic infrastructure to check for and handle pending work before transitioning into guest mode. This now handles TIF_NOTIFY_RESUME as well which was ignored so far. Handling it is important as this covers task work and task work will be used to offload the heavy lifting of POSIX CPU timers to thread context. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200722220520.979724969@linutronix.de
2020-07-24x86/entry: Cleanup idtentry_enter/exitThomas Gleixner
Remove the temporary defines and fixup all references. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200722220520.855839271@linutronix.de
2020-07-24x86/entry: Use generic interrupt entry/exit codeThomas Gleixner
Replace the x86 code with the generic variant. Use temporary defines for idtentry_* which will be cleaned up in the next step. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200722220520.711492752@linutronix.de
2020-07-24x86/entry: Cleanup idtentry_entry/exit_userThomas Gleixner
Cleanup the temporary defines and use irqentry_ instead of idtentry_. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200722220520.602603691@linutronix.de
2020-07-24x86/entry: Use generic syscall exit functionalityThomas Gleixner
Replace the x86 variant with the generic version. Provide the relevant architecture specific helper functions and defines. Use a temporary define for idtentry_exit_user which will be cleaned up separately. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200722220520.494648601@linutronix.de
2020-07-24x86/entry: Use generic syscall entry functionThomas Gleixner
Replace the syscall entry work handling with the generic version. Provide the necessary helper inlines to handle the real architecture specific parts, e.g. ptrace. Use a temporary define for idtentry_enter_user which will be cleaned up separately. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200722220520.376213694@linutronix.de
2020-07-24x86/ptrace: Provide pt_regs helper for entry/exitThomas Gleixner
As a preparatory step for moving the syscall and interrupt entry/exit handling into generic code, provide a pt_regs helper which retrieves the interrupt state from pt_regs. This is required to check whether interrupts are reenabled by return from interrupt/exception. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200722220520.258511584@linutronix.de
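A plausible form for such a helper, given that on x86 the interrupt-enable state of the entered context lives in the saved EFLAGS image (treat the exact placement as a sketch):

    /* Sketch: report whether the entered context had interrupts disabled,
     * based on the EFLAGS.IF bit saved in pt_regs. */
    static __always_inline bool regs_irqs_disabled(struct pt_regs *regs)
    {
            return !(regs->flags & X86_EFLAGS_IF);
    }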
2020-07-24x86/entry: Move user return notifier out of loopThomas Gleixner
Guests and user space share certain MSRs. KVM sets these MSRs to guest values once and does not set them back to user space values on every VM exit to spare the costly MSR operations. User return notifiers ensure that these MSRs are set back to the correct values before returning to user space in exit_to_usermode_loop(). There is no reason to evaluate the TIF flag indicating that user return notifiers need to be invoked in the loop. The important point is that they are invoked before returning to user space. Move the invocation out of the loop into the section which does the last preparatory steps before returning to user space. That section is not preemptible and runs with interrupts disabled until the actual return. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200722220520.159112003@linutronix.de
2020-07-24x86/entry: Consolidate 32/64 bit syscall entryThomas Gleixner
64bit and 32bit entry code have the same open coded syscall entry handling after the bitwidth specific bits. Move it to a helper function and share the code. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200722220520.051234096@linutronix.de
2020-07-24x86/entry: Consolidate check_user_regs()Thomas Gleixner
The user register sanity check is sprinkled all over the place. Move it into enter_from_user_mode(). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200722220519.943016204@linutronix.de
2020-07-24Merge branch 'core/entry' into x86/entryThomas Gleixner
Pick up generic entry code to migrate x86 over.
2020-07-24x86/defconfigs: Remove CONFIG_CRYPTO_AES_586 from i386_defconfigSedat Dilek
Initially CONFIG_CRYPTO_AES_586=y was added to the i386_defconfig file in: c1b362e3b4d3: ("x86: update defconfigs") The code and Kconfig for CONFIG_CRYPTO_AES_586 was removed in: 1d2c3279311e: ("crypto: x86/aes - drop scalar assembler implementations") Remove the leftover from the i386_defconfig file as well. Signed-off-by: Sedat Dilek <sedat.dilek@gmail.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Randy Dunlap <rdunlap@infradead.org> Link: https://lore.kernel.org/r/20200723171119.9881-1-sedat.dilek@gmail.com
2020-07-24compiler.h: Move instrumentation_begin()/end() to new <linux/instrumentation.h> headerIngo Molnar
Linus pointed out that compiler.h - which is a key header that gets included in every single one of the 28,000+ kernel files during a kernel build - was bloated in: 655389666643: ("vmlinux.lds.h: Create section for protection against instrumentation") Linus noted:
 > I have pulled this, but do we really want to add this to a header file that is _so_ core that it gets included for basically every single file built?
 >
 > I don't even see those instrumentation_begin/end() things used anywhere right now.
 >
 > It seems excessive. That 53 lines is maybe not a lot, but it pushed that header file to over 12kB, and while it's mostly comments, it's extra IO and parsing basically for _every_ single file compiled in the kernel.
 >
 > For what appears to be absolutely zero upside right now, and I really don't see why this should be in such a core header file!
Move these primitives into a new header: <linux/instrumentation.h>, and include that header in the headers that make use of it. Unfortunately one of these headers is asm-generic/bug.h, which does get included in a lot of places, similarly to compiler.h. So the de-bloating effect isn't as good as we'd like it to be - but at least the interfaces are defined separately. No change to functionality intended. Reported-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200604071921.GA1361070@gmail.com Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Peter Zijlstra <peterz@infradead.org>
2020-07-24x86: Correct noinstr qualifiersIra Weiny
The noinstr qualifier is to be specified before the return type in the same way inline is used. These 2 cases were missed by previous patches. Signed-off-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Tony Luck <tony.luck@intel.com> Link: https://lkml.kernel.org/r/20200723161405.852613-1-ira.weiny@intel.com
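For illustration, the preferred placement mirrors inline; the function name below is made up:

    struct pt_regs;

    /* Preferred: the qualifier goes before the return type, like 'inline'. */
    noinstr void hypothetical_handler(struct pt_regs *regs);

    /* What the patch corrects (qualifier after the return type):
     * void noinstr hypothetical_handler(struct pt_regs *regs);
     */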
2020-07-24x86/mm: Drop unused MAX_PHYSADDR_BITSArvind Sankar
The macro is not used anywhere, and has an incorrect value (going by the comment) on x86_64 since commit c898faf91b3e ("x86: 46 bit physical address support on 64 bits") To avoid confusion, just remove the definition. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200723231544.17274-2-nivedita@alum.mit.edu
2020-07-24Merge v5.8-rc6 into drm-nextDave Airlie
I've got a silent conflict + two trees based on fixes to merge. Fixes a silent merge with amdgpu Signed-off-by: Dave Airlie <airlied@redhat.com>
2020-07-23x86/uaccess: Make __get_user_size() Clang compliant on 32-bitNick Desaulniers
Clang fails to compile __get_user_size() on 32-bit for the following code:

    long long val;
    __get_user(val, usrptr);

with:

    error: invalid output size for constraint '=q'

GCC compiles the same code without complaints. The reason is that GCC and Clang are architecturally different, which leads to subtle issues for code that's invalid but clearly dead, i.e. with code that emulates polymorphism with the preprocessor and sizeof. GCC will perform semantic analysis after early inlining and dead code elimination, so it will not warn on invalid code that's dead. Clang strictly performs optimizations after semantic analysis, so it will warn for dead code. Neither Clang nor GCC like this very much with -m32:

    long long ret;
    asm ("movb $5, %0" : "=q" (ret));

However, GCC can tolerate this variant:

    long long ret;
    switch (sizeof(ret)) {
    case 1:
            asm ("movb $5, %0" : "=q" (ret));
            break;
    case 8:;
    }

Clang, on the other hand, won't accept that because it validates the inline asm for the '1' case before the optimisation phase where it realises that it wouldn't have to emit it anyway. If LLVM (Clang's "back end") fails, such as during instruction selection or register allocation, it cannot provide accurate diagnostics (warnings / errors) that contain line information, as the AST has been discarded from memory at that point. While there have been early discussions about having C/C++ specific language optimizations in Clang via the use of MLIR, which would enable such earlier optimizations, such work is not scoped and likely a multi-year endeavor. It was discussed to change the asm output constraint for the one byte case from "=q" to "=r". While it works for 64-bit, it fails on 32-bit. With '=r' the compiler could fail to choose a register accessible as high/low, which is required for the byte operation. If that happens the assembly will fail. Use a local temporary variable of type 'unsigned char' as output for the byte copy inline asm and then assign it to the real output variable. This prevents Clang from failing the semantic analysis in the above case. The resulting code for the actual one byte copy is not affected as the temporary variable is optimized out. [ tglx: Amended changelog ] Reported-by: Arnd Bergmann <arnd@arndb.de> Reported-by: David Woodhouse <dwmw2@infradead.org> Reported-by: Dmitry Golovin <dima@golovin.in> Reported-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Sedat Dilek <sedat.dilek@gmail.com> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Acked-by: Dennis Zhou <dennis@kernel.org> Link: https://bugs.llvm.org/show_bug.cgi?id=33587 Link: https://github.com/ClangBuiltLinux/linux/issues/3 Link: https://github.com/ClangBuiltLinux/linux/issues/194 Link: https://github.com/ClangBuiltLinux/linux/issues/781 Link: https://lore.kernel.org/lkml/20180209161833.4605-1-dwmw2@infradead.org/ Link: https://lore.kernel.org/lkml/CAK8P3a1EBaWdbAEzirFDSgHVJMtWjuNt2HGG8z+vpXeNHwETFQ@mail.gmail.com/ Link: https://lkml.kernel.org/r/20200720204925.3654302-12-ndesaulniers@google.com
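A simplified demonstration of the described pattern (a user-space flavoured sketch; the kernel's actual macro plumbing and error handling differ):

    /* Sketch: a byte-sized temporary keeps the "=q" constraint semantically
     * valid even when the caller's variable is wider (e.g. long long); the
     * temporary is optimized away in the generated code. */
    #define GET_BYTE(x, ptr)                                            \
    do {                                                                \
            unsigned char x_u8__;                                       \
            asm ("movb %1, %0"                                          \
                 : "=q" (x_u8__)                                        \
                 : "m" (*(const unsigned char *)(ptr)));                \
            (x) = x_u8__;   /* assign back into the real output */      \
    } while (0)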
2020-07-23x86/percpu: Remove unused PER_CPU() macroBrian Gerst
Also remove now unused __percpu_mov_op. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Tested-by: Sedat Dilek <sedat.dilek@gmail.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Acked-by: Dennis Zhou <dennis@kernel.org> Link: https://lkml.kernel.org/r/20200720204925.3654302-11-ndesaulniers@google.com
2020-07-23x86/percpu: Clean up percpu_stable_op()Brian Gerst
Use __pcpu_size_call_return() to simplify this_cpu_read_stable(). Also remove __bad_percpu_size() which is now unused. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Tested-by: Sedat Dilek <sedat.dilek@gmail.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Dennis Zhou <dennis@kernel.org> Link: https://lkml.kernel.org/r/20200720204925.3654302-10-ndesaulniers@google.com
2020-07-23x86/percpu: Clean up percpu_cmpxchg_op()Brian Gerst
The core percpu macros already have a switch on the data size, so the switch in the x86 code is redundant and produces more dead code. Also use appropriate types for the width of the instructions. This avoids errors when compiling with Clang. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Tested-by: Sedat Dilek <sedat.dilek@gmail.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Dennis Zhou <dennis@kernel.org> Link: https://lkml.kernel.org/r/20200720204925.3654302-9-ndesaulniers@google.com
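To see what makes the x86-side switch redundant: the generic percpu layer already dispatches on sizeof() before any arch macro is expanded, along the lines of the simplified sketch below (the real __pcpu_size_call_return2() in include/linux/percpu-defs.h additionally verifies the percpu pointer):

    /* Sketch: the size dispatch happens once here, so the arch-provided
     * stem##N macros each handle only one fixed width and can use an
     * exactly-sized type for their asm operands. */
    #define __pcpu_size_call_return2(stem, variable, ...)                   \
    ({                                                                      \
            typeof(variable) pscr2_ret__;                                   \
            switch (sizeof(variable)) {                                     \
            case 1: pscr2_ret__ = stem##1(variable, __VA_ARGS__); break;    \
            case 2: pscr2_ret__ = stem##2(variable, __VA_ARGS__); break;    \
            case 4: pscr2_ret__ = stem##4(variable, __VA_ARGS__); break;    \
            case 8: pscr2_ret__ = stem##8(variable, __VA_ARGS__); break;    \
            default: __bad_size_call_parameter();                           \
            }                                                               \
            pscr2_ret__;                                                    \
    })

The same reasoning applies to the percpu_xchg_op(), percpu_add_return_op(), percpu_add_op() and percpu_from_op() cleanups below.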
2020-07-23x86/percpu: Clean up percpu_xchg_op()Brian Gerst
The core percpu macros already have a switch on the data size, so the switch in the x86 code is redundant and produces more dead code. Also use appropriate types for the width of the instructions. This avoids errors when compiling with Clang. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Tested-by: Sedat Dilek <sedat.dilek@gmail.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Dennis Zhou <dennis@kernel.org> Link: https://lkml.kernel.org/r/20200720204925.3654302-8-ndesaulniers@google.com
2020-07-23x86/percpu: Clean up percpu_add_return_op()Brian Gerst
The core percpu macros already have a switch on the data size, so the switch in the x86 code is redundant and produces more dead code. Also use appropriate types for the width of the instructions. This avoids errors when compiling with Clang. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Tested-by: Sedat Dilek <sedat.dilek@gmail.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Dennis Zhou <dennis@kernel.org> Link: https://lkml.kernel.org/r/20200720204925.3654302-7-ndesaulniers@google.com
2020-07-23x86/percpu: Remove "e" constraint from XADDBrian Gerst
The "e" constraint represents a constant, but the XADD instruction doesn't accept immediate operands. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Tested-by: Sedat Dilek <sedat.dilek@gmail.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Dennis Zhou <dennis@kernel.org> Link: https://lkml.kernel.org/r/20200720204925.3654302-6-ndesaulniers@google.com
2020-07-23x86/percpu: Clean up percpu_add_op()Brian Gerst
The core percpu macros already have a switch on the data size, so the switch in the x86 code is redundant and produces more dead code. Also use appropriate types for the width of the instructions. This avoids errors when compiling with Clang. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Tested-by: Sedat Dilek <sedat.dilek@gmail.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Dennis Zhou <dennis@kernel.org> Link: https://lkml.kernel.org/r/20200720204925.3654302-5-ndesaulniers@google.com
2020-07-23x86/percpu: Clean up percpu_from_op()Brian Gerst
The core percpu macros already have a switch on the data size, so the switch in the x86 code is redundant and produces more dead code. Also use appropriate types for the width of the instructions. This avoids errors when compiling with Clang. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Tested-by: Sedat Dilek <sedat.dilek@gmail.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Dennis Zhou <dennis@kernel.org> Link: https://lkml.kernel.org/r/20200720204925.3654302-4-ndesaulniers@google.com