2019-04-03  asm-generic/tlb, arch: Invert CONFIG_HAVE_RCU_TABLE_INVALIDATE  (Peter Zijlstra)
Make issuing a TLB invalidate for page-table pages the normal case. The reason is twofold: - too many invalidates is safer than too few, - most architectures use the linux page-tables natively and would thus require this. Make it an opt-out, instead of an opt-in. No change in behavior intended. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Will Deacon <will.deacon@arm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@surriel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org>
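For illustration, a minimal sketch of how such an opt-out can look in the generic mmu_gather code; the CONFIG_HAVE_RCU_TABLE_NO_INVALIDATE symbol name and the helper are assumptions made for this sketch, not quoted from the patch:

    /* Sketch only: invalidate freed page-table pages by default, unless
     * the architecture explicitly opts out via an (assumed) Kconfig symbol. */
    static inline void tlb_table_invalidate(struct mmu_gather *tlb)
    {
    #ifndef CONFIG_HAVE_RCU_TABLE_NO_INVALIDATE
        tlb_flush_mmu_tlbonly(tlb);
    #endif
    }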
2019-04-03  asm-generic/tlb, ia64: Conditionally provide tlb_migrate_finish()  (Peter Zijlstra)
Needed for ia64 -- alternatively we drop the entire hook. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Will Deacon <will.deacon@arm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Nick Piggin <npiggin@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@surriel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03  asm-generic/tlb: Provide generic tlb_flush() based on flush_tlb_mm()  (Peter Zijlstra)
When an architecture does not have (an efficient) flush_tlb_range(), but instead always uses full TLB invalidates, the current generic tlb_flush() is sub-optimal, for it will generate extra flushes in order to keep the range small. But if we cannot do range flushes, that is a moot concern. Optionally provide this simplified default. No change in behavior intended. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Will Deacon <will.deacon@arm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@surriel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org>
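As a rough sketch (not the exact asm-generic/tlb.h code), the simplified default amounts to something like the following, assuming the usual mmu_gather fields:

    /* Simplified default: if anything was gathered, do one full-MM flush
     * instead of trying to construct a minimal range. */
    static inline void tlb_flush(struct mmu_gather *tlb)
    {
        if (tlb->end)
            flush_tlb_mm(tlb->mm);
    }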
2019-04-03  asm-generic/tlb, arch: Provide generic tlb_flush() based on flush_tlb_range()  (Peter Zijlstra)
Provide a generic tlb_flush() implementation that relies on flush_tlb_range(). This is a little awkward because flush_tlb_range() assumes a VMA for range invalidation, but we no longer have one. Audit of all flush_tlb_range() implementations shows only vma->vm_mm and vma->vm_flags are used, and of the latter only VM_EXEC (I-TLB invalidates) and VM_HUGETLB (large TLB invalidate) are used. Therefore, track VM_EXEC and VM_HUGETLB in two more bits, and create a 'fake' VMA. This allows architectures that have a reasonably efficient flush_tlb_range() to not require any additional effort. No change in behavior intended. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Will Deacon <will.deacon@arm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Nick Piggin <npiggin@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@surriel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org>
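The idea can be sketched roughly as below; the vma_exec/vma_huge bit names are taken on trust for this illustration and the real helper handles a few more cases (full-MM flushes, for instance):

    /* Build a throw-away VMA carrying only the fields that the
     * flush_tlb_range() implementations actually look at. */
    static inline void tlb_flush(struct mmu_gather *tlb)
    {
        struct vm_area_struct vma = {
            .vm_mm    = tlb->mm,
            .vm_flags = (tlb->vma_exec ? VM_EXEC    : 0) |
                        (tlb->vma_huge ? VM_HUGETLB : 0),
        };

        flush_tlb_range(&vma, tlb->start, tlb->end);
    }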
2019-04-03  asm-generic/tlb, arch: Provide generic VIPT cache flush  (Peter Zijlstra)
The one obvious thing SH and ARM want is a sensible default for tlb_start_vma(). (also: https://lkml.org/lkml/2004/1/15/6 ) Avoid all VIPT architectures providing their own tlb_start_vma() implementation and rely on architectures to provide a no-op flush_cache_range() when it is not relevant. This patch makes tlb_start_vma() default to flush_cache_range(), which should be right and sufficient. The only exceptions that I found were (oddly): - m68k-mmu - sparc64 - unicore Those architectures appear to have flush_cache_range(), but their current tlb_start_vma() does not call it. No change in behavior intended. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Will Deacon <will.deacon@arm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: David Miller <davem@davemloft.net> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Nick Piggin <npiggin@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@surriel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org>
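A sketch of what such a default can look like, modelled on the generic header but not copied from it:

    /* Default tlb_start_vma(): flush the (VIPT) caches for the VMA range.
     * Architectures with coherent caches provide a no-op flush_cache_range(),
     * so this compiles away for them. */
    #ifndef tlb_start_vma
    #define tlb_start_vma(tlb, vma)                                         \
        do {                                                                \
            if (!(tlb)->fullmm)                                             \
                flush_cache_range(vma, (vma)->vm_start, (vma)->vm_end);     \
        } while (0)
    #endif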
2019-04-03  asm-generic/tlb, arch: Provide CONFIG_HAVE_MMU_GATHER_PAGE_SIZE  (Peter Zijlstra)
Move the mmu_gather::page_size things into the generic code instead of PowerPC specific bits. No change in behavior intended. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Will Deacon <will.deacon@arm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Nick Piggin <npiggin@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@surriel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03  asm-generic/tlb: Provide a comment  (Peter Zijlstra)
Write a comment explaining some of this.. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Will Deacon <will.deacon@arm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Nick Piggin <npiggin@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@surriel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03  iwlwifi: add support for quz firmwares  (Luca Coelho)
Add a new configuration with a new firmware name for quz devices. And, since these devices have the same PCI device and subsystem IDs, we need to add some code to switch from a normal qu firmware to the quz firmware. Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2019-04-03  iwlwifi: mvm: update offloaded rate control on changes  (Johannes Berg)
With offloaded rate control, if the station parameters (rates, NSS, bandwidth) change (sta_rc_update method), call iwl_mvm_rs_rate_init() to propagate those changes to the firmware. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2019-04-03  iwlwifi: mvm: avoid possible deadlock in TX path  (Johannes Berg)
iwl_mvm_tx_mpdu() may run from iwl_mvm_add_new_dqa_stream_wk(), where soft-IRQs aren't disabled. In this case, it may hold the station lock and be interrupted by a soft-IRQ that also wants to acquire said lock, leading to a deadlock. Fix it by disabling soft-IRQs in iwl_mvm_add_new_dqa_stream_wk(). Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
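The underlying locking pattern, shown as a generic hedged example rather than the iwlwifi code itself (the lock name is made up):

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(sta_lock);   /* hypothetical lock also taken in soft-IRQ context */

    static void process_context_path(void)
    {
        /* Plain spin_lock() would be deadlock-prone here: a soft-IRQ
         * arriving on this CPU while the lock is held could try to take
         * it again. Disabling bottom halves closes that window. */
        spin_lock_bh(&sta_lock);
        /* ... touch state shared with the soft-IRQ TX path ... */
        spin_unlock_bh(&sta_lock);
    }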
2019-04-03  perf/x86/intel: Fix handling of wakeup_events for multi-entry PEBS  (Stephane Eranian)
When an event is programmed with attr.wakeup_events=N (N>0), it means the caller is interested in getting a user level notification after N samples have been recorded in the kernel sampling buffer. With precise events on Intel processors, the kernel uses PEBS. The kernel tries to minimize sampling overhead by verifying if the event configuration is compatible with multi-entry PEBS mode. If so, the kernel is notified only when the buffer has reached its threshold. Otherwise, PEBS operates in single-entry mode and the kernel is notified for each PEBS sample. The problem is that the current implementation looks at frequency mode and event sample_type but ignores the wakeup_events field. Thus, it may not be possible to receive a notification after each precise event. This patch fixes this problem by disabling multi-entry PEBS if wakeup_events is non-zero. Signed-off-by: Stephane Eranian <eranian@google.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Andi Kleen <ak@linux.intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: kan.liang@intel.com Link: https://lkml.kernel.org/r/20190306195048.189514-1-eranian@google.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
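In outline, the check ends up looking something like the sketch below; the helper name is invented and the real predicate in the PEBS code tests additional conditions:

    /* Only allow multi-entry (large) PEBS when the user did not ask to be
     * woken up every N samples; otherwise every sample must notify. */
    static bool can_use_large_pebs(struct perf_event *event)
    {
        return !event->attr.freq && !event->attr.wakeup_events;
    }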
2019-04-03  clk: sunxi-ng: nkmp: Avoid GENMASK(-1, 0)  (Jernej Skrabec)
Sometimes one of the nkmp factors is unused. This means that the shift and width values of that factor are set to 0. The current nkmp clock code generates a mask for each factor with GENMASK(width + shift - 1, shift). For an unused factor this translates to GENMASK(-1, 0). This code is further expanded by the C preprocessor to the final version: (((~0UL) - (1UL << (0)) + 1) & (~0UL >> (BITS_PER_LONG - 1 - (-1)))) or, a bit simplified: (~0UL & (~0UL >> BITS_PER_LONG)) It turns out that the result of the second part (~0UL >> BITS_PER_LONG) is actually undefined by the C standard, which clearly specifies: "If the value of the right operand is negative or is greater than or equal to the width of the promoted left operand, the behavior is undefined." Additionally, compiling the kernel with aarch64-linux-gnu-gcc 8.3.0 gave different results depending on whether literals or variables with the same values were used. GENMASK with the literals -1 and 0 gives zero, while with variables it gives 0xFFFFFFFFFFFFFFFF (~0UL). Because the nkmp driver uses GENMASK with variables as parameters, the expression calculates the mask as ~0UL instead of 0. A further consequence is that the LSB in the register is always set to 1 (1 is the neutral value for a factor and the shift is 0). For example, the H6 pll-de clock is set to 600 MHz by the sun4i-drm driver, but due to this bug it ends up being 300 MHz. Additionally, 300 MHz seems to be too low, because the following warning can be found in dmesg:
  [ 1.752763] WARNING: CPU: 2 PID: 41 at drivers/clk/sunxi-ng/ccu_common.c:41 ccu_helper_wait_for_lock.part.0+0x6c/0x90
  [ 1.763378] Modules linked in:
  [ 1.766441] CPU: 2 PID: 41 Comm: kworker/2:1 Not tainted 5.1.0-rc2-next-20190401 #138
  [ 1.774269] Hardware name: Pine H64 (DT)
  [ 1.778200] Workqueue: events deferred_probe_work_func
  [ 1.783341] pstate: 40000005 (nZcv daif -PAN -UAO)
  [ 1.788135] pc : ccu_helper_wait_for_lock.part.0+0x6c/0x90
  [ 1.793623] lr : ccu_helper_wait_for_lock.part.0+0x48/0x90
  [ 1.799107] sp : ffff000010f93840
  [ 1.802422] x29: ffff000010f93840 x28: 0000000000000000
  [ 1.807735] x27: ffff800073ce9d80 x26: ffff000010afd1b8
  [ 1.813049] x25: ffffffffffffffff x24: 00000000ffffffff
  [ 1.818362] x23: 0000000000000001 x22: ffff000010abd5c8
  [ 1.823675] x21: 0000000010000000 x20: 00000000685f367e
  [ 1.828987] x19: 0000000000001801 x18: 0000000000000001
  [ 1.834300] x17: 0000000000000001 x16: 0000000000000000
  [ 1.839613] x15: 0000000000000000 x14: ffff000010789858
  [ 1.844926] x13: 0000000000000000 x12: 0000000000000001
  [ 1.850239] x11: 0000000000000000 x10: 0000000000000970
  [ 1.855551] x9 : ffff000010f936c0 x8 : ffff800074cec0d0
  [ 1.860864] x7 : 0000800067117000 x6 : 0000000115c30b41
  [ 1.866177] x5 : 00ffffffffffffff x4 : 002c959300bfe500
  [ 1.871490] x3 : 0000000000000018 x2 : 0000000029aaaaab
  [ 1.876802] x1 : 00000000000002e6 x0 : 00000000686072bc
  [ 1.882114] Call trace:
  [ 1.884565] ccu_helper_wait_for_lock.part.0+0x6c/0x90
  [ 1.889705] ccu_helper_wait_for_lock+0x10/0x20
  [ 1.894236] ccu_nkmp_set_rate+0x244/0x2a8
  [ 1.898334] clk_change_rate+0x144/0x290
  [ 1.902258] clk_core_set_rate_nolock+0x180/0x1b8
  [ 1.906963] clk_set_rate+0x34/0xa0
  [ 1.910455] sun8i_mixer_bind+0x484/0x558
  [ 1.914466] component_bind_all+0x10c/0x230
  [ 1.918651] sun4i_drv_bind+0xc4/0x1a0
  [ 1.922401] try_to_bring_up_master+0x164/0x1c0
  [ 1.926932] __component_add+0xa0/0x168
  [ 1.930769] component_add+0x10/0x18
  [ 1.934346] sun8i_dw_hdmi_probe+0x18/0x20
  [ 1.938443] platform_drv_probe+0x50/0xa0
  [ 1.942455] really_probe+0xcc/0x280
  [ 1.946032] driver_probe_device+0x54/0xe8
  [ 1.950130] __device_attach_driver+0x80/0xb8
  [ 1.954488] bus_for_each_drv+0x78/0xc8
  [ 1.958326] __device_attach+0xd4/0x130
  [ 1.962163] device_initial_probe+0x10/0x18
  [ 1.966348] bus_probe_device+0x90/0x98
  [ 1.970185] deferred_probe_work_func+0x6c/0xa0
  [ 1.974720] process_one_work+0x1e0/0x320
  [ 1.978732] worker_thread+0x228/0x428
  [ 1.982484] kthread+0x120/0x128
  [ 1.985714] ret_from_fork+0x10/0x18
  [ 1.989290] ---[ end trace 9babd42e1ca4b84f ]---
This commit solves the issue by first checking the value of the factor width. If it is equal to 0 (unused factor), the mask is set to 0; otherwise the GENMASK() macro is used as before. Fixes: d897ef56faf9 ("clk: sunxi-ng: Mask nkmp factors when setting register") Signed-off-by: Jernej Skrabec <jernej.skrabec@siol.net> Signed-off-by: Maxime Ripard <maxime.ripard@bootlin.com>
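The shape of the fix can be illustrated with a small hedged helper (the function name is invented for this sketch; the driver open-codes the equivalent):

    #include <linux/bits.h>
    #include <linux/types.h>

    /* Never hand GENMASK() a negative high bit: an unused factor has
     * width == 0 and its mask must simply be 0. */
    static u32 nkmp_factor_mask(u8 shift, u8 width)
    {
        return width ? GENMASK(shift + width - 1, shift) : 0;
    }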
2019-04-03  perf/core: Make perf_swevent_init_cpu() static  (Valdis Kletnieks)
'make W=1' causes GCC to complain: kernel/events/core.c:11877:6: warning: no previous prototype for 'perf_swevent_init_cpu' [-Wmissing-prototypes] It's not referenced anywhere else, make it static. Signed-off-by: Valdis Kletnieks <valdis.kletnieks@vt.edu> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/28974.1552377997@turing-police Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03  sched/fair: Do not re-read ->h_load_next during hierarchical load calculation  (Mel Gorman)
A NULL pointer dereference bug was reported on a distribution kernel but the same issue should be present on the mainline kernel. It occurred on s390 but should not be arch-specific. A partial oops looks like:
  Unable to handle kernel pointer dereference in virtual kernel address space
  ...
  Call Trace:
  ...
  try_to_wake_up+0xfc/0x450
  vhost_poll_wakeup+0x3a/0x50 [vhost]
  __wake_up_common+0xbc/0x178
  __wake_up_common_lock+0x9e/0x160
  __wake_up_sync_key+0x4e/0x60
  sock_def_readable+0x5e/0x98
The bug hits any time between 1 hour and 3 days. The dereference occurs in update_cfs_rq_h_load when accumulating h_load. The problem is that cfs_rq->h_load_next is not protected by any locking and can be updated by parallel calls to task_h_load. Depending on the compiler, code may be generated that re-reads cfs_rq->h_load_next after the check for NULL and then oopses when reading se->avg.load_avg. The disassembly showed that it was possible to reread h_load_next after the check for NULL. While this does not appear to be an issue for later compilers, it is still only by accident that the correct code gets generated. Full locking in this path would have high overhead, so this patch uses READ_ONCE to read h_load_next only once and check for NULL before dereferencing. It was confirmed that there were no further oopses after 10 days of testing. As Peter pointed out, it is also necessary to use WRITE_ONCE() to avoid any potential problems with store tearing. Signed-off-by: Mel Gorman <mgorman@techsingularity.net> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Valentin Schneider <valentin.schneider@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: <stable@vger.kernel.org> Fixes: 685207963be9 ("sched: Move h_load calculation to task_h_load()") Link: https://lkml.kernel.org/r/20190319123610.nsivgf3mjbjjesxb@techsingularity.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
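The general READ_ONCE()/WRITE_ONCE() pattern being applied, shown on a stand-in data structure rather than the actual cfs_rq code:

    #include <linux/compiler.h>

    struct node { struct node *next; unsigned long load; };  /* stand-in type */
    static struct node *head_next;      /* updated by concurrent writers */

    static unsigned long accumulate(void)
    {
        unsigned long sum = 0;
        /* Read the racy pointer exactly once so the compiler cannot
         * re-load it after the NULL check. */
        struct node *n = READ_ONCE(head_next);

        while (n) {
            sum += n->load;
            n = READ_ONCE(n->next);
        }
        return sum;
    }

    static void publish(struct node *n)
    {
        WRITE_ONCE(head_next, n);   /* pairs with the READ_ONCE() above and
                                     * avoids store tearing */
    }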
2019-04-03  x86/uaccess, signal: Fix AC=1 bloat  (Peter Zijlstra)
Occasionally GCC is less aggressive with inlining and the following is observed: arch/x86/kernel/signal.o: warning: objtool: restore_sigcontext()+0x3cc: call to force_valid_ss.isra.5() with UACCESS enabled arch/x86/kernel/signal.o: warning: objtool: do_signal()+0x384: call to frame_uc_flags.isra.0() with UACCESS enabled Cure this by moving this code out of the AC=1 region, since it really isn't needed for the user access. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03  x86/uaccess: Always inline user_access_begin()  (Peter Zijlstra)
If GCC out-of-lines it, the STAC and CLAC are in different functions and objtool gets upset. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org>
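A hedged sketch of the idea with a hypothetical helper, not the actual arch/x86 uaccess.h definition:

    /* __always_inline keeps the STAC emitted here and the later CLAC in
     * the same function, which is what objtool's UACCESS pairing expects. */
    static __always_inline bool my_user_access_begin(const void __user *ptr,
                                                     size_t len)
    {
        if (unlikely(!access_ok(ptr, len)))
            return false;
        stac();             /* enter the user-access region */
        return true;
    }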
2019-04-03  x86/uaccess, xen: Suppress SMAP warnings  (Peter Zijlstra)
drivers/xen/privcmd.o: warning: objtool: privcmd_ioctl()+0x1414: call to hypercall_page() with UACCESS enabled Some Xen hypercalls allow parameter buffers in user land, so they need to set AC=1. Avoid the warning for those cases. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Juergen Gross <jgross@suse.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: andrew.cooper3@citrix.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03  x86/nospec, objtool: Introduce ANNOTATE_IGNORE_ALTERNATIVE  (Peter Zijlstra)
To facilitate other uses of ignoring alternatives, rename ANNOTATE_NOSPEC_IGNORE to ANNOTATE_IGNORE_ALTERNATIVE. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03  x86/uaccess: Fix up the fixup  (Peter Zijlstra)
New tooling got confused about this: arch/x86/lib/memcpy_64.o: warning: objtool: .fixup+0x7: return with UACCESS enabled While the code isn't wrong, it is tedious (if at all possible) to figure out what function a particular chunk of .fixup belongs to. This then confuses the objtool uaccess validation. Instead of returning directly from the .fixup, jump back into the right function. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03  mfd: sun6i-prcm: Allow to compile with COMPILE_TEST  (Maxime Ripard)
Since this driver depends on ARCH_SUNXI only because it doesn't make any sense to run it on something else, we can definitely enable it through COMPILE_TEST as well to get some build coverage. Signed-off-by: Maxime Ripard <maxime.ripard@bootlin.com> Signed-off-by: Lee Jones <lee.jones@linaro.org>
2019-04-03  x86/uaccess: Move copy_user_handle_tail() into asm  (Peter Zijlstra)
By writing the function in asm we avoid cross object code flow and objtool no longer gets confused about a 'stray' CLAC. Also, the asm version is actually _simpler_. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03  i915, uaccess: Fix redundant CLAC  (Peter Zijlstra)
New tooling noticed this: drivers/gpu/drm/i915/i915_gem_execbuffer.o: warning: objtool: .altinstr_replacement+0x3c: redundant UACCESS disable drivers/gpu/drm/i915/i915_gem_execbuffer.o: warning: objtool: .altinstr_replacement+0x66: redundant UACCESS disable You don't need user_access_end() if user_access_begin() fails. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org>
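The pattern being enforced, as a small hedged example (hypothetical function, not the i915 code):

    static int copy_value_to_user(u32 __user *ptr, u32 value)
    {
        if (!user_access_begin(ptr, sizeof(*ptr)))
            return -EFAULT;         /* begin failed: no user_access_end() here */

        unsafe_put_user(value, ptr, efault);
        user_access_end();
        return 0;

    efault:
        user_access_end();          /* begin succeeded, so this one is needed */
        return -EFAULT;
    }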
2019-04-03  x86/ia32: Fix ia32_restore_sigcontext() AC leak  (Peter Zijlstra)
Objtool spotted that we call native_load_gs_index() with AC set. Re-arrange the code to avoid that. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03tracing: Improve "if" macro code generationJosh Poimboeuf
With CONFIG_PROFILE_ALL_BRANCHES=y, the "if" macro converts the conditional to an array index. This can cause GCC to create horrible code. When there are nested ifs, the generated code uses register values to encode branching decisions. Make it easier for GCC to optimize by keeping the conditional as a conditional rather than converting it to an integer. This shrinks the generated code quite a bit, and also makes the code sane enough for objtool to understand. Reported-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: brgerst@gmail.com Cc: catalin.marinas@arm.com Cc: dvlasenk@redhat.com Cc: dvyukov@google.com Cc: hpa@zytor.com Cc: james.morse@arm.com Cc: julien.thierry@arm.com Cc: luto@amacapital.net Cc: luto@kernel.org Cc: rostedt@goodmis.org Cc: valentin.schneider@arm.com Cc: will.deacon@arm.com Link: https://lkml.kernel.org/r/20190307174802.46fmpysxyo35hh43@treble Signed-off-by: Ingo Molnar <mingo@kernel.org>
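Conceptually the change moves from indexing an array with the branch outcome to passing the outcome through a tiny helper that returns it unchanged; a hedged sketch with invented names (the real <linux/compiler.h> macro also records per-branch file/line bookkeeping):

    /* Record the branch outcome, then hand the compiler back a plain
     * boolean so it can still optimize the conditional normally. */
    static __always_inline bool __my_trace_if(bool v)
    {
        /* ... bump per-branch hit/miss counters here ... */
        return v;
    }

    #define if(cond) if (__my_trace_if(!!(cond)))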
2019-04-03  sched/x86: Save [ER]FLAGS on context switch  (Peter Zijlstra)
Effectively reverts commit: 2c7577a75837 ("sched/x86_64: Don't save flags on context switch") Specifically because SMAP uses FLAGS.AC which invalidates the claim that the kernel has clean flags. In particular, while preemption from interrupt return is fine (the IRET frame on the exception stack contains FLAGS) it breaks any code that does synchronous scheduling, including preempt_enable(). This has become a significant issue ever since commit: 5b24a7a2aa20 ("Add 'unsafe' user access functions for batched accesses") provided for means of having 'normal' C code between STAC / CLAC, exposing the FLAGS.AC state. So far this hasn't led to trouble; however, fix it before it comes apart. Reported-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: stable@kernel.org Fixes: 5b24a7a2aa20 ("Add 'unsafe' user access functions for batched accesses") Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03  perf/x86: Add sanity checks to x86_schedule_events()  (Peter Zijlstra)
By computing the 'committed' index earlier, we can use it to validate the cached constraint state. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03  perf/x86: Optimize x86_schedule_events()  (Peter Zijlstra)
Now that cpuc->event_constraint[] is retained, we can avoid calling get_event_constraints() over and over again. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03  perf/x86: Clear ->event_constraint[] on put  (Peter Zijlstra)
The current code unconditionally clears cpuc->event_constraint[i] before calling get_event_constraints(.idx=i). The only site that cares is intel_get_event_constraints() where the c1 load will always be NULL. However, always calling get_event_constraints() on all events is wasteful, as most times it will return the exact same result. Therefore retain the logic in intel_get_event_constraints() and change the generic code to only clear the constraint on put. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Stephane Eranian <eranian@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03  perf/x86/intel: Optimize intel_get_excl_constraints()  (Peter Zijlstra)
Avoid the POPCNT by noting we can decrement the weight for each cleared bit. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Stephane Eranian <eranian@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03  perf/x86: Remove PERF_X86_EVENT_COMMITTED  (Peter Zijlstra)
The flag PERF_X86_EVENT_COMMITTED is used to find uncommitted events for which to call put_event_constraint() when scheduling fails. These are the newly added events to the list, and must, by definition, form the tail of cpuc->event_list[]. By computing the list index of the last successful schedule, iteration can start there and the flag is redundant. There are only 3 callers of x86_schedule_events(), notably: - x86_pmu_add() - x86_pmu_commit_txn() - validate_group() For x86_pmu_add(), cpuc->n_events isn't updated until after schedule_events() succeeds, therefore cpuc->n_events points to the desired index. For x86_pmu_commit_txn(), cpuc->n_events is updated, but we can trivially compute the desired value with cpuc->n_txn -- the number of events added in this transaction. For validate_group(), we can make the rule for x86_pmu_add() work by simply setting cpuc->n_events to 0 before calling schedule_events(). Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Stephane Eranian <eranian@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03  perf/x86: Simplify x86_pmu.get_constraints() interface  (Peter Zijlstra)
There is a special case for validate_events() where we'll call x86_pmu.get_constraints(.idx=-1). Its purpose, until recently, seems to have been to avoid taking a previous constraint from cpuc->event_constraint[] in intel_get_event_constraints(). (I could not find any other get_event_constraints() implementation using @idx) However, since that cpuc is freshly allocated, that array will in fact be initialized with NULL pointers, achieving the very same effect. Therefore remove this exception. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Stephane Eranian <eranian@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03  perf/x86/intel: Simplify intel_tfa_commit_scheduling()  (Peter Zijlstra)
validate_group() calls x86_schedule_events(.assign=NULL) and therefore will not call intel_tfa_commit_scheduling(). So there is no point in checking cpuc->is_fake, we'll never get there. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-02  Merge tag 'pidfd-fixes-v5.1-rc3' of gitolite.kernel.org:pub/scm/linux/kernel/git/brauner/linux  (Linus Torvalds)
Pull pidfd fix from Christian Brauner: "This should be an uncontroversial fix for pidfd_send_signal() by Jann to better align its behavior with other signal sending functions: In one of the early versions of the patchset it was suggested to not unconditionally error out when a signal with SI_USER is sent to a non-current task (cf. [1]). Instead, pidfd_send_signal() currently silently changes this to a regular kill signal. While this is technically fine, the semantics are weird since the kernel just silently converts a user's request behind their back and also no other signal sending function allows this. It gets more hairy when we introduce sending signals to a specific thread soon. So let's align pidfd_send_signal() with all the other signal sending functions and error out when SI_USER signals are sent to a non-current task" * tag 'pidfd-fixes-v5.1-rc3' of gitolite.kernel.org:pub/scm/linux/kernel/git/brauner/linux: signal: don't silently convert SI_USER signals to non-current pidfd
2019-04-03  ASoC: Intel: cht_bsw_max98090_ti: Enable codec clock once and keep it enabled  (Hans de Goede)
Users have been seeing sound stability issues with max98090 codecs since: commit 648e921888ad ("clk: x86: Stop marking clocks as CLK_IS_CRITICAL") At first that commit broke sound for Chromebook Swanky and Clapper models; the problem was that the machine-driver had been controlling the wrong clock on those models since support for them was added. This was hidden by clk-pmc-atom.c keeping the actual clk on unconditionally. With the machine-driver controlling the proper clock, sound works again but we are seeing bug reports describing it as: low volume, "sounds like played at 10x speed" and unstable. When these issues are hit the following message is seen in dmesg: "max98090 i2c-193C9890:00: PLL unlocked". Attempts have been made to fix this by inserting a delay between enabling the clk and enabling and checking the pll, but this has not helped. It seems that at least on boards which use pmc_plt_clk_0 as clock, if we ever disable the clk, the pll loses its lock and after that we get various issues. This commit fixes this by enabling the clock once at probe time on these boards. In essence this restores the old behavior of clk-pmc-atom.c always keeping the clk on on these boards. Fixes: 648e921888ad ("clk: x86: Stop marking clocks as CLK_IS_CRITICAL") Reported-by: Mogens Jensen <mogens-jensen@protonmail.com> Reported-by: Dean Wallace <duffydack73@gmail.com> Signed-off-by: Hans de Goede <hdegoede@redhat.com> Acked-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> Signed-off-by: Mark Brown <broonie@kernel.org>
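A minimal sketch of the approach, assuming a board driver that already holds a struct clk pointer for the codec reference clock (the names are illustrative, not the actual machine-driver code):

    #include <linux/clk.h>
    #include <linux/device.h>

    /* Enable the codec reference clock once at probe time and never gate
     * it again, so the codec PLL cannot lose its lock at runtime. */
    static int board_enable_mclk(struct device *dev, struct clk *mclk)
    {
        int ret = clk_prepare_enable(mclk);

        if (ret)
            dev_err(dev, "failed to enable mclk: %d\n", ret);
        return ret;
    }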
2019-04-02  Merge tag 'hwmon-for-v5.1-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging  (Linus Torvalds)
Pull hwmon fixes from Guenter Roeck: "Couple of minor hwmon fixes" * tag 'hwmon-for-v5.1-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging: dt-bindings: hwmon: (adc128d818) Specify ti,mode property size hwmon: (ntc_thermistor) Fix temperature type reporting hwmon: (occ) Fix power sensor indexing hwmon: (w83773g) Select REGMAP_I2C to fix build error
2019-04-02  Update Nicolas Pitre's email address  (Nicolas Pitre)
The @linaro version won't be valid much longer. Signed-off-by: Nicolas Pitre <nico@fluxnic.net> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-04-03  ASoC: wm_adsp: Check for buffer in trigger stop  (Charles Keepax)
Trigger stop can be called in situations where trigger start failed and as such it can't be assumed the buffer is already attached to the compressed stream or a NULL pointer may be dereferenced. Fixes: 639e5eb3c7d6 ("ASoC: wm_adsp: Correct handling of compressed streams that restart") Signed-off-by: Charles Keepax <ckeepax@opensource.cirrus.com> Signed-off-by: Mark Brown <broonie@kernel.org>
2019-04-03  KVM: arm/arm64: vgic-v3: Retire pending interrupts on disabling LPIs  (Marc Zyngier)
When disabling LPIs (for example on reset) at the redistributor level, it is expected that LPIs that were pending in the CPU interface are eventually retired. Currently, this is not what is happening, and these LPIs will stay in the ap_list, eventually being acknowledged by the vcpu (which didn't quite expect this behaviour). The fix is thus to retire these LPIs from the list of pending interrupts as we disable LPIs. Reported-by: Heyi Guo <guoheyi@huawei.com> Tested-by: Heyi Guo <guoheyi@huawei.com> Fixes: 0e4e82f154e3 ("KVM: arm64: vgic-its: Enable ITS emulation as a virtual MSI controller") Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-04-02  rtc: da9063: set uie_unsupported when relevant  (Alexandre Belloni)
The DA9063AD doesn't support alarms with second precision; its granularity is the minute. Set uie_unsupported in that case. Reported-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Reported-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Tested-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Acked-by: Steve Twiss <stwiss.opensource@diasemi.com> Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
2019-04-02  drm/amdgpu: remove unnecessary rlc reset function on gfx9  (Le Ma)
The rlc reset function is not necessary during the gfx9 initialization/resume phase, and it would even cause rlc fw loading to fail on some gfx9 ASICs. Remove this function; the removal has been verified on Vega/Raven platforms. Signed-off-by: Le Ma <le.ma@amd.com> Reviewed-by: Feifei Xu <Feifei.Xu@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-04-02  Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-queue  (David S. Miller)
Jeff Kirsher says: ==================== Intel Wired LAN Driver Fixes 2019-04-01 This series contains two fixes for XDP in the i40e driver. Björn provides both fixes, first moving a function out of the header and into the main.c file. The second fixes a regression introduced in an earlier patch that removed umem from the VSI. This caused an issue because the setup code would try to enable AF_XDP zero copy unconditionally, as long as there was a umem placed in the netdev receive structure. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-02  ip6_tunnel: Match to ARPHRD_TUNNEL6 for dev type  (Sheena Mira-ato)
The device type for ip6 tunnels is set to ARPHRD_TUNNEL6. However, the ip4ip6_err function is expecting the device type of the tunnel to be ARPHRD_TUNNEL. Since the device types do not match, the function exits and the ICMP error packet is not sent to the originating host. Note that the device type for IPv4 tunnels is set to ARPHRD_TUNNEL. The fix is to expect a tunnel device type of ARPHRD_TUNNEL6 instead. Now the tunnel device type matches and the ICMP error packet is sent to the originating host. Signed-off-by: Sheena Mira-ato <sheena.mira-ato@alliedtelesis.co.nz> Signed-off-by: David S. Miller <davem@davemloft.net>
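The essence of the change, as a hedged one-line helper (hypothetical name; the driver performs this check inline):

    #include <linux/netdevice.h>
    #include <linux/if_arp.h>

    /* An ip6 tunnel netdev registers as ARPHRD_TUNNEL6, so that is the
     * type the error handler has to accept (previously ARPHRD_TUNNEL). */
    static bool is_ip6_tunnel_dev(const struct net_device *dev)
    {
        return dev->type == ARPHRD_TUNNEL6;
    }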
2019-04-02  ARC: [hsdk] Make it easier to add PAE40 region to DTB  (Vineet Gupta)
1. Bump the top-level address-cells/size-cells nodes to 2, to ensure all downstream addresses are 64-bit unless explicitly specified otherwise (as in the "soc" bus with all peripherals). 2. Specify "memory" with address/size 2 as well. 3. Add a commented reference for a PAE40 region beyond the 4GB physical address space. Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2019-04-02  ARC: PAE40: don't panic and instead turn off hw ioc  (Vineet Gupta)
HSDK currently panics when built for HIGHMEM/ARC_HAS_PAE40 because ioc is enabled by default, which doesn't work for the 2 non-contiguous memory nodes. So get PAE working by disabling ioc instead. Tested with !PAE40 by forcing @ioc_enable=0 and running the glibc testsuite over ssh. Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2019-04-02  staging: most: core: use device description as name  (Christian Gromm)
This patch uses the device description to clearly identify a device attached to the bus. It is needed because the currently used mdevX notation is not sufficient in case more than one network interface controller is being used at the same time. Cc: stable@vger.kernel.org Signed-off-by: Christian Gromm <christian.gromm@microchip.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-04-02  misc: fastrpc: add checked value for dma_set_mask  (Bo YU)
The return value of dma_set_mask() should be checked so that some information is reported when setting the DMA mask fails. Detected by CoverityScan, CID# 1443983: Error handling issues (CHECKED_RETURN) Fixes: f6f9279f2bf0 ("misc: fastrpc: Add Qualcomm fastrpc basic driver model") Signed-off-by: Bo YU <tsu.yubo@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
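The pattern, shown as a hedged sketch with a hypothetical helper rather than the actual fastrpc diff:

    #include <linux/dma-mapping.h>
    #include <linux/device.h>

    /* Report a failure to set the DMA mask instead of silently ignoring it. */
    static int init_dma_mask(struct device *dev)
    {
        int err = dma_set_mask(dev, DMA_BIT_MASK(32));

        if (err)
            dev_err(dev, "32-bit DMA enable failed: %d\n", err);
        return err;
    }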
2019-04-02  Merge tag 'misc-habanalabs-fixes-2019-04-01' of git://people.freedesktop.org/~gabbayo/linux into char-misc-linus  (Greg Kroah-Hartman)
Oded writes: The following bug fix is included in this tag: - Fix the low credit limit for DMA channel #0. Without this fix, the channel is unusable by the user. * tag 'misc-habanalabs-fixes-2019-04-01' of git://people.freedesktop.org/~gabbayo/linux: habanalabs: remove low credit limit of DMA #0
2019-04-02  EDAC/altera, firmware/intel: Add Stratix10 ECC DBE SMC call  (Thor Thayer)
Reserve ECC Double Bit Error SMC call to alert U-Boot that a DBE has occurred. Move the call from local EDAC header file to a common header. [ bp: Merge the two patches. ] Signed-off-by: Thor Thayer <thor.thayer@linux.intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Richard Gong <richard.gong@intel.com> Reviewed-by: Alan Tull <atull@kernel.org> # firmware Cc: Greg KH <greg@kroah.com> Cc: James Morse <james.morse@arm.com> Cc: linux-edac <linux-edac@vger.kernel.org> Cc: mchehab@kernel.org Link: https://lkml.kernel.org/r/1553870639-23895-1-git-send-email-thor.thayer@linux.intel.com Signed-off-by: Borislav Petkov <bp@suse.de>
2019-04-02  blk-mq: add trace block plug and unplug for multiple queues  (Yufen Yu)
For now, we only trace plug for single-queue devices or drivers that provide .commit_rqs, and do not trace plug for multi-queue devices. But unplug events will still be recorded when blk_mq_flush_plug_list() is called. The trace events are then asymmetrical: there is an unplug without a corresponding plug. This patch adds trace plug and unplug for multi-queue devices in blk_mq_make_request(). After that, we can accurately trace plug and unplug for multiple queues. Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Yufen Yu <yuyufen@huawei.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-04-02  block: use blk_free_flush_queue() to free hctx->fq in blk_mq_init_hctx  (Shenghui Wang)
kfree() can leak the hctx->fq->flush_rq field. Reviewed-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Shenghui Wang <shhuiw@foxmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>