2019-10-08  Merge tag 'linux-kselftest-5.4-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest  (Linus Torvalds)
Pull Kselftest fixes from Shuah Khan: "Fixes for existing tests and the framework. Cristian Marussi's patches add the ability to skip targets (tests) and to exclude tests that didn't build from the run-list. These patches improve the Kselftest results. The ability to skip targets helps avoid running tests that aren't supported in certain environments. As an example, bpf tests from mainline aren't supported on stable kernels and depend on bleeding-edge llvm. Being able to skip bpf on systems that can't meet this llvm dependency will be helpful. Kselftest can now be built and installed from the main Makefile. This change helps simplify Kselftest use-cases, which addresses a request from users. Kees Cook added per-test timeout support to limit individual test run-time."

* tag 'linux-kselftest-5.4-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest:
  selftests: watchdog: Add command line option to show watchdog_info
  selftests: watchdog: Validate optional file argument
  selftests/kselftest/runner.sh: Add 45 second timeout per test
  kselftest: exclude failed TARGETS from runlist
  kselftest: add capability to skip chosen TARGETS
  selftests: Add kselftest-all and kselftest-install targets
2019-10-08  x86/cpu: Add Comet Lake to the Intel CPU models header  (Kan Liang)
Comet Lake is the new 10th Gen Intel processor. Add two new CPU model numbers to the Intel family list. The CPU model numbers are not published in the SDM yet but they come from an authoritative internal source. [ bp: Touch up commit message. ] Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Tony Luck <tony.luck@intel.com> Cc: ak@linux.intel.com Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/1570549810-25049-2-git-send-email-kan.liang@linux.intel.com
2019-10-08  rt2x00: remove input-polldev.h header  (Dmitry Torokhov)
The driver does not use input subsystem so we do not need this header, and it is being removed, so stop pulling it in. Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
2019-10-08  doc: move namespaces.rst from kbuild/ to core-api/  (Masahiro Yamada)
We discussed a better location for this file, and agreed that core-api/ is a good fit. Rename it to symbol-namespaces.rst for disambiguation, and also add it to index.rst and MAINTAINERS. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Acked-by: Matthias Maennich <maennich@google.com> Signed-off-by: Jessica Yu <jeyu@kernel.org>
2019-10-08  drm/i915/gt: Flush submission tasklet before waiting/retiring  (Chris Wilson)
A common bane of ours is arbitrary delays in ksoftirqd processing our submission tasklet. Give the submission tasklet a kick before we wait to avoid those delays eating into a tight timeout. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Stuart Summers <stuart.summers@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191008105655.13256-1-chris@chris-wilson.co.uk
2019-10-08  drm/i915/perf: drop list of streams  (Lionel Landwerlin)
At some point in time there was the idea that we could have multiple streams from the same piece of HW, but that never materialized, and given the hard time we already have making everything work with the submission side, there is no real point having this list of one element around. Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20191008140111.5437-1-chris@chris-wilson.co.uk
2019-10-08  drm/i915/selftests: Assign the intel_runtime_pm pointer for mock_uncore  (Chris Wilson)
Couple up our mock_uncore to know about the fake global device and its runtime powermanagement. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Matthew Auld <matthew.william.auld@gmail.com> Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191008145045.23157-1-chris@chris-wilson.co.uk
2019-10-08  drm: damage_helper: Fix race checking plane->state->fb  (Sean Paul)
Since the dirtyfb ioctl doesn't give us any hints as to which plane is scanning out the fb it's marking as damaged, we need to loop through planes to find it. Currently we just reach into plane state and check, but that can race with another commit changing the fb out from under us. This patch locks the plane before checking the fb and will release the lock if the plane is not displaying the dirty fb. Fixes: b9fc5e01d1ce ("drm: Add helper to implement legacy dirtyfb") Cc: Rob Clark <robdclark@gmail.com> Cc: Deepak Rawat <drawat@vmware.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Thomas Hellstrom <thellstrom@vmware.com> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Cc: Maxime Ripard <maxime.ripard@bootlin.com> Cc: Sean Paul <sean@poorly.run> Cc: David Airlie <airlied@linux.ie> Cc: Daniel Vetter <daniel@ffwll.ch> Cc: dri-devel@lists.freedesktop.org Cc: <stable@vger.kernel.org> # v5.0+ Reported-by: Daniel Vetter <daniel@ffwll.ch> Reviewed-by: Daniel Vetter <daniel@ffwll.ch> Signed-off-by: Sean Paul <seanpaul@chromium.org> Link: https://patchwork.freedesktop.org/patch/msgid/20190904202938.110207-1-sean@poorly.run
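A rough sketch of the locking pattern described (illustrative only; the real helper must also handle -EDEADLK backoff through the acquire context):

    drm_for_each_plane(plane, fb->dev) {
            ret = drm_modeset_lock(&plane->mutex, state->acquire_ctx);
            if (ret)
                    goto out;       /* -EDEADLK: back off and retry */

            if (plane->state->fb != fb) {
                    /* not the plane scanning out this fb; drop the lock */
                    drm_modeset_unlock(&plane->mutex);
                    continue;
            }

            /* plane->state->fb is now stable under plane->mutex */
    }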
2019-10-08  arm64: armv8_deprecated: Checking return value for memory allocation  (Yunfeng Ye)
There is no return value checking when using kzalloc() and kcalloc() for memory allocation, so add it. Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com> Signed-off-by: Will Deacon <will@kernel.org>
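For illustration, the added checks follow the usual pattern (hypothetical allocation site, not the exact diff):

    insn = kzalloc(sizeof(*insn), GFP_KERNEL);
    if (!insn)
            return;         /* bail out instead of dereferencing NULL later */

    ops = kcalloc(nr_ops, sizeof(*ops), GFP_KERNEL);
    if (!ops) {
            kfree(insn);    /* unwind the earlier allocation */
            return;
    }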
2019-10-08  lib/string: Make memzero_explicit() inline instead of external  (Arvind Sankar)
With the use of the barrier implied by barrier_data(), there is no need for memzero_explicit() to be extern. Making it inline saves the overhead of a function call, and allows the code to be reused in arch/*/purgatory without having to duplicate the implementation. Tested-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Reviewed-by: Hans de Goede <hdegoede@redhat.com> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Borislav Petkov <bp@alien8.de> Cc: H . Peter Anvin <hpa@zytor.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephan Mueller <smueller@chronox.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-crypto@vger.kernel.org Cc: linux-s390@vger.kernel.org Fixes: 906a4bb97f5d ("crypto: sha256 - Use get/put_unaligned_be32 to get input, memzero_explicit") Link: https://lkml.kernel.org/r/20191007220000.GA408752@rani.riverdale.lan Signed-off-by: Ingo Molnar <mingo@kernel.org>
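For reference, the inline form has roughly this shape (a sketch of what the commit describes, not necessarily the exact hunk):

    static __always_inline void memzero_explicit(void *s, size_t count)
    {
            memset(s, 0, count);
            barrier_data(s);        /* keeps the compiler from eliding the memset */
    }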
2019-10-08  x86/cpu/vmware: Use the full form of INL in VMWARE_PORT  (Sami Tolvanen)
LLVM's assembler doesn't accept the short form INL instruction: inl (%%dx) but instead insists on the output register to be explicitly specified: <inline asm>:1:7: error: invalid operand for instruction inl (%dx) ^ LLVM ERROR: Error parsing inline asm Use the full form of the instruction to fix the build. Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Acked-by: Thomas Hellstrom <thellstrom@vmware.com> Cc: clang-built-linux@googlegroups.com Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: virtualization@lists.linux-foundation.org Cc: "VMware, Inc." <pv-drivers@vmware.com> Cc: x86-ml <x86@kernel.org> Link: https://github.com/ClangBuiltLinux/linux/issues/734 Link: https://lkml.kernel.org/r/20191007192129.104336-1-samitolvanen@google.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
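Illustrative inline-asm comparison (not the actual VMWARE_PORT macro; constraints simplified):

    unsigned int eax = 0;
    unsigned short port = 0x5658;   /* VMWARE_HYPERVISOR_PORT, shown for illustration */

    /* Short form: accepted by GNU as, rejected by LLVM's integrated assembler. */
    asm volatile("inl (%%dx)" : "=a" (eax) : "d" (port));

    /* Full form with the accumulator spelled out, accepted by both. */
    asm volatile("inl %%dx, %%eax" : "=a" (eax) : "d" (port));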
2019-10-08  arm64: Allow CAVIUM_TX2_ERRATUM_219 to be selected  (Marc Zyngier)
Allow the user to select the workaround for TX2-219, and update the silicon-errata.rst file to reflect this. Cc: <stable@vger.kernel.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-10-08  arm64: Avoid Cavium TX2 erratum 219 when switching TTBR  (Marc Zyngier)
As a PRFM instruction racing against a TTBR update can have undesirable effects on TX2, NOP-out such PRFM on cores that are affected by the TX2-219 erratum. Cc: <stable@vger.kernel.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-10-08  arm64: Enable workaround for Cavium TX2 erratum 219 when running SMT  (Marc Zyngier)
It appears that the only case where we need to apply the TX2_219_TVM mitigation is when the core is in SMT mode. So let's condition the enabling on detecting a CPU whose MPIDR_EL1.Aff0 is non-zero. Cc: <stable@vger.kernel.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-10-08  x86/asm: Fix MWAITX C-state hint value  (Janakarajan Natarajan)
As per "AMD64 Architecture Programmer's Manual Volume 3: General-Purpose and System Instructions", MWAITX EAX[7:4]+1 specifies the optional hint of the optimized C-state. For C0 state, EAX[7:4] should be set to 0xf. Currently, a value of 0xf is set for EAX[3:0] instead of EAX[7:4]. Fix this by changing MWAITX_DISABLE_CSTATES from 0xf to 0xf0. This hasn't had any implications so far because setting reserved bits in EAX is simply ignored by the CPU. [ bp: Fixup comment in delay_mwaitx() and massage. ] Signed-off-by: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Frederic Weisbecker <frederic@kernel.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: "x86@kernel.org" <x86@kernel.org> Cc: Zhenzhong Duan <zhenzhong.duan@oracle.com> Cc: <stable@vger.kernel.org> Link: https://lkml.kernel.org/r/20191007190011.4859-1-Janakarajan.Natarajan@amd.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-10-08  arm64: KVM: Trap VM ops when ARM64_WORKAROUND_CAVIUM_TX2_219_TVM is set  (Marc Zyngier)
In order to workaround the TX2-219 erratum, it is necessary to trap TTBRx_EL1 accesses to EL2. This is done by setting HCR_EL2.TVM on guest entry, which has the side effect of trapping all the other VM-related sysregs as well. To minimize the overhead, a fast path is used so that we don't have to go all the way back to the main sysreg handling code, unless the rest of the hypervisor expects to see these accesses. Cc: <stable@vger.kernel.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-10-08  btrfs: silence maybe-uninitialized warning in clone_range  (Austin Kim)
GCC throws a warning message as below: ‘clone_src_i_size’ may be used uninitialized in this function [-Wmaybe-uninitialized] #define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0) ^ fs/btrfs/send.c:5088:6: note: ‘clone_src_i_size’ was declared here u64 clone_src_i_size; ^ The clone_src_i_size variable is only used as call-by-reference in a call to get_inode_info(). Silence the warning by initializing clone_src_i_size to 0. Note that the warning is a false positive, reported by older versions of GCC (e.g. 7.x) but not by e.g. 9.x. As numerous people have reported it, the patch is applied. Setting clone_src_i_size to 0 does not otherwise make sense and would not have any effect in case the code changes in the future. Signed-off-by: Austin Kim <austindh.kim@gmail.com> Reviewed-by: David Sterba <dsterba@suse.com> [ add note ] Signed-off-by: David Sterba <dsterba@suse.com>
2019-10-08  efi/tpm: Fix sanity check of unsigned tbl_size being less than zero  (Colin Ian King)
Currently the check for tbl_size being less than zero is always false because tbl_size is unsigned. Fix this by making it a signed int. Addresses-Coverity: ("Unsigned compared against 0") Signed-off-by: Colin Ian King <colin.king@canonical.com> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Jerry Snitselaar <jsnitsel@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: kernel-janitors@vger.kernel.org Cc: linux-efi@vger.kernel.org Fixes: e658c82be556 ("efi/tpm: Only set 'efi_tpm_final_log_size' after successful event log parsing") Link: https://lkml.kernel.org/r/20191008100153.8499-1-colin.king@canonical.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-10-08  drm/i915/selftests: Assign the mock_engine->uncore shortcut  (Chris Wilson)
Set up the engine->uncore shortcut on mock_engine creation. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191008071121.25088-1-chris@chris-wilson.co.uk
2019-10-08  drm/i915/execlists: Assign virtual_engine->uncore from first sibling  (Chris Wilson)
Copy across the engine->uncore shortcut to the virtual_engine from its first physical engine, similar to the handling of the engine->gt backpointer. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191008070342.4045-1-chris@chris-wilson.co.uk
2019-10-08  Merge branch 'for-joerg/arm-smmu/fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into iommu/fixes  (Joerg Roedel)
2019-10-08  drm/i915/tgl: Add DC3CO counter in i915_dmc_info  (Anshuman Gupta)
Adding DC3CO counter in i915_dmc_info debugfs will be useful for DC3CO validation. DMC firmware uses DMC_DEBUG3 register as DC3CO counter register on TGL, as per B.Specs DMC_DEBUG3 is general purpose register. v1: comment modification for DMC_DBUG3. using GEN >= 12 check instead of IS_TIGERLAKE() to print DMC_DEBUG3 counter value. Cc: Jani Nikula <jani.nikula@intel.com> Cc: Imre Deak <imre.deak@intel.com> Cc: Animesh Manna <animesh.manna@intel.com> Reviewed-by: Imre Deak <imre.deak@intel.com> Signed-off-by: Anshuman Gupta <anshuman.gupta@intel.com> Signed-off-by: Imre Deak <imre.deak@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191003081738.22101-7-anshuman.gupta@intel.com
2019-10-08  drm/i915/tgl: Switch between dc3co and dc5 based on display idleness  (Anshuman Gupta)
DC3CO is useful power state, when DMC detects PSR2 idle frame while an active video playback, playing 30fps video on 60hz panel is the classic example of this use case. B.Specs:49196 has a restriction to enable DC3CO only for Video Playback. It will be worthy to enable DC3CO after completion of each pageflip and switch back to DC5 when display is idle because driver doesn't differentiate between video playback and a normal pageflip. We will use Frontbuffer flush call tgl_dc3co_flush() to enable DC3CO state only for ORIGIN_FLIP flush call, because DC3CO state has primarily targeted for VPB use case. We are not interested here for frontbuffer invalidates calls because that triggers PSR2 exit, which will explicitly disable DC3CO. DC5 and DC6 saves more power, but can't be entered during video playback because there are not enough idle frames in a row to meet most PSR2 panel deep sleep entry requirement typically 4 frames. As PSR2 existing implementation is using minimum 6 idle frames for deep sleep, it is safer to enable DC5/6 after 6 idle frames (By scheduling a delayed work of 6 idle frames, once DC3CO has been enabled after a pageflip). After manually waiting for 6 idle frames DC5/6 will be enabled and PSR2 deep sleep idle frames will be restored to 6 idle frames, at this point DMC will triggers DC5/6 once PSR2 enters to deep sleep after 6 idle frames. In future when we will enable S/W PSR2 tracking, we can change the PSR2 required deep sleep idle frames to 1 so DMC can trigger the DC5/6 immediately after S/W manual waiting of 6 idle frames get complete. v2: calculated s/w state to switch over dc3co when there is an update. [Imre] Used cancel_delayed_work_sync() in order to avoid any race with already scheduled delayed work. [Imre] v3: Cancel_delayed_work_sync() may blocked the commit work. hence dropping it, dc5_idle_thread() checks the valid wakeref before putting the reference count, which avoids any chances of dropping a zero wakeref. [Imre (IRC)] v4: Used frontbuffer flush mechanism. [Imre] v5: Used psr.pipe to extract frontbuffer busy bits. [Imre] Used cancel_delayed_work_sync() in encoder disable path. [Imre] Used mod_delayed_work() instead of cancelling and scheduling a delayed work. [Imre] Used psr.lock in tgl_dc5_idle_thread() to enable psr2 deep sleep. [Imre] Removed DC5_REQ_IDLE_FRAMES macro. [Imre] v6: Used dc3co_exitline check instead of TGL and dc3co allowed_dc_mask checks, used delayed_work_pending with the psr lock and removed the psr2_deep_slp_disabled flag. [Imre] v7: Code refactoring, moved most of functional code to inte_psr.c [Imre] Using frontbuffer_bits on psr.pipe check instead of busy_frontbuffer_bits. [Imre] Calculating dc3co_exit_delay in intel_psr_enable_locked. [Imre] Cc: Jani Nikula <jani.nikula@intel.com> Cc: Imre Deak <imre.deak@intel.com> Cc: Animesh Manna <animesh.manna@intel.com> Reviewed-by: Imre Deak <imre.deak@intel.com> Signed-off-by: Anshuman Gupta <anshuman.gupta@intel.com> Signed-off-by: Imre Deak <imre.deak@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191003081738.22101-6-anshuman.gupta@intel.com
2019-10-08  drm/i915/tgl: Do modeset to enable and configure DC3CO exitline  (Anshuman Gupta)
The DC3CO enabling B.Specs sequence requires enabling and configuring exit scanlines in the TRANS_EXITLINE register. Programming this register has to be part of the modeset sequence, as it can't be changed while the transcoder or port is enabled. When the system boots with only an eDP panel there may not be a real modeset as the BIOS has already programmed the necessary registers, therefore it needs to force a modeset to enable and configure the DC3CO exitline. v1: Computing dc3co_exitline crtc state from a DP encoder compute config. [Imre] Enabling and disabling DC3CO PSR2 transcoder exitline from encoder pre_enable and post_disable hooks. [Imre] Computing dc3co_exitline instead of has_dc3co_exitline bool. [Imre] v2: Code refactoring for symmetry and to avoid exported function. [Imre] Removing IS_TIGERLAKE check from compute_config, adding PIPE_A restriction and clearing dc3co_exitline state if crtc is not active or it is not PSR2 capable in dc3co exitline compute_config. [Imre] Using GEN >= 12 check in dc3co exitline get_config. [Imre] Cc: Jani Nikula <jani.nikula@intel.com> Cc: Imre Deak <imre.deak@intel.com> Cc: Animesh Manna <animesh.manna@intel.com> Reviewed-by: Imre Deak <imre.deak@intel.com> Signed-off-by: Anshuman Gupta <anshuman.gupta@intel.com> Signed-off-by: Imre Deak <imre.deak@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191003081738.22101-5-anshuman.gupta@intel.com
2019-10-08  drm/i915/tgl: Enable DC3CO state in "DC Off" power well  (Anshuman Gupta)
Add target_dc_state and used by set_target_dc_state API in order to enable DC3CO state with existing DC states. target_dc_state will enable/disable the desired DC state in DC_STATE_EN reg when "DC Off" power well gets disable/enable. v2: commit log improvement. v3: Used intel_wait_for_register to wait for DC3CO exit. [Imre] Used gen9_set_dc_state() to allow/disallow DC3CO. [Imre] Moved transcoder psr2 exit line enablement from tgl_allow_dc3co() to a appropriate place haswell_crtc_enable(). [Imre] Changed the DC3CO power well enabled call back logic as recommended in review comments. [Imre] v4: Used wait_for_us() instead of intel_wait_for_reg(). [Imre (IRC)] v5: using udelay() instead of waiting for DC3CO exit status. v6: Fixed minor unwanted change. v7: Removed DC3CO powerwell and POWER_DOMAIN_VIDEO. v8: Uniform checks by using only target_dc_state instead of allowed_dc_mask in "DC off" power well callback. [Imre] Adding "DC off" power well id to older platforms. [Imre] Removed psr2_deep_sleep flag from tgl_set_target_dc_state. [Imre] v9: Used switch case for target DC state in gen9_dc_off_power_well_disable(), checking DC3CO state against allowed DC mask, using WARN_ON() in tgl_set_target_dc_state(). [Imre] v10: Code refactoring and using sanitize_target_dc_state(). [Imre] Cc: Jani Nikula <jani.nikula@intel.com> Cc: Imre Deak <imre.deak@intel.com> Cc: Animesh Manna <animesh.manna@intel.com> Reviewed-by: Imre Deak <imre.deak@intel.com> Signed-off-by: Anshuman Gupta <anshuman.gupta@intel.com> Signed-off-by: Imre Deak <imre.deak@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191003081738.22101-4-anshuman.gupta@intel.com
2019-10-08  drm/i915/tgl: Add DC3CO mask to allowed_dc_mask and gen9_dc_mask  (Anshuman Gupta)
Enable dc3co state in enable_dc module param and add dc3co enable mask to allowed_dc_mask and gen9_dc_mask. v1: Adding enable_dc=3,4 options to enable DC3CO with DC5 and DC6 independently. [Animesh] v2: Using a switch statement for cleaner code. [Animesh] Cc: Jani Nikula <jani.nikula@intel.com> Cc: Imre Deak <imre.deak@intel.com> Cc: Animesh Manna <animesh.manna@intel.com> Reviewed-by: Animesh Manna <animesh.manna@intel.com> Reviewed-by: Imre Deak <imre.deak@intel.com> Signed-off-by: Anshuman Gupta <anshuman.gupta@intel.com> Signed-off-by: Imre Deak <imre.deak@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191003081738.22101-3-anshuman.gupta@intel.com
2019-10-08  drm/i915/tgl: Add DC3CO required register and bits  (Anshuman Gupta)
Adding following definition to i915_reg.h 1. DC_STATE_EN register DC3CO bit fields and masks. DC3CO enable bit will be used by driver to make DC3CO ready for DMC f/w and status bit will be used as DC3CO entry status. 2. Transcoder EXITLINE register and its bit fields and mask. Transcoder EXITLINE enable bit represents PSR2 idle frame reset should be applied at exit line and exitlines mask represent required number of scanlines at which DC3CO exit happens. B.Specs:49196 v1: Use of REG_BIT and using extra space for EXITLINE_ macro definition. [Animesh] v2: Grouping EXITLINE reg bits with EXITLINE(trans) define, no functional change. [Ville] Cc: Jani Nikula <jani.nikula@intel.com> Cc: Imre Deak <imre.deak@intel.com> Cc: Animesh Manna <animesh.manna@intel.com> Reviewed-by: Animesh Manna <animesh.manna@intel.com> Reviewed-by: Imre Deak <imre.deak@intel.com> Signed-off-by: Anshuman Gupta <anshuman.gupta@intel.com> Signed-off-by: Imre Deak <imre.deak@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191007094607.2111-1-anshuman.gupta@intel.com
2019-10-08  drm/i915/perf: Set the exclusive stream under perf->lock  (Chris Wilson)
The BKL struct_mutex is no more, the only serialisation we required for setting the exclusive stream is already managed by ce->pin_mutex in gen8_configure_all_contexts(). As such, we can manipulate i915_perf.exclusive_stream underneath our own (already held) perf->lock. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com> Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191007140812.10963-2-chris@chris-wilson.co.uk Link: https://patchwork.freedesktop.org/patch/msgid/20191007210942.18145-2-chris@chris-wilson.co.uk
2019-10-08  drm/i915/perf: Wean ourselves off dev_priv  (Chris Wilson)
Use the local uncore accessors for the GT rather than using the [not-so] magic global dev_priv mmio routines. In the process, we also teach the perf stream to use backpointers to the i915_perf rather than digging it out of dev_priv. v2: Rebase onto i915_perf_types.h Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com> Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com> #v1 Link: https://patchwork.freedesktop.org/patch/msgid/20191007140812.10963-1-chris@chris-wilson.co.uk Link: https://patchwork.freedesktop.org/patch/msgid/20191007210942.18145-1-chris@chris-wilson.co.uk
2019-10-08  drm/i915: Fix Kconfig indentation  (Krzysztof Kozlowski)
Adjust indentation from spaces to tab (+optional two spaces) as in coding style with command like: $ sed -e 's/^ /\t/' -i */Kconfig Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191007173346.9379-1-krzk@kernel.org
2019-10-08  drm/sun4i: dsi: Fix video start delay computation  (Jagan Teki)
The LCD timing definitions between Linux DRM vs Allwinner are different, below diagram shows this clear differences. Active Front Sync Back Region Porch Porch <-----------------------><----------------><--------------><--------------> //////////////////////| ////////////////////// | ////////////////////// |.................. ................ ________________ <----- [hv]display -----> <------------- [hv]sync_start ------------> <--------------------- [hv]sync_end ----------------------> <-------------------------------- [hv]total ------------------------------> <----- lcd_[xy] --------> <- lcd_[hv]spw -> <---------- lcd_[hv]bp ---------> <-------------------------------- lcd_[hv]t ------------------------------> The DSI driver misinterpreted the vbp term from the BSP code to refer only to the backporch, when in fact it was backporch + sync. Thus the driver incorrectly used the vertical front porch plus sync in its calculation of the DRQ set bit value, when it should not have included the sync timing. Including additional sync timings leads to flip_done timed out as: WARNING: CPU: 0 PID: 31 at drivers/gpu/drm/drm_atomic_helper.c:1429 drm_atomic_helper_wait_for_vblanks.part.1+0x298/0x2a0 [CRTC:46:crtc-0] vblank wait timed out Modules linked in: CPU: 0 PID: 31 Comm: kworker/0:1 Not tainted 5.1.0-next-20190514-00029-g09e5b0ed0a58 #18 Hardware name: Allwinner sun8i Family Workqueue: events deferred_probe_work_func [<c010ed54>] (unwind_backtrace) from [<c010b76c>] (show_stack+0x10/0x14) [<c010b76c>] (show_stack) from [<c0688c70>] (dump_stack+0x84/0x98) [<c0688c70>] (dump_stack) from [<c011d9e4>] (__warn+0xfc/0x114) [<c011d9e4>] (__warn) from [<c011da40>] (warn_slowpath_fmt+0x44/0x68) [<c011da40>] (warn_slowpath_fmt) from [<c040cd50>] (drm_atomic_helper_wait_for_vblanks.part.1+0x298/0x2a0) [<c040cd50>] (drm_atomic_helper_wait_for_vblanks.part.1) from [<c040e694>] (drm_atomic_helper_commit_tail_rpm+0x5c/0x6c) [<c040e694>] (drm_atomic_helper_commit_tail_rpm) from [<c040e4dc>] (commit_tail+0x40/0x6c) [<c040e4dc>] (commit_tail) from [<c040e5cc>] (drm_atomic_helper_commit+0xbc/0x128) [<c040e5cc>] (drm_atomic_helper_commit) from [<c0411b64>] (restore_fbdev_mode_atomic+0x1cc/0x1dc) [<c0411b64>] (restore_fbdev_mode_atomic) from [<c04156f8>] (drm_fb_helper_restore_fbdev_mode_unlocked+0x54/0xa0) [<c04156f8>] (drm_fb_helper_restore_fbdev_mode_unlocked) from [<c0415774>] (drm_fb_helper_set_par+0x30/0x54) [<c0415774>] (drm_fb_helper_set_par) from [<c03ad450>] (fbcon_init+0x560/0x5ac) [<c03ad450>] (fbcon_init) from [<c03eb8a0>] (visual_init+0xbc/0x104) [<c03eb8a0>] (visual_init) from [<c03ed1b8>] (do_bind_con_driver+0x1b0/0x390) [<c03ed1b8>] (do_bind_con_driver) from [<c03ed780>] (do_take_over_console+0x13c/0x1c4) [<c03ed780>] (do_take_over_console) from [<c03ad800>] (do_fbcon_takeover+0x74/0xcc) [<c03ad800>] (do_fbcon_takeover) from [<c013c9c8>] (notifier_call_chain+0x44/0x84) [<c013c9c8>] (notifier_call_chain) from [<c013cd20>] (__blocking_notifier_call_chain+0x48/0x60) [<c013cd20>] (__blocking_notifier_call_chain) from [<c013cd50>] (blocking_notifier_call_chain+0x18/0x20) [<c013cd50>] (blocking_notifier_call_chain) from [<c03a6e44>] (register_framebuffer+0x1e0/0x2f8) [<c03a6e44>] (register_framebuffer) from [<c04153c0>] (__drm_fb_helper_initial_config_and_unlock+0x2fc/0x50c) [<c04153c0>] (__drm_fb_helper_initial_config_and_unlock) from [<c04158c8>] (drm_fbdev_client_hotplug+0xe8/0x1b8) [<c04158c8>] (drm_fbdev_client_hotplug) from [<c0415a20>] (drm_fbdev_generic_setup+0x88/0x118) [<c0415a20>] 
(drm_fbdev_generic_setup) from [<c043f060>] (sun4i_drv_bind+0x128/0x160) [<c043f060>] (sun4i_drv_bind) from [<c044b598>] (try_to_bring_up_master+0x164/0x1a0) [<c044b598>] (try_to_bring_up_master) from [<c044b668>] (__component_add+0x94/0x140) [<c044b668>] (__component_add) from [<c0445e1c>] (sun6i_dsi_probe+0x144/0x234) [<c0445e1c>] (sun6i_dsi_probe) from [<c0452ef4>] (platform_drv_probe+0x48/0x9c) [<c0452ef4>] (platform_drv_probe) from [<c04512cc>] (really_probe+0x1dc/0x2c8) [<c04512cc>] (really_probe) from [<c0451518>] (driver_probe_device+0x60/0x160) [<c0451518>] (driver_probe_device) from [<c044f7a4>] (bus_for_each_drv+0x74/0xb8) [<c044f7a4>] (bus_for_each_drv) from [<c045107c>] (__device_attach+0xd0/0x13c) [<c045107c>] (__device_attach) from [<c0450474>] (bus_probe_device+0x84/0x8c) [<c0450474>] (bus_probe_device) from [<c0450900>] (deferred_probe_work_func+0x64/0x90) [<c0450900>] (deferred_probe_work_func) from [<c0135970>] (process_one_work+0x204/0x420) [<c0135970>] (process_one_work) from [<c013690c>] (worker_thread+0x274/0x5a0) [<c013690c>] (worker_thread) from [<c013b3d8>] (kthread+0x11c/0x14c) [<c013b3d8>] (kthread) from [<c01010e8>] (ret_from_fork+0x14/0x2c) Exception stack(0xde539fb0 to 0xde539ff8) 9fa0: 00000000 00000000 00000000 00000000 9fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 9fe0: 00000000 00000000 00000000 00000000 00000013 00000000 ---[ end trace 495200a78b24980e ]--- random: fast init done [drm:drm_atomic_helper_wait_for_dependencies] *ERROR* [CRTC:46:crtc-0] flip_done timed out [drm:drm_atomic_helper_wait_for_dependencies] *ERROR* [CONNECTOR:48:DSI-1] flip_done timed out [drm:drm_atomic_helper_wait_for_dependencies] *ERROR* [PLANE:30:plane-0] flip_done timed out With the terms(as described in above diagram) fixed, the panel displays correctly without any timeouts. Tested-by: Merlijn Wajer <merlijn@wizzup.org> Signed-off-by: Jagan Teki <jagan@amarulasolutions.com> Signed-off-by: Icenowy Zheng <icenowy@aosc.io> Signed-off-by: Maxime Ripard <mripard@kernel.org> Link: https://patchwork.freedesktop.org/patch/msgid/20191006160303.24413-2-icenowy@aosc.io
2019-10-08  drm/panel: tpo-td043mtea1: Fix SPI alias  (Laurent Pinchart)
The panel-tpo-td043mtea1 driver incorrectly includes the OF vendor prefix in its SPI alias. Fix it, and move the manual alias to an SPI module device table. Fixes: dc2e1e5b2799 ("drm/panel: Add driver for the Toppoly TD043MTEA1 panel") Reported-by: H. Nikolaus Schaller <hns@goldelico.com> Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191007170801.27647-6-laurent.pinchart@ideasonboard.com Acked-by: Sam Ravnborg <sam@ravnborg.org> Reviewed-by: Sebastian Reichel <sebastian.reichel@collabora.com> Tested-by: H. Nikolaus Schaller <hns@goldelico.com>
2019-10-08  drm/panel: tpo-td028ttec1: Fix SPI alias  (Laurent Pinchart)
The panel-tpo-td028ttec1 driver incorrectly includes the OF vendor prefix in its SPI alias. Fix it. Fixes: 415b8dd08711 ("drm/panel: Add driver for the Toppoly TD028TTEC1 panel") Reported-by: H. Nikolaus Schaller <hns@goldelico.com> Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191007170801.27647-5-laurent.pinchart@ideasonboard.com Acked-by: Sam Ravnborg <sam@ravnborg.org> Reviewed-by: Sebastian Reichel <sebastian.reichel@collabora.com> Tested-by: H. Nikolaus Schaller <hns@goldelico.com> Tested-by: Andreas Kemnade <andreas@kemnade.info>
2019-10-08  drm/panel: sony-acx565akm: Fix SPI alias  (Laurent Pinchart)
The panel-sony-acx565akm driver incorrectly includes the OF vendor prefix in its SPI alias. Fix it, and move the manual alias to an SPI module device table. Fixes: 1c8fc3f0c5d2 ("drm/panel: Add driver for the Sony ACX565AKM panel") Reported-by: H. Nikolaus Schaller <hns@goldelico.com> Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191007170801.27647-4-laurent.pinchart@ideasonboard.com Acked-by: Sam Ravnborg <sam@ravnborg.org> Reviewed-by: Sebastian Reichel <sebastian.reichel@collabora.com>
2019-10-08  drm/panel: nec-nl8048hl11: Fix SPI alias  (Laurent Pinchart)
The panel-nec-nl8048hl11 driver incorrectly includes the OF vendor prefix in its SPI alias. Fix it, and move the manual alias to an SPI module device table. Fixes: df439abe6501 ("drm/panel: Add driver for the NEC NL8048HL11 panel") Reported-by: H. Nikolaus Schaller <hns@goldelico.com> Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191007170801.27647-3-laurent.pinchart@ideasonboard.com Acked-by: Sam Ravnborg <sam@ravnborg.org> Reviewed-by: Sebastian Reichel <sebastian.reichel@collabora.com>
2019-10-08  drm/panel: lg-lb035q02: Fix SPI alias  (Laurent Pinchart)
The panel-lg-lb035q02 driver incorrectly includes the OF vendor prefix in its SPI alias. Fix it, and move the manual alias to an SPI module device table. Fixes: f5b0c6542476 ("drm/panel: Add driver for the LG Philips LB035Q02 panel") Reported-by: H. Nikolaus Schaller <hns@goldelico.com> Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191007170801.27647-2-laurent.pinchart@ideasonboard.com Acked-by: Sam Ravnborg <sam@ravnborg.org> Reviewed-by: Sebastian Reichel <sebastian.reichel@collabora.com>
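The same fix pattern applies to all five panel entries above; an illustrative sketch for the LG LB035Q02 case (identifiers are representative, not the exact diff):

    /*
     * SPI modaliases carry no OF vendor prefix, so an alias like
     * "spi:lgphilips,lb035q02" never matches; provide a proper
     * SPI device-ID table instead of the manual MODULE_ALIAS.
     */
    static const struct spi_device_id lb035q02_ids[] = {
            { "lb035q02", 0 },
            { /* sentinel */ }
    };
    MODULE_DEVICE_TABLE(spi, lb035q02_ids);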
2019-10-07  io_uring: remove wait loop spurious wakeups  (Pavel Begunkov)
Any changes interesting to tasks waiting in io_cqring_wait() are committed with io_cqring_ev_posted(). However, io_ring_drop_ctx_refs() also tries to do that, but for no reason; that means spurious wakeups on every io_free_req() and io_uring_enter(). Just use percpu_ref_put() instead. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-10-08  Merge tag 'drm-intel-next-2019-10-07' of git://anongit.freedesktop.org/drm/drm-intel into drm-next  (Dave Airlie)
UAPI Changes:
- Never allow userptr into the mappable GGTT (Chris) No existing users. Avoid anyone from even trying to spare a deadlock scenario.

Cross-subsystem Changes:

Core Changes:

Driver Changes:
- Eliminate struct_mutex use as BKL! (Chris) Only used for execbuf serialisation.
- Initialize DDI TC and TBT ports (D-I) on Tigerlake (Lucas)
- Fix DKL link training for 2.7GHz and 1.62GHz (Jose)
- Add Tigerlake DKL PHY programming sequences (Clinton)
- Add Tigerlake Thunderbolt PLL divider values (Imre)
- drm/i915: Use helpers for drm_mm_node booleans (Chris)
- Restrict L3 remapping sysfs interface to dwords (Chris)
- Fix audio power up sequence for gen10+ display (Kai)
- Skip redundant execlist resubmission (Chris)
- Only unwedge if we can reset GPU first (Chris)
- Initialise breadcrumb lists on the virtual engine (Chris)
- Don't rely on kernel context existing during early errors (Matt A)
- Update Icelake+ MG_DP_MODE programming table (Clinton)
- Update DMC firmware for Icelake (Anusha)
- Downgrade DP MST error after unplugging TypeC cable (Srinivasan)
- Limit MST modes based on plane size too (Ville)
- Polish intel_tv_mode_valid() (Ville)
- Fix g4x sprite scaling stride check with GTT remapping (Ville)
- Don't advertize non-exisiting crtcs (Ville)
- Clean up encoder->crtc_mask setup (Ville)
- Use tc_port instead of port parameter to MG registers (Jose)
- Remove static variable for aux last status (Jani)
- Implement a better i945gm vblank irq vs. C-states workaround (Ville)
- Make the object creation interface consistent (CQ)
- Rename intel_vga_msr_write() to intel_vga_reset_io_mem() (Jani, Ville)
- Eliminate previous drm_dbg/drm_err usage (Jani)
- Move gmbus setup down to intel_modeset_init() (Jani)
- Abstract all vgaarb access to intel_vga.[ch] (Jani)
- Split out i915_switcheroo.[ch] from i915_drv.c (Jani)
- Use intel_gt in has_reset* (Chris)
- Eliminate return value for i915_gem_init_early (Matt A)
- Selftest improvements (Chris)
- Update HuC firmware header version number format (Daniele)

Signed-off-by: Dave Airlie <airlied@redhat.com> From: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191007134801.GA24313@jlahtine-desk.ger.corp.intel.com
2019-10-07  Merge branch 'akpm' (patches from Andrew)  (Linus Torvalds)
Merge misc fixes from Andrew Morton: "The usual shower of hotfixes. Chris's memcg patches aren't actually fixes - they're mature but a few niggling review issues were late to arrive. The ocfs2 fixes are quite old - those took some time to get reviewer attention. Subsystems affected by this patch series: ocfs2, hotfixes, mm/memcg, mm/slab-generic"

* emailed patches from Andrew Morton <akpm@linux-foundation.org>:
  mm, sl[aou]b: guarantee natural alignment for kmalloc(power-of-two)
  mm, sl[ou]b: improve memory accounting
  mm, memcg: make scan aggression always exclude protection
  mm, memcg: make memory.emin the baseline for utilisation determination
  mm, memcg: proportional memory.{low,min} reclaim
  mm/vmpressure.c: fix a signedness bug in vmpressure_register_event()
  mm/page_alloc.c: fix a crash in free_pages_prepare()
  mm/z3fold.c: claim page in the beginning of free
  kernel/sysctl.c: do not override max_threads provided by userspace
  memcg: only record foreign writebacks with dirty pages when memcg is not disabled
  mm: fix -Wmissing-prototypes warnings
  writeback: fix use-after-free in finish_writeback_work()
  mm/memremap: drop unused SECTION_SIZE and SECTION_MASK
  panic: ensure preemption is disabled during panic()
  fs: ocfs2: fix a possible null-pointer dereference in ocfs2_info_scan_inode_alloc()
  fs: ocfs2: fix a possible null-pointer dereference in ocfs2_write_end_nolock()
  fs: ocfs2: fix possible null-pointer dereferences in ocfs2_xa_prepare_entry()
  ocfs2: clear zero in unaligned direct IO
2019-10-07  mm, sl[aou]b: guarantee natural alignment for kmalloc(power-of-two)  (Vlastimil Babka)
In most configurations, kmalloc() happens to return naturally aligned (i.e. aligned to the block size itself) blocks for power of two sizes. That means some kmalloc() users might unknowingly rely on that alignment, until stuff breaks when the kernel is built with e.g. CONFIG_SLUB_DEBUG or CONFIG_SLOB, and blocks stop being aligned. Then developers have to devise workaround such as own kmem caches with specified alignment [1], which is not always practical, as recently evidenced in [2]. The topic has been discussed at LSF/MM 2019 [3]. Adding a 'kmalloc_aligned()' variant would not help with code unknowingly relying on the implicit alignment. For slab implementations it would either require creating more kmalloc caches, or allocate a larger size and only give back part of it. That would be wasteful, especially with a generic alignment parameter (in contrast with a fixed alignment to size). Ideally we should provide to mm users what they need without difficult workarounds or own reimplementations, so let's make the kmalloc() alignment to size explicitly guaranteed for power-of-two sizes under all configurations. What this means for the three available allocators? * SLAB object layout happens to be mostly unchanged by the patch. The implicitly provided alignment could be compromised with CONFIG_DEBUG_SLAB due to redzoning, however SLAB disables redzoning for caches with alignment larger than unsigned long long. Practically on at least x86 this includes kmalloc caches as they use cache line alignment, which is larger than that. Still, this patch ensures alignment on all arches and cache sizes. * SLUB layout is also unchanged unless redzoning is enabled through CONFIG_SLUB_DEBUG and boot parameter for the particular kmalloc cache. With this patch, explicit alignment is guaranteed with redzoning as well. This will result in more memory being wasted, but that should be acceptable in a debugging scenario. * SLOB has no implicit alignment so this patch adds it explicitly for kmalloc(). The potential downside is increased fragmentation. While pathological allocation scenarios are certainly possible, in my testing, after booting a x86_64 kernel+userspace with virtme, around 16MB memory was consumed by slab pages both before and after the patch, with difference in the noise. [1] https://lore.kernel.org/linux-btrfs/c3157c8e8e0e7588312b40c853f65c02fe6c957a.1566399731.git.christophe.leroy@c-s.fr/ [2] https://lore.kernel.org/linux-fsdevel/20190225040904.5557-1-ming.lei@redhat.com/ [3] https://lwn.net/Articles/787740/ [akpm@linux-foundation.org: documentation fixlet, per Matthew] Link: http://lkml.kernel.org/r/20190826111627.7505-3-vbabka@suse.cz Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Acked-by: Christoph Hellwig <hch@lst.de> Cc: David Sterba <dsterba@suse.cz> Cc: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Ming Lei <ming.lei@redhat.com> Cc: Dave Chinner <david@fromorbit.com> Cc: "Darrick J . Wong" <darrick.wong@oracle.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Bottomley <James.Bottomley@HansenPartnership.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
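A hypothetical caller showing what the guarantee buys (illustration, not from the patch):

    /*
     * A power-of-two kmalloc() is now naturally aligned under SLAB, SLUB
     * and SLOB alike, so the buffer can be handed to users that assume it
     * (e.g. hardware requiring 512-byte alignment) without a private cache.
     */
    void *buf = kmalloc(512, GFP_KERNEL);

    if (buf)
            WARN_ON((unsigned long)buf & (512 - 1));        /* must not fire */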
2019-10-07  mm, sl[ou]b: improve memory accounting  (Vlastimil Babka)
Patch series "guarantee natural alignment for kmalloc()", v2. This patch (of 2): SLOB currently doesn't account its pages at all, so in /proc/meminfo the Slab field shows zero. Modifying a counter on page allocation and freeing should be acceptable even for the small system scenarios SLOB is intended for. Since reclaimable caches are not separated in SLOB, account everything as unreclaimable. SLUB currently doesn't account kmalloc() and kmalloc_node() allocations larger than order-1 page, that are passed directly to the page allocator. As they also don't appear in /proc/slabinfo, it might look like a memory leak. For consistency, account them as well. (SLAB doesn't actually use page allocator directly, so no change there). Ideally SLOB and SLUB would be handled in separate patches, but due to the shared kmalloc_order() function and different kfree() implementations, it's easier to patch both at once to prevent inconsistencies. Link: http://lkml.kernel.org/r/20190826111627.7505-2-vbabka@suse.cz Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Cc: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Ming Lei <ming.lei@redhat.com> Cc: Dave Chinner <david@fromorbit.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: "Darrick J . Wong" <darrick.wong@oracle.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Bottomley <James.Bottomley@HansenPartnership.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-10-07  mm, memcg: make scan aggression always exclude protection  (Chris Down)
This patch is an incremental improvement on the existing memory.{low,min} relative reclaim work to base its scan pressure calculations on how much protection is available compared to the current usage, rather than how much the current usage is over some protection threshold. This change doesn't change the experience for the user in the normal case too much. One benefit is that it replaces the (somewhat arbitrary) 100% cutoff with an indefinite slope, which makes it easier to ballpark a memory.low value. As well as this, the old methodology doesn't quite apply generically to machines with varying amounts of physical memory. Let's say we have a top level cgroup, workload.slice, and another top level cgroup, system-management.slice. We want to roughly give 12G to system-management.slice, so on a 32GB machine we set memory.low to 20GB in workload.slice, and on a 64GB machine we set memory.low to 52GB. However, because these are relative amounts to the total machine size, while the amount of memory we want to generally be willing to yield to system.slice is absolute (12G), we end up putting more pressure on system.slice just because we have a larger machine and a larger workload to fill it, which seems fairly unintuitive. With this new behaviour, we don't end up with this unintended side effect. Previously the way that memory.low protection works is that if you are 50% over a certain baseline, you get 50% of your normal scan pressure. This is certainly better than the previous cliff-edge behaviour, but it can be improved even further by always considering memory under the currently enforced protection threshold to be out of bounds. This means that we can set relatively low memory.low thresholds for variable or bursty workloads while still getting a reasonable level of protection, whereas with the previous version we may still trivially hit the 100% clamp. The previous 100% clamp is also somewhat arbitrary, whereas this one is more concretely based on the currently enforced protection threshold, which is likely easier to reason about. There is also a subtle issue with the way that proportional reclaim worked previously -- it promotes having no memory.low, since it makes pressure higher during low reclaim. This happens because we base our scan pressure modulation on how far memory.current is between memory.min and memory.low, but if memory.low is unset, we only use the overage method. In most cromulent configurations, this then means that we end up with *more* pressure than with no memory.low at all when we're in low reclaim, which is not really very usable or expected. With this patch, memory.low and memory.min affect reclaim pressure in a more understandable and composable way. For example, from a user standpoint, "protected" memory now remains untouchable from a reclaim aggression standpoint, and users can also have more confidence that bursty workloads will still receive some amount of guaranteed protection. Link: http://lkml.kernel.org/r/20190322160307.GA3316@chrisdown.name Signed-off-by: Chris Down <chris@chrisdown.name> Reviewed-by: Roman Gushchin <guro@fb.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: Michal Hocko <mhocko@kernel.org> Cc: Tejun Heo <tj@kernel.org> Cc: Dennis Zhou <dennis@kernel.org> Cc: Vladimir Davydov <vdavydov.dev@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
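A worked sketch of the scaling described, in pseudo-C (paraphrasing the commit message, not the kernel's actual code):

    /*
     * Memory below the enforced protection threshold is off limits;
     * only the unprotected share of the current usage feels pressure.
     */
    if (usage > protection)
            scan = lruvec_size * (usage - protection) / usage;
    else
            scan = 0;       /* entirely within protection: nothing to scan */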
2019-10-07  mm, memcg: make memory.emin the baseline for utilisation determination  (Chris Down)
Roman points out that when we do the low reclaim pass, we scale the reclaim pressure relative to position between 0 and the maximum protection threshold. However, if the maximum protection is based on memory.elow, and memory.emin is above zero, this means we still may get binary behaviour on second-pass low reclaim. This is because we scale starting at 0, not starting at memory.emin, and since we don't scan at all below emin, we end up with cliff behaviour. This should be a fairly uncommon case since usually we don't go into the second pass, but it makes sense to scale our low reclaim pressure starting at emin. You can test this by catting two large sparse files, one in a cgroup with emin set to some moderate size compared to physical RAM, and another cgroup without any emin. In both cgroups, set an elow larger than 50% of physical RAM. The one with emin will have less page scanning, as reclaim pressure is lower. Rebase on top of and apply the same idea as what was applied to handle cgroup_memory=disable properly for the original proportional patch http://lkml.kernel.org/r/20190201045711.GA18302@chrisdown.name ("mm, memcg: Handle cgroup_disable=memory when getting memcg protection"). Link: http://lkml.kernel.org/r/20190201051810.GA18895@chrisdown.name Signed-off-by: Chris Down <chris@chrisdown.name> Suggested-by: Roman Gushchin <guro@fb.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@kernel.org> Cc: Tejun Heo <tj@kernel.org> Cc: Dennis Zhou <dennis@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-10-07  mm, memcg: proportional memory.{low,min} reclaim  (Chris Down)
cgroup v2 introduces two memory protection thresholds: memory.low (best-effort) and memory.min (hard protection). While they generally do what they say on the tin, there is a limitation in their implementation that makes them difficult to use effectively: that cliff behaviour often manifests when they become eligible for reclaim. This patch implements more intuitive and usable behaviour, where we gradually mount more reclaim pressure as cgroups further and further exceed their protection thresholds. This cliff edge behaviour happens because we only choose whether or not to reclaim based on whether the memcg is within its protection limits (see the use of mem_cgroup_protected in shrink_node), but we don't vary our reclaim behaviour based on this information. Imagine the following timeline, with the numbers the lruvec size in this zone: 1. memory.low=1000000, memory.current=999999. 0 pages may be scanned. 2. memory.low=1000000, memory.current=1000000. 0 pages may be scanned. 3. memory.low=1000000, memory.current=1000001. 1000001* pages may be scanned. (?!) * Of course, we won't usually scan all available pages in the zone even without this patch because of scan control priority, over-reclaim protection, etc. However, as shown by the tests at the end, these techniques don't sufficiently throttle such an extreme change in input, so cliff-like behaviour isn't really averted by their existence alone. Here's an example of how this plays out in practice. At Facebook, we are trying to protect various workloads from "system" software, like configuration management tools, metric collectors, etc (see this[0] case study). In order to find a suitable memory.low value, we start by determining the expected memory range within which the workload will be comfortable operating. This isn't an exact science -- memory usage deemed "comfortable" will vary over time due to user behaviour, differences in composition of work, etc, etc. As such we need to ballpark memory.low, but doing this is currently problematic: 1. If we end up setting it too low for the workload, it won't have *any* effect (see discussion above). The group will receive the full weight of reclaim and won't have any priority while competing with the less important system software, as if we had no memory.low configured at all. 2. Because of this behaviour, we end up erring on the side of setting it too high, such that the comfort range is reliably covered. However, protected memory is completely unavailable to the rest of the system, so we might cause undue memory and IO pressure there when we *know* we have some elasticity in the workload. 3. Even if we get the value totally right, smack in the middle of the comfort zone, we get extreme jumps between no pressure and full pressure that cause unpredictable pressure spikes in the workload due to the current binary reclaim behaviour. With this patch, we can set it to our ballpark estimation without too much worry. Any undesirable behaviour, such as too much or too little reclaim pressure on the workload or system will be proportional to how far our estimation is off. This means we can set memory.low much more conservatively and thus waste less resources *without* the risk of the workload falling off a cliff if we overshoot. 
As a more abstract technical description, this unintuitive behaviour results in having to give high-priority workloads a large protection buffer on top of their expected usage to function reliably, as otherwise we have abrupt periods of dramatically increased memory pressure which hamper performance. Having to set these thresholds so high wastes resources and generally works against the principle of work conservation.

In addition, having proportional memory reclaim behaviour has other benefits. Most notably, before this patch it's basically mandatory to set memory.low to a higher than desirable value because otherwise as soon as you exceed memory.low, all protection is lost, and all pages are eligible to scan again. By contrast, having a gradual ramp in reclaim pressure means that you now still get some protection when thresholds are exceeded, which means that one can now be more comfortable setting memory.low to lower values without worrying that all protection will be lost. This is important because workingset size is really hard to know exactly, especially with variable workloads, so at least getting *some* protection if your workingset size grows larger than you expect increases user confidence in setting memory.low without a huge buffer on top being needed.

Thanks a lot to Johannes Weiner and Tejun Heo for their advice and assistance in thinking about how to make this work better.

In testing these changes, I intended to verify that:

1. Changes in page scanning become gradual and proportional instead of binary. To test this, I experimented stepping further and further down memory.low protection on a workload that floats around 19G workingset when under memory.low protection, watching page scan rates for the workload cgroup:

  +------------+-----------------+--------------------+--------------+
  | memory.low | test (pgscan/s) | control (pgscan/s) | % of control |
  +------------+-----------------+--------------------+--------------+
  | 21G        | 0               | 0                  | N/A          |
  | 17G        | 867             | 3799               | 23%          |
  | 12G        | 1203            | 3543               | 34%          |
  | 8G         | 2534            | 3979               | 64%          |
  | 4G         | 3980            | 4147               | 96%          |
  | 0          | 3799            | 3980               | 95%          |
  +------------+-----------------+--------------------+--------------+

As you can see, the test kernel (with a kernel containing this patch) ramps up page scanning significantly more gradually than the control kernel (without this patch).

2. More gradual ramp up in reclaim aggression doesn't result in premature OOMs. To test this, I wrote a script that slowly increments the number of pages held by stress(1)'s --vm-keep mode until a production system entered severe overall memory contention. This script runs in a highly protected slice taking up the majority of available system memory. Watching vmstat revealed that page scanning continued essentially nominally between test and control, without causing forward reclaim progress to become arrested.
[0]: https://facebookmicrosites.github.io/cgroup2/docs/overview.html#case-study-the-fbtax2-project [akpm@linux-foundation.org: reflow block comments to fit in 80 cols] [chris@chrisdown.name: handle cgroup_disable=memory when getting memcg protection] Link: http://lkml.kernel.org/r/20190201045711.GA18302@chrisdown.name Link: http://lkml.kernel.org/r/20190124014455.GA6396@chrisdown.name Signed-off-by: Chris Down <chris@chrisdown.name> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Roman Gushchin <guro@fb.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: Tejun Heo <tj@kernel.org> Cc: Dennis Zhou <dennis@kernel.org> Cc: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-10-07  mm/vmpressure.c: fix a signedness bug in vmpressure_register_event()  (Dan Carpenter)
The "mode" and "level" variables are enums and in this context GCC will treat them as unsigned ints so the error handling is never triggered. I also removed the bogus initializer because it isn't required any more and it's sort of confusing. [akpm@linux-foundation.org: reduce implicit and explicit typecasting] [akpm@linux-foundation.org: fix return value, add comment, per Matthew] Link: http://lkml.kernel.org/r/20190925110449.GO3264@mwanda Fixes: 3cadfa2b9497 ("mm/vmpressure.c: convert to use match_string() helper") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: David Rientjes <rientjes@google.com> Reviewed-by: Matthew Wilcox <willy@infradead.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Enrico Weigelt <info@metux.net> Cc: Kate Stewart <kstewart@linuxfoundation.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-10-07mm/page_alloc.c: fix a crash in free_pages_prepare()Qian Cai
On architectures like s390, arch_free_page() could mark the page unused
(set_page_unused()) and any later access would trigger a kernel panic.
Fix it by moving arch_free_page() after all calls that may still access
the page.

 Hardware name: IBM 2964 N96 400 (z/VM 6.4.0)
 Krnl PSW : 0404e00180000000 0000000026c2b96e (__free_pages_ok+0x34e/0x5d8)
            R:0 T:1 IO:0 EX:0 Key:0 M:1 W:0 P:0 AS:3 CC:2 PM:0 RI:0 EA:3
 Krnl GPRS: 0000000088d43af7 0000000000484000 000000000000007c 000000000000000f
            000003d080012100 000003d080013fc0 0000000000000000 0000000000100000
            00000000275cca48 0000000000000100 0000000000000008 000003d080010000
            00000000000001d0 000003d000000000 0000000026c2b78a 000000002717fdb0
 Krnl Code: 0000000026c2b95c: ec1100b30659  risbgn  %r1,%r1,0,179,6
            0000000026c2b962: e32014000036  pfd     2,1024(%r1)
           #0000000026c2b968: d7ff10001000  xc      0(256,%r1),0(%r1)
           >0000000026c2b96e: 41101100      la      %r1,256(%r1)
            0000000026c2b972: a737fff8      brctg   %r3,26c2b962
            0000000026c2b976: d7ff10001000  xc      0(256,%r1),0(%r1)
            0000000026c2b97c: e31003400004  lg      %r1,832
            0000000026c2b982: ebff1430016a  asi     5168(%r1),-1
 Call Trace:
 (__free_pages_ok+0x16a/0x5d8)
  memblock_free_all+0x206/0x290
  mem_init+0x58/0x120
  start_kernel+0x2b0/0x570
  startup_continue+0x6a/0xc0
 INFO: lockdep is turned off.
 Last Breaking-Event-Address:
  __free_pages_ok+0x372/0x5d8
 Kernel panic - not syncing: Fatal exception: panic_on_oops
 00: HCPGIR450W CP entered; disabled wait PSW 00020001 80000000 00000000 26A2379C

In the past, only kernel_poison_pages() would trigger this, but it
needs the "page_poison=on" kernel cmdline and I suspect nobody tested
that on s390.  Recently, kernel_init_free_pages() (commit 6471384af2a6
("mm: security: introduce init_on_alloc=1 and init_on_free=1 boot
options")) was added and could trigger this as well.

[akpm@linux-foundation.org: add comment]
Link: http://lkml.kernel.org/r/1569613623-16820-1-git-send-email-cai@lca.pw
Fixes: 8823b1dbc05f ("mm/page_poison.c: enable PAGE_POISONING as a separate option")
Fixes: 6471384af2a6 ("mm: security: introduce init_on_alloc=1 and init_on_free=1 boot options")
Signed-off-by: Qian Cai <cai@lca.pw>
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Alexander Duyck <alexander.duyck@gmail.com>
Cc: <stable@vger.kernel.org> [5.3+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
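The ordering requirement can be modelled in userspace.  In the sketch
below (purely illustrative; all helper names are made up), mprotect()
with PROT_NONE plays the role of s390's arch_free_page() marking the
page unused, and a memset plays the role of poisoning/zeroing on free.
Touching the page after it has been "released" would fault, which is
the userspace analogue of the panic above:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* "arch_release": after this, any access to the page faults. */
static void arch_release(void *page, size_t len)
{
        mprotect(page, len, PROT_NONE);
}

/* "poison": still needs the page to be accessible. */
static void poison(void *page, size_t len)
{
        memset(page, 0xaa, len);
}

int main(void)
{
        size_t len = (size_t)sysconf(_SC_PAGESIZE);
        void *page = mmap(NULL, len, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (page == MAP_FAILED)
                return EXIT_FAILURE;

        /* Fixed ordering: touch the page first, release it last. */
        poison(page, len);
        arch_release(page, len);
        puts("ok: poison ran before release");

        /* Swapping the two calls (release, then poison) would SIGSEGV
         * here, mirroring the crash fixed by this patch. */
        return 0;
}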
2019-10-07mm/z3fold.c: claim page in the beginning of freeVitaly Wool
There's a really hard-to-reproduce race in z3fold between z3fold_free()
and z3fold_reclaim_page().  z3fold_reclaim_page() can claim the page
after z3fold_free() has checked if the page was claimed, and
z3fold_free() will then schedule this page for compaction, which may in
turn lead to random page faults (since that page would have been
reclaimed by then).

Fix that by claiming the page at the beginning of z3fold_free(), and
not forgetting to clear the claim at the end.

[vitalywool@gmail.com: v2]
Link: http://lkml.kernel.org/r/20190928113456.152742cf@bigdell
Link: http://lkml.kernel.org/r/20190926104844.4f0c6efa1366b8f5741eaba9@gmail.com
Signed-off-by: Vitaly Wool <vitalywool@gmail.com>
Reported-by: Markus Linnala <markus.linnala@gmail.com>
Cc: Dan Streetman <ddstreet@ieee.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Henry Burns <henrywolfeburns@gmail.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Markus Linnala <markus.linnala@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
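The essence of the fix is a claim-first handoff: whichever of free or
reclaim claims the page first owns it, and the loser backs off.  Below
is a minimal sketch of that pattern using C11 atomics; the structure
and function names are hypothetical stand-ins, not the actual z3fold
code:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct zpage {
        atomic_flag claimed;                    /* stands in for the claim bit */
};

/* Returns true only for the single caller that wins the claim. */
static bool try_claim(struct zpage *p)
{
        return !atomic_flag_test_and_set(&p->claimed);
}

static void zpage_free(struct zpage *p)
{
        if (!try_claim(p)) {
                puts("free: page already claimed by reclaim, backing off");
                return;                         /* do not schedule compaction */
        }
        puts("free: owns the page, doing the real work");
        atomic_flag_clear(&p->claimed);         /* clear the claim at the end */
}

static void zpage_reclaim(struct zpage *p)
{
        if (!try_claim(p)) {
                puts("reclaim: page already claimed, skipping");
                return;
        }
        puts("reclaim: owns the page, evicting it");
        /* the claim stays set here, mirroring a page that has gone away */
}

int main(void)
{
        struct zpage page = { ATOMIC_FLAG_INIT };

        zpage_reclaim(&page);   /* reclaim wins the claim ... */
        zpage_free(&page);      /* ... so free backs off instead of racing */
        return 0;
}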
2019-10-07kernel/sysctl.c: do not override max_threads provided by userspaceMichal Hocko
Partially revert 16db3d3f1170 ("kernel/sysctl.c: threads-max observe
limits") because that patch causes a regression for any workload which
needs to override the kernel's auto-tuning of the limit.

set_max_threads() implements a boot-time guesstimate to provide a
sensible limit on the number of concurrently running threads so that
runaways will not deplete all the memory.  This is a good thing in
general, but there are workloads which might need to increase this
limit for an application to run (reportedly WebSphere MQ is affected),
and that is simply not possible after the mentioned change.  It is also
very dubious to override an admin decision by an estimation that has no
direct relation to the correctness of the kernel's operation.

Fix this by dropping set_max_threads() from sysctl_max_threads(), so
any value is accepted as long as it fits into MAX_THREADS, which is
important to check because allowing more threads could break the
internal robust futex restriction.  While at it, do not use MIN_THREADS
as the lower boundary either, because it too is only a heuristic for
automatic estimation, and an admin might have a good reason to prevent
new threads from being created even below this limit.

This became more severe when we switched x86 from 8k to 16k kernel
stacks.  Since 6538b8ea886e ("x86_64: expand kernel stack to 16K")
(3.16) we use THREAD_SIZE_ORDER = 2, and that halved the auto-tuned
value.  In the particular case:

  3.12: kernel.threads-max = 515561
  4.4:  kernel.threads-max = 200000

Neither of the two values is really insane on a 32GB machine.

I am not sure we want/need to tune the max_threads value further.  If
anything, the tuning should be removed altogether if proven not useful
in general.  But we definitely need a way to override this auto-tuning.

Link: http://lkml.kernel.org/r/20190922065801.GB18814@dhcp22.suse.cz
Fixes: 16db3d3f1170 ("kernel/sysctl.c: threads-max observe limits")
Signed-off-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Heinrich Schuchardt <xypron.glpk@gmx.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
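To illustrate the behavioural difference, here is a toy model with
made-up numbers and helper names (this is not the sysctl code): the old
behaviour effectively re-clamps an admin-supplied value to the
boot-time heuristic, while the new behaviour only enforces the hard
correctness cap:

#include <stdio.h>

/* Hard correctness cap, standing in for the kernel's MAX_THREADS. */
#define HARD_MAX 0x3fffffff

/* Crude stand-in for the boot-time guesstimate: scales with memory and
 * shrinks as kernel stacks grow (the 8k -> 16k change mentioned above). */
static long boot_heuristic(long mem_gb, long stack_kb)
{
        return mem_gb * 1024 * 1024 / (stack_kb * 8);
}

/* Old behaviour: the admin's value is silently capped at the heuristic. */
static long write_threads_max_old(long requested, long mem_gb, long stack_kb)
{
        long cap = boot_heuristic(mem_gb, stack_kb);

        return requested > cap ? cap : requested;
}

/* New behaviour: accept anything in [1, HARD_MAX]. */
static long write_threads_max_new(long requested)
{
        if (requested < 1)
                return 1;
        return requested > HARD_MAX ? HARD_MAX : requested;
}

int main(void)
{
        long want = 515561;     /* the 3.12-era value from the text above */

        printf("old: %ld\n", write_threads_max_old(want, 32, 16));
        printf("new: %ld\n", write_threads_max_new(want));
        return 0;
}

With these toy numbers the old path silently shrinks the request while
the new path honours it, which is the regression and fix described
above in miniature.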
2019-10-07memcg: only record foreign writebacks with dirty pages when memcg is not ↵Baoquan He
disabled

In the kdump kernel, memcg is usually disabled with
'cgroup_disable=memory' to save memory.  Now the kdump kernel will
always panic when dumping a vmcore to a local disk:

 BUG: kernel NULL pointer dereference, address: 0000000000000ab8
 Oops: 0000 [#1] SMP NOPTI
 CPU: 0 PID: 598 Comm: makedumpfile Not tainted 5.3.0+ #26
 Hardware name: HPE ProLiant DL385 Gen10/ProLiant DL385 Gen10, BIOS A40 10/02/2018
 RIP: 0010:mem_cgroup_track_foreign_dirty_slowpath+0x38/0x140
 Call Trace:
  __set_page_dirty+0x52/0xc0
  iomap_set_page_dirty+0x50/0x90
  iomap_write_end+0x6e/0x270
  iomap_write_actor+0xce/0x170
  iomap_apply+0xba/0x11e
  iomap_file_buffered_write+0x62/0x90
  xfs_file_buffered_aio_write+0xca/0x320 [xfs]
  new_sync_write+0x12d/0x1d0
  vfs_write+0xa5/0x1a0
  ksys_write+0x59/0xd0
  do_syscall_64+0x59/0x1e0
  entry_SYSCALL_64_after_hwframe+0x44/0xa9

And this will corrupt the 1st kernel too when it runs with
'cgroup_disable=memory'.

The trace and some debugging point to commit 97b27821b485 ("writeback,
memcg: Implement foreign dirty flushing"), which introduced this
regression.  Disabling memcg causes the NULL pointer dereference on
uninitialized data in mem_cgroup_track_foreign_dirty_slowpath().

Fix it by returning directly when memcg is disabled, instead of trying
to record foreign writebacks with dirty pages.

Link: http://lkml.kernel.org/r/20190924141928.GD31919@MiWiFi-R3L-srv
Fixes: 97b27821b485 ("writeback, memcg: Implement foreign dirty flushing")
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
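The bug class is easy to sketch in isolation: when a feature is
disabled at boot, its per-feature state is never set up, so any
bookkeeping path has to check the disabled flag before touching that
state.  A userspace analogue with hypothetical names (not the memcg
code) looks like this:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct wb_state {
        long foreign_dirty;
};

/* Like booting with cgroup_disable=memory: state is never initialised. */
static bool feature_disabled = true;
static struct wb_state *state = NULL;

static void track_foreign_dirty(void)
{
        if (feature_disabled)           /* the fix: return directly */
                return;
        state->foreign_dirty++;         /* would dereference NULL otherwise */
}

int main(void)
{
        track_foreign_dirty();
        puts("no crash: guard returned early");
        return 0;
}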
2019-10-07mm: fix -Wmissing-prototypes warningsYi Wang
We get two warnings when building the kernel with W=1:

  mm/shuffle.c:36:12: warning: no previous prototype for `shuffle_show' [-Wmissing-prototypes]
  mm/sparse.c:220:6: warning: no previous prototype for `subsection_mask_set' [-Wmissing-prototypes]

Make the functions static to fix this.

Link: http://lkml.kernel.org/r/1566978161-7293-1-git-send-email-wang.yi59@zte.com.cn
Signed-off-by: Yi Wang <wang.yi59@zte.com.cn>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
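For reference, this class of warning and its fix reduce to a tiny
standalone example (not the actual mm/ code): a function with external
linkage but no prototype in any header trips -Wmissing-prototypes, and
giving it internal linkage resolves the warning when the function is
only used within one file:

/* Build with: gcc -Wmissing-prototypes -o demo demo.c */

/* Before (warns): no previous prototype for 'double_it'
 * int double_it(int x) { return x * 2; }
 */

/* After: file-local, so no prototype is expected. */
static int double_it(int x)
{
        return x * 2;
}

int main(void)
{
        return double_it(21) == 42 ? 0 : 1;
}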