Age | Commit message | Author
2016-07-29mmc: sdhci-pltfm: Drop define for SDHCI_PLTFM_PMOPSUlf Hansson
Due to previous changes this define has no longer a purpose. Instead move the sdhci-pltfm drivers over to use the exported struct sdhci_pltfm_pmops. Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2016-07-29mmc: sdhci-pltfm: Convert to use the SET_SYSTEM_SLEEP_PM_OPSUlf Hansson
Move the system PM callbacks within #ifdef CONFIG_PM_SLEEP so as to avoid them being built when not used. This also allows us to use the SET_SYSTEM_SLEEP_PM_OPS macro, which simplifies the code. Within this context it also makes sense to move the declaration of the struct sdhci_pltfm_pmops outside the #ifdef CONFIG_PM, as SET_SYSTEM_SLEEP_PM_OPS deals with this. This further simplifies the code. Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
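In sketch form, the resulting pattern looks roughly like this (a minimal reconstruction, not the exact driver code):

    #ifdef CONFIG_PM_SLEEP
    static int sdhci_pltfm_suspend(struct device *dev)
    {
        struct sdhci_host *host = dev_get_drvdata(dev);

        return sdhci_suspend_host(host);
    }

    static int sdhci_pltfm_resume(struct device *dev)
    {
        struct sdhci_host *host = dev_get_drvdata(dev);

        return sdhci_resume_host(host);
    }
    #endif

    /*
     * SET_SYSTEM_SLEEP_PM_OPS() expands to nothing when CONFIG_PM_SLEEP
     * is not set, so the ops struct itself can live outside any #ifdef.
     */
    const struct dev_pm_ops sdhci_pltfm_pmops = {
        SET_SYSTEM_SLEEP_PM_OPS(sdhci_pltfm_suspend, sdhci_pltfm_resume)
    };
    EXPORT_SYMBOL_GPL(sdhci_pltfm_pmops);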
2016-07-29mmc: sdhci-pltfm: Make sdhci_pltfm_suspend|resume() staticUlf Hansson
There are no users left of these exported APIs, so let's make them static. Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2016-07-29mmc: sdhci-esdhc-imx: Use common sdhci_suspend|resume_host()Ulf Hansson
To prepare to make the sdhci_pltfm_suspend|resume() static functions, move sdhci-esdhc-imx over to use the sdhci_suspend|resume_host(). Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Acked-by: Dong Aisheng <aisheng.dong@nxp.com>
2016-07-29mmc: sdhci-esdhc-imx: Assign system PM ops within #ifdef CONFIG_PM_SLEEPUlf Hansson
The system PM callbacks aren't used unless CONFIG_PM_SLEEP is set, thus they trigger a compiler warning about unused functions. Avoid this by changing from CONFIG_PM to CONFIG_PM_SLEEP. Reported-by: Arnd Bergmann <arnd@arndb.de> Fixes: b70d0b3b5b29 ("mmc: sdhci-esdhc-imx: add esdhc specific suspend resume callback") Cc: Dong Aisheng <aisheng.dong@nxp.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Acked-by: Dong Aisheng <aisheng.dong@nxp.com>
2016-07-29MIPS: c-r4k: Use SMP calls for CM indexed cache opsJames Hogan
The MIPS Coherence Manager (CM) can propagate address-based ("hit") cache operations to other cores in the coherent system, alleviating software of the need to use SMP calls, however indexed cache operations are not propagated by hardware since doing so makes no sense for separate caches. Update r4k_op_needs_ipi() to report that only hit cache operations are globalized by the CM, requiring indexed cache operations to be globalized by software via an SMP call. r4k_on_each_cpu() previously had a special case for CONFIG_MIPS_MT_SMP, intended to avoid the SMP calls when the only other CPUs in the system were other VPEs in the same core, and hence sharing the same caches. This was changed by commit cccf34e9411c ("MIPS: c-r4k: Fix cache flushing for MT cores") to apparently handle multi-core multi-VPE systems, but it focussed mainly on hit cache ops, so the SMP calls were still disabled entirely for CM systems. This doesn't normally cause problems, but tests can be written to hit these corner cases by using multiple threads, or changing task affinities to force the process to migrate cores. For example the failure of mprotect RW->RX to globally sync icaches (via flush_cache_range) can be detected by modifying and mprotecting a code page on one core, and migrating to a different core to execute from it. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13807/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-07-29MIPS: c-r4k: Avoid small flush_icache_range SMP callsJames Hogan
Avoid SMP calls for flushing small icache ranges. On non-CM platforms, and on CM platforms too once we make r4k_on_each_cpu() take the cache op type into account, it will be called on multiple CPUs due to the possibility that local_r4k_flush_icache_range_ipi() could do non-globalized indexed cache ops. This roughly copies the range size check out into r4k_flush_icache_range(), which can disallow indexed cache ops and allow r4k_on_each_cpu() to skip the SMP call. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13805/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-07-29MIPS: c-r4k: Local flush_icache_range cache op overrideJames Hogan
Allow the permitted cache op types used by local_r4k_flush_icache_range_ipi() to be overridden by the SMP caller. This will allow SMP calls to be avoided under certain circumstances, falling back to a single CPU performing globalized hit cache ops only. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13803/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-07-29MIPS: c-r4k: Split r4k_flush_kernel_vmap_range()James Hogan
Split the operation of r4k_flush_kernel_vmap_range() into separate SMP callbacks for the indexed cache flush and hit cache flush cases, since the logic to determine which to use can be determined by the initiating CPU prior to doing any SMP calls. This will help when we change r4k_on_each_cpu() to distinguish indexed and hit cache ops in a later patch, preventing globalized hit cache ops being performed redundantly on multiple CPUs. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13806/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-07-29MIPS: c-r4k: Exclude sibling CPUs in SMP callsJames Hogan
When performing SMP calls to foreign cores, exclude sibling CPUs from the provided map, as we already handle the local core on the current CPU. This prevents an SMP call from for example core 0, VPE 1 to VPE 0 on the same core. In the process the cpu_foreign_map cpumask is turned into an array of cpumasks, so that each CPU has its own version of it which excludes sibling CPUs. r4k_op_needs_ipi() is also updated to reflect that cache management SMP calls are not needed when all CPUs are siblings (i.e. there are no foreign CPUs according to the new cpu_foreign_map[] semantics which exclude siblings). Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: Felix Fietkau <nbd@nbd.name> Cc: Jayachandran C. <jchandra@broadcom.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13801/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
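A rough sketch of the new per-CPU foreign map (approximate, based on the description above, with the same-core comparison inlined):

    cpumask_t cpu_foreign_map[NR_CPUS];

    static void calculate_cpu_foreign_map(void)
    {
        int i, k, core_present;
        cpumask_t temp_foreign_map;

        /* Pick one representative VPE per online core. */
        cpumask_clear(&temp_foreign_map);
        for_each_online_cpu(i) {
            core_present = 0;
            for_each_cpu(k, &temp_foreign_map)
                if (cpu_data[i].package == cpu_data[k].package &&
                    cpu_data[i].core == cpu_data[k].core)
                    core_present = 1;
            if (!core_present)
                cpumask_set_cpu(i, &temp_foreign_map);
        }

        /* Each CPU's own view additionally excludes its siblings. */
        for_each_online_cpu(i)
            cpumask_andnot(&cpu_foreign_map[i],
                           &temp_foreign_map, &cpu_sibling_map[i]);
    }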
2016-07-29MIPS: c-r4k: Fix valid ASID optimisationJames Hogan
Several cache operations are optimised to return early from the SMP call handler if the memory map in question has no valid ASID on the current CPU, or any online CPU in the case of MIPS_MT_SMP. The idea is that if a memory map has never been used on a CPU it shouldn't have cache lines in need of flushing. However this doesn't cover all cases when ASIDs for other CPUs need to be checked:

- Offline VPEs may have recently been online and brought lines into the (shared) cache, so they should also be checked, rather than only online CPUs.
- SMP systems with a Coherence Manager (CM), but with MT disabled, still have globalized hit cache ops, but don't use SMP calls, so all present CPUs should be taken into account.
- R6 systems have a different multithreading implementation, so MIPS_MT_SMP won't be set, but as above may still have a CM which globalizes hit cache ops.

Additionally for non-globalized cache operations where an SMP call to a single VPE in each foreign core is used, it is not necessary to check every CPU in the system, only sibling CPUs sharing the same first level cache. Fix this by making has_valid_asid() take a cache op type argument like r4k_on_each_cpu(), so it can determine whether r4k_on_each_cpu() will have done SMP calls to other cores. It can then determine which set of CPUs to check the ASIDs of based on that, excluding foreign CPUs if an SMP call will have been performed. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13804/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-07-29MIPS: c-r4k: Add r4k_on_each_cpu cache op type argJames Hogan
The r4k_on_each_cpu() function calls the specified cache flush helper on other CPUs if deemed necessary due to the cache ops not being globalized by hardware. However this really depends on the cache op addressing type, as the MIPS Coherence Manager (CM) if present will globalize "hit" cache ops (addressed by virtual address), but not "index" cache ops (addressed by cache index). This results in index cache ops only being performed on a single CPU when CM is present. Most (but not all) of the functions called by r4k_on_each_cpu() perform cache operations exclusively with a single cache op type, so add a type argument and modify the callers to pass in some combination of R4K_HIT (global kernel virtual addressing or user virtual addressing conditional upon matching active_mm) and R4K_INDEX (index into cache). This will allow r4k_on_each_cpu() to later distinguish these cases and decide whether to perform an SMP call based on it. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13798/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
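In sketch form (the R4K_HIT/R4K_INDEX names are from this series; the dispatch is simplified and reflects the series end state where cpu_foreign_map is per-CPU):

    /* Cache op addressing types passed to r4k_on_each_cpu(). */
    #define R4K_HIT    BIT(0)  /* virtually addressed ("hit") ops */
    #define R4K_INDEX  BIT(1)  /* cache-index addressed ops */

    static inline void r4k_on_each_cpu(unsigned int type,
                                       void (*func)(void *), void *arg)
    {
        preempt_disable();
        /*
         * Hit ops are globalized by the CM, so no IPI is needed for
         * them; R4K_INDEX ops must still run on foreign cores.
         */
        if (r4k_op_needs_ipi(type))
            smp_call_function_many(&cpu_foreign_map[smp_processor_id()],
                                   func, arg, 1);
        func(arg);
        preempt_enable();
    }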
2016-07-29MIPS: c-r4k: Avoid dcache flush for sigtrampsJames Hogan
Avoid the dcache and scache flush in local_r4k_flush_cache_sigtramp() if the icache fills straight from the dcache. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13802/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-07-29MIPS: c-r4k: Fix sigtramp SMP call to use kmapJames Hogan
Fix r4k_flush_cache_sigtramp() and local_r4k_flush_cache_sigtramp() to flush the delay slot emulation trampoline cacheline through a kmap rather than directly when the active_mm doesn't match that of the task initiating the flush, a bit like local_r4k_flush_cache_page() does. This would fix a corner case on SMP systems without hardware globalized hit cache ops, where a migration to another CPU after the flush, where that CPU did not have the same mm active at the time of the flush, could result in stale icache content being executed instead of the trampoline, e.g. from a previous delay slot emulation with a similar stack pointer. This case was artificially triggered by replacing the icache flush with a full indexed flush (not globalized on CM systems) and forcing the SMP call to take place, with a test program that alternated two FPU delay slots with a parent process repeatedly changing scheduler affinity. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13797/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-07-29MIPS: c-r4k: Fix protected_writeback_scache_line for EVAJames Hogan
The protected_writeback_scache_line() function is used by local_r4k_flush_cache_sigtramp() to flush an FPU delay slot emulation trampoline on the userland stack from the caches so it is visible to subsequent instruction fetches. Commit de8974e3f76c ("MIPS: asm: r4kcache: Add EVA cache flushing functions") updated some protected_ cache flush functions to use EVA CACHEE instructions via protected_cachee_op(), and commit 83fd43449baa ("MIPS: r4kcache: Add EVA case for protected_writeback_dcache_line") did the same thing for protected_writeback_dcache_line(), but protected_writeback_scache_line() never got updated. Let's fix that now to flush the right user address from the secondary cache rather than some arbitrary kernel unmapped address. This issue was spotted through code inspection, and it seems unlikely to be possible to hit this in practice. It theoretically affects EVA kernels on EVA capable cores with an L2 cache, where the icache fetches straight from RAM (cpu_icache_snoops_remote_store == 0), running a hard float userland with FPU disabled (nofpu). That both Malta and Boston platforms override cpu_icache_snoops_remote_store to 1 suggests that all MIPS cores fetch instructions into icache straight from L2 rather than RAM. Fixes: de8974e3f76c ("MIPS: asm: r4kcache: Add EVA cache flushing functions") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13800/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-07-29MIPS: SMP: Drop stop_this_cpu() cpu_foreign_map hackJames Hogan
Commit cccf34e9411c ("MIPS: c-r4k: Fix cache flushing for MT cores") added the cpu_foreign_map cpumask containing a single VPE from each online core, and recalculated it when secondary CPUs are brought up. stop_this_cpu() was also updated to recalculate cpu_foreign_map, but with an additional hack before marking the CPU as offline to copy cpu_online_mask into cpu_foreign_map and perform an SMP memory barrier. This appears to have been intended to prevent cache management IPIs being missed when the VPE representing the core in cpu_foreign_map is taken offline while other VPEs remain online. Unfortunately there is nothing in this hack to prevent r4k_on_each_cpu() from reading the old cpu_foreign_map, and smp_call_function_many() from reading that new cpu_online_mask with the core's representative VPE marked offline. It then wouldn't send an IPI to any online VPEs of that core. stop_this_cpu() is only actually called in panic and system shutdown / halt / reboot situations, in which case all CPUs are going down and we don't really need to care about cache management, so drop this hack. Note that the __cpu_disable() case for CPU hotplug is handled in the previous commit, and no synchronisation is needed there due to the use of stop_machine() which prevents hotplug from taking place while any CPU has disabled preemption (as r4k_on_each_cpu() does). Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13796/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-07-29MIPS: SMP: Update cpu_foreign_map on CPU disableJames Hogan
When a CPU is disabled via CPU hotplug, cpu_foreign_map is not updated. This could result in cache management SMP calls being sent to offline CPUs instead of online siblings in the same core. Add a call to calculate_cpu_foreign_map() in the various MIPS cpu disable callbacks after set_cpu_online(). All cases are updated for consistency and to keep cpu_foreign_map strictly up to date, not just those which may support hardware multithreading. Fixes: cccf34e9411c ("MIPS: c-r4k: Fix cache flushing for MT cores") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: David Daney <david.daney@cavium.com> Cc: Kevin Cernekee <cernekee@gmail.com> Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: Huacai Chen <chenhc@lemote.com> Cc: Hongliang Tao <taohl@lemote.com> Cc: Hua Yan <yanh@lemote.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13799/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
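The shape of the change in each platform's cpu_disable callback (a sketch; platform_cpu_disable is a stand-in name):

    static int platform_cpu_disable(void)
    {
        unsigned int cpu = smp_processor_id();

        set_cpu_online(cpu, false);
        calculate_cpu_foreign_map();  /* keep cpu_foreign_map in sync */
        /* ... existing IRQ migration / cache flushing ... */
        return 0;
    }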
2016-07-29MIPS: SMP: Clear ASID without confusing has_valid_asid()James Hogan
The SMP flush_tlb_*() functions may clear the memory map's ASIDs for other CPUs if the mm has only a single user (the current CPU) in order to avoid SMP calls. However this makes it appear to has_valid_asid(), which is used by various cache flush functions, as if the CPUs have never run in the mm, and therefore can't have cached any of its memory. For flush_tlb_mm() this doesn't sound unreasonable. flush_tlb_range() corresponds to flush_cache_range() which does do full indexed cache flushes, but only on the icache if the specified mapping is executable, otherwise it doesn't guarantee that there are no cache contents left for the mm. flush_tlb_page() corresponds to flush_cache_page(), which will perform address based cache ops on the specified page only, and also only touches the icache if the page is executable. It does not guarantee that there are no cache contents left for the mm. For example, this affects flush_cache_range() which uses the has_valid_asid() optimisation. It is required to flush the icache when mappings are made executable (e.g. using mprotect) so they are immediately usable. If some code is changed to non executable in order to be modified then it will not be flushed from the icache during that time, but the ASID on other CPUs may still be cleared for TLB flushing. When the code is changed back to executable, flush_cache_range() will assume the code hasn't run on those other CPUs due to the zero ASID, and won't invalidate the icache on them. This is fixed by clearing the other CPUs' ASIDs to 1 instead of 0 for the above two flush_tlb_*() functions when the corresponding cache flushes are likely to be incomplete (non executable range flush, or any page flush). This ASID appears valid to has_valid_asid(), but still triggers ASID regeneration due to the upper ASID version bits being 0, which is less than the minimum ASID version of 1 and so always treated as stale. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13795/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
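Sketched out (cache_flush_incomplete is a hypothetical stand-in for "non-executable range flush, or any page flush"):

    /* In the SMP flush_tlb_*() helpers, for all CPUs but the current one: */
    for_each_online_cpu(cpu) {
        if (cpu == smp_processor_id())
            continue;
        /*
         * ASID 1 has version bits of 0, below the minimum version of 1,
         * so it is always treated as stale and regenerated before use,
         * yet has_valid_asid() still sees a non-zero value and keeps
         * flushing caches for this mm.
         */
        cpu_context(cpu, mm) = cache_flush_incomplete ? 1 : 0;
    }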
2016-07-29drm: rcar: use generic code for managing zpos plane propertyBenjamin Gaignard
version 6: rebased patch on top of rcar-du changes for zpos
version 4: fix null pointer issue while setting zpos in plane reset function

This patch replaces the custom zpos property handling code in the rcar DRM driver with calls to generic DRM code. Signed-off-by: Benjamin Gaignard <benjamin.gaignard@linaro.org> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Tested-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Cc: Daniel Vetter <daniel@ffwll.ch> Cc: Ville Syrjala <ville.syrjala@linux.intel.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
2016-07-29drm/exynos: use generic code for managing zpos plane propertyMarek Szyprowski
This patch replaces the custom zpos property handling code in the Exynos DRM driver with calls to generic DRM code. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Benjamin Gaignard <benjamin.gaignard@linaro.org> Cc: Inki Dae <inki.dae@samsung.com> Cc: Daniel Vetter <daniel@ffwll.ch> Cc: Ville Syrjala <ville.syrjala@linux.intel.com> Cc: Joonyoung Shim <jy0922.shim@samsung.com> Cc: Seung-Woo Kim <sw0312.kim@samsung.com> Cc: Andrzej Hajda <a.hajda@samsung.com> Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com> Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Cc: Tobias Jakobi <tjakobi@math.uni-bielefeld.de> Cc: Gustavo Padovan <gustavo@padovan.org> Cc: vincent.abriou@st.com Cc: fabien.dessenne@st.com Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2016-07-29drm: sti: use generic zpos for planeBenjamin Gaignard
Remove the private zpos property and use the new generic one instead. The zpos range is now fixed per plane type and normalized before being used in the mixer. Signed-off-by: Benjamin Gaignard <benjamin.gaignard@linaro.org> Cc: Inki Dae <inki.dae@samsung.com> Cc: Daniel Vetter <daniel@ffwll.ch> Cc: Ville Syrjala <ville.syrjala@linux.intel.com> Cc: Joonyoung Shim <jy0922.shim@samsung.com> Cc: Seung-Woo Kim <sw0312.kim@samsung.com> Cc: Andrzej Hajda <a.hajda@samsung.com> Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com> Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Cc: Tobias Jakobi <tjakobi@math.uni-bielefeld.de> Cc: Gustavo Padovan <gustavo@padovan.org> Cc: vincent.abriou@st.com Cc: fabien.dessenne@st.com Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2016-07-29drm: add generic zpos propertyMarek Szyprowski
version 8:
- move drm_blend.o from drm-y to drm_kms_helper-y to avoid EXPORT_SYMBOL(drm_atomic_helper_normalize_zpos)
- remove dead function declarations in drm_crtc.h

version 7:
- remove useless EXPORT_SYMBOL()
- better z-order wording in Documentation

version 6:
- add zpos in gpu documentation file
- merge Ville patch about zpos initial value and API improvement. I have split Ville patch between zpos core and drivers

version 5:
- remove zpos range check and come back to the 0 to N-1 normalization algorithm

version 4:
- make sure that the normalized zpos value stays in the defined property range and warn the user if not

This patch adds support for a generic plane zpos property with well-defined semantics:
- added zpos properties to plane and plane state structures
- added helpers for normalizing zpos properties of a given set of planes
- well-defined semantics: planes are sorted by zpos values, and then by plane id if the zpos values are equal

Normalized zpos values are calculated automatically when the generic mutable zpos property has been initialized. Drivers can simply use plane_state->normalized_zpos in their atomic_check and/or plane_update callbacks without any additional calls to the DRM core. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>

Compared to Marek's original patch, the zpos property is now specific to each plane and no longer to the core. The normalize function takes the per-plane defined range into account before setting normalized_zpos. Signed-off-by: Benjamin Gaignard <benjamin.gaignard@linaro.org> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Tested-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Cc: Inki Dae <inki.dae@samsung.com> Cc: Daniel Vetter <daniel@ffwll.ch> Cc: Ville Syrjala <ville.syrjala@linux.intel.com> Cc: Joonyoung Shim <jy0922.shim@samsung.com> Cc: Seung-Woo Kim <sw0312.kim@samsung.com> Cc: Andrzej Hajda <a.hajda@samsung.com> Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com> Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Cc: Tobias Jakobi <tjakobi@math.uni-bielefeld.de> Cc: Gustavo Padovan <gustavo@padovan.org> Cc: vincent.abriou@st.com Cc: fabien.dessenne@st.com Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
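For driver authors the interface boils down to a few calls (a sketch; the helper names are those added by this patch, the surrounding driver code is hypothetical):

    /* At plane init: immutable zpos for fixed planes, mutable otherwise. */
    if (plane->type == DRM_PLANE_TYPE_PRIMARY)
        drm_plane_create_zpos_immutable_property(plane, 0);
    else
        drm_plane_create_zpos_property(plane, 1, 0, num_planes - 1);

    /* In the driver's atomic_check: */
    ret = drm_atomic_helper_normalize_zpos(dev, state);
    if (ret)
        return ret;
    /* ... planes can now read plane_state->normalized_zpos ... */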
2016-07-29Merge remote-tracking branch 'media_tree/vsp1' into generic-zpos-v8Benjamin Gaignard
2016-07-28sparc64 mm: Fix base TSB sizing when hugetlb pages are usedMike Kravetz
do_sparc64_fault() calculates both the base and huge page RSS sizes and uses this information in calls to tsb_grow(). The calculation for base page TSB size is not correct if the task uses hugetlb pages. hugetlb pages are not accounted for in RSS, therefore the call to get_mm_rss(mm) does not include hugetlb pages. However, the number of pages based on huge_pte_count (which does include hugetlb pages) is subtracted from this value. This will result in an artificially small and often negative RSS calculation. The base TSB size is then often set to max_tsb_size as the passed RSS is unsigned, so a negative value looks really big. THP pages are also accounted for in huge_pte_count, and THP pages are accounted for in RSS so the calculation in do_sparc64_fault() is correct if a task only uses THP pages. A single huge_pte_count is not sufficient for TSB sizing if both hugetlb and THP pages can be used. Instead of a single counter, use two: one for hugetlb and one for THP. Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
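A sketch of the corrected sizing (counter and field names follow the description's intent and may differ in detail):

    unsigned long mm_rss = get_mm_rss(mm);  /* includes THP, not hugetlb */

    /* Only THP pages are part of RSS, so only subtract those... */
    mm_rss -= mm->context.thp_pte_count * (HPAGE_SIZE / PAGE_SIZE);
    if (mm_rss > mm->context.tsb_block[MM_TSB_BASE].tsb_rss_limit)
        tsb_grow(mm, MM_TSB_BASE, mm_rss);

    /* ...while the huge TSB is sized from both counters. */
    huge_rss = mm->context.hugetlb_pte_count + mm->context.thp_pte_count;
    if (huge_rss > mm->context.tsb_block[MM_TSB_HUGE].tsb_rss_limit)
        tsb_grow(mm, MM_TSB_HUGE, huge_rss);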
2016-07-29drm: aux ->transfer() can return 0, deal with itVille Syrjälä
Restore the correct behaviour (as in, check msg.reply) when aux ->transfer() returns 0. It got removed in commit 82922da39190 ("drm/dp_helper: Retry aux transactions on all errors"). Now I can actually dump the "entire" DPCD on a Dell UP2314Q with ddrescue. It has some offsets in the DPCD that can't be read for some reason; all you get is defers. Previously ddrescue would just give up at the first unreadable offset, on account of read() returning 0 meaning EOF. Here's the ddrescue log for the interested:

0x00000000  0x00001400  +
0x00001400  0x00000030  -
0x00001430  0x000001D0  +
0x00001600  0x00000030  -
0x00001630  0x0001F9D0  +
0x00021000  0x00000001  -
0x00021001  0x000DEFFF  +

Cc: Lyude <cpaul@redhat.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org Fixes: 82922da39190 ("drm/dp_helper: Retry aux transactions on all errors") Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Dave Airlie <airlied@redhat.com>
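The restored check, roughly (DP_AUX_NATIVE_REPLY_* are the existing reply codes; the retry loop around this is elided):

    ret = aux->transfer(aux, &msg);
    if (ret >= 0) {  /* 0 bytes transferred is not EOF: check the reply */
        if ((msg.reply & DP_AUX_NATIVE_REPLY_MASK) ==
            DP_AUX_NATIVE_REPLY_ACK) {
            if (ret == size)
                return ret;   /* success */
            ret = -EPROTO;    /* short transfer */
        } else {
            ret = -EIO;       /* NACK or DEFER: worth retrying */
        }
    }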
2016-07-28Merge tag 'trace-v4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-traceLinus Torvalds
Pull tracing updates from Steven Rostedt:
 "This is mostly clean ups and small fixes. Some of the more visible changes are:

  - The function pid code uses the event pid filtering logic
  - [ku]probe events have access to current->comm
  - trace_printk now has sample code
  - PCI devices now trace physical addresses
  - stack tracing has fewer unnecessary functions traced"

* tag 'trace-v4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
  printk, tracing: Avoiding unneeded blank lines
  tracing: Use __get_str() when manipulating strings
  tracing, RAS: Cleanup on __get_str() usage
  tracing: Use outer () on __get_str() definition
  ftrace: Reduce size of function graph entries
  tracing: Have HIST_TRIGGERS select TRACING
  tracing: Using for_each_set_bit() to simplify trace_pid_write()
  ftrace: Move toplevel init out of ftrace_init_tracefs()
  tracing/function_graph: Fix filters for function_graph threshold
  tracing: Skip more functions when doing stack tracing of events
  tracing: Expose CPU physical addresses (resource values) for PCI devices
  tracing: Show the preempt count of when the event was called
  tracing: Add trace_printk sample code
  tracing: Choose static tp_printk buffer by explicit nesting count
  tracing: expose current->comm to [ku]probe events
  ftrace: Have set_ftrace_pid use the bitmap like events do
  tracing: Move pid_list write processing into its own function
  tracing: Move the pid_list seq_file functions to be global
  tracing: Move filtered_pid helper functions into trace.c
  tracing: Make the pid filtering helper functions global
2016-07-28Merge tag 'vfio-v4.8-rc1' of git://github.com/awilliam/linux-vfioLinus Torvalds
Pull VFIO updates from Alex Williamson: - Enable no-iommu mode for platform devices (Peng Fan) - Sub-page mmap for exclusive pages (Yongji Xie) - Use-after-free fix (Ilya Lesokhin) - Support for ACPI-based platform devices (Sinan Kaya) * tag 'vfio-v4.8-rc1' of git://github.com/awilliam/linux-vfio: vfio: platform: check reset call return code during release vfio: platform: check reset call return code during open vfio, platform: make reset driver a requirement by default vfio: platform: call _RST method when using ACPI vfio: platform: add extra debug info argument to call reset vfio: platform: add support for ACPI probe vfio: platform: determine reset capability vfio: platform: move reset call to a common function vfio: platform: rename reset function vfio: fix possible use after free of vfio group vfio-pci: Allow to mmap sub-page MMIO BARs if the mmio page is exclusive vfio: platform: support No-IOMMU mode
2016-07-28Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/shli/mdLinus Torvalds
Pull MD updates from Shaohua Li:

 - A bunch of patches from Neil Brown to fix RCU usage
 - Two performance improvement patches from Tomasz Majchrzak
 - Alexey Obitotskiy fixes a module refcount issue
 - Arnd Bergmann fixes time granularity
 - Cong Wang fixes a list corruption issue
 - Guoqing Jiang fixes a deadlock in md-cluster
 - A null pointer dereference fix from me
 - Song Liu fixes misuse of raid6 rmw
 - Other trivial/cleanup fixes from Guoqing Jiang and Xiao Ni

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/shli/md: (28 commits)
  MD: fix null pointer deference
  raid10: improve random reads performance
  md: add missing sysfs_notify on array_state update
  Fix kernel module refcount handling
  md: use seconds granularity for error logging
  md: reduce the number of synchronize_rcu() calls when multiple devices fail.
  md: be extra careful not to take a reference to a Faulty device.
  md/multipath: add rcu protection to rdev access in multipath_status.
  md/raid5: add rcu protection to rdev accesses in raid5_status.
  md/raid5: add rcu protection to rdev accesses in want_replace
  md/raid5: add rcu protection to rdev accesses in handle_failed_sync.
  md/raid1: add rcu protection to rdev in fix_read_error
  md/raid1: small code cleanup in end_sync_write
  md/raid1: small cleanup in raid1_end_read/write_request
  md/raid10: simplify print_conf a little.
  md/raid10: minor code improvement in fix_read_error()
  md/raid10: add rcu protection to rdev access during reshape.
  md/raid10: add rcu protection to rdev access in raid10_sync_request.
  md/raid10: add rcu protection in raid10_status.
  md/raid10: fix refounct imbalance when resyncing an array with a replacement device.
  ...
2016-07-28Merge tag 'libnvdimm-for-4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimmLinus Torvalds
Pull libnvdimm updates from Dan Williams:

 - Replace pcommit with ADR / directed-flushing. The pcommit instruction, which has not shipped on any product, is deprecated. Instead, the requirement is that platforms implement either ADR, or provide one or more flush addresses per nvdimm. ADR (Asynchronous DRAM Refresh) flushes data in posted write buffers to the memory controller on a power-fail event. Flush addresses are defined in ACPI 6.x as an NVDIMM Firmware Interface Table (NFIT) sub-structure: "Flush Hint Address Structure". A flush hint is an mmio address that when written and fenced assures that all previous posted writes targeting a given dimm have been flushed to media.

 - On-demand ARS (address range scrub). Linux uses the results of the ACPI ARS commands to track bad blocks in pmem devices. When latent errors are detected we re-scrub the media to refresh the bad block list; userspace can also request a re-scrub at any time.

 - Support for the Microsoft DSM (device specific method) command format.

 - Support for EDK2/OVMF virtual disk device memory ranges.

 - Various fixes and cleanups across the subsystem.

* tag 'libnvdimm-for-4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm: (41 commits)
  libnvdimm-btt: Delete an unnecessary check before the function call "__nd_device_register"
  nfit: do an ARS scrub on hitting a latent media error
  nfit: move to nfit/ sub-directory
  nfit, libnvdimm: allow an ARS scrub to be triggered on demand
  libnvdimm: register nvdimm_bus devices with an nd_bus driver
  pmem: clarify a debug print in pmem_clear_poison
  x86/insn: remove pcommit
  Revert "KVM: x86: add pcommit support"
  nfit, tools/testing/nvdimm/: unify shutdown paths
  libnvdimm: move ->module to struct nvdimm_bus_descriptor
  nfit: cleanup acpi_nfit_init calling convention
  nfit: fix _FIT evaluation memory leak + use after free
  tools/testing/nvdimm: add manufacturing_{date|location} dimm properties
  tools/testing/nvdimm: add virtual ramdisk range
  acpi, nfit: treat virtual ramdisk SPA as pmem region
  pmem: kill __pmem address space
  pmem: kill wmb_pmem()
  libnvdimm, pmem: use nvdimm_flush() for namespace I/O writes
  fs/dax: remove wmb_pmem()
  libnvdimm, pmem: flush posted-write queues on shutdown
  ...
2016-07-28Merge tag 'pinctrl-v4.8-1' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrlLinus Torvalds
Pull pin control updates from Linus Walleij:
 "This is the bulk of pin control changes for the v4.8 kernel cycle. Nothing stands out as especially exciting: new drivers, new subdrivers, lots of cleanups and incremental features. Business as usual.

  New drivers:
  - New driver for Oxnas pin control and GPIO. This ARM-based chipset is used in a few storage (NAS) type devices.
  - New driver for the MAX77620/MAX20024 pin controller portions.
  - New driver for the Intel Merrifield pin controller.

  New subdrivers:
  - New subdriver for the Qualcomm MDM9615
  - New subdriver for the STM32F746 MCU
  - New subdriver for the Broadcom NSP SoC.

  Cleanups:
  - Demodularization of bool compiled-in drivers.

  Apart from this there are just regular incremental improvements to a lot of drivers, especially Uniphier and PFC"

* tag 'pinctrl-v4.8-1' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl: (131 commits)
  pinctrl: fix pincontrol definition for marvell
  pinctrl: xway: fix typo
  Revert "pinctrl: amd: make it explicitly non-modular"
  pinctrl: iproc: Add NSP and Stingray GPIO support
  pinctrl: Update iProc GPIO DT bindings
  pinctrl: bcm: add OF dependencies
  pinctrl: ns2: remove redundant dev_err call in ns2_pinmux_probe()
  pinctrl: Add STM32F746 MCU support
  pinctrl: intel: Protect set wake flow by spin lock
  pinctrl: nsp: remove redundant dev_err call in nsp_pinmux_probe()
  pinctrl: uniphier: add Ethernet pin-mux settings
  sh-pfc: Use PTR_ERR_OR_ZERO() to simplify the code
  pinctrl: ns2: fix return value check in ns2_pinmux_probe()
  pinctrl: qcom: update DT bindings with ebi2 groups
  pinctrl: qcom: establish proper EBI2 pin groups
  pinctrl: imx21: Remove the MODULE_DEVICE_TABLE() macro
  Documentation: dt: Add new compatible to STM32 pinctrl driver bindings
  includes: dt-bindings: Add STM32F746 pinctrl DT bindings
  pinctrl: sunxi: fix nand0 function name for sun8i
  pinctrl: uniphier: remove pointless pin-mux settings for PH1-LD11
  ...
2016-07-28Merge branch 'akpm' (patches from Andrew)Linus Torvalds
Merge more updates from Andrew Morton:
 "The rest of MM"

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (101 commits)
  mm, compaction: simplify contended compaction handling
  mm, compaction: introduce direct compaction priority
  mm, thp: remove __GFP_NORETRY from khugepaged and madvised allocations
  mm, page_alloc: make THP-specific decisions more generic
  mm, page_alloc: restructure direct compaction handling in slowpath
  mm, page_alloc: don't retry initial attempt in slowpath
  mm, page_alloc: set alloc_flags only once in slowpath
  lib/stackdepot.c: use __GFP_NOWARN for stack allocations
  mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB
  mm, kasan: account for object redzone in SLUB's nearest_obj()
  mm: fix use-after-free if memory allocation failed in vma_adjust()
  zsmalloc: Delete an unnecessary check before the function call "iput"
  mm/memblock.c: fix index adjustment error in __next_mem_range_rev()
  mem-hotplug: alloc new page from a nearest neighbor node when mem-offline
  mm: optimize copy_page_to/from_iter_iovec
  mm: add cond_resched() to generic_swapfile_activate()
  Revert "mm, mempool: only set __GFP_NOMEMALLOC if there are free elements"
  mm, compaction: don't isolate PageWriteback pages in MIGRATE_SYNC_LIGHT mode
  mm: hwpoison: remove incorrect comments
  make __section_nr() more efficient
  ...
2016-07-28[media] cec: fix off-by-one memsetHans Verkuil
The unused bytes of the features array should be zeroed, but the start index was one byte too early. This caused the device features byte to be overwritten by 0. The compliance test for the CEC_S_LOG_ADDRS ioctl didn't catch this because it tested byte continuation with the second device features byte being 0 :-( Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
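In essence (a hypothetical reconstruction; fill_features stands in for the code that fills the array):

    __u8 features[12];
    unsigned int used = fill_features(features);  /* RC profile + device features */

    /* Buggy: started one byte early, zeroing the device features byte. */
    memset(features + used - 1, 0, sizeof(features) - used + 1);

    /* Fixed: zero only the genuinely unused trailing bytes. */
    memset(features + used, 0, sizeof(features) - used);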
2016-07-28[media] staging: add MEDIA_SUPPORT dependencyArnd Bergmann
staging media drivers tend to have a build time dependency on the media support. In particular, the newly added pulse8 cec driver can only be a loadable module if MEDIA_SUPPORT=m, but its build dependency is on a 'bool' symbol (MEDIA_CEC), so a randconfig build can fail with pulse8_cec built-in:

drivers/staging/built-in.o: In function `pulse8_disconnect':
dgnc_utils.c:(.text+0x114): undefined reference to `cec_unregister_adapter'
drivers/staging/built-in.o: In function `pulse8_irq_work_handler':
dgnc_utils.c:(.text+0x1bc): undefined reference to `cec_transmit_done'
dgnc_utils.c:(.text+0x1d8): undefined reference to `cec_received_msg'
dgnc_utils.c:(.text+0x1f4): undefined reference to `cec_transmit_done'
dgnc_utils.c:(.text+0x218): undefined reference to `cec_transmit_done'
dgnc_utils.c:(.text+0x23c): undefined reference to `cec_transmit_done'
drivers/staging/built-in.o: In function `pulse8_connect':
dgnc_utils.c:(.text+0x844): undefined reference to `cec_allocate_adapter'
dgnc_utils.c:(.text+0x8a4): undefined reference to `cec_delete_adapter'
dgnc_utils.c:(.text+0xa10): undefined reference to `cec_register_adapter'

Originally, MEDIA_CEC itself was a tristate symbol, which would have prevented this, but since 5bb2399a4fe4 ("[media] cec: fix Kconfig dependency problems"), it doesn't work like that any more. This encloses all of the staging media drivers in a CONFIG_MEDIA_SUPPORT dependency in Kconfig, which solves the problem by enforcing that none of the drivers can be built-in if the media core is a module. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
2016-07-28[media] vivid: don't handle CEC_MSG_SET_STREAM_PATHHans Verkuil
vivid shouldn't process the CEC_MSG_SET_STREAM_PATH message: this will confuse userspace follower code because it isn't aware of the state change of becoming an active source. Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
2016-07-28[media] media: adv7180: Fix broken interrupt register accessSteve Longerbeam
Access to the interrupt page registers has been broken since at least commit 3999e5d01da7 ("[media] adv7180: Do implicit register paging"). That commit forgot to add the interrupt page number to the register defines. Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com> Tested-by: Tim Harvey <tharvey@gateworks.com> Acked-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
2016-07-28[media] vb2: Fix allocation size of dma_parmsVincent Stehlé
When allocating memory to hold the device dma parameters in vb2_dma_contig_set_max_seg_size(), the requested size is by mistake only the size of a pointer. Request the correct size instead. Fixes: 3f0339691896 ("media: vb2-dma-contig: add helper for setting dma max seg size") Signed-off-by: Vincent Stehlé <vincent.stehle@laposte.net> Cc: Sylwester Nawrocki <s.nawrocki@samsung.com> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
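The one-character nature of the bug, in sketch form:

    /* Buggy: allocates only sizeof(struct device_dma_parameters *). */
    dev->dma_parms = kzalloc(sizeof(dev->dma_parms), GFP_KERNEL);

    /* Fixed: allocates the structure the pointer refers to. */
    dev->dma_parms = kzalloc(sizeof(*dev->dma_parms), GFP_KERNEL);
    if (!dev->dma_parms)
        return -ENOMEM;
    dma_set_max_seg_size(dev, size);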
2016-07-28[media] vim2m: copy the other colorspace-related fields as wellHans Verkuil
The xfer_func, ycbcr_enc and quantization fields should also be copied from output to capture format. Since this driver serves as example code it is important that this is handled correctly. Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
2016-07-28[media] adv7511: fix VIC autodetectHans Verkuil
The adv7511 will automatically fill in the VIC code in the AVI InfoFrame based on the timings of the incoming pixelport signals. However, to have this work correctly it needs to specify the fps value in a register. After doing this the proper VIC code is filled in. Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
2016-07-28doc-rst: Remove the media docbookMauro Carvalho Chehab
Now that all media documentation was converted to Sphinx, we should get rid of the old DocBook one, as we don't want people to submit patches against the old stuff. Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com> Acked-by: Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
2016-07-28mm, compaction: simplify contended compaction handlingVlastimil Babka
Async compaction detects contention either due to failing trylock on zone->lock or lru_lock, or by need_resched(). Since 1f9efdef4f3f ("mm, compaction: khugepaged should not give up due to need_resched()") the code got quite complicated to distinguish these two up to the __alloc_pages_slowpath() level, so different decisions could be taken for khugepaged allocations. After the recent changes, khugepaged allocations don't check for contended compaction anymore, so we again don't need to distinguish lock and sched contention, and simplify the current convoluted code a lot. However, I believe it's also possible to simplify even more and completely remove the check for contended compaction after the initial async compaction for costly orders, which was originally aimed at THP page fault allocations. There are several reasons why this can be done now:

- with the new defaults, THP page faults no longer do reclaim/compaction at all, unless the system admin has overridden the default, or application has indicated via madvise that it can benefit from THP's. In both cases, it means that the potential extra latency is expected and worth the benefits.
- even if reclaim/compaction proceeds after this patch where it previously wouldn't, the second compaction attempt is still async and will detect the contention and back off, if the contention persists
- there are still heuristics like deferred compaction and pageblock skip bits in place that prevent excessive THP page fault latencies

Link: http://lkml.kernel.org/r/20160721073614.24395-9-vbabka@suse.cz Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Mel Gorman <mgorman@techsingularity.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-28mm, compaction: introduce direct compaction priorityVlastimil Babka
In the context of direct compaction, for some types of allocations we would like the compaction to either succeed or definitely fail while trying as hard as possible. Current async/sync_light migration mode is insufficient, as there are heuristics such as caching scanner positions, marking pageblocks as unsuitable or deferring compaction for a zone. At least the final compaction attempt should be able to override these heuristics. To communicate how hard compaction should try, we replace migration mode with a new enum compact_priority and change the relevant function signatures. In compact_zone_order() where struct compact_control is constructed, the priority is mapped to suitable control flags. This patch itself has no functional change, as the current priority levels are mapped back to the same migration modes as before. Expanding them will be done next. Note that !CONFIG_COMPACTION variant of try_to_compact_pages() is removed, as the only caller exists under CONFIG_COMPACTION. Link: http://lkml.kernel.org/r/20160721073614.24395-8-vbabka@suse.cz Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Mel Gorman <mgorman@techsingularity.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
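The new enum and its mapping back to migration modes, roughly (member names as introduced by the patch; details may differ):

    enum compact_priority {
        COMPACT_PRIO_SYNC_LIGHT,
        DEF_COMPACT_PRIORITY = COMPACT_PRIO_SYNC_LIGHT,
        COMPACT_PRIO_ASYNC,
        INIT_COMPACT_PRIORITY = COMPACT_PRIO_ASYNC,
    };

    /* In compact_zone_order(), priority is mapped to a migration mode: */
    cc.mode = (prio == COMPACT_PRIO_ASYNC) ? MIGRATE_ASYNC
                                           : MIGRATE_SYNC_LIGHT;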
2016-07-28mm, thp: remove __GFP_NORETRY from khugepaged and madvised allocationsVlastimil Babka
After the previous patch, we can distinguish costly allocations that should be really lightweight, such as THP page faults, with __GFP_NORETRY. This means we don't need to recognize khugepaged allocations via PF_KTHREAD anymore. We can also change THP page faults in areas where madvise(MADV_HUGEPAGE) was used to try as hard as khugepaged, as the process has indicated that it benefits from THP's and is willing to pay some initial latency costs. We can also make the flags handling less cryptic by distinguishing GFP_TRANSHUGE_LIGHT (no reclaim at all, default mode in page fault) from GFP_TRANSHUGE (only direct reclaim, khugepaged default). Adding __GFP_NORETRY or __GFP_KSWAPD_RECLAIM is done where needed. The patch effectively changes the current GFP_TRANSHUGE users as follows:

* get_huge_zero_page() - the zero page lifetime should be relatively long and it's shared by multiple users, so it's worth spending some effort on it. We use GFP_TRANSHUGE, and __GFP_NORETRY is not added. This also restores direct reclaim to this allocation, which was unintentionally removed by commit e4a49efe4e7e ("mm: thp: set THP defrag by default to madvise and add a stall-free defrag option")

* alloc_hugepage_khugepaged_gfpmask() - this is khugepaged, so latency is not an issue. So if khugepaged "defrag" is enabled (the default), do reclaim via GFP_TRANSHUGE without __GFP_NORETRY. We can remove the PF_KTHREAD check from page alloc. As a side-effect, khugepaged will now no longer check if the initial compaction was deferred or contended. This is OK, as khugepaged sleep times between collapse attempts are long enough to prevent noticeable disruption, so we should allow it to spend some effort.

* migrate_misplaced_transhuge_page() - already was masking out __GFP_RECLAIM, so just convert to GFP_TRANSHUGE_LIGHT which is equivalent.

* alloc_hugepage_direct_gfpmask() - vma's with VM_HUGEPAGE (via madvise) are now allocating without __GFP_NORETRY. Other vma's keep using __GFP_NORETRY if direct reclaim/compaction is at all allowed (by default it's allowed only for madvised vma's). The rest is conversion to GFP_TRANSHUGE(_LIGHT).

[mhocko@suse.com: suggested GFP_TRANSHUGE_LIGHT] Link: http://lkml.kernel.org/r/20160721073614.24395-7-vbabka@suse.cz Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Mel Gorman <mgorman@techsingularity.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-28mm, page_alloc: make THP-specific decisions more genericVlastimil Babka
Since THP allocations during page faults can be costly, extra decisions are employed for them to avoid excessive reclaim and compaction, if the initial compaction doesn't look promising. The detection has never been perfect as there is no gfp flag specific to THP allocations. At this moment it checks the whole combination of flags that makes up GFP_TRANSHUGE, and hopes that no other users of such combination exist, or would mind being treated the same way. Extra care is also taken to separate allocations from khugepaged, where latency doesn't matter that much. It is however possible to distinguish these allocations in a simpler and more reliable way. The key observation is that after the initial compaction followed by the first iteration of "standard" reclaim/compaction, both __GFP_NORETRY allocations and costly allocations without __GFP_REPEAT are declared as failures:

    /* Do not loop if specifically requested */
    if (gfp_mask & __GFP_NORETRY)
        goto nopage;

    /*
     * Do not retry costly high order allocations unless they are
     * __GFP_REPEAT
     */
    if (order > PAGE_ALLOC_COSTLY_ORDER && !(gfp_mask & __GFP_REPEAT))
        goto nopage;

This means we can further distinguish allocations that are costly order *and* additionally include the __GFP_NORETRY flag. As it happens, GFP_TRANSHUGE allocations do already fall into this category. This will also allow other costly allocations with similar high-order benefit vs latency considerations to use this semantic. Furthermore, we can distinguish THP allocations that should try a bit harder (such as from khugepaged) by removing __GFP_NORETRY, as will be done in the next patch. Link: http://lkml.kernel.org/r/20160721073614.24395-6-vbabka@suse.cz Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Mel Gorman <mgorman@techsingularity.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-28mm, page_alloc: restructure direct compaction handling in slowpathVlastimil Babka
The retry loop in __alloc_pages_slowpath is supposed to keep trying reclaim and compaction (and OOM), until either the allocation succeeds, or returns with failure. Success here is more probable when reclaim precedes compaction, as certain watermarks have to be met for compaction to even try, and more free pages increase the probability of compaction success. On the other hand, starting with light async compaction (if the watermarks allow it), can be more efficient, especially for smaller orders, if there's enough free memory which is just fragmented. Thus, the current code starts with compaction before reclaim, and to make sure that the last reclaim is always followed by a final compaction, there's another direct compaction call at the end of the loop. This makes the code hard to follow and adds some duplicated handling of migration_mode decisions. It's also somewhat inefficient that even if reclaim or compaction decides not to retry, the final compaction is still attempted. Some gfp flag combinations also shortcut these retry decisions by "goto noretry;", making it even harder to follow. This patch attempts to restructure the code with only minimal functional changes. The call to the first compaction and THP-specific checks are now placed above the retry loop, and the "noretry" direct compaction is removed. The initial compaction is additionally restricted only to costly orders, as we can expect smaller orders to be held back by watermarks, and only larger orders to suffer primarily from fragmentation. This better matches the checks in reclaim's shrink_zones(). There are two other smaller functional changes. One is that the upgrade from async migration to light sync migration will always occur after the initial compaction. This is how it has been until recent patch "mm, oom: protect !costly allocations some more", which introduced upgrading the mode based on COMPACT_COMPLETE result, but kept the final compaction always upgraded, which made it even more special. It's better to return to the simpler handling for now, as migration modes will be further modified later in the series. The second change is that once both reclaim and compaction declare it's not worth to retry the reclaim/compact loop, there is no final compaction attempt. As argued above, this is intentional. If that final compaction were to succeed, it would be due to a wrong retry decision, or simply a race with somebody else freeing memory for us. The main outcome of this patch should be simpler code. Logically, the initial compaction without reclaim is the exceptional case to the reclaim/compaction scheme, but prior to the patch, it was the last loop iteration that was exceptional. Now the code matches the logic better. The change also enables the following patches. Link: http://lkml.kernel.org/r/20160721073614.24395-5-vbabka@suse.cz Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Mel Gorman <mgorman@techsingularity.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-28mm, page_alloc: don't retry initial attempt in slowpathVlastimil Babka
After __alloc_pages_slowpath() sets up new alloc_flags and wakes up kswapd, it first tries get_page_from_freelist() with the new alloc_flags, as it may succeed e.g. due to using min watermark instead of low watermark. It makes sense to do this attempt before adjusting zonelist based on alloc_flags/gfp_mask, as it's still a relatively fast path if we just wake up kswapd and successfully allocate. This patch therefore moves the initial attempt above the retry label and reorganizes a bit the part below the retry label. We still have to attempt get_page_from_freelist() on each retry, as some allocations cannot do that as part of direct reclaim or compaction, and yet are not allowed to fail (even though they do a WARN_ON_ONCE() and thus should not exist). We can reuse the call meant for ALLOC_NO_WATERMARKS attempt and just set alloc_flags to ALLOC_NO_WATERMARKS if the context allows it. As a side-effect, the attempts from direct reclaim/compaction will also no longer obey watermarks once this is set, but there's little harm in that. Kswapd wakeups are also done on each retry to be safe from potential races resulting in kswapd going to sleep while a process (that may not be able to reclaim by itself) is still looping. Link: http://lkml.kernel.org/r/20160721073614.24395-4-vbabka@suse.cz Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Mel Gorman <mgorman@techsingularity.net> Acked-by: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-28mm, page_alloc: set alloc_flags only once in slowpathVlastimil Babka
In __alloc_pages_slowpath(), alloc_flags doesn't change after it's initialized, so move the initialization above the retry: label. Also make the comment above the initialization more descriptive. The only exception in the alloc_flags being constant is ALLOC_NO_WATERMARKS, which may change due to TIF_MEMDIE being set on the allocating thread. We can fix this, and make the code simpler and a bit more effective at the same time, by moving the part that determines ALLOC_NO_WATERMARKS from gfp_to_alloc_flags() to gfp_pfmemalloc_allowed(). This means we don't have to mask out ALLOC_NO_WATERMARKS in numerous places in __alloc_pages_slowpath() anymore. The only two tests for the flag can instead call gfp_pfmemalloc_allowed(). Link: http://lkml.kernel.org/r/20160721073614.24395-3-vbabka@suse.cz Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Mel Gorman <mgorman@techsingularity.net> Acked-by: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-28lib/stackdepot.c: use __GFP_NOWARN for stack allocationsKirill A. Shutemov
This (large, atomic) allocation attempt can fail. We expect and handle that, so avoid the scary warning. Link: http://lkml.kernel.org/r/20160720151905.GB19146@node.shutemov.name Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Alexander Potapenko <glider@google.com> Cc: Michal Hocko <mhocko@suse.cz> Cc: Rik van Riel <riel@redhat.com> Cc: David Rientjes <rientjes@google.com> Cc: Mel Gorman <mgorman@techsingularity.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-28mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUBAlexander Potapenko
For KASAN builds: - switch SLUB allocator to using stackdepot instead of storing the allocation/deallocation stacks in the objects; - change the freelist hook so that parts of the freelist can be put into the quarantine. [aryabinin@virtuozzo.com: fixes] Link: http://lkml.kernel.org/r/1468601423-28676-1-git-send-email-aryabinin@virtuozzo.com Link: http://lkml.kernel.org/r/1468347165-41906-3-git-send-email-glider@google.com Signed-off-by: Alexander Potapenko <glider@google.com> Cc: Andrey Konovalov <adech.fo@gmail.com> Cc: Christoph Lameter <cl@linux.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Steven Rostedt (Red Hat) <rostedt@goodmis.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Kostya Serebryany <kcc@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Kuthonuzo Luruo <kuthonuzo.luruo@hpe.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-28mm, kasan: account for object redzone in SLUB's nearest_obj()Alexander Potapenko
When looking up the nearest SLUB object for a given address, correctly calculate its offset if SLAB_RED_ZONE is enabled for that cache. Previously, when KASAN had detected an error on an object from a cache with SLAB_RED_ZONE set, the actual start address of the object was miscalculated, which led to random stacks having been reported. Fixes: 7ed2f9e663854db ("mm, kasan: SLAB support") Link: http://lkml.kernel.org/r/1468347165-41906-2-git-send-email-glider@google.com Signed-off-by: Alexander Potapenko <glider@google.com> Cc: Andrey Konovalov <adech.fo@gmail.com> Cc: Christoph Lameter <cl@linux.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Steven Rostedt (Red Hat) <rostedt@goodmis.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Kostya Serebryany <kcc@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Kuthonuzo Luruo <kuthonuzo.luruo@hpe.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
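The corrected helper, approximately (red_left_pad is 0 when SLAB_RED_ZONE is off, so the adjustment is safe unconditionally):

    static inline void *nearest_obj(struct kmem_cache *cache,
                                    struct page *page, void *x)
    {
        void *object = x - (x - page_address(page)) % cache->size;
        void *last_object = page_address(page) +
                            (page->objects - 1) * cache->size;
        void *result = (object > last_object) ? last_object : object;

        /* Skip the left red zone so the real object start is returned. */
        result += cache->red_left_pad;
        return result;
    }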
2016-07-28mm: fix use-after-free if memory allocation failed in vma_adjust()Kirill A. Shutemov
There's one case when vma_adjust() expands the vma, overlapping with the *two* next vmas. See case 6 of mprotect, described in the comment to vma_merge(). To handle this (and only this) situation we iterate twice over the main part of the function. See "goto again". Vegard reported[1] that he sees an out-of-bounds access complaint from KASAN, if anon_vma_clone() on the *second* iteration fails. This happens because we free the 'next' vma by the end of the first iteration and don't have a way to undo this if anon_vma_clone() fails on the second iteration. The solution is to do all required allocations upfront, before we touch vmas. The allocation on the second iteration is only required if the first two vmas don't have anon_vma, but the third does. So we need, in total, one anon_vma_clone() call. It's easy to adjust 'exporter' to the third vma for such a case. [1] http://lkml.kernel.org/r/1469514843-23778-1-git-send-email-vegard.nossum@oracle.com Link: http://lkml.kernel.org/r/1469625255-126641-1-git-send-email-kirill.shutemov@linux.intel.com Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Reported-by: Vegard Nossum <vegard.nossum@oracle.com> Cc: Rik van Riel <riel@redhat.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>