2020-07-27btrfs: check-integrity: remove unnecessary failure messages during memory allocationLiao Pingfang
As there is a dump_stack() done on memory allocation failures, these messages might as well be deleted instead. Signed-off-by: Liao Pingfang <liao.pingfang@zte.com.cn> Reviewed-by: David Sterba <dsterba@suse.com> [ minor tweaks ] Signed-off-by: David Sterba <dsterba@suse.com>
2020-07-27btrfs: use helper btrfs_get_block_groupAnand Jain
Use the helper function where the block_group reference count increment is currently open coded. As btrfs_get_block_group() is a one-liner we could have open coded it, but its partner function btrfs_put_block_group(), which does the freeing, isn't a one-liner. Reviewed-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Anand Jain <anand.jain@oracle.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-07-27btrfs: let btrfs_return_cluster_to_free_space() return voidAnand Jain
__btrfs_return_cluster_to_free_space() only ever returns 0, and none of its callers need the return value, so make it a void function. Further, as none of the callers of btrfs_return_cluster_to_free_space() actually uses its return value, make that function return void as well. Reviewed-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Anand Jain <anand.jain@oracle.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-07-27btrfs: remove no longer necessary chunk mutex locking casesFilipe Manana
Initially when the 'removed' flag was added to a block group to avoid races between block group removal and fitrim, by commit 04216820fe83d5 ("Btrfs: fix race between fs trimming and block group remove/allocation"), we had to lock the chunks mutex because we could be moving the block group from its current list, the pending chunks list, into the pinned chunks list, or we could just be adding it to the pinned chunks if it was not in the pending chunks list. Both lists were protected by the chunk mutex. However we no longer have those lists since commit 1c11b63eff2a67 ("btrfs: replace pending/pinned chunks lists with io tree"), and locking the chunk mutex is no longer necessary because of that. The same happens at btrfs_unfreeze_block_group(), we lock the chunk mutex because the block group's extent map could be part of the pinned chunks list and the call to remove_extent_mapping() could be deleting it from that list, which used to be protected by that mutex. So just remove those lock and unlock calls as they are not needed anymore. Reviewed-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-07-27btrfs: factor out reading of bg from find_first_block_groupJohannes Thumshirn
When find_first_block_group() finds a block group item in the extent-tree, it does a lookup of the object in the extent mapping tree and does further checks on the item. Factor out this step from find_first_block_group() so we can further simplify the code. While we're at it, we can also just return early in find_first_block_group(), if the tree slot isn't found. Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-07-27btrfs: get mapping tree directly from fsinfo in find_first_block_groupJohannes Thumshirn
We already have an fs_info in our function parameters, there's no need to do the maths again and get fs_info from the extent_root just to get the mapping_tree. Instead directly grab the mapping_tree from fs_info. Reviewed-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-07-27btrfs: simplify checks when adding excluded rangesNikolay Borisov
Addresses held in the 'logical' array are always guaranteed to fall within the boundaries of the block group. That is, 'start' can never be smaller than cache->start. This invariant follows from the way the addresses are calculated in btrfs_rmap_block: stripe_nr = physical - map->stripes[i].physical; stripe_nr = div64_u64(stripe_nr, map->stripe_len); bytenr = chunk_start + stripe_nr * io_stripe_size; I.e. it's always some IO stripe within the given chunk. Exploit this invariant to simplify the body of the loop by removing the unnecessary 'if' since its 'else' part is the one always executed. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-07-27btrfs: read stripe len directly in btrfs_rmap_blockNikolay Borisov
extent_map::orig_block_len contains the size of a physical stripe when it's used to describe block groups (calculated in read_one_chunk via calc_stripe_length or calculated in decide_stripe_size and then assigned to extent_map::orig_block_len in create_chunk). Exploit this fact to get the size directly rather than opencoding the calculations. No functional changes. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-07-27btrfs: don't balance btree inode pages from buffered write pathNikolay Borisov
The call to btrfs_btree_balance_dirty has been there since the early days of BTRFS, when the btree was directly modified from the write path, hence dirtied btree inode pages. With the implementation of b888db2bd7b6 ("Btrfs: Add delayed allocation to the extent based page tree code") 13 years ago the btree is no longer modified from the write path, hence there is no point in calling this function. Just remove it. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-07-27Merge tag 'linux-cpupower-5.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linuxRafael J. Wysocki
Pull cpupower utility updates for v5.9 from Shuah Khan: "This cpupower update for Linux 5.9-rc1 consists of 2 fixes to coccicheck warnings and one change replacing HTTP links with HTTPS ones." * tag 'linux-cpupower-5.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux: cpupower: Replace HTTP links with HTTPS ones cpupower: Fix NULL but dereferenced coccicheck errors cpupower: Fix comparing pointer to 0 coccicheck warns
2020-07-27x86/cpu: Refactor sync_core() for readabilityRicardo Neri
Instead of having #ifdef/#endif blocks inside sync_core() for X86_64 and X86_32, implement the new function iret_to_self() with two versions. In this manner, avoid having to use even more #ifdef/#endif blocks when adding support for SERIALIZE in sync_core(). Co-developed-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200727043132.15082-4-ricardo.neri-calderon@linux.intel.com
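A rough sketch of the resulting shape, assuming the two variants are selected with a single #ifdef and sync_core() simply calls the new helper (the asm is an approximation of the IRET-to-self sequences, not a verbatim copy of the patch):

    #ifdef CONFIG_X86_32
    static inline void iret_to_self(void)
    {
            /* 32-bit: push EFLAGS, CS and a return EIP, then IRET to self. */
            asm volatile (
                    "pushfl\n\t"
                    "pushl %%cs\n\t"
                    "pushl $1f\n\t"
                    "iret\n\t"
                    "1:"
                    : ASM_CALL_CONSTRAINT : : "memory");
    }
    #else /* CONFIG_X86_64 */
    static inline void iret_to_self(void)
    {
            unsigned int tmp;

            /* 64-bit: IRETQ additionally pops SS and RSP, so both must be
             * pushed explicitly before RFLAGS, CS and the return RIP. */
            asm volatile (
                    "mov %%ss, %0\n\t"
                    "pushq %q0\n\t"
                    "pushq %%rsp\n\t"
                    "addq $8, (%%rsp)\n\t"
                    "pushfq\n\t"
                    "mov %%cs, %0\n\t"
                    "pushq %q0\n\t"
                    "pushq $1f\n\t"
                    "iretq\n\t"
                    "1:"
                    : "=&r" (tmp), ASM_CALL_CONSTRAINT : : "memory");
    }
    #endif

    static inline void sync_core(void)
    {
            /* One #ifdef-free call site; SERIALIZE support can be added here. */
            iret_to_self();
    }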
2020-07-27x86/cpu: Relocate sync_core() to sync_core.hRicardo Neri
Having sync_core() in processor.h is problematic since it is not possible to check for hardware capabilities via the *cpu_has() family of macros. The latter needs the definitions in processor.h. It also looks more intuitive to relocate the function to sync_core.h. This changeset does not make changes in functionality. Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Tony Luck <tony.luck@intel.com> Link: https://lore.kernel.org/r/20200727043132.15082-3-ricardo.neri-calderon@linux.intel.com
2020-07-27x86/cpufeatures: Add enumeration for SERIALIZE instructionRicardo Neri
The Intel architecture defines a set of Serializing Instructions (a detailed definition can be found in Vol.3 Section 8.3 of the Intel "main" manual, SDM). However, these instructions do more than what is required, have side effects and/or may be rather invasive. Furthermore, some of these instructions are only available in kernel mode or may cause VMExits. Thus, software using these instructions only to serialize execution (as defined in the manual) must handle the undesired side effects. As indicated in the name, SERIALIZE is a new Intel architecture Serializing Instruction. Crucially, it does not have any of the mentioned side effects. Also, it does not cause VMExit and can be used in user mode. This new instruction is currently documented in the latest "extensions" manual (ISE). It will appear in the "main" manual in the future. Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Tony Luck <tony.luck@intel.com> Acked-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/r/20200727043132.15082-2-ricardo.neri-calderon@linux.intel.com
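For context, SERIALIZE is enumerated in CPUID.(EAX=07H, ECX=0):EDX[bit 14], so the enumeration itself boils down to one new feature bit; a minimal sketch of how such a bit is defined and tested (the helper below is illustrative, not part of the patch):

    /* cpufeatures.h: CPUID leaf 7 EDX is feature word 18 in the kernel's table. */
    #define X86_FEATURE_SERIALIZE	(18*32+14) /* SERIALIZE instruction */

    /* Illustrative use only -- emit SERIALIZE (opcode 0f 01 e8) when available. */
    static inline void serialize_if_supported(void)
    {
            if (static_cpu_has(X86_FEATURE_SERIALIZE))
                    asm volatile(".byte 0xf, 0x1, 0xe8" ::: "memory");
    }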
2020-07-27Merge tag 'v5.8-rc7' into x86/cpu, to pick up fixesIngo Molnar
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-07-27Merge back cpufreq material for v5.9.Rafael J. Wysocki
2020-07-27x86/mm/64: Make sync_global_pgds() staticJoerg Roedel
The function is only called from within init_64.c and can be static. Also remove it from pgtable_64.h. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Mike Rapoport <rppt@linux.ibm.com> Link: https://lore.kernel.org/r/20200721095953.6218-4-joro@8bytes.org
2020-07-27x86/mm/64: Do not sync vmalloc/ioremap mappingsJoerg Roedel
Remove the code to sync the vmalloc and ioremap ranges for x86-64. The page-table pages are all pre-allocated now so that synchronization is no longer necessary. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Mike Rapoport <rppt@linux.ibm.com> Link: https://lore.kernel.org/r/20200721095953.6218-3-joro@8bytes.org
2020-07-27x86/mm: Pre-allocate P4D/PUD pages for vmalloc areaJoerg Roedel
Pre-allocate the page-table pages for the vmalloc area at the level which needs synchronization on x86-64, which is P4D for 5-level and PUD for 4-level paging. Doing this at boot makes sure no synchronization of that area is necessary at runtime. The synchronization takes the pgd_lock and iterates over all page-tables in the system, so it can take quite long and is better avoided. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Mike Rapoport <rppt@linux.ibm.com> Link: https://lore.kernel.org/r/20200721095953.6218-2-joro@8bytes.org
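A condensed sketch of the boot-time pre-allocation loop this describes, close to but not necessarily identical with the patch (error handling reduced to a panic):

    static void __init preallocate_vmalloc_pages(void)
    {
            unsigned long addr;

            for (addr = VMALLOC_START; addr <= VMALLOC_END;
                 addr = ALIGN(addr + 1, P4D_SIZE)) {
                    pgd_t *pgd = pgd_offset_k(addr);
                    p4d_t *p4d;
                    pud_t *pud;

                    /* 5-level paging: the P4D level is the one that would need
                     * syncing, so allocating it up front is sufficient. */
                    p4d = p4d_alloc(&init_mm, pgd, addr);
                    if (!p4d)
                            goto failed;
                    if (pgtable_l5_enabled())
                            continue;

                    /* 4-level paging: pre-allocate the PUD level instead. */
                    pud = pud_alloc(&init_mm, p4d, addr);
                    if (!pud)
                            goto failed;
            }
            return;

    failed:
            panic("Failed to pre-allocate page tables for the vmalloc area\n");
    }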
2020-07-27ACPI: OSL: Clean up the removal of unused memory mappingsRafael J. Wysocki
Fold acpi_os_map_cleanup_deferred() into acpi_os_map_remove() and pass the latter to INIT_RCU_WORK() in acpi_os_drop_map_ref() to make the code more straightforward. No intentional functional impact. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2020-07-27ACPI: OSL: Use deferred unmapping in acpi_os_unmap_iomem()Rafael J. Wysocki
There is no reason (known to me) why any of the existing users of acpi_os_unmap_iomem() would need to wait for the unused memory mappings left by it to actually go away, so use the deferred unmapping of ACPI memory introduced previously in that function. While at it, fold __acpi_os_unmap_iomem() back into acpi_os_unmap_iomem(), which has become a simple wrapper around it, and make acpi_os_unmap_memory() call the latter. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2020-07-27ACPI: OSL: Use deferred unmapping in acpi_os_unmap_generic_address()Rafael J. Wysocki
There is no reason (known to me) why any of the existing users of acpi_os_unmap_generic_address() would need to wait for the unused memory mappings left by it to actually go away, so use the deferred unmapping of ACPI memory introduced previously in that function. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2020-07-27ACPICA: Preserve memory opregion mappingsRafael J. Wysocki
ACPICA's strategy with respect to the handling of memory mappings associated with memory operation regions is to avoid mapping the entire region at once, which may be problematic at least in principle (for example, it may lead to conflicts with overlapping mappings having different attributes created by drivers). It may also be wasteful, because memory opregions on some systems take up vast chunks of address space while the fields in those regions actually accessed by AML are sparsely distributed. For this reason, a one-page "window" is mapped for a given opregion on the first memory access through it and if that "window" does not cover an address range accessed through that opregion subsequently, it is unmapped and a new "window" is mapped to replace it. Next, if the new "window" is not sufficient to access memory through the opregion in question in the future, it will be replaced with yet another "window" and so on. That may lead to a suboptimal sequence of memory mapping and unmapping operations, for example if two fields in one opregion separated from each other by a sufficiently wide chunk of unused address space are accessed in an alternating pattern. The situation may still be suboptimal if the deferred unmapping introduced previously is supported by the OS layer. For instance, the alternating memory access pattern mentioned above may produce a relatively long list of mappings to release with substantial duplication among the entries in it, which could be avoided if acpi_ex_system_memory_space_handler() did not release the mapping used by it previously as soon as the current access was not covered by it. In order to improve that, modify acpi_ex_system_memory_space_handler() to preserve all of the memory mappings created by it until the memory regions associated with them go away. Accordingly, update acpi_ev_system_memory_region_setup() to unmap all memory associated with memory opregions that go away. Reported-by: Dan Williams <dan.j.williams@intel.com> Tested-by: Xiang Li <xiang.z.li@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
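The bookkeeping this implies is essentially a per-opregion list of mappings that is torn down only when the region itself goes away; a schematic sketch (field names are illustrative, not necessarily ACPICA's):

    struct acpi_mem_mapping {
            acpi_physical_address    physical_address;
            u8                       *logical_address;
            acpi_size                length;
            struct acpi_mem_mapping *next_mm;
    };

    struct acpi_mem_space_context {
            u32                      length;
            acpi_physical_address    address;
            struct acpi_mem_mapping *cur_mm;   /* mapping used by the last access */
            struct acpi_mem_mapping *first_mm; /* list head, freed at region deletion */
    };

On an access that cur_mm does not cover, the handler first walks the first_mm list for an existing "window" and only maps (and links in) a new one if none matches, instead of unmapping the old one.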
2020-07-27ACPI: OSL: Implement deferred unmapping of ACPI memoryRafael J. Wysocki
The ACPI OS layer in Linux uses RCU to protect the walkers of the list of ACPI memory mappings from seeing an inconsistent state while it is being updated. Among other situations, that list can be walked in (NMI and non-NMI) interrupt context, so using a sleeping lock to protect it is not an option. However, performance issues related to the RCU usage in there appear, as described by Dan Williams: "Recently a performance problem was reported for a process invoking a non-trivial ASL program. The method call in this case ends up repetitively triggering a call path like: acpi_ex_store -> acpi_ex_store_object_to_node -> acpi_ex_write_data_to_field -> acpi_ex_insert_into_field -> acpi_ex_write_with_update_rule -> acpi_ex_field_datum_io -> acpi_ex_access_region -> acpi_ev_address_space_dispatch -> acpi_ex_system_memory_space_handler -> acpi_os_map_cleanup.part.14 -> _synchronize_rcu_expedited.constprop.89 -> schedule. The end result of frequent synchronize_rcu_expedited() invocation is tiny sub-millisecond spurts of execution where the scheduler freely migrates this apparently sleepy task. The overhead of frequent scheduler invocation multiplies the execution time by a factor of 2-3X." The source of this is that acpi_ex_system_memory_space_handler() unmaps the memory mapping currently cached by it at the access time if that mapping doesn't cover the memory area being accessed. Consequently, if there is a memory opregion with two fields separated from each other by an unused chunk of address space that is large enough not to be covered by a single mapping, and they happen to be used in an alternating pattern, the unmapping will occur on every acpi_ex_system_memory_space_handler() invocation for that memory opregion and that will lead to significant overhead. Moreover, acpi_ex_system_memory_space_handler() carries out the memory unmapping with the namespace and interpreter mutexes held, which may lead to additional latency, because all of the tasks wanting to acquire one of these mutexes need to wait for the memory unmapping operation to complete. To address that, rework acpi_os_unmap_memory() so that it does not release the memory mapping covering the given address range right away and instead make it queue up the mapping at hand for removal via queue_rcu_work(). Reported-by: Dan Williams <dan.j.williams@intel.com> Tested-by: Xiang Li <xiang.z.li@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
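A minimal sketch of the queue_rcu_work() pattern described here (struct layout and helper names are simplified, not lifted verbatim from the patch):

    struct acpi_ioremap {
            struct list_head      list;
            void __iomem          *virt;
            acpi_physical_address phys;
            acpi_size             size;
            unsigned long         refcount;
            struct rcu_work       track;   /* used once the refcount hits zero */
    };

    static void acpi_os_map_remove(struct work_struct *work)
    {
            struct acpi_ioremap *map =
                    container_of(to_rcu_work(work), struct acpi_ioremap, track);

            /* Runs after an RCU grace period, outside the AML interpreter
             * and namespace mutexes. */
            acpi_unmap(map->phys, map->virt);
            kfree(map);
    }

    static void acpi_os_drop_map_ref(struct acpi_ioremap *map)
    {
            if (--map->refcount > 0)
                    return;

            list_del_rcu(&map->list);
            /* Defer the unmap instead of calling synchronize_rcu_expedited()
             * and unmapping right here. */
            INIT_RCU_WORK(&map->track, acpi_os_map_remove);
            queue_rcu_work(system_wq, &map->track);
    }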
2020-07-27MAINTAINERS: Add Git repository for memory controller driversKrzysztof Kozlowski
Add Krzysztof Kozlowski's dedicated Git repository on kernel.org for memory controller drivers. Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
2020-07-27memory: brcmstb_dpfe: Fix language typoKrzysztof Kozlowski
Fix firwmare -> firmware. Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Acked-by: Florian Fainelli <f.fainelli@gmail.com>
2020-07-27memory: samsung: exynos5422-dmc: Correct white space issuesKrzysztof Kozlowski
Remove unneeded blank line and align indentation with open parenthesis. Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
2020-07-27memory: samsung: exynos-srom: Correct alignmentKrzysztof Kozlowski
Align indentation with open parenthesis (or fix existing alignment). Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
2020-07-27memory: pl172: Enclose macro argument usage in parenthesisKrzysztof Kozlowski
Macro arguments should be enclosed in parentheses for safety. Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
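The change is purely defensive; the classic failure mode it guards against looks like this (a generic example, not the pl172 macros):

    /* Unparenthesized argument: operator precedence bites the caller. */
    #define REG_OFFSET_UNSAFE(n)    n * 4
    /* REG_OFFSET_UNSAFE(idx + 1) expands to idx + 1 * 4, i.e. idx + 4 -- wrong. */

    /* Parenthesized argument (and result) always expands as a unit. */
    #define REG_OFFSET(n)           ((n) * 4)
    /* REG_OFFSET(idx + 1) expands to ((idx + 1) * 4), as intended. */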
2020-07-27memory: of: Correct kerneldocKrzysztof Kozlowski
Use proper kerneldoc to fix GCC warnings like: drivers/memory/of_memory.c:30: warning: Function parameter or member 'dev' not described in 'of_get_min_tck' Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
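For reference, the expected shape of such a comment is the standard kerneldoc layout where every parameter gets an @name line and the return value is documented; a generic (hypothetical) example:

    /**
     * foo_get_timings - look up controller timings for a device
     * @np:  device tree node describing the memory controller
     * @dev: device requesting the timings, used for error reporting
     *
     * Return: pointer to the parsed timings, or NULL on failure.
     */
    static const struct foo_timings *foo_get_timings(struct device_node *np,
                                                     struct device *dev);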
2020-07-27memory: omap-gpmc: Fix language typoKrzysztof Kozlowski
Fix arbitary -> arbitrary. Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
2020-07-27memory: omap-gpmc: Correct white space issuesKrzysztof Kozlowski
Remove some unneeded blank lines, align indentation with open parenthesis (or fix existing alignment). Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
2020-07-27memory: omap-gpmc: Use 'unsigned int' for consistencyKrzysztof Kozlowski
Driver uses 'unsigned int' in other places instead of 'unsigned'. Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
2020-07-27memory: omap-gpmc: Enclose macro argument usage in parenthesisKrzysztof Kozlowski
Macro arguments should be enclosed in parentheses for safety. Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
2020-07-27memory: omap-gpmc: Correct kerneldocKrzysztof Kozlowski
Use proper kerneldoc to fix GCC warnings like: drivers/memory/omap-gpmc.c:299: warning: Function parameter or member 'cs' not described in 'gpmc_get_clk_period' drivers/memory/omap-gpmc.c:432: warning: Excess function parameter 'ma' description in 'get_gpmc_timing_reg' Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
2020-07-27memory: mvebu-devbus: Align with open parenthesisKrzysztof Kozlowski
The line continuation contained spaces but still failed to properly align with open parenthesis. Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
2020-07-27memory: mvebu-devbus: Add missing braces to all arms of if statementKrzysztof Kozlowski
Add missing braces to all arms of if statement to align with coding convention. Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
2020-07-27memory: bt1-l2-ctl: Add blank lines after declarationsKrzysztof Kozlowski
Add blank lines to improve code readability. No functional change. Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
2020-07-27s390/vmemmap: coding style updatesHeiko Carstens
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2020-07-27s390/vmemmap: avoid memset(PAGE_UNUSED) when adding consecutive sectionsDavid Hildenbrand
Let's avoid memset(PAGE_UNUSED) when adding consecutive sections, whereby the vmemmap of a single section does not span full PMDs. Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20200722094558.9828-10-david@redhat.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2020-07-27s390/vmemmap: remember unused sub-pmd rangesDavid Hildenbrand
With a memmap size of 56 bytes or 72 bytes per page, the memmap for a 256 MB section won't span full PMDs. As we populate single sections and depopulate single sections, the depopulation step would not be able to free all vmemmap pmds anymore. Do it similarly to x86, marking the unused memmap ranges in a special way (padding them with 0xFD). This allows us to add/remove sections, cleaning up all allocated vmemmap pages even if the memmap size is not a multiple of 16 bytes per page. A 56 byte memmap can, for example, be created with !CONFIG_MEMCG and !CONFIG_SLUB. Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20200722094558.9828-9-david@redhat.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
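Conceptually this is two small helpers: pad the not-yet-used part of a PMD-sized vmemmap block with a magic byte, and allow the PMD to be freed only once its whole range carries that marker; a simplified sketch (helper names are illustrative):

    #define PAGE_UNUSED     0xFD

    /* Pad the part of a PMD-sized vmemmap block that backs no struct pages yet. */
    static void vmemmap_mark_unused(unsigned long start, unsigned long end)
    {
            memset((void *)start, PAGE_UNUSED, end - start);
    }

    /* The backing PMD may be freed once every byte still carries the marker. */
    static bool vmemmap_range_unused(unsigned long start, unsigned long end)
    {
            return !memchr_inv((void *)start, PAGE_UNUSED, end - start);
    }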
2020-07-27s390/vmemmap: fallback to PTEs if mapping large PMD failsDavid Hildenbrand
Let's fallback to single pages if short on huge pages. No need to stop memory hotplug. Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20200722094558.9828-8-david@redhat.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2020-07-27s390/vmem: cleanup empty page tablesDavid Hildenbrand
Let's clean up empty page tables. Consider only page tables that fully fall into the identity mapping and the vmemmap range. As there are no valid accesses to vmem/vmemmap within non-populated ranges, the single tlb flush at the end should be sufficient. Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20200722094558.9828-7-david@redhat.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2020-07-27s390/vmemmap: take the vmem_mutex when populating/freeingDavid Hildenbrand
Let's synchronize all accesses to the 1:1 and vmemmap mappings. This will be especially relevant when wanting to cleanup empty page tables that could be shared by both. Avoid races when removing tables that might be just about to get reused. Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20200722094558.9828-6-david@redhat.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2020-07-27s390/vmemmap: cleanup when vmemmap_populate() failsDavid Hildenbrand
Cleanup what we partially added in case vmemmap_populate() fails. For vmem, this is already handled by vmem_add_mapping(). Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20200722094558.9828-5-david@redhat.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2020-07-27s390/vmemmap: extend modify_pagetable() to handle vmemmapDavid Hildenbrand
Extend our shiny new modify_pagetable() to handle !direct (vmemmap) mappings. Convert vmemmap_populate() and implement vmemmap_free(). Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20200722094558.9828-4-david@redhat.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2020-07-27s390/vmem: consolidate vmem_add_range() and vmem_remove_range()David Hildenbrand
We want to have only a single pagetable walker and reuse the same functionality for vmemmap handling. Let's start by consolidating vmem_add_range() and vmem_remove_range(), converting them into a recursive implementation. A recursive implementation makes it easier to expand individual cases without harming readability. In addition, we minimize traversing the whole hierarchy over and over again. One change is that we don't unmap large PMDs/PUDs when not completely covered by the request, something that should never happen with direct mappings, unless one would be removing at a different granularity than was added, which would be broken already. Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20200722094558.9828-3-david@redhat.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2020-07-27s390/vmem: rename vmem_add_mem() to vmem_add_range()David Hildenbrand
Let's match the name to vmem_remove_range(). Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20200722094558.9828-2-david@redhat.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2020-07-27s390: enable HAVE_FUNCTION_ERROR_INJECTIONIlya Leoshkevich
This kernel feature is required for enabling BPF_KPROBE_OVERRIDE. Define override_function_with_return() and regs_set_return_value() functions, and fix compile errors in syscall_wrapper.h. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2020-07-27s390/pci: clarify comment in s390_mmio_read/writeNiklas Schnelle
The existing comment was talking about reading in the write part and vice versa. While we are here make it more clear why restricting the syscalls to MIO capable devices is okay. Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2020-07-27io: Fix return type of _inb and _inlStafford Horne
The return type of functions _inb, _inw and _inl are all u16 which looks wrong. This patch makes them u8, u16 and u32 respectively. The original commit text for these does not indicate that these should be all forced to u16. Fixes: f009c89df79a ("io: Provide _inX() and _outX()") Signed-off-by: Stafford Horne <shorne@gmail.com> Reviewed-by: John Garry <john.garry@huawei.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>