2022-05-22  powerpc/kaslr_booke: Fix build error  (YueHaibing)

  arch/powerpc/mm/nohash/kaslr_booke.c: In function ‘kaslr_get_cmdline’:
  arch/powerpc/mm/nohash/kaslr_booke.c:46:2: error: implicit declaration of function ‘early_init_dt_scan_chosen’
    early_init_dt_scan_chosen(boot_command_line);
    ^~~~~~~~~~~~~~~~~~~~~~~~~
  arch/powerpc/mm/nohash/kaslr_booke.c: In function ‘get_initrd_range’:
  arch/powerpc/mm/nohash/kaslr_booke.c:210:10: error: implicit declaration of function ‘of_read_number’
    start = of_read_number(prop, len / 4);
            ^~~~~~~~~~~~~~

Add missing include files to fix this.

Fixes: 86c38fec69a4 ("powerpc: Remove asm/prom.h from all files that don't need it")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220517094900.14900-1-yuehaibing@huawei.com
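A minimal sketch of the sort of fix described above (hedged: these headers are where the two helpers are normally declared; the exact include list in the patch may differ):

    /* arch/powerpc/mm/nohash/kaslr_booke.c */
    #include <linux/of.h>      /* of_read_number() */
    #include <linux/of_fdt.h>  /* early_init_dt_scan_chosen() */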
2022-05-22  powerpc/book3e: Fix build error  (YueHaibing)

  arch/powerpc/mm/nohash/fsl_book3e.c: In function ‘relocate_init’:
  arch/powerpc/mm/nohash/fsl_book3e.c:348:2: error: implicit declaration of function ‘early_get_first_memblock_info’
    early_get_first_memblock_info(__va(dt_ptr), &size);
    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Add missing include file linux/of_fdt.h to fix this.

Fixes: 86c38fec69a4 ("powerpc: Remove asm/prom.h from all files that don't need it")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220517094830.27560-1-yuehaibing@huawei.com
2022-05-22  powerpc: Book3S 64-bit outline-only KASAN support  (Daniel Axtens)

Implement a limited form of KASAN for Book3S 64-bit machines running under the Radix MMU, supporting only outline mode.

 - Enable the compiler instrumentation to check addresses and maintain the shadow region. (This is the guts of KASAN which we can easily reuse.)
 - Require kasan-vmalloc support to handle modules and anything else in vmalloc space.
 - KASAN needs to be able to validate all pointer accesses, but we can't instrument all kernel addresses - only the linear map and vmalloc. On boot, set up a single page of read-only shadow that marks all iomap and vmemmap accesses as valid.
 - Document KASAN in the powerpc docs.

Background
----------

KASAN support on Book3S is a bit tricky to get right:

 - It would be good to support inline instrumentation so as to be able to catch stack issues that cannot be caught with outline mode.
 - Inline instrumentation requires a fixed offset.
 - Book3S runs code with translations off ("real mode") during boot, including a lot of generic device-tree parsing code which is used to determine MMU features.

   [ppc64 mm note: The kernel installs a linear mapping at effective address c000...-c008.... This is a one-to-one mapping with physical memory from 0000... onward. Because of how memory accesses work on powerpc 64-bit Book3S, a kernel pointer in the linear map accesses the same memory both with translations on (accessing as an 'effective address'), and with translations off (accessing as a 'real address'). This works in both guests and the hypervisor. For more details, see s5.7 of Book III of version 3 of the ISA, in particular the Storage Control Overview, s5.7.3, and s5.7.5 - noting that this KASAN implementation currently only supports Radix.]

 - Some code - most notably a lot of KVM code - also runs with translations off after boot.
 - Therefore any offset has to point to memory that is valid with translations on or off.

One approach is just to give up on inline instrumentation. This way boot-time checks can be delayed until after the MMU is set up, and we can just not instrument any code that runs with translations off after booting. Take this approach for now and require outline instrumentation.

Previous attempts allowed inline instrumentation. However, they came with some unfortunate restrictions: only physically contiguous memory could be used and it had to be specified at compile time. Maybe we can do better in the future.

[paulus@ozlabs.org - Rebased onto 5.17. Note that a kernel with CONFIG_KASAN=y will crash during boot on a machine using HPT translation because not all the entry points to the generic KASAN code are protected with a call to kasan_arch_is_ready().]

Originally-by: Balbir Singh <bsingharora@gmail.com> # ppc64 out-of-line radix version
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
[mpe: Update copyright year and comment formatting]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/YoTE69OQwiG7z+Gu@cleo
2022-05-22  powerpc/kasan: Disable address sanitization in kexec paths  (Daniel Axtens)

The kexec code paths involve code that necessarily runs in real mode, as CPUs are disabled and control is transferred to the new kernel. Disable address sanitization for the kexec code and the functions called in real mode on CPUs being disabled. [paulus@ozlabs.org: combined a few work-in-progress commits of Daniel's and wrote the commit message.] Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Paul Mackerras <paulus@ozlabs.org> [mpe: Move pseries_machine_kexec() into kexec.c so setup.c can be instrumented] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/YoTFSQ2TUSEaDdVC@cleo
2022-05-22  powerpc/kasan: Don't instrument non-maskable or raw interrupts  (Daniel Axtens)

Disable address sanitization for raw and non-maskable interrupt handlers, because they can run in real mode, where we cannot access the shadow memory. (Note that kasan_arch_is_ready() doesn't test for real mode, since it is a static branch for speed, and in any case not all the entry points to the generic KASAN code are protected by kasan_arch_is_ready guards.)

The changes to interrupt_nmi_enter/exit_prepare() look larger than they actually are. The changes are equivalent to adding !IS_ENABLED(CONFIG_KASAN) to the conditions for calling nmi_enter() or nmi_exit() in real mode. That is, the code is equivalent to using the following condition for calling nmi_enter/exit:

  if (((!IS_ENABLED(CONFIG_PPC_BOOK3S_64) ||
        !firmware_has_feature(FW_FEATURE_LPAR) ||
        radix_enabled()) &&
       !IS_ENABLED(CONFIG_KASAN) ||
      (mfmsr() & MSR_DR))

That unwieldy condition has been split into several statements with comments, for easier reading.

The nmi_ipi_lock functions that call atomic functions (i.e., nmi_ipi_lock_start(), nmi_ipi_lock() and nmi_ipi_unlock()), besides being marked noinstr, now call arch_atomic_* functions instead of atomic_* functions because with KASAN enabled, the atomic_* functions are wrappers which explicitly do address sanitization on their arguments. Since we are trying to avoid address sanitization, we have to use the lower-level arch_atomic_* versions.

In hv_nmi_check_nonrecoverable(), the regs_set_unrecoverable() call has been open-coded so as to avoid having to either trust the inlining or mark regs_set_unrecoverable() as noinstr.

[paulus@ozlabs.org: combined a few work-in-progress commits of Daniel's and wrote the commit message.]

Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/YoTFGaKM8Pd46PIK@cleo
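For illustration, a condition like that can be split into commented steps along these lines (a hypothetical sketch, not the actual interrupt_nmi_enter_prepare() hunk; the local variable name is made up):

    bool nmi_safe_in_real_mode;

    /* An HPT LPAR (Book3S 64-bit, hash MMU, under a hypervisor) cannot
     * safely do the nmi_enter() bookkeeping in real mode. */
    nmi_safe_in_real_mode = !IS_ENABLED(CONFIG_PPC_BOOK3S_64) ||
                            !firmware_has_feature(FW_FEATURE_LPAR) ||
                            radix_enabled();

    /* KASAN instruments nmi_enter(), and the shadow is unreachable in
     * real mode, so rule that configuration out as well. */
    nmi_safe_in_real_mode = nmi_safe_in_real_mode && !IS_ENABLED(CONFIG_KASAN);

    /* With the MMU on (MSR_DR set) there is no restriction at all. */
    if (nmi_safe_in_real_mode || (mfmsr() & MSR_DR))
        nmi_enter();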
2022-05-22  powerpc/mm/kasan: rename kasan_init_32.c to init_32.c  (Daniel Axtens)
kasan is already implied by the directory name, we don't need to repeat it. Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/YoTEyoi+xu9brJYe@cleo
2022-05-22  kasan: Document support on 32-bit powerpc  (Daniel Axtens)
KASAN is supported on 32-bit powerpc and the docs should reflect this. Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/YoTEnMLrnd64j0w5@cleo
2022-05-22  powerpc/ftrace: Remove ftrace init tramp once kernel init is complete  (Naveen N. Rao)
Stop using the ftrace trampoline for init section once kernel init is complete. Fixes: 67361cf8071286 ("powerpc/ftrace: Handle large kernel configs") Cc: stable@vger.kernel.org # v4.20+ Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220516071422.463738-1-naveen.n.rao@linux.vnet.ibm.com
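A rough sketch of the idea (hypothetical; the names mirror the existing powerpc ftrace trampoline bookkeeping but the code is not taken from the patch):

    /* Called once the init text has been freed: forget any trampoline that
     * lives in the discarded init section so it can no longer be chosen. */
    void ftrace_free_init_tramp(void)
    {
        int i;

        for (i = 0; i < NUM_FTRACE_TRAMPS && ftrace_tramps[i]; i++) {
            if (ftrace_tramps[i] >= (unsigned long)_sinittext &&
                ftrace_tramps[i] < (unsigned long)_einittext) {
                ftrace_tramps[i] = 0;
                return;
            }
        }
    }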
2022-05-22  powerpc/irq: Remove arch_local_irq_restore() for !CONFIG_CC_HAS_ASM_GOTO  (Christophe Leroy)
All supported versions of GCC & clang support asm goto. Remove the !CONFIG_CC_HAS_ASM_GOTO version of arch_local_irq_restore() Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/58df50c9e77e2ed945bacdead30412770578886b.1652715336.git.christophe.leroy@csgroup.eu
2022-05-22  selftests/powerpc: Better reporting in spectre_v2  (Russell Currey)
In commit f3054ffd71b5 ("selftests/powerpc: Return skip code for spectre_v2"), the spectre_v2 selftest is updated to be aware of cases where the vulnerability status reported in sysfs is incorrect, skipping the test instead. This happens because qemu can misrepresent the mitigation status of the host to the guest. If the count cache is disabled in the host, and this is correctly reported to the guest, then the guest won't apply mitigations. If the guest is then migrated to a new host where mitigations are necessary, it is now vulnerable because it has not applied mitigations. Update the selftest to report when we see excessive misses, indicative of the count cache being disabled. If software flushing is enabled, also warn that these flushes are just wasting performance. Signed-off-by: Russell Currey <ruscur@russell.cc> [mpe: Rebase and update change log appropriately] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210608064809.199116-1-ruscur@russell.cc
2022-05-22  powerpc/powernv: Get STF barrier requirements from device-tree  (Russell Currey)
The device-tree property no-need-store-drain-on-priv-state-switch is equivalent to H_CPU_BEHAV_NO_STF_BARRIER from the H_CPU_GET_CHARACTERISTICS hcall on pseries. Since commit 84ed26fd00c5 ("powerpc/security: Add a security feature for STF barrier") powernv systems with this device-tree property have been enabling the STF barrier when they have no need for it. This patch fixes this by clearing the STF barrier feature on those systems. Fixes: 84ed26fd00c5 ("powerpc/security: Add a security feature for STF barrier") Reported-by: Joel Stanley <joel@jms.id.au> Signed-off-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220404101536.104794-2-ruscur@russell.cc
2022-05-22  powerpc/powernv: Get L1D flush requirements from device-tree  (Russell Currey)
The device-tree properties no-need-l1d-flush-msr-pr-1-to-0 and no-need-l1d-flush-kernel-on-user-access are the equivalents of H_CPU_BEHAV_NO_L1D_FLUSH_ENTRY and H_CPU_BEHAV_NO_L1D_FLUSH_UACCESS from the H_GET_CPU_CHARACTERISTICS hcall on pseries respectively. In commit d02fa40d759f ("powerpc/powernv: Remove POWER9 PVR version check for entry and uaccess flushes") the condition for disabling the L1D flush on kernel entry and user access was changed from any non-P9 CPU to only checking P7 and P8. Without the appropriate device-tree checks for newer processors on powernv, these flushes are unnecessarily enabled on those systems. This patch corrects this. Fixes: d02fa40d759f ("powerpc/powernv: Remove POWER9 PVR version check for entry and uaccess flushes") Reported-by: Joel Stanley <joel@jms.id.au> Signed-off-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220404101536.104794-1-ruscur@russell.cc
2022-05-22  powerpc/85xx/p2020: Add fsl,mpc8548-pmc node  (Pali Rohár)

The P2020 also contains a Power Management Controller, with its registers at offset 0xe0070, compatible with the mpc8548. So add a PMC node to the DTS include file fsl/p2020si-post.dtsi. Signed-off-by: Pali Rohár <pali@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220506203621.26314-1-pali@kernel.org
2022-05-22  powerpc/64: Only WARN if __pa()/__va() called with bad addresses  (Michael Ellerman)
We added checks to __pa() / __va() to ensure they're only called with appropriate addresses. But using BUG_ON() is too strong, it means virt_addr_valid() will BUG when DEBUG_VIRTUAL is enabled. Instead switch them to warnings, arm64 does the same. Fixes: 4dd7554a6456 ("powerpc/64: Add VIRTUAL_BUG_ON checks for __va and __pa addresses") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220406145802.538416-5-mpe@ellerman.id.au
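A sketch of the direction (the macro name and shape here are an assumption, not the actual page.h hunk):

    /* warn, but keep going, when DEBUG_VIRTUAL catches a bad translation */
    #define VIRTUAL_WARN_ON(x)  WARN_ON(IS_ENABLED(CONFIG_DEBUG_VIRTUAL) && (x))

    /* __pa()/__va() then use VIRTUAL_WARN_ON() where they used VIRTUAL_BUG_ON(),
     * e.g. VIRTUAL_WARN_ON((unsigned long)(x) >= PAGE_OFFSET) in __va(x). */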
2022-05-22  arch/Kconfig: Drop references to powerpc PAGE_SIZE symbols  (Michael Ellerman)
In the previous commit powerpc added PAGE_SIZE related config symbols using the generic names. So there's no need to refer to them in the definition of PAGE_SIZE_LESS_THAN_64KB etc, the negative dependency on the generic symbol is sufficient (in this case !PAGE_SIZE_64KB). Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220505125123.2088143-2-mpe@ellerman.id.au
2022-05-22  powerpc: Add generic PAGE_SIZE config symbols  (Michael Ellerman)
Other arches (sh, mips, hexagon) use standard names for PAGE_SIZE related config symbols. Add matching symbols for powerpc, which are enabled by default but depend on our architecture specific PAGE_SIZE symbols. This allows generic/driver code to express dependencies on the PAGE_SIZE without needing to refer to architecture specific config symbols. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220505125123.2088143-1-mpe@ellerman.id.au
2022-05-22  powerpc/pseries/vas: sysfs comments with the correct entries  (Haren Myneni)

The VAS entry is created as a misc device, and the sysfs comments should list the proper entries. Reported-by: Matheus Castanho <mscastanho@ibm.com> Signed-off-by: Haren Myneni <haren@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/6dee950c7b72a4965c102208041f14a063cf5a8c.camel@linux.ibm.com
2022-05-22  powerpc/powernv/vas: Assign real address to rx_fifo in vas_rx_win_attr  (Haren Myneni)

In init_winctx_regs(), __pa() is called on winctx->rx_fifo, and this function is called to initialize registers for receive and fault windows. But the real address is passed in winctx->rx_fifo for receive windows and the virtual address for fault windows, which causes errors with DEBUG_VIRTUAL enabled. Fix this issue by assigning only the real address to rx_fifo in the vas_rx_win_attr struct for both receive and fault windows. Reported-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Haren Myneni <haren@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/338e958c7ab8f3b266fa794a1f80f99b9671829e.camel@linux.ibm.com
2022-05-22  powerpc/opcodes: Remove unused PPC_INST_XXX macros  (Christophe Leroy)

The following PPC_INST_XXX macros are not used anymore outside ppc-opcode.h:
 - PPC_INST_LD
 - PPC_INST_STD
 - PPC_INST_ADDIS
 - PPC_INST_ADD
 - PPC_INST_DIVD

Remove them.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8c28636126f69141419953b5638b4a908c184dc1.1652074503.git.christophe.leroy@csgroup.eu
2022-05-22  powerpc/inst: Remove PPC_INST_BL  (Christophe Leroy)

Convert the last users of PPC_INST_BL to PPC_RAW_BL(), and remove PPC_INST_BL. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d9eacb758e7ae7cf224211ebe3f6f7d409a333be.1652074503.git.christophe.leroy@csgroup.eu
2022-05-22  powerpc/modules: Use PPC_LI macros instead of opencoding  (Christophe Leroy)
Use PPC_LI_MASK and PPC_LI() instead of opencoding. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/3d56d7bc3200403773d54e62659d0e01292a055d.1652074503.git.christophe.leroy@csgroup.eu
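For context, the helpers boil down to masking the LI offset field of an I-form branch (a sketch of their likely shape; the authoritative definitions live in ppc-opcode.h):

    #define PPC_LI_MASK 0x03fffffc
    #define PPC_LI(v)   ((v) & PPC_LI_MASK)

    /* open-coded:   insn |= offset & 0x03fffffc;  */
    /* with helper:  insn |= PPC_LI(offset);       */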
2022-05-22  powerpc/inst: Remove PPC_INST_BRANCH  (Christophe Leroy)

Convert the last users of PPC_INST_BRANCH to PPC_RAW_BRANCH(), and remove PPC_INST_BRANCH. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/fa8807108a2ef2287a2c9651d6e1ff7c051923d9.1652074503.git.christophe.leroy@csgroup.eu
2022-05-22  powerpc/ftrace: Don't use copy_from_kernel_nofault() in module_trampoline_target()  (Christophe Leroy)

module_trampoline_target() is quite a hot path used when activating/deactivating the function tracer. Avoid the heavy copy_from_kernel_nofault() by doing four calls to copy_inst_from_kernel_nofault(). Use __copy_inst_from_kernel_nofault() for the 3 last calls. The first call is done with copy_inst_from_kernel_nofault() to check that the address is within kernel space. There is no risk of wrapping past the top of kernel space because the last page is never mapped, so if the address is in the last page the first copy will fail and the other ones will never be performed. And also make it notrace, just like all functions that call it. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/c55559103e014b7863161559d340e8e9484eaaa6.1652074503.git.christophe.leroy@csgroup.eu
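A sketch of the access pattern being described (hypothetical, not the real module code; it assumes the helpers take a (ppc_inst_t *, u32 *) pair):

    ppc_inst_t op[4];

    /* checked read: also proves 'addr' is a kernel address */
    if (copy_inst_from_kernel_nofault(&op[0], (u32 *)addr))
        return -EFAULT;

    /* the last kernel page is never mapped, so if the first read succeeded
     * these unchecked reads cannot wrap past the top of kernel space */
    if (__copy_inst_from_kernel_nofault(&op[1], (u32 *)addr + 1) ||
        __copy_inst_from_kernel_nofault(&op[2], (u32 *)addr + 2) ||
        __copy_inst_from_kernel_nofault(&op[3], (u32 *)addr + 3))
        return -EFAULT;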
2022-05-22  powerpc/inst: Add __copy_inst_from_kernel_nofault()  (Christophe Leroy)

On the same model as get_user() versus __get_user(), introduce __copy_inst_from_kernel_nofault() which doesn't check the address. To be used by callers that have already checked that the address is a kernel address. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1f3702890d6dbd64702b61834753bcc96851c18c.1652074503.git.christophe.leroy@csgroup.eu
2022-05-22  powerpc/ftrace: Minimise number of #ifdefs  (Christophe Leroy)

A lot of #ifdefs can be replaced by IS_ENABLED(). Do so. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> [mpe: Fold in changes suggested by Naveen and Christophe on list] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/18ce6708d6f8c71d87436f9c6019f04df4125128.1652074503.git.christophe.leroy@csgroup.eu
2022-05-22  powerpc/ftrace: Simplify expected_nop_sequence()  (Christophe Leroy)
Avoid ifdefs around expected_nop_sequence(). While at it make it a bool. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/305d22472f1f92127fba09692df6bb5d079a8cd0.1652074503.git.christophe.leroy@csgroup.eu
2022-05-22  powerpc/ftrace: Use size macro instead of opencoding  (Christophe Leroy)
0x80000000 is SZ_2G. Use it. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> [mpe: Fix comparison against unsigned -SZ_2G] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/bb6626e884acffe87b58736291df57db3deaa9b9.1652074503.git.christophe.leroy@csgroup.eu
2022-05-22  smb3: add trace point for oplock not found  (Steve French)
In order to debug problems with server potentially sending us an oplock that we don't recognize (or a race with close and oplock break) it would be helpful to have a dynamic trace point for this case. New tracepoint is called trace_smb3_oplock_not_found Signed-off-by: Steve French <stfrench@microsoft.com>
2022-05-22  cifs: return the more nuanced writeback error on close()  (ChenXiaoSong)

As filemap_check_errors() only reports -EIO or -ENOSPC, return the more nuanced writeback error -(file->f_mapping->wb_err & MAX_ERRNO).

  filemap_write_and_wait
    filemap_write_and_wait_range
      filemap_check_errors
        -ENOSPC or -EIO

  filemap_check_wb_err
    errseq_check
      return -(file->f_mapping->wb_err & MAX_ERRNO)

Signed-off-by: ChenXiaoSong <chenxiaosong2@huawei.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
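A sketch of the pattern (hypothetical, not the actual cifs hunk):

    /* filemap_write_and_wait() collapses everything into -EIO/-ENOSPC;
     * the errseq in wb_err still holds the original errno. */
    rc = filemap_write_and_wait(file->f_mapping);
    if (rc)
        rc = -(file->f_mapping->wb_err & MAX_ERRNO);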
2022-05-21  smb3: add trace point for lease not found issue  (Steve French)
When trying to debug problems with server sending us a lease we don't recognize, it would be helpful to have a dynamic trace point for this case. New tracepoint is called trace_smb3_lease_not_found Acked-by: Ronnie Sahlberg <lsahlber@redhat.com> Signed-off-by: Steve French <stfrench@microsoft.com>
2022-05-21  cifs: smbd: fix typo in comment  (Julia Lawall)
Spelling mistake (triple letters) in comment. Detected with the help of Coccinelle. Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr> Signed-off-by: Steve French <stfrench@microsoft.com>
2022-05-21  ext4: fix bug_on in ext4_writepages  (Ye Bin)

We got an issue as follows:

  EXT4-fs error (device loop0): ext4_mb_generate_buddy:1141: group 0, block bitmap and bg descriptor inconsistent: 25 vs 31513 free cls
  ------------[ cut here ]------------
  kernel BUG at fs/ext4/inode.c:2708!
  invalid opcode: 0000 [#1] PREEMPT SMP KASAN PTI
  CPU: 2 PID: 2147 Comm: rep Not tainted 5.18.0-rc2-next-20220413+ #155
  RIP: 0010:ext4_writepages+0x1977/0x1c10
  RSP: 0018:ffff88811d3e7880 EFLAGS: 00010246
  RAX: 0000000000000000 RBX: 0000000000000001 RCX: ffff88811c098000
  RDX: 0000000000000000 RSI: ffff88811c098000 RDI: 0000000000000002
  RBP: ffff888128140f50 R08: ffffffffb1ff6387 R09: 0000000000000000
  R10: 0000000000000007 R11: ffffed10250281ea R12: 0000000000000001
  R13: 00000000000000a4 R14: ffff88811d3e7bb8 R15: ffff888128141028
  FS:  00007f443aed9740(0000) GS:ffff8883aef00000(0000) knlGS:0000000000000000
  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: 0000000020007200 CR3: 000000011c2a4000 CR4: 00000000000006e0
  DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
  DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
  Call Trace:
   <TASK>
   do_writepages+0x130/0x3a0
   filemap_fdatawrite_wbc+0x83/0xa0
   filemap_flush+0xab/0xe0
   ext4_alloc_da_blocks+0x51/0x120
   __ext4_ioctl+0x1534/0x3210
   __x64_sys_ioctl+0x12c/0x170
   do_syscall_64+0x3b/0x90

It may happen as follows:

1. write inline_data inode
     vfs_write
       new_sync_write
         ext4_file_write_iter
           ext4_buffered_write_iter
             generic_perform_write
               ext4_da_write_begin
                 ext4_da_write_inline_data_begin
                   -> if the inline data size is too small, a block is allocated for the write, so the mapping will have a dirty page
                   ext4_da_convert_inline_data_to_extent
                     -> clear EXT4_STATE_MAY_INLINE_DATA

2. fallocate
     do_vfs_ioctl
       ioctl_preallocate
         vfs_fallocate
           ext4_fallocate
             ext4_convert_inline_data
               ext4_convert_inline_data_nolock
                 ext4_map_blocks -> on failure, go to restore the data
                 ext4_restore_inline_data
                   ext4_create_inline_data
                     ext4_write_inline_data
                   ext4_set_inode_state -> set inode EXT4_STATE_MAY_INLINE_DATA

3. writepages
     __ext4_ioctl
       ext4_alloc_da_blocks
         filemap_flush
           filemap_fdatawrite_wbc
             do_writepages
               ext4_writepages
                 if (ext4_has_inline_data(inode))
                   BUG_ON(ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA))

The root cause of this issue is that under delayed allocation we do not destroy the inline data until ext4_writepages() is called, but by then the inode may already have been converted from inline to extent. To solve this issue, call filemap_flush() first.

Cc: stable@kernel.org
Signed-off-by: Ye Bin <yebin10@huawei.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20220516122634.1690462-1-yebin10@huawei.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2022-05-21  ext4: refactor and move ext4_ioctl_get_encryption_pwsalt()  (Ritesh Harjani)

This patch moves the code for the FS_IOC_GET_ENCRYPTION_PWSALT case into ext4's crypto.c file, i.e. ext4_ioctl_get_encryption_pwsalt() and uuid_is_zero(). This is mostly a refactoring of logic and should not cause any functional change. Suggested-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Ritesh Harjani <ritesh.list@gmail.com> Link: https://lore.kernel.org/r/5af98b17152a96b245b4f7d2dfb8607fc93e36aa.1652595565.git.ritesh.list@gmail.com Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2022-05-21  ext4: cleanup function defs from ext4.h into crypto.c  (Ritesh Harjani)
Some of these functions when CONFIG_FS_ENCRYPTION is enabled are not really inline (let compiler be the best judge of it). Remove inline and move them into crypto.c where they should be present. Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Ritesh Harjani <ritesh.list@gmail.com> Link: https://lore.kernel.org/r/b7b9de2c7226298663fb5a0c28909135e2ab220f.1652595565.git.ritesh.list@gmail.com Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2022-05-21  ext4: move ext4 crypto code to its own file crypto.c  (Ritesh Harjani)
This is to cleanup super.c file which has grown quite large. So, start moving ext4 crypto related code to where it should be in the first place i.e. fs/ext4/crypto.c Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Ritesh Harjani <ritesh.list@gmail.com> Link: https://lore.kernel.org/r/7d637e093cbc34d727397e8d41a53a1b9ca7d7a4.1652595565.git.ritesh.list@gmail.com Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2022-05-21  Merge tag 'perf-tools-fixes-for-v5.18-2022-05-21' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux  (Linus Torvalds)

Pull perf tools fixes from Arnaldo Carvalho de Melo:

 - Fix and validate CPU map inputs in synthetic PERF_RECORD_STAT events in 'perf stat'.
 - Fix x86's arch__intr_reg_mask() for the hybrid platform.
 - Address 'perf bench numa' compiler error on s390.
 - Fix check for btf__load_from_kernel_by_id() in libbpf.
 - Fix "all PMU test" 'perf test' to skip hv_24x7/hv_gpci tests on powerpc.
 - Fix session topology test to skip the test in guest environment.
 - Skip BPF 'perf test' if clang is not present.
 - Avoid shell test description infinite loop in 'perf test'.
 - Fix Intel LBR callstack entries and nr print message.

* tag 'perf-tools-fixes-for-v5.18-2022-05-21' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux:
  perf session: Fix Intel LBR callstack entries and nr print message
  perf test bpf: Skip test if clang is not present
  perf test session topology: Fix test to skip the test in guest environment
  perf bench numa: Address compiler error on s390
  perf test: Avoid shell test description infinite loop
  perf regs x86: Fix arch__intr_reg_mask() for the hybrid platform
  perf test: Fix "all PMU test" to skip hv_24x7/hv_gpci tests on powerpc
  perf stat: Fix and validate CPU map inputs in synthetic PERF_RECORD_STAT events
  perf build: Fix check for btf__load_from_kernel_by_id() in libbpf
2022-05-21  Merge tag 'input-for-v5.18-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input  (Linus Torvalds)

Pull input fixes from Dmitry Torokhov:
 "A small fixup to ili210x touchscreen driver, and updated maintainer entry for the device tree binding of Mediatek 6779 keypad:

  - fix reset timing of Ilitek touchscreens
  - update maintainer entry of DT binding of Mediatek 6779 keypad"

* tag 'input-for-v5.18-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input:
  Input: ili210x - use one common reset implementation
  Input: ili210x - fix reset timing
  dt-bindings: input: mediatek,mt6779-keypad: update maintainer
2022-05-21  Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi  (Linus Torvalds)

Pull SCSI fixes from James Bottomley:
 "Two patches, both in drivers. The iscsi one is fixing the cpumask issue you commented on and the ufs one is a late arriving fix for conditions that can occur in Host Performance Booster reads"

* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
  scsi: ufs: core: Fix referencing invalid rsp field
  scsi: target: Fix incorrect use of cpumask_t
2022-05-21  Input: cypress_ps2 - fix typo in comment  (Julia Lawall)
Spelling mistake (triple letters) in comment. Detected with the help of Coccinelle. Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr> Link: https://lore.kernel.org/r/20220521111145.81697-27-Julia.Lawall@inria.fr Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
2022-05-21  perf session: Fix Intel LBR callstack entries and nr print message  (Chengdong Li)

When generating callstack information from branch_stack (Intel LBR), the actual number of callstack entries should be bigger than the number of branch_stack entries. For example:

  branch_stack records:
    B() -> C()
    A() -> B()

  converted callstack records should be:
    C()
    B()
    A()

Thus the number of callstack entries equals the number of branch stack entries plus 1. This patch fixes the above issue in branch_stack__printf().

For example,

  # echo 'scale=2000; 4*a(1)' > cmd
  # perf record --call-graph lbr bc -l < cmd

Before applying this patch, `perf script -D` output:

  1220022677386876 0x2a40 [0xd8]: PERF_RECORD_SAMPLE(IP, 0x4002): 17990/17990: 0x40a6d6 period: 894172 addr: 0
  ... LBR call chain: nr:8
  .....  0: fffffffffffffe00
  .....  1: 000000000040a410
  .....  2: 000000000040573c
  .....  3: 0000000000408650
  .....  4: 00000000004022f2
  .....  5: 00000000004015f5
  .....  6: 00007f5ed6dcb553
  .....  7: 0000000000401698
  ... FP chain: nr:2
  .....  0: fffffffffffffe00
  .....  1: 000000000040a6d8
  ... branch callstack: nr:6        # which is not consistent with LBR records.
  .....  0: 000000000040a410
  .....  1: 0000000000408650        # ditto
  .....  2: 00000000004022f2
  .....  3: 00000000004015f5
  .....  4: 00007f5ed6dcb553
  .....  5: 0000000000401698
  ... thread: bc:17990
  ...... dso: /usr/bin/bc
  bc 17990 1220022.677386: 894172 cycles:
          40a410 [unknown] (/usr/bin/bc)
          40573c [unknown] (/usr/bin/bc)
          408650 [unknown] (/usr/bin/bc)
          4022f2 [unknown] (/usr/bin/bc)
          4015f5 [unknown] (/usr/bin/bc)
          7f5ed6dcb553 __libc_start_main+0xf3 (/usr/lib64/libc-2.17.so)
          401698 [unknown] (/usr/bin/bc)

After applying the patch:

  1220022677386876 0x2a40 [0xd8]: PERF_RECORD_SAMPLE(IP, 0x4002): 17990/17990: 0x40a6d6 period: 894172 addr: 0
  ... LBR call chain: nr:8
  .....  0: fffffffffffffe00
  .....  1: 000000000040a410
  .....  2: 000000000040573c
  .....  3: 0000000000408650
  .....  4: 00000000004022f2
  .....  5: 00000000004015f5
  .....  6: 00007f5ed6dcb553
  .....  7: 0000000000401698
  ... FP chain: nr:2
  .....  0: fffffffffffffe00
  .....  1: 000000000040a6d8
  ... branch callstack: nr:7
  .....  0: 000000000040a410
  .....  1: 000000000040573c
  .....  2: 0000000000408650
  .....  3: 00000000004022f2
  .....  4: 00000000004015f5
  .....  5: 00007f5ed6dcb553
  .....  6: 0000000000401698
  ... thread: bc:17990
  ...... dso: /usr/bin/bc
  bc 17990 1220022.677386: 894172 cycles:
          40a410 [unknown] (/usr/bin/bc)
          40573c [unknown] (/usr/bin/bc)
          408650 [unknown] (/usr/bin/bc)
          4022f2 [unknown] (/usr/bin/bc)
          4015f5 [unknown] (/usr/bin/bc)
          7f5ed6dcb553 __libc_start_main+0xf3 (/usr/lib64/libc-2.17.so)
          401698 [unknown] (/usr/bin/bc)

Change from v1:
 - refined code style according to Jiri's review comments.

Signed-off-by: Chengdong Li <chengdongli@tencent.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexey Bayduraev <alexey.v.bayduraev@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: German Gomez <german.gomez@arm.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Riccardo Mancini <rickyman7@gmail.com>
Cc: likexu@tencent.com
Link: https://lore.kernel.org/r/20220517015726.96131-1-chengdongli@tencent.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
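A minimal sketch of the adjustment described (hedged: it assumes branch_stack__printf() already carries a 'callstack' flag; the real hunk may differ):

    u64 nr = sample->branch_stack->nr;

    /* expanding "from -> to" pairs into a call stack yields one extra frame */
    printf("%s: nr:%" PRIu64 "\n",
           callstack ? "... branch callstack" : "... branch stack",
           callstack ? nr + 1 : nr);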
2022-05-21  perf test bpf: Skip test if clang is not present  (Athira Rajeev)

The perf BPF filter test fails in an environment where "clang" is not installed.

Test failure logs:

  <<>>
  42: BPF filter :
  42.1: Basic BPF filtering : Skip
  42.2: BPF pinning : FAILED!
  42.3: BPF prologue generation : FAILED!
  <<>>

Enabling the verbose option provides debug logs which say clang/llvm needs to be installed. Snippet of the verbose logs:

  <<>>
  42.2: BPF pinning :
  --- start ---
  test child forked, pid 61423
  ERROR: unable to find clang.
  Hint: Try to install latest clang/llvm to support BPF. Check your $PATH
  <<logs_here>>
  Failed to compile test case: 'Basic BPF llvm compile'
  Unable to get BPF object, fix kbuild first
  test child finished with -1
  ---- end ----
  BPF filter subtest 2: FAILED!
  <<>>

Here the subtests "BPF pinning" and "BPF prologue generation" failed and the logs show clang/llvm is needed. After installing clang, the testcase passes.

Reason why the subtest fails even though the logs have proper debug information: the main function __test__bpf calls test_llvm__fetch_bpf_obj passing the 4th argument as true (the 4th argument maps to the parameter "force" in test_llvm__fetch_bpf_obj). But this causes test_llvm__fetch_bpf_obj to skip the check for clang/llvm.

Snippet of the code which checks for clang based on the parameter "force" in test_llvm__fetch_bpf_obj:

  <<>>
  if (!force && (!llvm_param.user_set_param &&
  <<>>

Since force is set to "true", the test won't get skipped and fails to compile the test case. The BPF code compilation needs clang, so pass the fourth argument as "false" and also skip the test if the reason for the return is "TEST_SKIP".

After the patch:

  <<>>
  42: BPF filter :
  42.1: Basic BPF filtering : Skip
  42.2: BPF pinning : Skip
  42.3: BPF prologue generation : Skip
  <<>>

Fixes: ba1fae431e74bb42 ("perf test: Add 'perf test BPF'")
Reviewed-by: Kajol Jain <kjain@linux.ibm.com>
Signed-off-by: Athira Jajeev <atrajeev@linux.vnet.ibm.com>
Acked-by: Ian Rogers <irogers@google.com>
Cc: Disha Goel <disgoel@linux.vnet.ibm.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nageswara R Sastry <rnsastry@linux.ibm.com>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lore.kernel.org/r/20220511115438.84032-1-atrajeev@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-05-21  perf test session topology: Fix test to skip the test in guest environment  (Athira Rajeev)

The session topology test fails on the powerpc pSeries platform.

Test logs:

  <<>>
  Session topology : FAILED!
  <<>>

This testcase tests the cpu topology by checking the core_id and socket_id stored in perf_env from the perf session. The data from the perf session is compared with the cpu topology information from "/sys/devices/system/cpu/cpuX/topology", like core_id and physical_package_id. In a virtual environment, details like physical_package_id are restricted from being exposed, hence physical_package_id is set to -1. The testcase fails on such platforms since the socket_id can't be fetched from the topology info.

Skip the testcase on powerpc if physical_package_id returns -1.

Reviewed-by: Kajol Jain <kjain@linux.ibm.com>
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Tested-by: Disha Goel <disgoel@linux.vnet.ibm.com>
Acked-by: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nageswara R Sastry <rnsastry@linux.ibm.com>
Cc: linuxppc-dev@lists.ozlabs.org
Link: https://lore.kernel.org/r/20220511114959.84002-1-atrajeev@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-05-21  perf bench numa: Address compiler error on s390  (Thomas Richter)

The compilation on s390 results in this error:

  # make DEBUG=y bench/numa.o
  ...
  bench/numa.c: In function ‘__bench_numa’:
  bench/numa.c:1749:81: error: ‘%d’ directive output may be truncated writing between 1 and 11 bytes into a region of size between 10 and 20 [-Werror=format-truncation=]
  1749 |  snprintf(tname, sizeof(tname), "process%d:thread%d", p, t);
                                                 ^~
  ...
  bench/numa.c:1749:64: note: directive argument in the range [-2147483647, 2147483646]
  ...
  #

The maximum length of the %d replacement is 11 characters because of the negative sign. Therefore extend the array by two more characters.

Output after:

  # make DEBUG=y bench/numa.o > /dev/null 2>&1; ll bench/numa.o
  -rw-r--r-- 1 root root 418320 May 19 09:11 bench/numa.o
  #

Fixes: 3aff8ba0a4c9c919 ("perf bench numa: Avoid possible truncation when using snprintf()")
Suggested-by: Namhyung Kim <namhyung@gmail.com>
Signed-off-by: Thomas Richter <tmricht@linux.ibm.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Sumanth Korikkar <sumanthk@linux.ibm.com>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Link: https://lore.kernel.org/r/20220520081158.2990006-1-tmricht@linux.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
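For reference, the worst case is easy to size directly (a sketch of the arithmetic only; the actual patch simply grows the existing array by two bytes, and p/t are assumed to be ints):

    /* "%d" can expand to 11 characters ("-2147483648"), so allow for the
     * literal text plus two worst-case numbers. */
    char tname[sizeof("process:thread") + 2 * 11];  /* 15 + 22 = 37 bytes */

    snprintf(tname, sizeof(tname), "process%d:thread%d", p, t);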
2022-05-21  perf test: Avoid shell test description infinite loop  (Ian Rogers)
for_each_shell_test() is already strict in expecting tests to be files and executable. It is sometimes possible when it iterates over all files that it finds one that is executable and lacks a newline character. When this happens the loop never terminates as it doesn't check for EOF. Add the EOF check to make this loop at least bounded by the file size. If the description is returned as NULL then also skip the test. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Marco Elver <elver@google.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Riccardo Mancini <rickyman7@gmail.com> Cc: Sohaib Mohamed <sohaib.amhmd@gmail.com> Cc: Stephane Eranian <eranian@google.com> Link: https://lore.kernel.org/r/20220517204144.645913-1-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
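A sketch of the kind of bound being added (hedged: 'fp' and the surrounding function are assumed, not copied from tests/builtin-test.c):

    int ch;

    /* skip the first (shebang) line, but stop at EOF so an executable file
     * without a trailing newline cannot spin here forever */
    do {
        ch = fgetc(fp);
    } while (ch != EOF && ch != '\n');

    if (ch == EOF)
        return NULL;  /* no description; the caller skips the test */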
2022-05-21  perf regs x86: Fix arch__intr_reg_mask() for the hybrid platform  (Kan Liang)

The X86 specific arch__intr_reg_mask() is to check whether the kernel and hardware can collect XMM registers. But it doesn't work on some hybrid platforms.

Without the patch on ADL-N:

  $ perf record -I?
  available registers: AX BX CX DX SI DI BP SP IP FLAGS CS SS R8 R9 R10 R11 R12 R13 R14 R15

The config of the test event doesn't contain the PMU information. The kernel may fail to initialize it on the correct hybrid PMU and return the wrong non-supported information.

Add the PMU information into the config for the hybrid platform. The same register set is supported among different hybrid PMUs. Checking the first available one is good enough.

With the patch on ADL-N:

  $ perf record -I?
  available registers: AX BX CX DX SI DI BP SP IP FLAGS CS SS R8 R9 R10 R11 R12 R13 R14 R15 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 XMM9 XMM10 XMM11 XMM12 XMM13 XMM14 XMM15

Fixes: 6466ec14aaf44ff1 ("perf regs x86: Add X86 specific arch__intr_reg_mask()")
Reported-by: Ammy Yi <ammy.yi@intel.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Acked-by: Ian Rogers <irogers@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com>
Link: https://lore.kernel.org/r/20220518145125.1494156-1-kan.liang@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
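A sketch of the idea (hedged: the helper and constant names here are assumptions based on the perf tools hybrid support, not copied from the patch):

    struct perf_pmu *pmu;

    if (perf_pmu__has_hybrid()) {
        /* any hybrid PMU will do; they all expose the same register set */
        pmu = list_first_entry(&perf_pmu__hybrid_pmus, typeof(*pmu), hybrid_list);

        /* encode the PMU type in the extended-type bits of attr.config so
         * the kernel opens the probe event on a real hybrid PMU */
        attr.config |= (__u64)pmu->type << PERF_PMU_TYPE_SHIFT;
    }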
2022-05-21perf test: Fix "all PMU test" to skip hv_24x7/hv_gpci tests on powerpcAthira Rajeev
"perf all PMU test" picks the input events from "perf list --raw-dump pmu" list and runs "perf stat -e" for each of the event in the list. In case of powerpc, the PowerVM environment supports events from hv_24x7 and hv_gpci PMU which is of example format like below: - hv_24x7/CPM_ADJUNCT_INST,domain=?,core=?/ - hv_gpci/event,partition_id=?/ The value for "?" needs to be filled in depending on system and respective event. CPM_ADJUNCT_INST needs have core value and domain value. hv_gpci event needs partition_id. Similarly, there are other events for hv_24x7 and hv_gpci having "?" in event format. Hence skip these events on powerpc platform since values like partition_id, domain is specific to system and event. Fixes: 3d5ac9effcc640d5 ("perf test: Workload test of all PMUs") Signed-off-by: Athira Jajeev <atrajeev@linux.vnet.ibm.com> Acked-by: Ian Rogers <irogers@google.com> Cc: Disha Goel <disgoel@linux.vnet.ibm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kajol Jain <kjain@linux.ibm.com> Cc: linuxppc-dev@lists.ozlabs.org Cc: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Nageswara R Sastry <rnsastry@linux.ibm.com> Link: https://lore.kernel.org/r/20220520101236.17249-1-atrajeev@linux.vnet.ibm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-05-21  cifs: set the CREATE_NOT_FILE when opening the directory in use_cached_dir()  (Ronnie Sahlberg)

This enforces that we can only do this for directories and not normal files, or else the server will return an error. This means that we will have to conditionally check if the path refers to a directory or not in all the call-sites where we are unsure. Right now this check is for "", i.e. root. Reviewed-by: Paulo Alcantara (SUSE) <pc@cjr.nz> Reviewed-by: Enzo Matsumiya <ematsumiya@suse.de> Signed-off-by: Ronnie Sahlberg <lsahlber@redhat.com> Signed-off-by: Steve French <stfrench@microsoft.com>
2022-05-21  cifs: check for smb1 in open_cached_dir()  (Ronnie Sahlberg)
Check protocol version in open_cached_dir() and return not supported for SMB1. This allows us to call open_cached_dir() from code that is common to both smb1 and smb2/3 in future patches without having to do this check in the call-site. At the same time, add a check if tcon is valid or not for the same reason. Reviewed-by: Paulo Alcantara (SUSE) <pc@cjr.nz> Reviewed-by: Enzo Matsumiya <ematsumiya@suse.de> Signed-off-by: Ronnie Sahlberg <lsahlber@redhat.com> Signed-off-by: Steve French <stfrench@microsoft.com>
2022-05-21  cifs: move definition of cifs_fattr earlier in cifsglob.h  (Ronnie Sahlberg)

This only moves these definitions to come earlier in the file but does not change the definitions themselves. This is done to reduce the amount of changes in future patches. Reviewed-by: Paulo Alcantara (SUSE) <pc@cjr.nz> Reviewed-by: Enzo Matsumiya <ematsumiya@suse.de> Signed-off-by: Ronnie Sahlberg <lsahlber@redhat.com> Signed-off-by: Steve French <stfrench@microsoft.com>
2022-05-21  mailbox: qcom-ipcc: Log the pending interrupt during resume  (Prasad Sodagudi)

Enable logging of the pending interrupt that triggered device wakeup. This logging information helps to debug IRQs that cause periodic device wakeups by printing the detailed information of pending IPCC interrupts.

Scenario: device wakeup caused by a modem crash

Logs:

  qcom-ipcc mailbox: virq: 182 triggered client-id: 2; signal-id: 2

From the IPCC bindings it can further be understood that the client here is IPCC_CLIENT_MPSS and the signal was IPCC_MPROC_SIGNAL_SMP2P.

Reviewed-by: Manivannan Sadhasivam <mani@kernel.org>
Signed-off-by: Prasad Sodagudi <quic_psodagud@quicinc.com>
Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com>
Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
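A hypothetical sketch of such a resume-path log (register, mask and struct field names here are placeholders, not necessarily the driver's own):

    static int qcom_ipcc_pm_resume(struct device *dev)
    {
        struct qcom_ipcc *ipcc = dev_get_drvdata(dev);
        u32 hwirq;
        int virq;

        hwirq = readl(ipcc->base + IPCC_REG_RECV_ID);  /* pending source, if any */
        if (hwirq == IPCC_NO_PENDING_IRQ)
            return 0;

        virq = irq_find_mapping(ipcc->irq_domain, hwirq);
        dev_info(dev, "virq: %d triggered client-id: %lu; signal-id: %lu\n", virq,
                 FIELD_GET(IPCC_CLIENT_ID_MASK, hwirq),
                 FIELD_GET(IPCC_SIGNAL_ID_MASK, hwirq));
        return 0;
    }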