2021-07-01  tools arch x86: Sync the msr-index.h copy with the kernel sources  (Arnaldo Carvalho de Melo)

To pick up the changes from these csets:

    1348924ba8169f35 ("x86/msr: Define new bits in TSX_FORCE_ABORT MSR")

which cause no changes to tooling:

    $ tools/perf/trace/beauty/tracepoints/x86_msr.sh > before
    $ cp arch/x86/include/asm/msr-index.h tools/arch/x86/include/asm/msr-index.h
    $ tools/perf/trace/beauty/tracepoints/x86_msr.sh > after
    $ diff -u before after
    $

This just silences this perf build warning:

    Warning: Kernel ABI header at 'tools/arch/x86/include/asm/msr-index.h' differs from latest version at 'arch/x86/include/asm/msr-index.h'
    diff -u tools/arch/x86/include/asm/msr-index.h arch/x86/include/asm/msr-index.h

Cc: Borislav Petkov <bp@suse.de>
Cc: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

2021-07-01  perf arm-spe: Don't wait for PERF_RECORD_EXIT event  (Leo Yan)

When decoding an Arm SPE trace, the code waits for the PERF_RECORD_EXIT
event (the last perf event) before processing trace data. This is
needless and can even cause logic errors, e.g. it might fail to
correlate perf events with Arm SPE events correctly. So remove the
condition check for the PERF_RECORD_EXIT event.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
Reviewed-by: James Clark <james.clark@arm.com>
Tested-by: James Clark <james.clark@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Al Grant <Al.Grant@arm.com>
Cc: Dave Martin <Dave.Martin@arm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: John Garry <john.garry@huawei.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210519071939.1598923-6-leo.yan@linaro.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

2021-07-01  perf arm-spe: Bail out if the trace is later than perf event  (Leo Yan)

A record in the Arm SPE trace can be later than a perf event and vice
versa, so the perf events and the synthesized Arm SPE events need to be
correlated and processed in the correct time order.

To achieve this ordering, reverse the flow: first call arm_spe_sample()
and then call arm_spe_decode(). By comparing timestamp values, when a
perf event comes earlier than the Arm SPE trace data, bail out from the
decoding loop; the last record is pushed onto the auxtrace stack and
its sample generation is deferred. To track time, the timestamp is
updated for every new record.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
Reviewed-by: James Clark <james.clark@arm.com>
Tested-by: James Clark <james.clark@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Al Grant <Al.Grant@arm.com>
Cc: Dave Martin <Dave.Martin@arm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: John Garry <john.garry@huawei.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/r/20210519071939.1598923-5-leo.yan@linaro.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

2021-07-01  perf arm-spe: Assign kernel time to synthesized event  (Leo Yan)

The current code assigns the raw arch timer counter value to the
samples synthesized from the Arm SPE trace, so the samples carry the
raw counter value rather than kernel time. Fix this by converting the
timer counter to kernel time and assigning that to the sample
timestamp.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
Reviewed-by: James Clark <james.clark@arm.com>
Tested-by: James Clark <james.clark@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Al Grant <Al.Grant@arm.com>
Cc: Dave Martin <Dave.Martin@arm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: John Garry <john.garry@huawei.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/r/20210519071939.1598923-4-leo.yan@linaro.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

2021-07-01  perf arm-spe: Convert event kernel time to counter value  (Leo Yan)

When handling a perf event, the Arm SPE decoder needs to decide whether
the perf event is earlier or later than the samples from the Arm SPE
trace data; to do the comparison, both must use the same time unit.

Convert the event's kernel time to the arch timer's counter value, so
that it can be compared with the counter value contained in an Arm SPE
Timestamp packet.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
Reviewed-by: James Clark <james.clark@arm.com>
Tested-by: James Clark <james.clark@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Al Grant <Al.Grant@arm.com>
Cc: Dave Martin <Dave.Martin@arm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: John Garry <john.garry@huawei.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/r/20210519071939.1598923-3-leo.yan@linaro.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

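Both directions of this conversion use the clock helpers perf already
provides in tools/perf/util/tsc.c. A minimal sketch of the two
directions, assuming the spe->tc field this series introduces; the
wrapper function names are illustrative, not from the patches:

    #include "tsc.h"    /* tools/perf/util/tsc.h */

    /* Arm SPE Timestamp packet counter -> kernel time in ns */
    static u64 spe_counter_to_ns(struct arm_spe *spe, u64 counter)
    {
            return tsc_to_perf_time(counter, &spe->tc);
    }

    /* perf event kernel time in ns -> arch timer counter */
    static u64 spe_ns_to_counter(struct arm_spe *spe, u64 ns)
    {
            return perf_time_to_tsc(ns, &spe->tc);
    }
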
2021-07-01  perf arm-spe: Save clock parameters from TIME_CONV event  (Leo Yan)

During the recording phase, the "perf record" tool synthesizes the
event PERF_RECORD_TIME_CONV for the hardware clock parameters and saves
the event into the data file. Afterwards, when processing the data
file, the TIME_CONV event is processed very early and stored into the
session context.

Extract these parameters from the session context and save them into
the structure "spe->tc" of type perf_tsc_conversion, so that the
parameters are ready for conversion between clock counter and
timestamp.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
Reviewed-by: James Clark <james.clark@arm.com>
Tested-by: James Clark <james.clark@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Al Grant <Al.Grant@arm.com>
Cc: Dave Martin <Dave.Martin@arm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: John Garry <john.garry@huawei.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/r/20210519071939.1598923-2-leo.yan@linaro.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

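A hedged sketch of the extraction step described above. The basic
field names follow the PERF_RECORD_TIME_CONV layout in
tools/lib/perf/include/perf/event.h; the helper name is illustrative,
and the real code may also need the extended fields added for
short-counter clocks:

    static void arm_spe_save_time_conv(struct arm_spe *spe,
                                       union perf_event *event)
    {
            struct perf_record_time_conv *tc = &event->time_conv;

            /* Parameters for counter <-> ns conversion */
            spe->tc.time_shift = tc->time_shift;
            spe->tc.time_mult  = tc->time_mult;
            spe->tc.time_zero  = tc->time_zero;
    }
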
2021-07-01  perf cs-etm: Remove callback cs_etm_find_snapshot()  (Leo Yan)

The callback cs_etm_find_snapshot() is invoked for snapshot mode. Its
main purpose is to find the correct AUX trace data and return "head"
and "old" (call "old" the old head) to the caller; the caller
__auxtrace_mmap__read() uses these two pointers to decide the AUX
trace data size.

Remove cs_etm_find_snapshot() for the following reasons:

- The first thing cs_etm_find_snapshot() does is check whether the head
  has wrapped around; if not, it bails out directly. This check is
  pointless, because the "head" and "old" pointers are both
  monotonically increasing and never wrap around.

- cs_etm_find_snapshot() adjusts the "head" and "old" pointers on the
  assumption that the AUX ring buffer is fully filled with hardware
  trace data, so it always subtracts the difference "mm->len" from
  "head" to get "old". If the snapshot is taken over a very short
  interval, the tracers only fill a small chunk of trace data into the
  AUX ring buffer; in that case it is wrong to copy the whole AUX ring
  buffer to the perf file.

- As the "head" and "old" pointers are monotonically increasing, the
  function __auxtrace_mmap__read() handles them properly. It
  calculates the remainders for the two pointers, and the size is
  clamped to never exceed "snapshot_size". We can simply rely on
  __auxtrace_mmap__read() to calculate the correct result for data
  copying; an Arm CoreSight specific callback is not necessary.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
Reviewed-by: James Clark <james.clark@arm.com>
Tested-by: James Clark <james.clark@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Daniel Kiss <daniel.kiss@arm.com>
Cc: Denis Nikitin <denik@google.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: John Garry <john.garry@huawei.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Mike Leach <mike.leach@linaro.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Suzuki Poulouse <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: coresight@lists.linaro.org
Link: http://lore.kernel.org/lkml/20210701093537.90759-3-leo.yan@linaro.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

2021-07-01  perf bpf_counter: Move common functions to bpf_counter.h  (Namhyung Kim)

Some helper functions will be used for cgroup counting too. Move them
to a header file for sharing.

Committer notes:

Fix the build on older systems with:

    -       struct bpf_map_info map_info = {0};
    +       struct bpf_map_info map_info = { .id = 0, };

This wasn't breaking the build on such systems, as bpf_counter.c isn't
built there due to:

    tools/perf/util/Build: perf-$(CONFIG_PERF_BPF_SKEL) += bpf_counter.o

The bpf_counter.h file, on the other hand, is included from places that
are built everywhere.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20210625071826.608504-4-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

2021-07-01  Merge tag 'fs_for_v5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs  (Linus Torvalds)

Pull misc fs updates from Jan Kara:
 "The new quotactl_fd() syscall (remake of quotactl_path() syscall that
  got introduced & disabled in 5.13 cycle), and couple of udf, reiserfs,
  isofs, and writeback fixes and cleanups"

* tag 'fs_for_v5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs:
  writeback: fix obtain a reference to a freeing memcg css
  quota: remove unnecessary oom message
  isofs: remove redundant continue statement
  quota: Wire up quotactl_fd syscall
  quota: Change quotactl_path() systcall to an fd-based one
  reiserfs: Remove unneed check in reiserfs_write_full_page()
  udf: Fix NULL pointer dereference in udf_symlink function
  reiserfs: add check for invalid 1st journal block

2021-07-01  tracing: Resize tgid_map to pid_max, not PID_MAX_DEFAULT  (Paul Burton)

Currently tgid_map is sized at PID_MAX_DEFAULT entries, which means
that on systems where pid_max is configured higher than PID_MAX_DEFAULT
the ftrace record-tgid option doesn't work so well. Any tasks with PIDs
higher than PID_MAX_DEFAULT are simply not recorded in tgid_map, and
don't show up in the saved_tgids file.

In particular, since systemd v243 & above configure pid_max to its
highest possible 1<<22 value by default on 64 bit systems, this renders
the record-tgid option of little use.

Increase the size of tgid_map to the configured pid_max instead,
allowing it to cover the full range of PIDs up to the maximum value of
PID_MAX_LIMIT if the system is configured that way.

On 64 bit systems with pid_max == PID_MAX_LIMIT this will increase the
size of tgid_map from 256KiB to 16MiB. Whilst this 64x increase in
memory overhead sounds significant, 64 bit systems are presumably best
placed to accommodate it, and since tgid_map is only allocated when the
record-tgid option is actually used, presumably the user would rather
it spends sufficient memory to actually record the tgids they expect.

The size of tgid_map could also increase for CONFIG_BASE_SMALL=y
configurations, but these seem unlikely to be systems upon which people
are both configuring a large pid_max and running ftrace with
record-tgid anyway.

Of note is that we only allocate tgid_map once, the first time that the
record-tgid option is enabled. Therefore its size is only set once, to
the value of pid_max at the time the record-tgid option is first
enabled. If a user increases pid_max after that point, the saved_tgids
file will not contain entries for any tasks with pids beyond the
earlier value of pid_max.

Link: https://lkml.kernel.org/r/20210701172407.889626-2-paulburton@google.com
Fixes: d914ba37d714 ("tracing: Add support for recording tgid of tasks")
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Joel Fernandes <joelaf@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Paul Burton <paulburton@google.com>
[ Fixed comment coding style ]
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

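A minimal sketch of the allocation change; the helper name here is
illustrative (the real code lives in kernel/trace/trace.c and
serializes the first allocation against concurrent enablers), while
pid_max is the global from include/linux/threads.h:

    static int *tgid_map;        /* PID -> TGID */
    static size_t tgid_map_max;  /* fixed at first enable of record-tgid */

    static int trace_alloc_tgid_map(void)
    {
            if (tgid_map)
                    return 0;

            /* Size by the current pid_max instead of PID_MAX_DEFAULT */
            tgid_map_max = pid_max;
            tgid_map = kvcalloc(tgid_map_max + 1, sizeof(*tgid_map),
                                GFP_KERNEL);
            return tgid_map ? 0 : -ENOMEM;
    }
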
2021-07-01  ipc/util.c: use binary search for max_idx  (Manfred Spraul)

If semctl(), msgctl() and shmctl() are called with IPC_INFO, SEM_INFO,
MSG_INFO or SHM_INFO, then the return value is the index of the highest
used index in the kernel's internal array recording information about
all SysV objects of the requested type for the current namespace. (This
information can be used with repeated ..._STAT or ..._STAT_ANY
operations to obtain information about all SysV objects on the system.)

There is a cache for this value. But when the cache needs to be
updated, the highest used index is determined by looping over all
possible values. With the introduction of IPCMNI_EXTEND_SHIFT, this
could be a loop over 16 million entries. And due to
/proc/sys/kernel/*next_id, the index values do not need to be
consecutive.

With <write 16000000 to msg_next_id>, msgget(), msgctl(,IPC_RMID) in a
loop, I have observed a performance increase of around a factor of
13000.

As there is no get_last() function for idr structures, implement a
"get_last()" using a binary search. As far as I can see, ipc is the
only user that needs get_last(), thus implement it in ipc/util.c and
not in a central location.

[akpm@linux-foundation.org: tweak comment, fix typo]

Link: https://lkml.kernel.org/r/20210425075208.11777-2-manfred@colorfullife.com
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Acked-by: Davidlohr Bueso <dbueso@suse.de>
Cc: <1vier1@web.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

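A hedged sketch of such a binary search over an idr; the helper name is
illustrative, not the exact one from ipc/util.c. It relies on
idr_get_next(), which returns the next allocated entry at or above a
given index, so the highest used index is found in O(log(limit)) probes
even when indices are sparse:

    /*
     * Return the highest allocated index in @idr, assuming all
     * indices are below @limit, or -1 if the idr is empty.
     */
    static int idr_get_last_idx(struct idr *idr, int limit)
    {
            int lo = 0, hi = limit - 1, last = -1;

            while (lo <= hi) {
                    int mid = lo + (hi - lo) / 2;
                    int nxt = mid;

                    if (idr_get_next(idr, &nxt)) {
                            last = nxt;   /* entry found at nxt >= mid */
                            lo = nxt + 1; /* keep searching above it */
                    } else {
                            hi = mid - 1; /* nothing at or above mid */
                    }
            }
            return last;
    }
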
2021-07-01  ipc/sem.c: use READ_ONCE()/WRITE_ONCE() for use_global_lock  (Manfred Spraul)

The patch solves three weaknesses in ipc/sem.c:

1) The initial read of use_global_lock in sem_lock() is an intentional
   race. KCSAN detects these accesses and prints a warning.

2) The code assumes that plain C reads/writes are not mangled by the
   CPU or the compiler.

3) The comment in sysvipc_sem_proc_show() was hard to understand: the
   rest of the comments in ipc/sem.c speak about sem_perm.lock, and
   suddenly this function speaks about ipc_lock_object().

To solve 1) and 2), use READ_ONCE()/WRITE_ONCE(). Plain C reads are
used in code that owns sma->sem_perm.lock. The comment is updated to
solve 3).

[manfred@colorfullife.com: use READ_ONCE()/WRITE_ONCE() for use_global_lock]

Link: https://lkml.kernel.org/r/20210627161919.3196-3-manfred@colorfullife.com
Link: https://lkml.kernel.org/r/20210514175319.12195-1-manfred@colorfullife.com
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Reviewed-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Davidlohr Bueso <dbueso@suse.de>
Cc: <1vier1@web.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

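A minimal sketch of the annotation pattern, using the field names from
ipc/sem.c; the surrounding locking logic is elided:

    /* sem_lock() fast path: intentional racy read, now annotated */
    if (!READ_ONCE(sma->use_global_lock)) {
            /* ... try the fine-grained per-semaphore lock ... */
    }

    /* Update side, paired WRITE_ONCE() so the store cannot be torn */
    WRITE_ONCE(sma->use_global_lock, USE_GLOBAL_LOCK_HYSTERESIS);
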
2021-07-01  ipc: use kmalloc for msg_queue and shmid_kernel  (Vasily Averin)

msg_queue and shmid_kernel are quite small objects, no need to use
kvmalloc for them. mhocko@: "Both of them are 256B on most 64b
systems."

Previously these objects were allocated via ipc_alloc()/ipc_rcu_alloc(),
a common function for several ipc objects that had a kvmalloc call
inside. Later this function went away and was finally replaced by a
direct kvmalloc call, so now we can use the more suitable
kmalloc/kfree for them.

Link: https://lkml.kernel.org/r/0d0b6c9b-8af3-29d8-34e2-a565c53780f3@virtuozzo.com
Reported-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Vasily Averin <vvs@virtuozzo.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Acked-by: Roman Gushchin <guro@fb.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Dmitry Safonov <0x7f454c46@gmail.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  ipc sem: use kvmalloc for sem_undo allocation  (Vasily Averin)

Patch series "ipc: allocations cleanup", v2.

Some ipc objects use the wrong allocation functions: small objects can
use kmalloc(), and vice versa, potentially large objects can use
kvmalloc().

This patch (of 2):

The size of sem_undo can exceed one page, and with the maximum possible
nsems = 32000 it can grow up to 64Kb. Let's switch its allocation to
kvmalloc to avoid user-triggered disruptive actions, like the OOM
killer, in case of high-order memory shortage.

User-triggerable high-order allocations are quite a problem on heavily
fragmented systems. They can be a DoS vector.

Link: https://lkml.kernel.org/r/ebc3ac79-3190-520d-81ce-22ad194986ec@virtuozzo.com
Link: https://lkml.kernel.org/r/a6354fd9-2d55-2e63-dd4d-fa7dc1d11134@virtuozzo.com
Signed-off-by: Vasily Averin <vvs@virtuozzo.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Acked-by: Roman Gushchin <guro@fb.com>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Dmitry Safonov <0x7f454c46@gmail.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Manfred Spraul <manfred@colorfullife.com>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  lib/decompressors: remove set but not used variabled 'level'  (Yu Kuai)

Fixes the gcc '-Wunused-but-set-variable' warning:

    lib/decompress_unlzo.c:46:5: warning: variable `level' set but not
    used [-Wunused-but-set-variable]

It is never used and so can be removed.

[akpm@linux-foundation.org: warning: value computed is not used]

Link: https://lkml.kernel.org/r/20210514062050.3532344-1-yukuai3@huawei.com
Fixes: 7dd65feb6c60 ("lib: add support for LZO-compressed kernels")
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  selftests/vm/pkeys: exercise x86 XSAVE init state  (Dave Hansen)

On x86, there is a set of instructions used to save and restore
register state collectively known as the XSAVE architecture. There are
about a dozen different features managed with XSAVE. The protection
keys register, PKRU, is one of those features.

The hardware optimizes XSAVE by tracking when the state has not changed
from its initial (init) state. In this case, it can avoid the cost of
writing state to memory (it would usually just be a bunch of 0's).
When the pkey register is 0x0, the hardware can optionally track the
register as being in the init state (and optimize away the writes).
AMD CPUs do this more aggressively compared to Intel.

On x86, PKRU is rarely in its (very permissive) init state. Instead,
the value defaults to something very restrictive. It is not surprising
that bugs have popped up in the rare cases when PKRU reaches its init
state.

Add a protection key selftest which gets the protection keys register
into its init state in a way that should work on Intel and AMD. Then,
do a bunch of pkey register reads to watch for inadvertent changes.

This adds "-mxsave" to CFLAGS for all the x86 vm selftests in order to
allow use of the XSAVE instruction __builtin functions. This will make
the builtins available on all of the vm selftests, but is expected to
be harmless.

Link: https://lkml.kernel.org/r/20210611164202.1849B712@viggo.jf.intel.com
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Ram Pai <linuxram@us.ibm.com>
Cc: Sandipan Das <sandipan@linux.ibm.com>
Cc: Florian Weimer <fweimer@redhat.com>
Cc: "Desnes A. Nunes do Rosario" <desnesn@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Michal Suchanek <msuchanek@suse.de>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  selftests/vm/pkeys: refill shadow register after implicit kernel write  (Dave Hansen)

The pkey test code keeps a "shadow" of the pkey register around. This
ensures that any bugs which might write to the register can be caught
more quickly.

Generally, userspace has a good idea when the kernel is going to write
to the register. For instance, alloc_pkey() is passed a permission
mask. The caller of alloc_pkey() can update the shadow based on the
return value and the mask.

But, the kernel can also modify the pkey register in a more sneaky way.
For mprotect(PROT_EXEC) mappings, the kernel will allocate a pkey and
write the pkey register to create an execute-only mapping. The kernel
never tells userspace what key it uses for this.

This can cause the test to fail with messages like:

    protection_keys_64.2: pkey-helpers.h:132: _read_pkey_reg: Assertion `pkey_reg == shadow_pkey_reg' failed.

because the shadow was not updated with the new kernel-set value.

Forcibly update the shadow value immediately after an mprotect().

Link: https://lkml.kernel.org/r/20210611164200.EF76AB73@viggo.jf.intel.com
Fixes: 6af17cf89e99 ("x86/pkeys/selftests: Add PROT_EXEC test")
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Ram Pai <linuxram@us.ibm.com>
Cc: Sandipan Das <sandipan@linux.ibm.com>
Cc: Florian Weimer <fweimer@redhat.com>
Cc: "Desnes A. Nunes do Rosario" <desnesn@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Michal Suchanek <msuchanek@suse.de>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

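A minimal sketch of the fix, assuming the helper and variable names
from the selftest (pkey-helpers.h / protection_keys.c); treat it as
illustrative:

    /* The kernel may allocate a pkey behind our back for PROT_EXEC: */
    ret = mprotect(ptr, size, PROT_EXEC);
    pkey_assert(!ret);

    /* Resync the shadow with whatever the kernel implicitly wrote: */
    shadow_pkey_reg = __read_pkey_reg();
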
2021-07-01  selftests/vm/pkeys: handle negative sys_pkey_alloc() return code  (Dave Hansen)

The alloc_pkey() selftest function wraps the sys_pkey_alloc() system
call. On success, it updates its "shadow" register value because
sys_pkey_alloc() updates the real register.

But the success check is wrong: it considers any non-zero return code
to indicate success and assumes the pkey register was modified. This
fails to take negative return codes into account.

Consider only a positive return value as a successful call.

Link: https://lkml.kernel.org/r/20210611164157.87AB4246@viggo.jf.intel.com
Fixes: 5f23f6d082a9 ("x86/pkeys: Add self-tests")
Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Tested-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Ram Pai <linuxram@us.ibm.com>
Cc: Sandipan Das <sandipan@linux.ibm.com>
Cc: Florian Weimer <fweimer@redhat.com>
Cc: "Desnes A. Nunes do Rosario" <desnesn@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Michal Suchanek <msuchanek@suse.de>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

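A sketch of the corrected check inside alloc_pkey(); set_pkey_bits()
follows the selftest's helper naming and is an assumption here:

    int ret = sys_pkey_alloc(0, init_val);

    /*
     * pkey_alloc() returns the new pkey (positive) on success and a
     * negative errno on failure; only success touches the register.
     */
    if (ret > 0)
            shadow_pkey_reg = set_pkey_bits(shadow_pkey_reg, ret, init_val);
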
2021-07-01  selftests/vm/pkeys: fix alloc_random_pkey() to make it really, really random  (Dave Hansen)

Patch series "selftests/vm/pkeys: Bug fixes and a new test".

There has been a lot of activity on the x86 front around the XSAVE
architecture, which is used to context-switch processor state (among
other things).

In addition, AMD has recently joined the protection keys club by adding
processor support for PKU. The AMD implementation helped uncover a
kernel bug around the PKRU "init state", which actually applied to
Intel's implementation but was just harder to hit. This series adds a
test which is expected to help find this class of bug both on AMD and
Intel. All the work around pkeys on x86 also uncovered a few bugs in
the selftest.

This patch (of 4):

The "random" pkey allocation code currently does the good old:

    srand((unsigned int)time(NULL));

*But*, it unfortunately does this on every random pkey allocation.

There may be thousands of these a second. time() has a one second
resolution. So, each time alloc_random_pkey() is called, the PRNG is
*RESET* to time(). This is nasty. Normally, if you do:

    srand(<ANYTHING>);
    foo = rand();
    bar = rand();

You'll be quite guaranteed that 'foo' and 'bar' are different. But, if
you do:

    srand(1);
    foo = rand();
    srand(1);
    bar = rand();

You are quite guaranteed that 'foo' and 'bar' are the *SAME*. The
recent "fix" effectively forced the test case to use the same "random"
pkey for the whole test, unless the test run crossed a second boundary.

Only run srand() once at program startup. This explains some very odd
and persistent test failures I've been seeing.

Link: https://lkml.kernel.org/r/20210611164153.91B76FB8@viggo.jf.intel.com
Link: https://lkml.kernel.org/r/20210611164155.192D00FF@viggo.jf.intel.com
Fixes: 6e373263ce07 ("selftests/vm/pkeys: fix alloc_random_pkey() to make it really random")
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Ram Pai <linuxram@us.ibm.com>
Cc: Sandipan Das <sandipan@linux.ibm.com>
Cc: Florian Weimer <fweimer@redhat.com>
Cc: "Desnes A. Nunes do Rosario" <desnesn@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Michal Suchanek <msuchanek@suse.de>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  kcov: add __no_sanitize_coverage to fix noinstr for all architectures  (Marco Elver)

Until now no compiler supported an attribute to disable coverage
instrumentation as used by KCOV.

To work around this limitation on x86, noinstr functions have their
coverage instrumentation turned into nops by objtool. However, this
solution doesn't scale automatically to other architectures, such as
arm64, which are migrating to use the generic entry code.

Clang [1] and GCC [2] have added support for the attribute recently.

[1] https://github.com/llvm/llvm-project/commit/280333021e9550d80f5c1152a34e33e81df1e178
[2] https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=cec4d4a6782c9bd8d071839c50a239c49caca689

The changes will appear in Clang 13 and GCC 12.

Add __no_sanitize_coverage for both compilers, and add it to noinstr.

Note: In the Clang case, __has_feature(coverage_sanitizer) is only true
if the feature is enabled, and therefore we do not require an
additional defined(CONFIG_KCOV) (like in the GCC case where
__has_attribute(..) is always true) to avoid adding redundant
attributes to functions if KCOV is off. That being said, compilers that
support the attribute will not generate errors/warnings if the
attribute is redundantly used; however, where possible let's avoid it
as it reduces preprocessed code size and associated compile-time
overheads.

[elver@google.com: Implement __has_feature(coverage_sanitizer) in Clang]
  Link: https://lkml.kernel.org/r/20210527162655.3246381-1-elver@google.com
[elver@google.com: add comment explaining __has_feature() in Clang]
  Link: https://lkml.kernel.org/r/20210527194448.3470080-1-elver@google.com

Link: https://lkml.kernel.org/r/20210525175819.699786-1-elver@google.com
Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Miguel Ojeda <ojeda@kernel.org>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Will Deacon <will@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Cc: Arvind Sankar <nivedita@alum.mit.edu>
Cc: Masahiro Yamada <masahiroy@kernel.org>
Cc: Sami Tolvanen <samitolvanen@google.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

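A hedged sketch of the resulting compiler-specific definitions; the
attribute spellings match the Clang and GCC commits referenced above,
while the exact kernel header layout (compiler-clang.h /
compiler-gcc.h) may differ in detail:

    /* Clang: the feature is only reported when actually enabled */
    #if __has_feature(coverage_sanitizer)
    #define __no_sanitize_coverage __attribute__((no_sanitize("coverage")))
    #else
    #define __no_sanitize_coverage
    #endif

    /* GCC: the attribute can exist even when KCOV is disabled */
    #if defined(CONFIG_KCOV) && __has_attribute(__no_sanitize_coverage__)
    #define __no_sanitize_coverage __attribute__((no_sanitize_coverage))
    #else
    #define __no_sanitize_coverage
    #endif
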
2021-07-01  exec: remove checks in __register_binfmt()  (Alexey Dobriyan)

Delete the NULL check; all callers pass a valid pointer.

Delete the ->load_binary check -- failure to provide the hook in a
custom module will be very noticeable at the very first execve call.

Link: https://lkml.kernel.org/r/YK1Gy1qXaLAR+tPl@localhost.localdomain
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  x86: signal: don't do sas_ss_reset() until we are certain that sigframe won't be abandoned  (Al Viro)

Currently we handle SS_AUTODISARM as soon as we have stored the
altstack settings into the sigframe - that's the point when we have set
things up for an eventual sigreturn to restore the old settings. And if
we manage to set the sigframe up (we are not done with that yet),
everything's fine. However, in case of failure we end up with the
sigframe-to-be abandoned and SIGSEGV force-delivered. And in that case
we end up with inconsistent rules - late failures have the altstack
reset, early ones do not.

It's trivial to get consistent behaviour - just handle SS_AUTODISARM
once we have set the sigframe up and are committed to entering the
handler, i.e. in signal_delivered().

Link: https://lore.kernel.org/lkml/20200404170604.GN23230@ZenIV.linux.org.uk/
Link: https://github.com/ClangBuiltLinux/linux/issues/876
Link: https://lkml.kernel.org/r/20210422230846.1756380-1-ndesaulniers@google.com
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Tested-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  hfsplus: report create_date to kstat.btime  (Chung-Chiang Cheng)

The create_date field of an inode in hfsplus corresponds to
kstat.btime and can be reported in statx.

Link: https://lkml.kernel.org/r/20210416172147.8736-1-cccheng@synology.com
Signed-off-by: Chung-Chiang Cheng <cccheng@synology.com>
Reviewed-by: Viacheslav Dubeyko <slava@dubeyko.com>
Cc: Christian Brauner <christian.brauner@ubuntu.com>
Cc: James Morris <jamorris@linux.microsoft.com>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  hfsplus: remove unnecessary oom message  (Zhen Lei)

Fixes the scripts/checkpatch.pl warning:

    WARNING: Possible unnecessary 'out of memory' message

Removing it can help us save a bit of memory.

Link: https://lkml.kernel.org/r/20210617084944.1279-1-thunder.leizhen@huawei.com
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  nilfs2: remove redundant continue statement in a while-loop  (Colin Ian King)

The continue statement at the end of the while-loop is redundant;
remove it.

Addresses-Coverity: ("Continue has no effect")
Link: https://lkml.kernel.org/r/20210621100519.10257-1-colin.king@canonical.com
Link: https://lkml.kernel.org/r/1624557664-17159-1-git-send-email-konishi.ryusuke@gmail.com
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Ryusuke Konishi <konishi.ryusuke@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  kprobes: remove duplicated strong free_insn_page in x86 and s390  (Barry Song)

free_insn_page() in x86 and s390 is the same as the common weak
function in kernel/kprobes.c. Plus, the comment "Recover page to RW
mode before releasing it" in x86 no longer makes sense there, since
resetting the mapping is done by common code in vfree(), called from
module_memfree().

So drop these two duplicated strong functions and the related comment,
then mark the common one in kernel/kprobes.c strong.

Link: https://lkml.kernel.org/r/20210608065736.32656-1-song.bao.hua@hisilicon.com
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: "Naveen N. Rao" <naveen.n.rao@linux.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

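The resulting common definition is tiny; a before/after sketch of
kernel/kprobes.c, matching the commit's description:

    /* Before: overridable default, duplicated verbatim by x86 and s390 */
    void __weak free_insn_page(void *page)
    {
            module_memfree(page);
    }

    /* After: one strong implementation shared by all architectures */
    void free_insn_page(void *page)
    {
            module_memfree(page);
    }
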
2021-07-01  init: print out unknown kernel parameters  (Andrew Halaney)

It is easy to foobar setting a kernel parameter on the command line
without realizing it; there's not much output that you can use to
assess what the kernel did with that parameter by default.

Make it a little more explicit which parameters on the command line
_looked_ like a valid parameter for the kernel, but did not match
anything and ultimately got tossed to init. This is very similar to the
unknown parameter message received when loading a module.

This assumes the parameters are processed in a normal fashion; some
parameters (dyndbg= for example) don't register their parameter with
the rest of the kernel's parameters, and therefore always show up in
this list (and are also given to init - like the rest of this list).

Another example is that BOOT_IMAGE= is highlighted as an offender,
which it technically is, but it is passed by LILO and GRUB, so most
systems will see that complaint.

An example output where "foobared" and "unrecognized" are intentionally
invalid parameters:

    Kernel command line: BOOT_IMAGE=/boot/vmlinuz-5.12-dirty debug log_buf_len=4M foobared unrecognized=foo
    Unknown command line parameters: foobared BOOT_IMAGE=/boot/vmlinuz-5.12-dirty unrecognized=foo

Link: https://lkml.kernel.org/r/20210511211009.42259-1-ahalaney@redhat.com
Signed-off-by: Andrew Halaney <ahalaney@redhat.com>
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Suggested-by: Borislav Petkov <bp@suse.de>
Acked-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  checkpatch: do not complain about positive return values starting with EPOLL  (Guenter Roeck)

checkpatch complains about positive return values of poll functions.
Example:

    WARNING: return of an errno should typically be negative (ie: return -EPOLLIN)
    +               return EPOLLIN;

Poll functions return positive values. The defines for the return
values of poll functions all start with EPOLL, resulting in a number of
false positives. An often used workaround is to assign poll function
return values to variables and return that variable, but that is a
less than perfect solution.

There is no error definition which starts with EPOLL, so it is safe to
omit the warning for return values starting with EPOLL.

Link: https://lkml.kernel.org/r/20210622004334.638680-1-linux@roeck-us.net
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Acked-by: Joe Perches <joe@perches.com>
Cc: Ricardo Ribalda <ribalda@chromium.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

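For context, a typical poll method that used to trip the warning; the
wait queue and flag here are hypothetical, but the positive EPOLL*
return mask is exactly the correct value checkpatch was complaining
about:

    static __poll_t foo_poll(struct file *file, poll_table *wait)
    {
            poll_wait(file, &foo_waitqueue, wait);

            if (foo_data_ready)
                    return EPOLLIN | EPOLLRDNORM;  /* positive and correct */
            return 0;
    }
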
2021-07-01  checkpatch: improve the indented label test  (Joe Perches)

checkpatch identifies a label only when a terminating colon immediately
follows an identifier. Bitfield definitions can appear to be labels, so
ignore any spaces between the identifier's terminating colon and any
digit that may be used to define a bitfield length.

Miscellanea:

o Improve the initial checkpatch comment
o Use the more typical '&&' instead of 'and'
o Require the initial label character to be a non-digit
  (Can't use $Ident here because $Ident allows ## concatenation)
o Use $sline instead of $line to ignore comments
o Use '$sline !~ /.../' instead of '!($line =~ /.../)'

Link: https://lkml.kernel.org/r/b54d673e7cde7de5de0c9ba4dd57dd0858580ca4.camel@perches.com
Signed-off-by: Joe Perches <joe@perches.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Manikishan Ghantasala <manikishanghantasala@gmail.com>
Cc: Alex Elder <elder@ieee.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  checkpatch: scripts/spdxcheck.py now requires python3  (Guenter Roeck)

Since commit d0259c42abff ("spdxcheck.py: Use Python 3"), spdxcheck.py
explicitly expects to run as a python3 script. If "python" still points
to python v2.7 and the script is executed with
"python scripts/spdxcheck.py", the following error may be seen even if
git-python is installed for python3:

    Traceback (most recent call last):
      File "scripts/spdxcheck.py", line 10, in <module>
        import git
    ImportError: No module named git

To fix the problem, check for the existence of python3, check if the
script is executable and not just for its existence, and execute it
directly.

Link: https://lkml.kernel.org/r/20210505211720.447111-1-linux@roeck-us.net
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Cc: Joe Perches <joe@perches.com>
Cc: Bert Vermeulen <bert@biot.com>
Cc: Dwaipayan Ray <dwaipayanray1@gmail.com>
Cc: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  lib/decompress_unlz4.c: correctly handle zero-padding around initrds.  (Dimitri John Ledkov)

The lz4-compatible decompressor is simple. The format is underspecified
and relies on EOF notification to determine when to stop. The initramfs
buffer format [1] explicitly states that it can have an arbitrary
number of zero-padding bytes. Thus, when operating without a fill
function, be extra careful to ensure that sizes less than 4, or
apparently empty chunk sizes, are treated as EOF.

To test this I have created two cpio initrds: first a normal one,
main.cpio, and a second one with just a single /test-file with content
"second", second.cpio. Then I compressed both of them with gzip, and
with lz4 -l. Then I created a padding of 4 bytes
(dd if=/dev/zero of=pad4 bs=1 count=4), to create four test-case
initrds:

    1) main.cpio.gzip + extra.cpio.gzip = pad0.gzip
    2) main.cpio.lz4  + extra.cpio.lz4  = pad0.lz4
    3) main.cpio.gzip + pad4 + extra.cpio.gzip = pad4.gzip
    4) main.cpio.lz4  + pad4 + extra.cpio.lz4  = pad4.lz4

The pad4 test cases replicate the initrd load by grub, as it pads and
aligns every initrd it loads. All of the above boot, however /test-file
was not accessible in the initrd for test case #4, as decoding in the
lz4 decompressor failed. Also, an error message was printed, which is
usually harmless.

With a patched kernel, all of the above test cases now pass, and
/test-file is accessible. This fixes the lz4 initrd decompress warning
on every boot with grub, and more importantly it fixes the inability to
load multiple lz4-compressed initrds with grub. This patch has been
shipping in Ubuntu kernels since January 2021.

[1] ./Documentation/driver-api/early-userspace/buffer-format.rst

BugLink: https://bugs.launchpad.net/bugs/1835660
Link: https://lore.kernel.org/lkml/20210114200256.196589-1-xnox@ubuntu.com/ # v0
Link: https://lkml.kernel.org/r/20210513104831.432975-1-dimitri.ledkov@canonical.com
Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Cc: Kyungsik Lee <kyungsik.lee@lge.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Bongkyu Kim <bongkyu.kim@lge.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Sven Schmidt <4sschmid@informatik.uni-hamburg.de>
Cc: Rajat Asthana <thisisrast7@gmail.com>
Cc: Nick Terrell <terrelln@fb.com>
Cc: Gao Xiang <hsiangkao@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  lz4_decompress: declare LZ4_decompress_safe_withPrefix64k static  (Rajat Asthana)

Declare LZ4_decompress_safe_withPrefix64k as static to fix the sparse
warning:

    > warning: symbol 'LZ4_decompress_safe_withPrefix64k' was not declared.
    > Should it be static?

Link: https://lkml.kernel.org/r/20210511154345.610569-1-thisisrast7@gmail.com
Signed-off-by: Rajat Asthana <thisisrast7@gmail.com>
Reviewed-by: Nick Terrell <terrelln@fb.com>
Cc: Gao Xiang <hsiangkao@redhat.com>
Cc: Joe Perches <joe@perches.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  kernel.h: split out kstrtox() and simple_strtox() to a separate header  (Andy Shevchenko)

kernel.h has been used as a dump for all kinds of stuff for a long
time. Here is an attempt to start cleaning it up by splitting out the
kstrtox() and simple_strtox() helpers.

At the same time, convert users in the header and lib folders to use
the new header. For the time being, include the new header back into
kernel.h to avoid twisted indirect includes for existing users.

[andy.shevchenko@gmail.com: fix documentation references]

Link: https://lkml.kernel.org/r/20210615220003.377901-1-andy.shevchenko@gmail.com
Link: https://lkml.kernel.org/r/20210611185815.44103-1-andriy.shevchenko@linux.intel.com
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Francis Laniel <laniel_francis@privacyrequired.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Kars Mulder <kerneldev@karsmulder.nl>
Cc: Trond Myklebust <trond.myklebust@hammerspace.com>
Cc: Anna Schumaker <anna.schumaker@netapp.com>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Cc: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

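After the split, users can include the dedicated header directly; a
minimal sketch (the parsing helpers keep their existing signatures, the
wrapper function is illustrative):

    #include <linux/kstrtox.h>

    static int parse_threshold(const char *buf, unsigned long *out)
    {
            /* base 0 accepts decimal, 0x-prefixed hex and 0-prefixed octal */
            return kstrtoul(buf, 0, out);
    }
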
2021-07-01  lib/test_string.c: allow module removal  (Matteo Croce)

The test_string module can't be removed because it lacks an exit hook.
Since there is no reason for it to be permanent, add an empty one to
allow module removal.

Link: https://lkml.kernel.org/r/20210616234503.28678-1-mcroce@linux.microsoft.com
Signed-off-by: Matteo Croce <mcroce@microsoft.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

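The fix is the idiomatic empty exit hook; a sketch matching the
commit's description (the function name is illustrative):

    static void __exit string_selftest_exit(void)
    {
            /* nothing to tear down; exists only so rmmod works */
    }
    module_exit(string_selftest_exit);
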
2021-07-01  lib: uninline simple_strtoull()  (Alexey Dobriyan)

Gcc inlines simple_strtoull() too aggressively. Given that all 4
signatures match, everything very efficiently calls or tailcalls into
simple_strtoull():

    ffffffff81da0240 <simple_strtoll>:
    ffffffff81da0240:       80 3f 2d                cmp    BYTE PTR [rdi],0x2d
    ffffffff81da0243:       74 05                   je     ffffffff81da024a <simple_strtoll+0xa>
    ffffffff81da0245:       e9 76 ff ff ff          jmp    simple_strtoull
    ffffffff81da024a:       48 83 c7 01             add    rdi,0x1
    ffffffff81da024e:       e8 6d ff ff ff          call   simple_strtoull
    ffffffff81da0253:       48 f7 d8                neg    rax
    ffffffff81da0256:       c3                      ret

Space savings (on an F34-ish .config):

    add/remove: 0/0 grow/shrink: 1/3 up/down: 52/-313 (-261)
    Function                old     new   delta
    vsscanf                2167    2219     +52
    simple_strtoul           72       2     -70
    simple_strtoll          143      23    -120
    simple_strtol           143      20    -123

Link: https://lkml.kernel.org/r/YMO2zoOQk2eF34tn@localhost.localdomain
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  lib: memscan() fixlet  (Alexey Dobriyan)

The generic version doesn't truncate its second argument to char,
whereas its older brother memchr() does, as do the s390, sparc and i386
assembly versions. Fortunately, no code passes c >= 256.

Link: https://lkml.kernel.org/r/YLv4cCf0t5UPdyK+@localhost.localdomain
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

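A sketch of the generic lib/string.c implementation with the missing
truncation added, mirroring what memchr() and the assembly versions
already do:

    void *memscan(void *addr, int c, size_t size)
    {
            unsigned char *p = addr;

            while (size) {
                    if (*p == (unsigned char)c)  /* truncate c like memchr() */
                            return (void *)p;
                    p++;
                    size--;
            }
            return (void *)p;
    }
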
2021-07-01  lib/mpi: fix spelling mistakes  (Zhen Lei)

Fix some spelling mistakes in comments:

    flaged    ==> flagged
    bufer     ==> buffer
    multipler ==> multiplier
    MULTIPLER ==> MULTIPLIER
    leaset    ==> least
    chnage    ==> change

Link: https://lkml.kernel.org/r/20210604074401.12198-1-thunder.leizhen@huawei.com
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  lib/decompressors: fix spelling mistakes  (Zhen Lei)

Fix some spelling mistakes in comments:

    sentinal     ==> sentinel
    compresed    ==> compressed
    dependeny    ==> dependency
    immediatelly ==> immediately
    dervied      ==> derived
    splitted     ==> split
    nore         ==> not
    independed   ==> independent
    asumed       ==> assumed

Link: https://lkml.kernel.org/r/20210604085656.12257-1-thunder.leizhen@huawei.com
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  lib/math/rational: add Kunit test cases  (Trent Piepho)

Adds a number of test cases that cover a range of possible code paths.

[akpm@linux-foundation.org: remove non-ascii characters, fix whitespace]
[colin.king@canonical.com: fix spelling mistake "demominator" -> "denominator"]

Link: https://lkml.kernel.org/r/20210526085049.6393-1-colin.king@canonical.com
Link: https://lkml.kernel.org/r/20210525144250.214670-2-tpiepho@gmail.com
Signed-off-by: Trent Piepho <tpiepho@gmail.com>
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Daniel Latypov <dlatypov@google.com>
Cc: Oskar Schirmer <oskar@scara.com>
Cc: Yiyuan Guo <yguoaz@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

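A hedged sketch of what one such KUnit case can look like; the suite
and case names here are illustrative, while the real tests live in the
lib/math test sources:

    #include <kunit/test.h>
    #include <linux/rational.h>

    static void rational_test_simple(struct kunit *test)
    {
            unsigned long n, d;

            /* 1230/10 reduces exactly within 8-bit/5-bit limits */
            rational_best_approximation(1230, 10, 255, 31, &n, &d);
            KUNIT_EXPECT_EQ(test, n, 123ul);
            KUNIT_EXPECT_EQ(test, d, 1ul);
    }

    static struct kunit_case rational_test_cases[] = {
            KUNIT_CASE(rational_test_simple),
            {}
    };

    static struct kunit_suite rational_test_suite = {
            .name = "rational-example",
            .test_cases = rational_test_cases,
    };
    kunit_test_suite(rational_test_suite);
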
2021-07-01  lib/math/rational.c: fix divide by zero  (Trent Piepho)

If the input is out of the range of the allowed values, either larger
than the largest value or closer to zero than the smallest non-zero
allowed value, then a division by zero would occur.

In the case of input too large, the division by zero will occur on the
first iteration. The best result (largest allowed value) will be found
by always choosing the semi-convergent and excluding the
denominator-based limit when finding it.

In the case of the input too small, the division by zero will occur on
the second iteration. The numerator-based semi-convergent should not be
calculated, to avoid the division by zero. But the semi-convergent vs
previous convergent test is still needed, which effectively chooses
between 0 (the previous convergent) vs the smallest allowed fraction
(best semi-convergent) as the result.

Link: https://lkml.kernel.org/r/20210525144250.214670-1-tpiepho@gmail.com
Fixes: 323dd2c3ed0 ("lib/math/rational.c: fix possible incorrect result from rational fractions helper")
Signed-off-by: Trent Piepho <tpiepho@gmail.com>
Reported-by: Yiyuan Guo <yguoaz@gmail.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Oskar Schirmer <oskar@scara.com>
Cc: Daniel Latypov <dlatypov@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

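An illustration of the two failure inputs described above, against the
documented rational_best_approximation() signature; the concrete
values are chosen for the example, not taken from the commit:

    unsigned long n, d;

    /* Input above the representable range: previously divided by zero
     * on the first iteration; now returns the largest allowed value,
     * i.e. expect n/d == 255/1 here. */
    rational_best_approximation(1000000, 1, 255, 31, &n, &d);

    /* Input closer to zero than the smallest allowed fraction:
     * previously divided by zero on the second iteration; now picks
     * the closer of 0/1 and 1/31 (here 0/1). */
    rational_best_approximation(1, 1000000, 255, 31, &n, &d);
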
2021-07-01  seq_file: drop unused *_escape_mem_ascii()  (Andy Shevchenko)

There are no more users of seq_escape_mem_ascii(), nor of the
string_escape_mem_ascii() it calls. Remove them for good.

Link: https://lkml.kernel.org/r/20210504180819.73127-16-andriy.shevchenko@linux.intel.com
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  nfsd: avoid non-flexible API in seq_quote_mem()  (Andy Shevchenko)

seq_escape_mem_ascii() is completely inflexible and shouldn't be used.
Replace it with a proper call to seq_escape_mem().

Link: https://lkml.kernel.org/r/20210504180819.73127-15-andriy.shevchenko@linux.intel.com
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  seq_file: convert seq_escape() to use seq_escape_str()  (Andy Shevchenko)

Convert seq_escape() to use seq_escape_str() rather than open coding
it.

Note, for now we leave it as an exported symbol due to some old code
that can't tolerate ctype.h being (indirectly) included.

Link: https://lkml.kernel.org/r/20210504180819.73127-14-andriy.shevchenko@linux.intel.com
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  seq_file: add seq_escape_str() as replica of string_escape_str()  (Andy Shevchenko)

In some cases we want to escape characters from NUL-terminated strings.
Add seq_escape_str() as a replica of string_escape_str() for that.

Link: https://lkml.kernel.org/r/20210504180819.73127-13-andriy.shevchenko@linux.intel.com
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  seq_file: introduce seq_escape_mem()  (Andy Shevchenko)

Introduce seq_escape_mem() to allow users to pass additional parameters
to string_escape_mem().

Link: https://lkml.kernel.org/r/20210504180819.73127-12-andriy.shevchenko@linux.intel.com
Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

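A hedged sketch of how the pieces of this seq_file series fit together;
the prototypes follow the commit messages, though the exact
declarations in seq_file.h may differ slightly:

    /* New primitive: escape @len bytes of @src into the seq_file,
     * forwarding @flags and @only to string_escape_mem(). */
    void seq_escape_mem(struct seq_file *m, const char *src, size_t len,
                        unsigned int flags, const char *only);

    /* Convenience wrapper for NUL-terminated strings */
    static inline void seq_escape_str(struct seq_file *m, const char *src,
                                      unsigned int flags, const char *only)
    {
            seq_escape_mem(m, src, strlen(src), flags, only);
    }
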
2021-07-01  MAINTAINERS: add myself as designated reviewer for generic string library  (Andy Shevchenko)

Add myself as designated reviewer for the generic string library.

Link: https://lkml.kernel.org/r/20210504180819.73127-11-andriy.shevchenko@linux.intel.com
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  lib/test-string_helpers: add test cases for new features  (Andy Shevchenko)

string_escape_mem() has gained new flags and hence new features. Add
test cases for them.

Link: https://lkml.kernel.org/r/20210504180819.73127-10-andriy.shevchenko@linux.intel.com
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  lib/test-string_helpers: get rid of trailing comma in terminators  (Andy Shevchenko)

Terminators by definition shouldn't accept anything after them. Make
them robust by removing the trailing commas.

Link: https://lkml.kernel.org/r/20210504180819.73127-9-andriy.shevchenko@linux.intel.com
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  lib/test-string_helpers: print flags in hexadecimal format  (Andy Shevchenko)

Since flags are bitmapped, it's better to print them in hexadecimal
format.

Link: https://lkml.kernel.org/r/20210504180819.73127-8-andriy.shevchenko@linux.intel.com
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2021-07-01  lib/string_helpers: allow to append additional characters to be escaped  (Andy Shevchenko)

Introduce a new flag to append additional characters, passed in the
'only' parameter, to be escaped if they fall in the corresponding
class.

Link: https://lkml.kernel.org/r/20210504180819.73127-7-andriy.shevchenko@linux.intel.com
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

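A hedged usage sketch of the append semantics; ESCAPE_APPEND is the
flag this patch introduces, while the specific flag combination and
byte values are illustrative:

    char out[64];

    /* Escape all non-printable bytes, and additionally escape the
     * printable '"' and '\\' listed in @only. */
    string_escape_mem(in, in_len, out, sizeof(out),
                      ESCAPE_NP | ESCAPE_OCTAL | ESCAPE_APPEND, "\"\\");
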
Introduce a new flag to append additional characters, passed in 'only' parameter, to be escaped if they fall in the corresponding class. Link: https://lkml.kernel.org/r/20210504180819.73127-7-andriy.shevchenko@linux.intel.com Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Chuck Lever <chuck.lever@oracle.com> Cc: "J. Bruce Fields" <bfields@fieldses.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>