path: root/include/linux
2022-09-09resource: add define macro for register address resourcesColin Foster
DEFINE_RES_ macros have been created for the commonly used resource types, but not IORESOURCE_REG. Add the macro so it can be used in a similar manner to all other resource types. Signed-off-by: Colin Foster <colin.foster@in-advantage.com> Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com> Signed-off-by: Lee Jones <lee@kernel.org> Link: https://lore.kernel.org/r/20220905162132.2943088-7-colin.foster@in-advantage.com
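Following the existing DEFINE_RES_MEM()/DEFINE_RES_IRQ() pattern in include/linux/ioport.h, the new macro presumably looks like the sketch below; the names and the usage example are inferred from that pattern rather than quoted from the patch.

  #define DEFINE_RES_REG_NAMED(_start, _size, _name)			\
  	DEFINE_RES_NAMED((_start), (_size), (_name), IORESOURCE_REG)
  #define DEFINE_RES_REG(_start, _size)					\
  	DEFINE_RES_REG_NAMED((_start), (_size), NULL)

  /* hypothetical register-address window inside a larger chip map */
  static const struct resource chip_switch_regs = DEFINE_RES_REG(0x1010000, 0x10000);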
2022-09-09mfd: ocelot: Add helper to get regmap from a resourceColin Foster
Several ocelot-related modules are designed for MMIO / regmaps. As such, they often use a combination of devm_platform_get_and_ioremap_resource() and devm_regmap_init_mmio(). Operating in an MFD might be different, in that it could be memory mapped, or it could be SPI, I2C... In these cases a fallback to use IORESOURCE_REG instead of IORESOURCE_MEM becomes necessary. When this happens, there's redundant logic that needs to be implemented in every driver. In order to avoid this redundancy, utilize a single function that, if the MFD scenario is enabled, will perform this fallback logic. Signed-off-by: Colin Foster <colin.foster@in-advantage.com> Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com> Signed-off-by: Lee Jones <lee@kernel.org> Link: https://lore.kernel.org/r/20220905162132.2943088-2-colin.foster@in-advantage.com
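A hypothetical sketch of such a shared helper, illustrating the MMIO-first / IORESOURCE_REG-fallback logic described above; the function name and the parent-regmap lookup are assumptions for illustration, not the driver's actual API.

  static struct regmap *ocelot_regmap_get(struct platform_device *pdev,
  					unsigned int index,
  					const struct regmap_config *cfg)
  {
  	struct resource *res;
  	void __iomem *regs;

  	/* Native MMIO case: map the IORESOURCE_MEM range directly. */
  	regs = devm_platform_get_and_ioremap_resource(pdev, index, &res);
  	if (!IS_ERR(regs))
  		return devm_regmap_init_mmio(&pdev->dev, regs, cfg);

  	/*
  	 * MFD case: no MMIO resource; fall back to an IORESOURCE_REG range
  	 * and let the SPI/I2C parent hand out a regmap for that window.
  	 */
  	res = platform_get_resource(pdev, IORESOURCE_REG, index);
  	if (!res)
  		return ERR_PTR(-ENODEV);

  	return dev_get_regmap(pdev->dev.parent, res->name);
  }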
2022-09-08perf: RISC-V: exclude invalid pmu counters from SBI callsSergey Matyukevich
SBI firmware may not provide information for some counters in response to the SBI_EXT_PMU_COUNTER_GET_INFO call. Exclude such counters from the subsequent SBI requests. For this purpose, use a global mask to keep track of fully specified counters. Signed-off-by: Sergey Matyukevich <sergey.matyukevich@syntacore.com> Reviewed-by: Atish Patra <atishp@rivosinc.com> Link: https://lore.kernel.org/r/20220830155306.301714-3-geomatsi@gmail.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2022-09-08vfio: Introduce the DMA logging feature supportYishai Hadas
Introduce the DMA logging feature support in the vfio core layer. It includes the processing of the device start/stop/report DMA logging UAPIs and calling the relevant driver 'op' to do the work. Specifically, upon start, the core translates the given input ranges into an interval tree, checks for unexpected overlapping or non-aligned ranges, and then passes the translated input to the driver to start tracking the given ranges. Upon report, the core translates the given input user-space bitmap and page size into an IOVA kernel bitmap iterator, then iterates it and calls the driver to set the corresponding bits for the dirtied pages in a specific IOVA range. Upon stop, the driver is called to stop the previously started tracking. The next patches from the series will introduce the mlx5 driver implementation for the logging ops. Signed-off-by: Yishai Hadas <yishaih@nvidia.com> Link: https://lore.kernel.org/r/20220908183448.195262-6-yishaih@nvidia.com Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
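A hedged sketch of what the driver-facing logging ops described above could look like; the struct and member names are assumed for illustration, not quoted from the patch.

  struct vfio_log_ops {
  	/* validate/translate the ranges, then start HW dirty tracking */
  	int (*log_start)(struct vfio_device *device,
  			 struct rb_root_cached *ranges, u32 nnodes,
  			 u64 *page_size);
  	/* stop a previously started tracking session */
  	int (*log_stop)(struct vfio_device *device);
  	/* set bits for pages dirtied within [iova, iova + length) */
  	int (*log_read_and_clear)(struct vfio_device *device,
  				  unsigned long iova, unsigned long length,
  				  struct iova_bitmap *dirty);
  };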
2022-09-08vfio: Add an IOVA bitmap supportJoao Martins
The new facility adds a bunch of wrappers that abstract how an IOVA range is represented in a bitmap that is granulated by a given page_size. It does all the lifting of translating user pointers into the corresponding kernel addresses backing said user memory, and finally performs the (non-atomic) bitmap ops to change various bits. The formula for the bitmap is: data[(iova / page_size) / 64] & (1ULL << (iova % 64)) where 64 is the number of bits in an unsigned long (depending on arch). It introduces an IOVA iterator that uses a windowing scheme to minimize the pinning overhead, as opposed to pinning it on demand 4K at a time. Assuming a 4K kernel page and 4K requested page size, we can use a single kernel page to hold 512 page pointers, mapping 2M of bitmap, representing 64G of IOVA space. An example usage of these helpers for a given @base_iova, @page_size, @length and __user @data: bitmap = iova_bitmap_alloc(base_iova, page_size, length, data); if (IS_ERR(bitmap)) return -ENOMEM; ret = iova_bitmap_for_each(bitmap, arg, dirty_reporter_fn); iova_bitmap_free(bitmap); Each iteration of the @dirty_reporter_fn is called with a unique @iova and @length argument, indicating the current range available through the iova_bitmap. The @dirty_reporter_fn uses iova_bitmap_set() to mark dirty areas (@iova_length) within that provided range, as follows: iova_bitmap_set(bitmap, iova, iova_length); The facility is intended to be used for user bitmaps representing IOVAs dirtied by the IOMMU (via IOMMUFD) and PCI devices (via vfio-pci). Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Signed-off-by: Yishai Hadas <yishaih@nvidia.com> Link: https://lore.kernel.org/r/20220908183448.195262-5-yishaih@nvidia.com Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2022-09-08Merge tag 'spi-fix-v6.0-rc4' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi Pull spi fixes from Mark Brown: "Several fixes that came in since the merge window, the major one being a fix for the spi-mux driver which was broken by the performance optimisations due to it peering inside the core's data structures more than it should" * tag 'spi-fix-v6.0-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi: spi: spi: Fix queue hang if previous transfer failed spi: mux: Fix mux interaction with fast path optimisations spi: cadence-quadspi: Disable irqs during indirect reads spi: bitbang: Fix lsb-first Rx
2022-09-08Merge remote-tracking branch 'mlx5/mlx5-vfio' into v6.1/vfio/nextAlex Williamson
Merge net/mlx5 dependencies for device DMA logging and mlx5 variant driver support. Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2022-09-08Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/netPaolo Abeni
drivers/net/ethernet/freescale/fec.h 7d650df99d52 ("net: fec: add pm_qos support on imx6q platform") 40c79ce13b03 ("net: fec: add stop mode support for imx8 platform") Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-09-08spi: Group cs_change and cs_off flags together in struct spi_transferAndy Shevchenko
The commit 5e0531f6b90a ("spi: Add capability to perform some transfer with chipselect off") added a new flag but squeezed it into the wrong group of struct spi_transfer members (note that SPI_NBITS_* are macros for easier interpretation of the tx_nbits and rx_nbits bitfields). Group the cs_change and cs_off flags together, along with their doc strings. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20220908130518.32186-1-andriy.shevchenko@linux.intel.com Signed-off-by: Mark Brown <broonie@kernel.org>
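A minimal sketch of the regrouped members, assuming the bitfield widths currently used in include/linux/spi/spi.h:

  struct spi_transfer {
  	/* ... */
  	unsigned	cs_change:1;	/* affect chipselect after this transfer */
  	unsigned	cs_off:1;	/* run this transfer with chipselect off */
  	unsigned	tx_nbits:3;	/* interpreted via the SPI_NBITS_* macros */
  	unsigned	rx_nbits:3;
  	/* ... */
  };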
2022-09-08Merge tag 'scmi-fixes-6.0' of ↵Arnd Bergmann
git://git.kernel.org/pub/scm/linux/kernel/git/sudeep.holla/linux into arm/fixes Arm SCMI fixes for v6.0 A few fixes addressing possible out-of-bounds accesses by hardening them, incorrect asynchronous resets by restricting them, an incorrect SCMI tracing message format by harmonizing it, missing kernel-doc in the optee transport, and a missing SCMI PM driver remove routine by adding it to avoid a warning when the scmi driver is unloaded, and finally improved checks in the info_get operations. * tag 'scmi-fixes-6.0' of git://git.kernel.org/pub/scm/linux/kernel/git/sudeep.holla/linux: firmware: arm_scmi: Harmonize SCMI tracing message format firmware: arm_scmi: Add SCMI PM driver remove routine firmware: arm_scmi: Fix the asynchronous reset requests firmware: arm_scmi: Harden accesses to the reset domains firmware: arm_scmi: Harden accesses to the sensor domains firmware: arm_scmi: Improve checks in the info_get operations firmware: arm_scmi: Fix missing kernel-doc in optee Link: https://lore.kernel.org/r/20220829174435.207911-1-sudeep.holla@arm.com Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2022-09-08Merge tag 'net-6.0-rc5' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net Pull networking fixes from Paolo Abeni: "Including fixes from rxrpc, netfilter, wireless and bluetooth subtrees. Current release - regressions: - skb: export skb drop reaons to user by TRACE_DEFINE_ENUM - bluetooth: fix regression preventing ACL packet transmission Current release - new code bugs: - dsa: microchip: fix kernel oops on ksz8 switches - dsa: qca8k: fix NULL pointer dereference for of_device_get_match_data Previous releases - regressions: - netfilter: clean up hook list when offload flags check fails - wifi: mt76: fix crash in chip reset fail - rxrpc: fix ICMP/ICMP6 error handling - ice: fix DMA mappings leak - i40e: fix kernel crash during module removal Previous releases - always broken: - ipv6: sr: fix out-of-bounds read when setting HMAC data. - tcp: TX zerocopy should not sense pfmemalloc status - sch_sfb: don't assume the skb is still around after enqueueing to child - netfilter: drop dst references before setting - wifi: wilc1000: fix DMA on stack objects - rxrpc: fix an insufficiently large sglist in rxkad_verify_packet_2() - fec: use a spinlock to guard `fep->ptp_clk_on` Misc: - usb: qmi_wwan: add Quectel RM520N" * tag 'net-6.0-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (50 commits) sch_sfb: Also store skb len before calling child enqueue net: phy: lan87xx: change interrupt src of link_up to comm_ready net/smc: Fix possible access to freed memory in link clear net: ethernet: mtk_eth_soc: check max allowed hash in mtk_ppe_check_skb net: skb: export skb drop reaons to user by TRACE_DEFINE_ENUM net: ethernet: mtk_eth_soc: fix typo in __mtk_foe_entry_clear net: dsa: felix: access QSYS_TAG_CONFIG under tas_lock in vsc9959_sched_speed_set net: dsa: felix: disable cut-through forwarding for frames oversized for tc-taprio net: dsa: felix: tc-taprio intervals smaller than MTU should send at least one packet net: usb: qmi_wwan: add Quectel RM520N net: dsa: qca8k: fix NULL pointer dereference for of_device_get_match_data tcp: fix early ETIMEDOUT after spurious non-SACK RTO stmmac: intel: Simplify intel_eth_pci_remove() net: mvpp2: debugfs: fix memory leak when using debugfs_lookup() ipv6: sr: fix out-of-bounds read when setting HMAC data. bonding: accept unsolicited NA message bonding: add all node mcast address when slave up bonding: use unspecified address if no available link local address wifi: use struct_group to copy addresses wifi: mac80211_hwsim: check length for virtio packets ...
2022-09-08fs: only do a memory barrier for the first set_buffer_uptodate()Linus Torvalds
Commit d4252071b97d ("add barriers to buffer_uptodate and set_buffer_uptodate") added proper memory barriers to the buffer head BH_Uptodate bit, so that anybody who tests a buffer for being up-to-date will be guaranteed to actually see initialized state. However, that commit didn't _just_ add the memory barrier, it also ended up dropping the "was it already set" logic that the BUFFER_FNS() macro had. That's conceptually the right thing for a generic "this is a memory barrier" operation, but in the case of the buffer contents, we really only care about the memory barrier for the _first_ time we set the bit, in that the only memory ordering protection we need is to avoid anybody seeing uninitialized memory contents. Any other access ordering wouldn't be about the BH_Uptodate bit anyway, and would require some other proper lock (typically BH_Lock or the folio lock). A reader that races with somebody invalidating the buffer head isn't an issue wrt the memory ordering, it's a serialization issue. Now, you'd think that the buffer head operations don't matter in this day and age (and I certainly thought so), but apparently some loads still end up being heavy users of buffer heads. In particular, the kernel test robot reported that not having this bit access optimization in place caused a noticeable direct IO performance regression on ext4: fxmark.ssd_ext4_no_jnl_DWTL_54_directio.works/sec -26.5% regression although you presumably need a fast disk and a lot of cores to actually notice. Link: https://lore.kernel.org/all/Yw8L7HTZ%2FdE2%2Fo9C@xsang-OptiPlex-9020/ Reported-by: kernel test robot <oliver.sang@intel.com> Tested-by: Fengwei Yin <fengwei.yin@intel.com> Cc: Mikulas Patocka <mpatocka@redhat.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: stable@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
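A sketch of the resulting fast path, close to what the change describes, assuming the open-coded variant of the BUFFER_FNS() setter in include/linux/buffer_head.h:

  static __always_inline void set_buffer_uptodate(struct buffer_head *bh)
  {
  	/* Already up to date: whoever set it first did the barrier. */
  	if (test_bit(BH_Uptodate, &bh->b_state))
  		return;

  	/* Pairs with the read-side barrier in buffer_uptodate(). */
  	smp_mb__before_atomic();
  	set_bit(BH_Uptodate, &bh->b_state);
  }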
2022-09-08firmware: arm_ffa: Split up ffa_ops into info, message and memory operationsSudeep Holla
In preparation for making memory operations accessible to a non-ffa_driver/device, it is better to split the ffa_ops into different categories of operations: info, message and memory. The info and memory operations are ffa_device independent and can be used without any associated ffa_device from a non-ffa_driver. However, we don't export these info and memory APIs yet, as there are no users; the first users of these APIs can export them. Link: https://lore.kernel.org/r/20220907145240.1683088-11-sudeep.holla@arm.com Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
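A hedged sketch of the resulting split, assuming the three categories map directly to sub-structures reachable from a single ffa_ops:

  struct ffa_ops {
  	const struct ffa_info_ops *info_ops;	/* version/partition queries */
  	const struct ffa_msg_ops *msg_ops;	/* ffa_device-bound messaging */
  	const struct ffa_mem_ops *mem_ops;	/* memory share/lend/reclaim */
  };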
2022-09-08firmware: arm_ffa: Set up 32bit execution mode flag using partition propertySudeep Holla
FF-A v1.1 adds a flag in the partition properties to indicate if the partition runs in the AArch32 or AArch64 execution state. Use it to set up the 32-bit execution mode flag in the ffa_dev automatically if the detected firmware version is above v1.0, and ignore any requests to do the same from the ffa_driver. Link: https://lore.kernel.org/r/20220907145240.1683088-10-sudeep.holla@arm.com Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
2022-09-08firmware: arm_ffa: Add v1.1 get_partition_info supportSudeep Holla
FF-A v1.1 adds support for discovering the UUIDs of the partitions, which was missing in v1.0 and which the driver works around by using UUIDs supplied by the ffa_drivers. Add the v1.1 get_partition_info support and disable the workaround if the detected FF-A version is greater than v1.0. Link: https://lore.kernel.org/r/20220907145240.1683088-9-sudeep.holla@arm.com Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
2022-09-08firmware: arm_ffa: Rename ffa_dev_ops as ffa_opsSudeep Holla
Except for the message APIs, all other APIs are ffa_device independent and can be used without any associated ffa_device from a non-ffa_driver. To reflect this, rename ffa_dev_ops to ffa_ops to avoid any confusion and keep it simple. Link: https://lore.kernel.org/r/20220907145240.1683088-8-sudeep.holla@arm.com Suggested-by: Sumit Garg <sumit.garg@linaro.org> Reviewed-by: Sumit Garg <sumit.garg@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
2022-09-08firmware: arm_ffa: Make memory apis ffa_device independentSudeep Holla
There is a requirement to make the memory APIs independent of the ffa_device. One of the use-cases is to have a common memory driver that manages the memory for all the ffa_devices. That common memory driver won't be an ffa_driver and won't have any ffa_device associated with it. So having these memory APIs accessible without an ffa_device is needed, and should be possible as most of these are handled by the partition manager (SPM or hypervisor). Drop the ffa_device argument to the memory APIs and make them ffa_device independent. Link: https://lore.kernel.org/r/20220907145240.1683088-7-sudeep.holla@arm.com Acked-by: Jens Wiklander <jens.wiklander@linaro.org> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
2022-09-08firmware: arm_ffa: Remove ffa_dev_ops_get()Sudeep Holla
The only user of the exported ffa_dev_ops_get() was the OP-TEE driver, which now uses ffa_dev->ops directly; there are no other users. Also, since any ffa_driver can use ffa_dev->ops directly, there is no need for ffa_dev_ops_get(), so just remove it. Link: https://lore.kernel.org/r/20220907145240.1683088-4-sudeep.holla@arm.com Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
2022-09-08firmware: arm_ffa: Add pointer to the ffa_dev_ops in struct ffa_devSudeep Holla
Currently ffa_dev_ops_get() is the way to fetch the ffa_dev_ops pointer. It checks if the ffa_dev structure pointer is valid before returning the ffa_dev_ops pointer. Instead, the pointer can be made part of the ffa_dev structure, and since the core driver is in charge of creating an ffa_device for each identified partition, there is no need to check the validity explicitly if the pointer is embedded in the structure. Add the pointer to the ffa_dev_ops in the ffa_dev structure itself and initialise it as part of device creation. Link: https://lore.kernel.org/r/20220907145240.1683088-2-sudeep.holla@arm.com Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
2022-09-07bpf: Add helper macro bpf_for_each_reg_in_vstateKumar Kartikeya Dwivedi
For a lot of use cases in future patches, we will want to modify the state of registers that are part of the same 'group' (e.g. same ref_obj_id). It won't just be limited to releasing reference state, but setting a type flag dynamically based on certain actions, etc. Hence, we need a way to easily pass a callback to the function that iterates over all registers in the current bpf_verifier_state in all frames up to (and including) the curframe. While in C++ we would be able to easily use a lambda to pass state and the callback together, sadly we aren't using C++ in the kernel. The next best thing to avoid defining a function for each case seems like statement expressions in GNU C. The kernel already uses them heavily, hence they can be passed to the macro in the style of a lambda. The statement expression will then be substituted in the for loop bodies. Variables __state and __reg are set to the current bpf_func_state and reg for each invocation of the expression inside the passed-in verifier state. Then, convert mark_ptr_or_null_regs, clear_all_pkt_pointers, release_reference, find_good_pkt_pointers, find_equal_scalars to use bpf_for_each_reg_in_vstate. Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20220904204145.3089-16-memxor@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
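A cut-down sketch of the statement-expression pattern described; the real macro also walks spilled stack slots, so this frame-registers-only version is illustrative rather than the kernel's definition.

  #define bpf_for_each_frame_reg(__vst, __state, __reg, __expr)		\
  	({								\
  		int __i, __j;						\
  		for (__i = 0; __i <= (__vst)->curframe; __i++) {	\
  			__state = (__vst)->frame[__i];			\
  			for (__j = 0; __j < MAX_BPF_REG; __j++) {	\
  				__reg = &__state->regs[__j];		\
  				(void)(__expr);				\
  			}						\
  		}							\
  	})

A caller can then pass the per-register work as a braced expression, e.g. bpf_for_each_frame_reg(vstate, state, reg, ({ if (reg->ref_obj_id == id) __mark_reg_unknown(env, reg); })), instead of defining a separate callback function for each case.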
2022-09-07fortify: Add run-time WARN for cross-field memcpy()Kees Cook
Enable run-time checking of dynamic memcpy() and memmove() lengths, issuing a WARN when a write would exceed the size of the target struct member, when built with CONFIG_FORTIFY_SOURCE=y. This would have caught all of the memcpy()-based buffer overflows in the last 3 years, specifically covering all the cases where the destination buffer size is known at compile time. This change ONLY adds a run-time warning. As false positives are currently still expected, this will not block the overflow. The new warnings will look like this: memcpy: detected field-spanning write (size N) of single field "var->dest" (size M) WARNING: CPU: n PID: pppp at source/file/path.c:nr function+0xXX/0xXX [module] There may be false positives in the kernel where intentional field-spanning writes are happening. These need to be addressed similarly to how the compile-time cases were addressed: add a struct_group(), split the memcpy(), or some other refactoring. In order to make counting/investigating instances of added runtime checks easier, each instance includes the destination variable name as a WARN argument, prefixed with 'field "'. Therefore, on an x86_64 defconfig build, it is trivial to inspect the build artifacts to find instances. For example on an x86_64 defconfig build, there are 78 new run-time memcpy() bounds checks added: $ for i in vmlinux $(find . -name '*.ko'); do \ strings "$i" | grep '^field "'; done | wc -l 78 Simple cases where a destination buffer is known to be a dynamic size do not generate a WARN. For example: struct normal_flex_array { void *a; int b; u32 c; size_t array_size; u8 flex_array[]; }; struct normal_flex_array *instance; ... /* These will be ignored for run-time bounds checking. */ memcpy(instance, src, len); memcpy(instance->flex_array, src, len); However, one of the dynamic-sized destination cases is irritatingly unable to be detected by the compiler: when using memcpy() to target a composite struct member which contains a trailing flexible array struct. For example: struct wrapper { int foo; char bar; struct normal_flex_array embedded; }; struct wrapper *instance; ... /* This will incorrectly WARN when len > sizeof(instance->embedded) */ memcpy(&instance->embedded, src, len); These cases end up appearing to the compiler to be sized as if the flexible array had 0 elements. :( For more details see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101832 https://godbolt.org/z/vW6x8vh4P These "composite flexible array structure destination" cases will be need to be flushed out and addressed on a case-by-case basis. Regardless, for the general case of using memcpy() on flexible array destinations, future APIs will be created to handle common cases. Those can be used to migrate away from open-coded memcpy() so that proper error handling (instead of trapping) can be used. As mentioned, none of these bounds checks block any overflows currently. For users that have tested their workloads, do not encounter any warnings, and wish to make these checks stop any overflows, they can use a big hammer and set the sysctl panic_on_warn=1. Signed-off-by: Kees Cook <keescook@chromium.org>
2022-09-07fortify: Use SIZE_MAX instead of (size_t)-1Kees Cook
Clean up uses of "(size_t)-1" in favor of SIZE_MAX. Cc: linux-hardening@vger.kernel.org Suggested-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Kees Cook <keescook@chromium.org>
2022-09-07fortify: Fix __compiletime_strlen() under UBSAN_BOUNDS_LOCALKees Cook
With CONFIG_FORTIFY=y and CONFIG_UBSAN_LOCAL_BOUNDS=y enabled, we observe a runtime panic while running Android's Compatibility Test Suite's (CTS) android.hardware.input.cts.tests. This is stemming from a strlen() call in hidinput_allocate(). __compiletime_strlen() is implemented in terms of __builtin_object_size(), then does an array access to check for NUL-termination. A quirk of __builtin_object_size() is that for strings whose values are runtime dependent, __builtin_object_size(str, 1 or 0) returns the maximum size of possible values when those sizes are determinable at compile time. Example: static const char *v = "FOO BAR"; static const char *y = "FOO BA"; unsigned long x (int z) { // Returns 8, which is: // max(__builtin_object_size(v, 1), __builtin_object_size(y, 1)) return __builtin_object_size(z ? v : y, 1); } So when FORTIFY_SOURCE is enabled, the current implementation of __compiletime_strlen() will try to access beyond the end of y at runtime using the size of v. Mixed with UBSAN_LOCAL_BOUNDS we get a fault. hidinput_allocate() has a local C string whose value is control flow dependent on a switch statement, so __builtin_object_size(str, 1) evaluates to the maximum string length, making all other cases fault on the last character check. hidinput_allocate() could be cleaned up to avoid runtime calls to strlen() since the local variable can only have literal values, so there's no benefit to trying to fortify the strlen call site there. Perform a __builtin_constant_p() check against index 0 earlier in the macro to filter out the control-flow-dependant case. Add a KUnit test for checking the expected behavioral characteristics of FORTIFY_SOURCE internals. Cc: Nathan Chancellor <nathan@kernel.org> Cc: Tom Rix <trix@redhat.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: "Steven Rostedt (Google)" <rostedt@goodmis.org> Cc: David Gow <davidgow@google.com> Cc: Yury Norov <yury.norov@gmail.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Sander Vanheule <sander@svanheule.net> Cc: linux-hardening@vger.kernel.org Cc: llvm@lists.linux.dev Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Tested-by: Android Treehugger Robot Link: https://android-review.googlesource.com/c/kernel/common/+/2206839 Co-developed-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Kees Cook <keescook@chromium.org>
2022-09-07string: Introduce strtomem() and strtomem_pad()Kees Cook
One of the "legitimate" uses of strncpy() is copying a NUL-terminated string into a fixed-size non-NUL-terminated character array. To avoid the weaknesses and ambiguity of intent when using strncpy(), provide replacement functions that explicitly distinguish between trailing padding and not, and require the destination buffer size be discoverable by the compiler. For example: struct obj { int foo; char small[4] __nonstring; char big[8] __nonstring; int bar; }; struct obj p; /* This will truncate to 4 chars with no trailing NUL */ strncpy(p.small, "hello", sizeof(p.small)); /* p.small contains 'h', 'e', 'l', 'l' */ /* This will NUL pad to 8 chars. */ strncpy(p.big, "hello", sizeof(p.big)); /* p.big contains 'h', 'e', 'l', 'l', 'o', '\0', '\0', '\0' */ When the "__nonstring" attributes are missing, the intent of the programmer becomes ambiguous for whether the lack of a trailing NUL in the p.small copy is a bug. Additionally, it's not clear whether the trailing padding in the p.big copy is _needed_. Both cases become unambiguous with: strtomem(p.small, "hello"); strtomem_pad(p.big, "hello", 0); See also https://github.com/KSPP/linux/issues/90 Expand the memcpy KUnit tests to include these functions. Cc: Wolfram Sang <wsa+renesas@sang-engineering.com> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Kees Cook <keescook@chromium.org>
2022-09-07overflow: Allow mixed type argumentsKees Cook
When the check_[op]_overflow() helpers were introduced, all arguments were required to be the same type to make the fallback macros simpler. However, now that the fallback macros have been removed[1], it is fine to allow mixed types, which makes using the helpers much more useful, as they can be used to test for type-based overflows (e.g. adding two large ints but storing into a u8), as would be handy in the drm core[2]. Remove the restriction, and add additional self-tests that exercise some of the mixed-type overflow cases, and double-check for accidental macro side-effects. [1] https://git.kernel.org/linus/4eb6bd55cfb22ffc20652732340c4962f3ac9a91 [2] https://lore.kernel.org/lkml/20220824084514.2261614-2-gwan-gyeong.mun@intel.com Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: linux-hardening@vger.kernel.org Reviewed-by: Andrzej Hajda <andrzej.hajda@intel.com> Reviewed-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Tested-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Signed-off-by: Kees Cook <keescook@chromium.org>
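A small example of the mixed-type use this now permits; the wrapper function is illustrative:

  static int store_sum_u8(int a, int b, u8 *out)
  {
  	/* int inputs, u8 destination: an overflow of the u8 is reported. */
  	if (check_add_overflow(a, b, out))
  		return -ERANGE;
  	return 0;
  }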
2022-09-07perf: Add a few assertionsPeter Zijlstra
While auditing 6b959ba22d34 ("perf/core: Fix reentry problem in perf_output_read_group()") a few spots were found that wanted assertions. Notably, for_each_sibling_event() relies on exclusion from modification. This would normally mean holding either ctx->lock or ctx->mutex; however, due to how things are constructed, disabling IRQs is a valid and sufficient substitute for ctx->lock. Another possible site to add assertions would be the various pmu::{add,del,read,..}() methods, but that's not trivially expressible in C -- the best option is wrappers, but those are easy enough to forget. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
2022-09-07arm64/perf: Assert all platform event flags are within PERF_EVENT_FLAG_ARCHAnshuman Khandual
Ensure all platform specific event flags are within PERF_EVENT_FLAG_ARCH. Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: James Clark <james.clark@arm.com> Link: https://lkml.kernel.org/r/20220907091924.439193-4-anshuman.khandual@arm.com
2022-09-07perf/core: Assert PERF_EVENT_FLAG_ARCH does not overlap with generic flagsAnshuman Khandual
This just ensures that PERF_EVENT_FLAG_ARCH does not overlap with generic hardware event flags. Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: James Clark <james.clark@arm.com> Link: https://lkml.kernel.org/r/20220907091924.439193-3-anshuman.khandual@arm.com
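A hedged sketch of the kind of compile-time assertion this describes; PERF_EVENT_FLAG_USER_READ_CNT stands in for "the generic hardware event flags" here and is an assumption, not a quote from the patch.

  /* Arch-specific flag space must not collide with the generic event flags. */
  static_assert((PERF_EVENT_FLAG_ARCH & PERF_EVENT_FLAG_USER_READ_CNT) == 0);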
2022-09-07perf/core: Expand PERF_EVENT_FLAG_ARCHAnshuman Khandual
Two hardware event flags on the x86 platform have overshot PERF_EVENT_FLAG_ARCH (0x0000ffff). These flags are PERF_X86_EVENT_PEBS_LAT_HYBRID (0x20000) and PERF_X86_EVENT_AMD_BRS (0x10000). Let's expand the PERF_EVENT_FLAG_ARCH mask to accommodate those flags, and also create room for two more in the future. Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: James Clark <james.clark@arm.com> Link: https://lkml.kernel.org/r/20220907091924.439193-2-anshuman.khandual@arm.com
2022-09-07perf: Consolidate branch sample filter helpersAnshuman Khandual
Besides the branch type filtering requests, 'event.attr.branch_sample_type' also contains various flags indicating which additional information should be captured, along with the base branch record. These flags help configure the underlying hardware, and capture the branch records appropriately when required e.g after PMU interrupt. But first, this moves an existing helper perf_sample_save_hw_index() into the header before adding some more helpers for other branch sample filter flags. Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20220906084414.396220-1-anshuman.khandual@arm.com
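One of the added helpers plausibly looks like the sketch below; the exact helper names in the patch are assumed, not quoted.

  static inline bool branch_sample_hw_index(const struct perf_event *event)
  {
  	return event->attr.branch_sample_type & PERF_SAMPLE_BRANCH_HW_INDEX;
  }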
2022-09-07sched: Show PF_flag holesPeter Zijlstra
Put placeholders in PF_flag so we can see how holey it is. Suggested-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/YxctoffFFPXONESt@hirez.programming.kicks-ass.net
2022-09-07freezer,sched: Rewrite core freezer logicPeter Zijlstra
Rewrite the core freezer to behave better wrt thawing and be simpler in general. By replacing PF_FROZEN with TASK_FROZEN, a special block state, it is ensured frozen tasks stay frozen until thawed and don't randomly wake up early, as is currently possible. As such, it does away with PF_FROZEN and PF_FREEZER_SKIP, freeing up two PF_flags (yay!). Specifically, the current scheme works a little like: freezer_do_not_count(); schedule(); freezer_count(); And either the task is blocked, or it lands in try_to_freeze() through freezer_count(). Now, when it is blocked, the freezer considers it frozen and continues. However, on thawing, once pm_freezing is cleared, freezer_count() stops working, and any random/spurious wakeup will let a task run before its time. That is, thawing tries to thaw things in an explicit order: kernel threads and workqueues first, then bringing SMP back, then userspace, etc. However, due to the above-mentioned races it is entirely possible for userspace tasks to thaw (by accident) before SMP is back. This can be a fatal problem in asymmetric ISA architectures (eg ARMv9) where the userspace task requires a special CPU to run. As said, replace this with a special task state TASK_FROZEN and add the following state transitions: TASK_FREEZABLE -> TASK_FROZEN __TASK_STOPPED -> TASK_FROZEN __TASK_TRACED -> TASK_FROZEN The new TASK_FREEZABLE can be set on any state part of TASK_NORMAL (IOW, TASK_INTERRUPTIBLE and TASK_UNINTERRUPTIBLE) -- any such state is already required to deal with spurious wakeups and the freezer causes one such when thawing the task (since the original state is lost). The special __TASK_{STOPPED,TRACED} states *can* be restored since their canonical state is in ->jobctl. With this, frozen tasks need an explicit TASK_FROZEN wakeup and are free of undue (early / spurious) wakeups. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Ingo Molnar <mingo@kernel.org> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Link: https://lore.kernel.org/r/20220822114649.055452969@infradead.org
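A hedged before/after sketch of what this means for a typical freezable sleep site; the 'after' form assumes the new TASK_FREEZABLE bit is simply OR'd into the sleep state, as the transitions above imply.

  /* before: freezer bookkeeping wrapped around a plain sleep */
  freezer_do_not_count();
  schedule();
  freezer_count();

  /* after (sketch): the sleep state itself is marked freezable */
  set_current_state(TASK_INTERRUPTIBLE | TASK_FREEZABLE);
  schedule();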
2022-09-07sched: Widen TASK_state literalsPeter Zijlstra
In preparation for adding more states, add a few 0s to the literals as we've just about run out. Suggested-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
2022-09-07sched/wait: Add wait_event_state()Peter Zijlstra
Allows waiting with a custom @state. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20220822114648.989212021@infradead.org
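A hedged usage sketch, assuming the obvious wait_event_state(wq_head, condition, state) form implied by the subject:

  /* sleep uninterruptibly, but freezably, until @done becomes true */
  ret = wait_event_state(wq_head, done, TASK_UNINTERRUPTIBLE | TASK_FREEZABLE);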
2022-09-07sched/completion: Add wait_for_completion_state()Peter Zijlstra
Allows waiting with a custom @state. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20220822114648.922711674@infradead.org
2022-09-07sched: Add TASK_ANY for wait_task_inactive()Peter Zijlstra
Now that wait_task_inactive()'s @match_state argument is a mask (like ttwu()) it is possible to replace the special !match_state case with an 'all-states' value such that any blocked state will match. Suggested-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/YxhkzfuFTvRnpUaH@hirez.programming.kicks-ass.net
2022-09-07freezer,umh: Clean up freezer/initrd interactionPeter Zijlstra
handle_initrd() marks itself as PF_FREEZER_SKIP in order to ensure that the UMH, which is going to freeze the system, doesn't indefinitely wait for its caller. Rework things by adding UMH_FREEZABLE to indicate the completion is freezable. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Link: https://lore.kernel.org/r/20220822114648.791019324@infradead.org
2022-09-07freezer: Have {,un}lock_system_sleep() save/restore flagsPeter Zijlstra
Rafael explained that the reason for having both PF_NOFREEZE and PF_FREEZER_SKIP is that {,un}lock_system_sleep() is callable from kthread context that has previously called set_freezable(). In preparation for merging the flags, have {,un}lock_system_sleep() save and restore current->flags. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Link: https://lore.kernel.org/r/20220822114648.725003428@infradead.org
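A minimal sketch of the save/restore shape this implies; the exact flag used and the return type are assumptions, not quotes from the patch.

  unsigned int lock_system_sleep(void)
  {
  	unsigned int flags = current->flags;

  	current->flags |= PF_NOFREEZE;	/* assumed; could equally be PF_FREEZER_SKIP */
  	mutex_lock(&system_transition_mutex);
  	return flags;
  }

  void unlock_system_sleep(unsigned int flags)
  {
  	mutex_unlock(&system_transition_mutex);
  	if (!(flags & PF_NOFREEZE))
  		current->flags &= ~PF_NOFREEZE;
  }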
2022-09-07bpf: Add zero_map_value to zero map value with special fieldsKumar Kartikeya Dwivedi
We need this helper to skip over special fields (bpf_spin_lock, bpf_timer, kptrs) while zeroing a map value. Use the same logic as copy_map_value but memset instead of memcpy. Currently, the code zeroing map value memory does not have to deal with special fields, hence this is a prerequisite for introducing such support. Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20220904204145.3089-4-memxor@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-09-07bpf: Add copy_map_value_long to copy to remote percpu memoryKumar Kartikeya Dwivedi
bpf_long_memcpy is used while copying to remote percpu regions from BPF syscall and helpers, so that the copy is atomic at word size granularity. This might not be possible when copying to or from percpu map values that host kptrs, as the alignment or size of the disjoint regions may not be a multiple of the word size. Hence, to avoid complicating the copy loop, we only use bpf_long_memcpy when special fields are not present, otherwise use normal memcpy to copy the disjoint regions. Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20220904204145.3089-2-memxor@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-09-07bpf/verifier: allow kfunc to return an allocated memBenjamin Tissoires
For drivers (outside of networking), the incoming data is not statically defined in a struct. Most of the time the data buffer is kzalloc-ed and thus we can not rely on eBPF and BTF to explore the data. This commit allows returning arbitrary memory, previously allocated by the driver. An interesting extra point is that the kfunc can mark the exported memory region as read only or read/write. So, when a kfunc is not returning a pointer to a struct but to a plain type, we can consider it a valid allocated memory region assuming that: - one of the arguments is either called rdonly_buf_size or rdwr_buf_size - and this argument is a const from the caller's point of view We can then use this parameter as the size of the allocated memory. The memory is either read-only or read-write based on the name of the size parameter. Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> Link: https://lore.kernel.org/r/20220906151303.2780789-7-benjamin.tissoires@redhat.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
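A hypothetical kfunc prototype following the naming convention above; the driver, context type and function name are made up for illustration.

  /*
   * The verifier treats the return value as rdwr_buf_size bytes of
   * read-write memory, because the size argument is named rdwr_buf_size
   * and is constant at the BPF call site.
   */
  __u8 *mydrv_bpf_get_buffer(struct mydrv_bpf_ctx *ctx, unsigned int offset,
  			   const size_t rdwr_buf_size);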
2022-09-07bpf: split btf_check_subprog_arg_match in twoBenjamin Tissoires
btf_check_subprog_arg_match() was used twice in verifier.c: - when checking for the type mismatches between a (sub)prog declaration and BTF - when checking the call of a subprog to see if the provided arguments are correct and valid This is problematic when we check if the first argument of a program (pointer to ctx) is correctly accessed: To be able to ensure we access a valid memory in the ctx, the verifier assumes the pointer to context is not null. This has the side effect of marking the program accessing the entire context, even if the context is never dereferenced. For example, by checking the context access with the current code, the following eBPF program would fail with -EINVAL if the ctx is set to null from the userspace: ``` SEC("syscall") int prog(struct my_ctx *args) { return 0; } ``` In that particular case, we do not want to actually check that the memory is correct while checking for the BTF validity, but we just want to ensure that the (sub)prog definition matches the BTF we have. So split btf_check_subprog_arg_match() in two so we can actually check for the memory used when in a call, and ignore that part when not. Note that a further patch is in preparation to disentangled btf_check_func_arg_match() from these two purposes, and so right now we just add a new hack around that by adding a boolean to this function. Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20220906151303.2780789-3-benjamin.tissoires@redhat.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-09-07dyndbg: add drm.debug style (drm/parameters/debug) bitmap supportJim Cromie
Add kernel_param_ops and callbacks to use a class-map to validate and apply input to a sysfs-node, which allows users to control classes defined in that class-map. This supports uses like: echo 0x3 > /sys/module/drm/parameters/debug IE add these: - int param_set_dyndbg_classes() - int param_get_dyndbg_classes() - struct kernel_param_ops param_ops_dyndbg_classes Following the model of kernel/params.c STANDARD_PARAM_DEFS, these are non-static and exported. This might be unnecessary here. get/set use an augmented kernel_param; the arg refs a new struct ddebug_class_param, which contains: - A ptr to user's state-store; a union of &ulong for drm.debug, &int for nouveau level debug. By ref'g the client's bit-state _var, code coordinates with existing code (like drm_debug_enabled) which uses it, so existing/remaining calls can work unchanged. Changing drm.debug to a ulong allows use of BIT() etc. - FLAGS: dyndbg.flags toggled by changes to bitmap. Usually just "p". - MAP: a pointer to struct ddebug_classes_map, which maps those class-names to .class_ids 0..N that the module is using. This class-map is declared & initialized by DECLARE_DYNDBG_CLASSMAP. - map-type: 4 enums DD_CLASS_TYPE_* select 2 input forms and 2 meanings. numeric input: DD_CLASS_TYPE_DISJOINT_BITS integer input, independent bits. ie: drm.debug DD_CLASS_TYPE_LEVEL_NUM integer input, 0..N levels classnames-list (comma separated) input: DD_CLASS_TYPE_DISJOINT_NAMES each name affects a bit, others preserved DD_CLASS_TYPE_LEVEL_NAMES names have level meanings, like kern_levels.h _NAMES - comma-separated classnames (with optional +-) _NUM - numeric input, 0-N expected _BITS - numeric input, 0x1F bitmap form expected _DISJOINT - bits are independent _LEVEL - (x<y) on bit-pos. _DISJOINT treats input like a bit-vector (ala drm.debug), and sets each bit accordingly. LEVEL is layered on top of this. _LEVEL treats input like a bit-pos:N, then sets bits(0..N)=1, and bits(N+1..max)=0. This applies (bit<N) semantics on top of disjoint bits. USAGES: A potentially typical _DISJOINT_NAMES use: echo +DRM_UT_CORE,+DRM_UT_KMS,-DRM_UT_DRIVER,-DRM_UT_ATOMIC \ > /sys/module/drm/parameters/debug_catnames A naive _LEVEL_NAMES use, with one class, that sets all in the class-map according to (x<y): : problem seen echo +L7 > /sys/module/test_dynamic_debug/parameters/p_level_names : problem solved echo -L1 > /sys/module/test_dynamic_debug/parameters/p_level_names Note this artifact: : this is same as prev cmd (due to +/-) echo L0 > /sys/module/test_dynamic_debug/parameters/p_level_names : this is "even-more" off, but same wo __pr_debug_class(L0, ".."). echo -L0 > /sys/module/test_dynamic_debug/parameters/p_level_names A stress-test/make-work usage (kid toggling a light switch): echo +L7,L0,L7,L0,L7,L0,L7,L0,L7,L0,L7,L0,L7 \ > /sys/module/test_dynamic_debug/parameters/p_level_names ddebug_apply_class_bitmap(): inside-fn, works on bitmaps, receives new-bits, finds diffs vs client-bitvector holding "current" state, and issues exec_query to commit the adjustment. param_set_dyndbg_classes(): interface fn, sends _NAMES to param_set_dyndbg_classnames() and returns, falls thru to handle _BITS, _NUM internally, and calls ddebug_apply_class_bitmap(). Finishes by updating state. param_set_dyndbg_classnames(): handles classnames-list in loop, calls ddebug_apply_class_bitmap for each, then updates state. NOTES: _LEVEL_ is overlay on _DISJOINT_; inputs are converted to a bitmask, by the callbacks. 
IOW this is possible, and possibly confusing: echo class V3 +p > control echo class V1 -p > control IMO thats ok, relative verbosity is an interface property. _LEVEL_NUM maps still need class-names, even though the names are not usable at the sysfs interface (unlike with _NAMES style). The names are the only way to >control the classes. - It must have a "V0" name, something below "V1" to turn "V1" off. __pr_debug_cls(V0,..) is printk, don't do that. - "class names" is required at the >control interface. - relative levels are not enforced at >control _LEVEL_NAMES bear +/- signs, which alters the on-bit-pos by 1. IOW, +L2 means L0,L1,L2, and -L2 means just L0,L1. This kinda spoils the readback fidelity, since the L0 bit gets turned on by any use of any L*, except "-L0". All the interface uncertainty here pertains to the _NAMES features. Nobody has actually asked for this, so its practical (if a little tedious) to split it out. Signed-off-by: Jim Cromie <jim.cromie@gmail.com> Link: https://lore.kernel.org/r/20220904214134.408619-21-jim.cromie@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-09-07kernel/module: add __dyndbg_classes sectionJim Cromie
Add __dyndbg_classes section, using __dyndbg as a model. Use it: vmlinux.lds.h: KEEP the new section, which also silences orphan section warning on loadable modules. Add (__start_/__stop_)__dyndbg_classes linker symbols for the c externs (below). kernel/module/main.c: - fill new fields in find_module_sections(), using section_objs() - extend callchain prototypes to pass classes, length load_module(): pass new info to dynamic_debug_setup() dynamic_debug_setup(): new params, pass through to ddebug_add_module() dynamic_debug.c: - add externs to the linker symbols. ddebug_add_module(): - It currently builds a debug_table, and *will* find and attach classes. dynamic_debug_init(): - add class fields to the _ddebug_info cursor var: di. Signed-off-by: Jim Cromie <jim.cromie@gmail.com> Link: https://lore.kernel.org/r/20220904214134.408619-16-jim.cromie@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
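The linker-symbol externs described above presumably take the usual form, mirroring the existing __dyndbg section symbols (sketch):

  extern struct ddebug_class_map __start___dyndbg_classes[];
  extern struct ddebug_class_map __stop___dyndbg_classes[];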
2022-09-07dyndbg: add DECLARE_DYNDBG_CLASSMAP macroJim Cromie
Using DECLARE_DYNDBG_CLASSMAP, modules can declare up to 31 classnames. By doing so, they authorize dyndbg to manipulate class'd prdbgs (ie: __pr_debug_cls, and soon drm_*dbg), ala:: :#> echo class DRM_UT_KMS +p > /proc/dynamic_debug/control The macro declares and initializes a static struct ddebug_class_map:: - maps approved class-names to class_ids used in module, by array order. forex: DRM_UT_* - class-name vals allow validation of "class FOO" queries using macro is opt-in - enum class_map_type - determines interface, behavior Each module has its own class-type and class_id space, and only known class-names will be authorized for a manipulation. Only DRM modules should know and respont to this: :#> echo class DRM_UT_CORE +p > control # across all modules pr_debugs (with default class_id) are still controllable as before. DECLARE_DYNDBG_CLASSMAP(_var, _maptype, _base, classes...) is:: _var: name of the static struct var. user passes to module_param_cb() if they want a sysfs node. _maptype: this is hard-coded to DD_CLASS_TYPE_DISJOINT_BITS for now. _base: usually 0, it allows splitting 31 classes into subranges, so that multiple classes / sysfs-nodes can share the module's class-id space. classes: list of class_name strings, these are mapped to class-ids starting at _base. This class-names list must have a corresponding ENUM, with SYMBOLS that match the literals, and 1st enum val = _base. enum class_map_type has 4 values, on 2 factors:: - classes are disjoint/independent vs relative/x<y/verbosity. disjoint is basis, verbosity by overlay. - input NUMBERS vs [+-]CLASS_NAMES uints, ideally hex. vs +DRM_UT_CORE,-DRM_UT_KMS DD_CLASS_TYPE_DISJOINT_BITS: classes are separate, one per bit. expecting hex input. built for drm.debug, basis for other types. DD_CLASS_TYPE_DISJOINT_NAMES: input is a CSV of [+-]CLASS_NAMES, classes are independent, like DISJOINT DD_CLASS_TYPE_LEVEL_NUM: input is numeric level, 0-N. 0 implies silence. use printk to break that. relative levels applied on bitmaps. DD_CLASS_TYPE_LEVEL_NAMES: input is a CSV of [+-]CLASS_NAMES, names like: ERR,WARNING,NOTICE,INFO,DEBUG avoiding EMERG,ALERT,CRIT,ERR - no point. NOTES: The macro places the initialized struct ddebug_class_map into the __dyndbg_classes section. That draws an 'orphan' warning which we handle in the next commit. The struct attributes are necessary: __aligned(8) fixed null-ptr derefs, and __used is needed by drm drivers, which declare class-maps, but don't also declare a sysfs-param, and thus dont ref the classmap. The __used insures that the linkage is made, then the class-map is found at load-time. While its possible to handle both NAMES and NUMBERS in the same sys-interface, there is ambiguity to avoid, by disallowing them together. Later, if ambiguities are resolved, 2 new enums can permit both inputs, on verbose & independent types separately, and authors can select the interface style they like. The plan is to implement LEVELS in the callbacks, outside of ddebug_exec_query(), which for simplicity will treat the CLASSES in the map as disjoint. The callbacks can see map-type, and apply ++/-- loops (or bitops) to force the relative meanings across the class-bitmap. RFC: That leaves 2 issues: 1. doing LEVELs in callbacks means that the native >control interface doesn't enforce the LEVELS relationship, so you could confusingly have V3 enabled, but V1 disabled. OTOH, the control iface already allows infinite tweaking of the underlying callsites; sysfs node readback can only tell the user what they previously wrote. 
2. All dyndbg >control reduces to a query/command, includes +/-, which is at-root a kernel patching operation with +/- semantics. And the _NAMES handling exposes it to the user, making it API-adjacent. And its not just >control where +/- gets used (which is settled), the new place is with sysfs-nodes exposing _*_NAMES classes, and here its subtly different. _DISJOINT_NAMES: is simple, independent _LEVEL_NAMES: masks-on bits 0 .. N-1, N..max off # turn on L3,L2,L1 others off echo +L3 > /sys/module/test_dynamic_debug/parameters/p_level_names # turn on L2,L1 others off echo -L3 > /sys/module/test_dynamic_debug/parameters/p_level_names IOW, the - changes the threshold-on bitpos by 1. Alternatively, we could treat the +/- as half-duplex, where -L3 turns off L>2 (and ignores L1), and +L2 would turn on L<=2 (and ignore others). Signed-off-by: Jim Cromie <jim.cromie@gmail.com> Link: https://lore.kernel.org/r/20220904214134.408619-15-jim.cromie@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
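A hedged usage sketch of the macro as described above, modeled on the drm.debug categories; the variable name and class list are illustrative. A module would additionally need a matching enum starting at _base, and can wire the map to a sysfs node through a struct ddebug_class_param and module_param_cb() as described in the earlier commit.

  /* maps "DRM_UT_CORE".."DRM_UT_KMS" to class_ids 0..2 in this module */
  DECLARE_DYNDBG_CLASSMAP(drm_debug_classes, DD_CLASS_TYPE_DISJOINT_BITS, 0,
  			"DRM_UT_CORE", "DRM_UT_DRIVER", "DRM_UT_KMS");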
2022-09-07dyndbg: add __pr_debug_cls for testingJim Cromie
For selftest purposes, add __pr_debug_cls(class, fmt, ...). I didn't think we'd need to define this, since DRM effectively has it already in drm_dbg, drm_devdbg. But test_dynamic_debug needs it in order to demonstrate all the moving parts. Note the __ prefix; it's not intended for general use, at least until a need emerges. ISTM the drm.debug model (macro wrappers inserting enum const 1st arg) is the baseline approach. That said, nouveau might want it for easy use in its debug macros. TBD. NB: it does require a builtin-constant class; __pr_debug_cls(i++, ...) is disallowed by the compiler. Signed-off-by: Jim Cromie <jim.cromie@gmail.com> Link: https://lore.kernel.org/r/20220904214134.408619-14-jim.cromie@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-09-07dyndbg: add class_id to pr_debug callsitesJim Cromie
DRM issues ~10 exclusive categories of debug messages; to represent this directly in dyndbg, add a new 6-bit field: struct _ddebug.class_id. This gives us 64 classes, which should be more than enough. #> echo 0x012345678 > /sys/module/drm/parameters/debug All existing callsites are initialized with _DPRINTK_CLASS_DFLT, which is 2^6-1. This reserves 0-62 for use in new categorized/class'd pr_debugs, which fits perfectly with natural enums (ints: 0..N). That's done by extending the init macro DEFINE_DYNAMIC_DEBUG_METADATA() with a _CLS(cls, ...) suffix, and redefining the old name using the extended name. Then extend the factory macro callchain with _cls() versions to provide the callsite.class_id, with old names passing _DPRINTK_CLASS_DFLT. This sets us up to create class'd prdebug callsites (class'd callsites are those with .class_id != _DPRINTK_CLASS_DFLT). No behavior change. cc: Rasmus Villemoes <rasmus.villemoes@prevas.dk> Signed-off-by: Jim Cromie <jim.cromie@gmail.com> Link: https://lore.kernel.org/r/20220904214134.408619-13-jim.cromie@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
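In header terms, the new field and default described above amount to roughly this sketch (surrounding fields elided):

  struct _ddebug {
  	/* existing fields (modname, function, filename, flags, ...) elided */
  	unsigned int class_id:6;
  };

  #define _DPRINTK_CLASS_DFLT	((1 << 6) - 1)	/* 63: un-class'd callsites */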
2022-09-07dyndbg: gather __dyndbg[] state into struct _ddebug_infoJim Cromie
This new struct composes the linker provided (vector,len) section, and provides a place to add other __dyndbg[] state-data later: descs - the vector of descriptors in __dyndbg section. num_descs - length of the data/section. Use it, in several different ways, as follows: In lib/dynamic_debug.c: ddebug_add_module(): Alter params-list, replacing 2 args (array,index) with a struct _ddebug_info * containing them both, with room for expansion. This helps future-proof the function prototype against the looming addition of class-map info into the dyndbg-state, by providing a place to add more member fields later. NB: later add static struct _ddebug_info builtins_state declaration, not needed yet. ddebug_add_module() is called in 2 contexts: In dynamic_debug_init(), declare, init a struct _ddebug_info di auto-var to use as a cursor. Then iterate over the prdbg blocks of the builtin modules, and update the di cursor before calling _add_module for each. It's called from kernel/module/main.c:load_info() for each loaded module: In internal.h, alter struct load_info, replacing the dyndbg array,len fields with an embedded _ddebug_info containing them both; and populate its members in find_module_sections(). The 2 calling contexts differ in that _init deals with contiguous subranges of the __dyndbgs[] section, packed together, while loadable modules are added one at a time. So rename ddebug_add_module() into outer/__inner fns, call __inner from _init, and provide the offset into the builtin __dyndbgs[] where the module's prdbgs reside. The cursor provides start, len of the subrange for each. The offset will be used later to pack the results of builtin __dyndbg_sites[] de-duplication, and is 0 and unneeded for loadable modules. Note: kernel/module/main.c includes <dynamic_debug.h> for struct _ddebug_info. This might be prone to include loops, since it's also included by printk.h. Nothing has broken in robot-land on this. cc: Luis Chamberlain <mcgrof@kernel.org> Signed-off-by: Jim Cromie <jim.cromie@gmail.com> Link: https://lore.kernel.org/r/20220904214134.408619-12-jim.cromie@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
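Based on the fields listed above, the new struct is essentially this sketch:

  struct _ddebug_info {
  	struct _ddebug *descs;		/* vector of descriptors in the __dyndbg section */
  	unsigned int num_descs;		/* length of the data/section */
  };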
2022-09-07dyndbg: drop EXPORTed dynamic_debug_exec_queriesJim Cromie
This exported fn is unused, and will not be needed. Let's dump it. The export was added to let drm control pr_debugs, as part of using them to avoid drm_debug_enabled overheads. But it's better to just implement the drm.debug bitmap interface; then it's available for everyone. Fixes: a2d375eda771 ("dyndbg: refine export, rename to dynamic_debug_exec_queries()") Fixes: 4c0d77828d4f ("dyndbg: export ddebug_exec_queries") Acked-by: Jason Baron <jbaron@akamai.com> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Jim Cromie <jim.cromie@gmail.com> Link: https://lore.kernel.org/r/20220904214134.408619-10-jim.cromie@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-09-07dyndbg: fix module.dyndbg handlingJim Cromie
For CONFIG_DYNAMIC_DEBUG=N, the ddebug_dyndbg_module_param_cb() stub-fn is too permissive: bash-5.1# modprobe drm JUNKdyndbg bash-5.1# modprobe drm dyndbgJUNK [ 42.933220] dyndbg param is supported only in CONFIG_DYNAMIC_DEBUG builds [ 42.937484] ACPI: bus type drm_connector registered This caused no ill effects, because unknown parameters are either ignored by default with an "unknown parameter" warning, or ignored because dyndbg allows its no-effect use on non-dyndbg builds. But since the code has an explicit feedback message, it should be issued accurately. Fix with strcmp for an exact param-name match. Fixes: b48420c1d301 ("dynamic_debug: make dynamic-debug work for module initialization") Reported-by: Rasmus Villemoes <linux@rasmusvillemoes.dk> Acked-by: Jason Baron <jbaron@akamai.com> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Jim Cromie <jim.cromie@gmail.com> Link: https://lore.kernel.org/r/20220904214134.408619-3-jim.cromie@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>