Age  Commit message  Author
2024-06-19  bcachefs: Fix array-index-out-of-bounds  (Kent Overstreet)
We use 0 size arrays as markers, but ubsan doesn't know that - cast them to a pointer to fix the splat. Also, make sure this code gets tested a bit more. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
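A minimal sketch of the pattern being described (illustrative structures, not the bcachefs code): a zero-size trailing array used as a marker trips UBSAN's bounds check when indexed directly, while going through a plain pointer does not.

	/* Illustrative only -- not the actual bcachefs structures. */
	struct entry { unsigned k, v; };

	struct table {
		unsigned	nr;
		struct entry	start[0];	/* zero-size array used as a marker */
	};

	static struct entry *table_entry(struct table *t, unsigned i)
	{
		/*
		 * t->start[i] makes UBSAN report an array-index-out-of-bounds
		 * splat because the declared size is 0; casting to a plain
		 * pointer first sidesteps the bogus bound.
		 */
		struct entry *base = (struct entry *) t->start;

		return base + i;
	}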
2024-06-19  bcachefs: Fix initialization order for srcu barrier  (Kent Overstreet)
btree_iter_init() needs to happen before key_cache_init(), to initialize btree_trans_barrier Reported-by: syzbot+3cca837c2183f8f6fcaf@syzkaller.appspotmail.com Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-06-19  drm/amdgpu: init TA fw for psp v14  (Likun Gao)
Add support to init TA firmware for psp v14. Signed-off-by: Likun Gao <Likun.Gao@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2024-06-19  drm/amdgpu: cleanup MES11 command submission  (Christian König)
The approach of having a separate WB slot for each submission doesn't really work well and, for example, breaks GPU reset. Use a status query packet for the fence update instead; since those should always succeed, we can use the fence of the original packet to signal the state of the operation. While at it, clean up the coding style. Fixes: eef016ba8986 ("drm/amdgpu/mes11: Use a separate fence per transaction") Reviewed-by: Mukul Joshi <mukul.joshi@amd.com> Signed-off-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2024-06-19  drm/amdgpu: fix UBSAN warning in kv_dpm.c  (Alex Deucher)
Adds bounds check for sumo_vid_mapping_entry. Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/3392 Reviewed-by: Mario Limonciello <mario.limonciello@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
2024-06-19  drm/radeon: fix UBSAN warning in kv_dpm.c  (Alex Deucher)
Adds bounds check for sumo_vid_mapping_entry. Reviewed-by: Mario Limonciello <mario.limonciello@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
2024-06-19  cifs: fix typo in module parameter enable_gcm_256  (Steve French)
enable_gcm_256 (which allows the server to require the strongest encryption) is enabled by default, but the modinfo description incorrectly showed it disabled by default. Fix the typo. Cc: stable@vger.kernel.org Fixes: fee742b50289 ("smb3.1.1: enable negotiating stronger encryption by default") Signed-off-by: Steve French <stfrench@microsoft.com>
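A hedged sketch of the kind of one-line fix this describes (the exact strings and permissions in the cifs module may differ): the parameter defaults to on, so its MODULE_PARM_DESC text should say so.

	/* Illustrative sketch only; wording in fs/smb/client may differ. */
	static bool enable_gcm_256 = true;
	module_param(enable_gcm_256, bool, 0644);
	MODULE_PARM_DESC(enable_gcm_256,
			 "Enable requesting strongest (256 bit) GCM encryption. Default: y/Y/1");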
2024-06-19  vfs: link_path_walk: do '.' and '..' detection while hashing  (Linus Torvalds)
Instead of loading the name again to detect '.' and '..', just use the fact that we already had the masked last word available when we created the name hash. Which is exactly what we'd then test for. Dealing with big-endian word ordering needs a bit of care, particularly since we have the byte-at-a-time loop as a fallback that doesn't do BE word loads. But not a big deal. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
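A hedged, little-endian-only sketch of the idea (simplified, not the kernel implementation): with the final word already masked to the component length, '.' and '..' can be recognized by comparing that word against small constants instead of re-reading the name bytes.

	/*
	 * Illustrative sketch: 'word' holds the last (partial) word of the name
	 * with bytes beyond 'len' already zeroed, in little-endian byte order.
	 */
	static inline bool is_dot_or_dotdot(unsigned long word, unsigned long len)
	{
		if (len == 1)
			return word == '.';			/* 0x2e */
		if (len == 2)
			return word == (('.' << 8) | '.');	/* 0x2e2e */
		return false;
	}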
2024-06-19  vfs: link_path_walk: clarify and improve name hashing interface  (Linus Torvalds)
Now that we clearly only care about the length of the name we just parsed, we can simplify and clarify the interface to "name_hash()", and move the actual nd->last field setting in there. That makes everything simpler, and this way we don't mix the hash and the length together only to then immediately unmix them again. We still eventually want the combined mixed "hashlen" for when we look things up in the dentry cache, but inside link_path_walk() it's simpler and clearer to just deal with the path component length. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-06-19  vfs: link_path_walk: simplify name hash flow  (Linus Torvalds)
This is one of those hot functions in path walking, and it's doing things in just the wrong order, causing some unnecessary extra work. Move the name pointer update and the setting of 'nd->last' up a bit, so that the (unlikely) filesystem-specific hashing can run on them in place, instead of having to set up a copy on the stack and copy things back and forth. Because even when the hashing is not run, it causes the stack frame of the function to be bigger to hold the unnecessary temporary copy. This also means that we never then reference the full "hashlen" field after calculating it, and can clarify the code with just using the length part. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-06-19  arm64: word-at-a-time: improve byte count calculations for LE  (Linus Torvalds)
Do the same optimization as x86-64: do __ffs() on the intermediate value that found whether there is a zero byte, before we've actually computed the final byte mask. The logic is:

has_zero(): Check if the word has a zero byte in it, which indicates the end of the loop, and prepare a value to be used for the rest of the sequence. The standard LE implementation just creates a word that has the high bit set in each byte of the word that was zero. Example: 0xaa00bbccdd00eeff -> 0x0080000000800000

prep_zero_mask(): Possibly do more prep to then clean up the initial fast result from has_zero, so that it can be combined with another zero mask with a simple logical "or" to create a final mask. This is only used on big-endian machines that use a different algorithm, and is a no-op here.

create_zero_mask(): This is "step 1" of creating the count and the mask, and is meant for any common operations between the two. In the old implementation, this actually created the zero mask, that was then used for masking and for counting the number of bits in the mask. In the new implementation, this is a no-op.

count_zero(): This takes the mask bits, and counts the number of bytes before the first zero byte. In the old implementation, it counted the number of bits in the final byte mask (which was the same as the C standard "find last set bit" that uses the silly "starts at one" counting) and shifted the value down by three. In the new implementation, we know the intermediate mask isn't zero, and it just does "find first set" with the sane semantics without any off-by-one issues, and again shifts by three (which also masks off the bit offset in the zero byte itself). Example: 0x0080000000800000 -> 2

zero_bytemask(): This takes the mask bits, and turns it into an actual byte mask of the bytes preceding the first zero byte. In the old implementation, this was a no-op, because the work had already been done by create_zero_mask(). In the new implementation, this does what create_zero_mask() used to do. Example: 0x0080000000800000 -> 0x000000000000ffff

The difference between the old and the new implementation is that "count_zero()" ends up scheduling better because it is being done on a value that is available earlier (before the final mask). But more importantly, it can be implemented without the insane semantics of the standard bit finding helpers that have the off-by-one issue and have to special-case the zero mask situation. On arm64, the new "count_zero()" ends up just "rbit + clz" plus the shift right that then ends up being subsumed by the "add to final length". Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
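A hedged, generic-C sketch of the little-endian scheme described above (compiler builtins instead of the arm64 "rbit + clz" sequence; assumes a 64-bit unsigned long): has_zero() builds the intermediate mask, count_zero() does a find-first-set on it, and zero_bytemask() expands it into the byte mask.

	#include <stdio.h>

	#define ONEBYTES	0x0101010101010101UL
	#define HIGHBITS	0x8080808080808080UL

	/* High bit set in every byte of 'v' that was zero (standard LE trick). */
	static unsigned long has_zero(unsigned long v)
	{
		return (v - ONEBYTES) & ~v & HIGHBITS;
	}

	/* Bytes before the first zero byte: find-first-set, then divide by 8. */
	static unsigned long count_zero(unsigned long mask)
	{
		return __builtin_ctzl(mask) >> 3;	/* mask is known non-zero */
	}

	/* Byte mask covering the bytes that precede the first zero byte. */
	static unsigned long zero_bytemask(unsigned long mask)
	{
		return ((mask & -mask) >> 7) - 1;	/* isolate lowest set bit */
	}

	int main(void)
	{
		unsigned long m = has_zero(0xaa00bbccdd00eeffUL);

		/* prints: 0080000000800000 2 000000000000ffff */
		printf("%016lx %lu %016lx\n", m, count_zero(m), zero_bytemask(m));
		return 0;
	}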
2024-06-19  x86-64: word-at-a-time: improve byte count calculations  (Linus Torvalds)
This switches x86-64 over to using 'tzcount' instead of the integer multiply trick to turn the bytemask information into actual byte counts. We even had a comment saying that a fast bit count instruction is better than a multiply, but x86 bit counting has traditionally been "questionably fast", and so avoiding it was the right thing back in the days. Now, on any half-way modern core, using bit counting is cheaper and smaller than the large constant multiply, so let's just switch over. Note that as part of switching over to counting bits, we also do it at a different point. We used to create the byte count from the final byte mask, but once you use the 'tzcount' instruction (aka 'bsf' on older CPU's), you can actually count the trailing zeroes using a value we have available earlier. In fact, we can just use the very first mask of bits that tells us whether we have any zero bytes at all. The zero bytes in the word will have the high bit set, so just doing 'tzcount' on that value and dividing by 8 will give the number of bytes that precede the first NUL character, which is exactly what we want. Note also that the input value to the tzcount is by definition not zero, since that is the condition that we already used to check the whole "do we have any zero bytes at all". So we don't need to worry about the legacy instruction behavior of pre-tzcount days when 'bsf' didn't have a defined result for zero input. The 32-bit code continues to use the simple bit op trick that is faster even on newer cores, but particularly on the older 32-bit-only ones. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
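The same calculation in hedged, portable-C form (not the kernel's x86 inline asm): tzcount on the non-zero intermediate mask, then a shift by three to convert bits to bytes.

	/* Sketch only: the kernel emits a 'tzcnt'/'rep bsf' instruction here;
	 * __builtin_ctzl() expresses the same operation in C. 'mask' has the
	 * high bit set in each zero byte and is known to be non-zero. */
	static inline unsigned long bytes_before_nul(unsigned long mask)
	{
		return __builtin_ctzl(mask) >> 3;
	}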
2024-06-19  runtime constants: add x86 architecture support  (Linus Torvalds)
This implements the runtime constant infrastructure for x86, allowing the dcache d_hash() function to be generated using the hash table address as a runtime constant, followed by a shift of the hash index by a constant. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-06-19  runtime constants: add default dummy infrastructure  (Linus Torvalds)
This adds the initial dummy support for 'runtime constants' for when an architecture doesn't actually support an implementation of fixing up said runtime constants. This ends up being the fallback to just using the variables as regular __ro_after_init variables, and changes the dcache d_hash() function to use this model. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
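A hedged sketch of what the dummy fallback can look like (macro names follow the description and should be treated as illustrative): the runtime-constant accessors degenerate to plain reads of ordinary __ro_after_init variables.

	/* Illustrative fallback sketch, not necessarily the exact upstream macros. */
	#define runtime_const_ptr(sym)			(sym)
	#define runtime_const_shift_right_32(val, sym)	((unsigned int)(val) >> (sym))
	#define runtime_const_init(type, sym)		do { } while (0)

	static struct hlist_bl_head *dentry_hashtable __ro_after_init;
	static unsigned int d_hash_shift __ro_after_init;

	static inline struct hlist_bl_head *d_hash(unsigned long hashlen)
	{
		return runtime_const_ptr(dentry_hashtable) +
		       runtime_const_shift_right_32(hashlen, d_hash_shift);
	}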
2024-06-19  vfs: dcache: move hashlen_hash() from callers into d_hash()  (Linus Torvalds)
Both __d_lookup_rcu() and __d_lookup_rcu_op_compare() have the full 'name_hash' value of the qstr that they want to look up, and mask it off to just the low 32-bit hash before calling down to d_hash(). Other callers just load the 32-bit hash and pass it as the argument. If we move the masking into d_hash() itself, it simplifies the two callers that currently do the masking, and is a no-op for the other cases. It doesn't actually change the generated code since the compiler will inline d_hash() and see that the end result is the same. [ Technically, since the parse tree changes, the code generation may not be 100% the same, and for me on x86-64, this does result in gcc switching the operands around for one 'cmpl' instruction. So not necessarily the exact same code generation, but equivalent ] However, this does encapsulate the 'd_hash()' operation more, and makes the shift operation in particular be a "shift 32 bits right, return full word". Which matches the instruction semantics on both x86-64 and arm64 better, since a 32-bit shift will clear the upper bits. That makes the next step of introducing a "shift by runtime constant" more obvious and generates the shift with no extraneous type masking. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-06-19  arm64: start using 'asm goto' for put_user()  (Linus Torvalds)
This generates noticeably better code since we don't need to test the error register etc, the exception just jumps to the error handling directly. Unlike get_user(), there's no need to worry about old compilers. All supported compilers support the regular non-output 'asm goto', as pointed out by Nathan Chancellor. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-06-19  arm64: start using 'asm goto' for get_user() when available  (Linus Torvalds)
This generates noticeably better code with compilers that support it, since we don't need to test the error register etc, the exception just jumps to the error handling directly. Note that this also marks SW_TTBR0_PAN incompatible with KCSAN support, since KCSAN wants to save and restore the user access state. KCSAN and SW_TTBR0_PAN were probably always incompatible, but it became obvious only when implementing the unsafe user access functions. At that point the default empty user_access_save/restore() functions weren't provided by the default fallback functions. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
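The mechanism can be illustrated outside the kernel; a toy user-space x86 example of 'asm goto' (GCC/Clang, AT&T syntax; not the arm64 get_user() code) shows why no error register needs to be tested afterwards: the asm can branch straight to a C label.

	/* Toy demonstration only: the asm transfers control directly to the
	 * 'failed' label instead of setting a status value for C to test. */
	#include <stdio.h>

	static int maybe_fail(int x)
	{
		asm goto("test %0, %0\n\t"
			 "jz %l[failed]"	/* jump straight to the C label */
			 : /* no outputs */
			 : "r"(x)
			 : "cc"
			 : failed);
		return 0;
	failed:
		return -1;
	}

	int main(void)
	{
		printf("%d %d\n", maybe_fail(1), maybe_fail(0));	/* 0 -1 */
		return 0;
	}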
2024-06-19  arm64: dts: rockchip: fix PMIC interrupt pin on ROCK Pi E  (FUKAUMI Naoki)
Use GPIO0_A2 as the interrupt pin for the PMIC. GPIO2_A6 was used for the pre-production board. Fixes: b918e81f2145 ("arm64: dts: rockchip: rk3328: Add Radxa ROCK Pi E") Signed-off-by: FUKAUMI Naoki <naoki@radxa.com> Link: https://lore.kernel.org/r/20240619050047.1217-1-naoki@radxa.com Signed-off-by: Heiko Stuebner <heiko@sntech.de>
2024-06-19  drm/amd/display: Disable CONFIG_DRM_AMD_DC_FP for RISC-V with clang  (Nathan Chancellor)
Commit 77acc6b55ae4 ("riscv: add support for kernel-mode FPU") and commit a28e4b672f04 ("drm/amd/display: use ARCH_HAS_KERNEL_FPU_SUPPORT") enabled support for CONFIG_DRM_AMD_DC_FP with RISC-V. Unfortunately, this exposed -Wframe-larger-than warnings (which become fatal with CONFIG_WERROR=y) when building ARCH=riscv allmodconfig with clang:

drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn32/display_mode_vba_32.c:58:13: error: stack frame size (2448) exceeds limit (2048) in 'DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation' [-Werror,-Wframe-larger-than]
   58 | static void DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation(
      |             ^
1 error generated.

Many functions in this file use a large number of parameters, which must be passed on the stack at a certain point due to register exhaustion, which can cause high stack usage when inlining and issues with stack slot analysis get involved. While the compiler can and should do better (as GCC uses less than half the amount of stack space for the same function), it is not as simple a fix as adjusting the functions not to take a large number of parameters. Unfortunately, modifying these files to avoid the problem is a difficult approach to justify because any revisions to the files in the kernel tree never make it back to the original source (so copies of the code for newer hardware revisions just reintroduce the issue) and the files are hard to read/modify due to being "gcc-parsable HW gospel, coming straight from HW engineers". Avoid building the problematic code for RISC-V by modifying the existing condition for arm64 that exists for the same reason. Factor out the logical not to make the condition a little more readable naturally. Fixes: a28e4b672f04 ("drm/amd/display: use ARCH_HAS_KERNEL_FPU_SUPPORT") Reported-by: Palmer Dabbelt <palmer@rivosinc.com> Closes: https://lore.kernel.org/20240530145741.7506-2-palmer@rivosinc.com/ Reviewed-by: Harry Wentland <harry.wentland@amd.com> Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2024-06-19  drm/amd/display: Attempt to avoid empty TUs when endpoint is DPIA  (Michael Strauss)
[WHY] Empty SST TUs are illegal to transmit over a USB4 DP tunnel. Current policy is to configure stream encoder to pack 2 pixels per pclk even when ODM combine is not in use, allowing seamless dynamic ODM reconfiguration. However, in extreme edge cases where average pixel count per TU is less than 2, this can lead to unexpected empty TU generation during compliance testing. For example, VIC 1 with a 1xHBR3 link configuration will average 1.98 pix/TU. [HOW] Calculate average pixel count per TU, and block 2 pixels per clock if endpoint is a DPIA tunnel and pixel clock is low enough that we will never require 2:1 ODM combine. Cc: stable@vger.kernel.org # 6.6+ Reviewed-by: Wenjing Liu <wenjing.liu@amd.com> Acked-by: Hamza Mahfooz <hamza.mahfooz@amd.com> Signed-off-by: Michael Strauss <michael.strauss@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2024-06-19  drm/amd/display: change dram_clock_latency to 34us for dcn35  (Paul Hsieh)
[Why & How] Current DRAM setting would cause underflow on customer platform. Modify dram_clock_change_latency_us from 11.72 to 34.0 us as per recommendation from HW team Reviewed-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com> Acked-by: Zaeem Mohamed <zaeem.mohamed@amd.com> Signed-off-by: Paul Hsieh <paul.hsieh@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2024-06-19  drm/amd/display: Change dram_clock_latency to 34us for dcn351  (Daniel Miess)
[Why] Intermittent underflow observed when using 4k144 display on dcn351 [How] Update dram_clock_change_latency_us from 11.72us to 34us Reviewed-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com> Acked-by: Zaeem Mohamed <zaeem.mohamed@amd.com> Signed-off-by: Daniel Miess <daniel.miess@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2024-06-19  drm/amdgpu: revert "take runtime pm reference when we attach a buffer" v2  (Christian König)
This reverts commit b8c415e3bf98 ("drm/amdgpu: take runtime pm reference when we attach a buffer") and commit 425285d39afd ("drm/amdgpu: add amdgpu runpm usage trace for separate funcs"). Taking a runtime pm reference for DMA-buf is actually completely unnecessary and even dangerous. The problem is that calling pm_runtime_get_sync() from the DMA-buf callbacks is illegal because we have the reservation locked here which is also taken during resume. So this would deadlock. When the buffer is in GTT it is still accessible even when the GPU is powered down and when it is in VRAM the buffer gets migrated to GTT before powering down. The only use case which would make it mandatory to keep the runtime pm reference would be if we pin the buffer into VRAM, and that's not something we currently do. v2: improve the commit message Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> CC: stable@vger.kernel.org
2024-06-19  drm/amdgpu: Indicate CU harvest info to CP  (Harish Kasiviswanathan)
To achieve full occupancy, the CP hardware needs to know if the CUs in an SE are symmetrically or asymmetrically harvested. v2: Reset is_symmetric_cus for each loop Signed-off-by: Harish Kasiviswanathan <Harish.Kasiviswanathan@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2024-06-19  drm/amd/display: prevent register access while in IPS  (Hamza Mahfooz)
We can't read/write DCN registers while in IPS, since that can cause the system to hang. So, before proceeding with the access in that scenario, force the system out of IPS. Cc: stable@vger.kernel.org # 6.6+ Reviewed-by: Roman Li <roman.li@amd.com> Signed-off-by: Hamza Mahfooz <hamza.mahfooz@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2024-06-19  drm/amdgpu: fix locking scope when flushing tlb  (Yunxiang Li)
Which method is used to flush the TLB does not depend on whether a reset is in progress or not. We should skip the flush altogether if the GPU will get reset. So put both paths under the reset_domain read lock. Signed-off-by: Yunxiang Li <Yunxiang.Li@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> CC: stable@vger.kernel.org
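A hedged sketch of the locking scope being described (helper names are hypothetical, not the amdgpu functions): take the reset-domain read lock first, skip the flush entirely if a reset is pending, and only then choose the flush method.

	/* Illustrative sketch only -- not the actual amdgpu code. */
	if (down_read_trylock(&adev->reset_domain->sem)) {
		if (use_kiq)
			flush_gpu_tlb_via_kiq(adev, vmid);	/* hypothetical helper */
		else
			flush_gpu_tlb_via_mmio(adev, vmid);	/* hypothetical helper */
		up_read(&adev->reset_domain->sem);
	}
	/* trylock failing means a reset holds the domain: skip the flush. */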
2024-06-19  drm/amd/display: Remove redundant idle optimization check  (Roman Li)
[Why] Disabling idle optimization for each atomic commit is unnecessary, and can lead to a potential race condition. [How] Remove idle optimization check from amdgpu_dm_atomic_commit_tail() Fixes: 196107eb1e15 ("drm/amd/display: Add IPS checks before dcn register access") Cc: stable@vger.kernel.org Reviewed-by: Hamza Mahfooz <hamza.mahfooz@amd.com> Acked-by: Roman Li <roman.li@amd.com> Signed-off-by: Roman Li <roman.li@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2024-06-19  workqueue: Avoid nr_active manipulation in grabbing inactive items  (Lai Jiangshan)
Currently, try_to_grab_pending() activates the inactive item and subsequently treats it as though it were a standard activated item. This approach prevents duplicating handling logic for both active and inactive items, yet the premature activation of an inactive item triggers trace_workqueue_activate_work(), yielding an unintended user-space-visible side effect. The unnecessary increment of nr_active (which is no longer a simple counter), followed by a counteracted decrement, is also inefficient and complicates the code. Just remove the nr_active manipulation code in grabbing inactive items. Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2024-06-19  selftest/cgroup: Update test_cpuset_prs.sh to match changes  (Waiman Long)
Unlike the list of isolated CPUs, it is not easy to programmatically determine what sched domains are being created by the scheduler just by examining the data in various kernfs filesystems. The easiest way to get this information is by enabling the /sys/kernel/debug/sched/verbose file to make that information show up on the console. This is also what the test_cpuset_prs.sh script is doing when the -v flag is given. It is rather hard to fetch the data from the console and compare it to the expected result. An easier way is to dump the expected sched-domain information out to the console so that it can be visually compared with the actual sched domain data. However, this has to be done manually by visual inspection and so will only be done once in a while. Moreover, the preceding cpuset commits also change the cpuset behavior, requiring corresponding changes in some test cases as well as new test cases to test the newly added functionality. Signed-off-by: Waiman Long <longman@redhat.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2024-06-19  cgroup/cpuset: Make cpuset.cpus.exclusive independent of cpuset.cpus  (Waiman Long)
The "cpuset.cpus.exclusive.effective" value is currently limited to a subset of its "cpuset.cpus". This makes the exclusive CPUs distribution hierarchy subsumed within the larger "cpuset.cpus" hierarchy. We have to decide on what CPUs are used locally and what CPUs can be passed down as exclusive CPUs down the hierarchy and combine them into "cpuset.cpus". The advantage of the current scheme is to have only one hierarchy to worry about. However, it makes it harder to use as all the "cpuset.cpus" values have to be properly set along the way down to the designated remote partition root. It also makes it more cumbersome to find out what CPUs can be used locally. Make creation of remote partition simpler by breaking the dependency of "cpuset.cpus.exclusive" on "cpuset.cpus" and make them independent entities. Now we have two separate hierarchies - one for setting "cpuset.cpus.effective" and the other one for setting "cpuset.cpus.exclusive.effective". We may not need to set "cpuset.cpus" when we activate a partition root anymore. Also update Documentation/admin-guide/cgroup-v2.rst and cpuset.c comment to document this change. Suggested-by: Petr Malat <oss@malat.biz> Signed-off-by: Waiman Long <longman@redhat.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2024-06-19  cgroup/cpuset: Delay setting of CS_CPU_EXCLUSIVE until valid partition  (Waiman Long)
The CS_CPU_EXCLUSIVE flag is currently set whenever cpuset.cpus.exclusive is set to make sure that the exclusivity test will be run to ensure its exclusiveness. At the same time, this flag can be changed whenever the partition root state is changed. For example, the CS_CPU_EXCLUSIVE flag will be reset whenever a partition root becomes invalid. This makes using CS_CPU_EXCLUSIVE to ensure exclusiveness a bit fragile. The current scheme also makes setting up a cpuset.cpus.exclusive hierarchy to enable a remote partition harder as cpuset.cpus.exclusive cannot overlap with any cpuset.cpus of sibling cpusets if their cpuset.cpus.exclusive aren't set. Solve these issues by deferring the setting of the CS_CPU_EXCLUSIVE flag until the cpuset becomes a valid partition root, while adding new checks in validate_change() to ensure that cpuset.cpus.exclusive of sibling cpusets cannot overlap. An additional check is also added to validate_change() to make sure that cpuset.cpus of one cpuset cannot be a subset of cpuset.cpus.exclusive of a sibling cpuset to avoid the problem that none of those CPUs will be available when these exclusive CPUs are extracted out to a newly enabled partition root. The Documentation/admin-guide/cgroup-v2.rst file is updated to document the new constraints. Signed-off-by: Waiman Long <longman@redhat.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2024-06-19  selftest/cgroup: Fix test_cpuset_prs.sh problems reported by test robot  (Waiman Long)
The test robot reported two different problems when running the test_cpuset_prs.sh test. # ./test_cpuset_prs.sh: line 106: echo: write error: Input/output error # : # Effective cpus changed to 0-1,4-7 after test 4! The write error is caused by writing to /dev/console. It looks like some systems may not have /dev/console configured or in a writeable state. Fix this by checking the existence of /dev/console before attempting to write it. After the completion of each test run, the script will check if the cpuset state is reset back to the original state. That usually takes a while to happen. The test script inserts some artificial delay to make sure that the reset has completed. The current setting is about 80ms. That may not be enough in some cases especially if the test system is slow. Double it to 160ms to minimize the chance of this type of failure. Reported-by: kernel test robot <oliver.sang@intel.com> Closes: https://lore.kernel.org/oe-lkp/202406141712.dbbaa8fd-oliver.sang@intel.com Signed-off-by: Waiman Long <longman@redhat.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2024-06-19  cgroup/cpuset: Fix remote root partition creation problem  (Waiman Long)
Since commit 181c8e091aae ("cgroup/cpuset: Introduce remote partition"), a remote partition can be created underneath a non-partition root cpuset as long as its exclusive_cpus are set to distribute exclusive CPUs down to its children. The generate_sched_domains() function, however, doesn't take into account this new behavior and hence will fail to create the sched domain needed for a remote root (non-isolated) partition. There are two issues related to remote partition support. First of all, generate_sched_domains() has a fast path that is activated if root_load_balance is true and top_cpuset.nr_subparts is non-zero. The latter condition isn't quite correct for remote partitions as nr_subparts just shows the number of local child partitions underneath it. There can be no local child partition under top_cpuset even if there are remote partitions further down the hierarchy. Fix that by checking for subpartitions_cpus which contains exclusive CPUs allocated to both local and remote partitions. Secondly, the valid partition check for subtree skipping in the csa[] generation loop isn't enough as a remote partition does not need to have a partition root parent. Fix this problem by breaking the csa[] array generation loop of generate_sched_domains() into v1 and v2 specific parts and checking a cpuset's exclusive_cpus before skipping its subtree in the v2 case. Also simplify generate_sched_domains() for cgroup v2 as only non-isolating partition roots should be included in building the cpuset array and none of the v1 scheduling attributes other than a different way to create an isolated partition are supported. Fixes: 181c8e091aae ("cgroup/cpuset: Introduce remote partition") Signed-off-by: Waiman Long <longman@redhat.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2024-06-19  ASoC: amd: acp: move chip->flag variable assignment  (Vijendar Mukunda)
The chip->flag variable assignment will be skipped when acp platform device creation is skipped. In this case, the chip->flag value will not be set. The chip->flag variable should be assigned along with the other members of the 'chip' structure. Move the chip->flag assignment prior to acp platform device creation. Fixes: 3a94c8ad0aae ("ASoC: amd: acp: add code for scanning acp pdm controller") Signed-off-by: Vijendar Mukunda <Vijendar.Mukunda@amd.com> Link: https://msgid.link/r/20240617072844.871468-3-Vijendar.Mukunda@amd.com Signed-off-by: Mark Brown <broonie@kernel.org>
2024-06-19  ASoC: amd: acp: remove i2s configuration check in acp_i2s_probe()  (Vijendar Mukunda)
ACP supports different pin configurations for I2S IO. Checking ACP pin configuration value against specific value breaks the functionality for other I2S pin configurations. This check is no longer required in i2s dai driver probe call as i2s configuration check will be verified during acp platform device creation sequence. Remove i2s_mode check in acp_i2s_probe() function. Fixes: b24484c18b10 ("ASoC: amd: acp: ACP code generic to support newer platforms") Signed-off-by: Vijendar Mukunda <Vijendar.Mukunda@amd.com> Link: https://msgid.link/r/20240617072844.871468-2-Vijendar.Mukunda@amd.com Signed-off-by: Mark Brown <broonie@kernel.org>
2024-06-19  ASoC: amd: acp: add a null check for chip_pdev structure  (Vijendar Mukunda)
When acp platform device creation is skipped, chip->chip_pdev value will remain NULL. Add NULL check for chip->chip_pdev structure in snd_acp_resume() function to avoid null pointer dereference. Fixes: 088a40980efb ("ASoC: amd: acp: add pm ops support for acp pci driver") Signed-off-by: Vijendar Mukunda <Vijendar.Mukunda@amd.com> Link: https://msgid.link/r/20240617072844.871468-1-Vijendar.Mukunda@amd.com Signed-off-by: Mark Brown <broonie@kernel.org>
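A hedged sketch of the described guard (surrounding code abridged; structure and helper names follow the ACP driver loosely): bail out of the resume path early when no ACP platform device was created.

	/* Illustrative sketch only. */
	static int snd_acp_resume(struct device *dev)
	{
		struct acp_chip_info *chip = dev_get_drvdata(dev);

		/* Platform-device creation may have been skipped; nothing to resume. */
		if (!chip->chip_pdev)
			return 0;

		/* ... restore ACP state via the platform device ... */
		return 0;
	}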
2024-06-19  Merge tag 'probes-fixes-v6.10-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace  (Linus Torvalds)
Pull probes fix from Masami Hiramatsu:

 - Restrict gen-API tests for synthetic and kprobe events to only be built as modules, as they generate dynamic events that cannot be removed, causing ftracetest and startup selftests to fail

* tag 'probes-fixes-v6.10-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
  tracing: Build event generation tests only as modules
2024-06-19  Merge tag 'mips-fixes_6.10_1' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux  (Linus Torvalds)
Pull MIPS fixes from Thomas Bogendoerfer:

 - fix for BCM6538 boards
 - fix RB532 PCI workaround

* tag 'mips-fixes_6.10_1' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux:
  Revert "MIPS: pci: lantiq: restore reset gpio polarity"
  mips: bmips: BCM6358: make sure CBR is correctly set
  MIPS: pci: lantiq: restore reset gpio polarity
  MIPS: Routerboard 532: Fix vendor retry check code
2024-06-19  cgroup: avoid the unnecessary list_add(dying_tasks) in cgroup_exit()  (Oleg Nesterov)
cgroup_exit() needs to do this only if the exiting task is a leader and it is not the last live thread. The patch doesn't use delay_group_leader(); the atomic_read(signal->live) check matches the code in css_task_iter_advance() more closely. cgroup_release() can now check list_empty(task->cg_list) before it takes css_set_lock and calls css_set_skip_task_iters(). Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2024-06-19  selftests: add selftest for the SRv6 End.DX6 behavior with netfilter  (Jianguo Wu)
This selftest is designed for evaluating the SRv6 End.DX6 behavior used with netfilter (rpfilter), in this example for implementing IPv6 L3 VPN use cases. Signed-off-by: Jianguo Wu <wujianguo@chinatelecom.cn> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-06-19  selftests: add selftest for the SRv6 End.DX4 behavior with netfilter  (Jianguo Wu)
This selftest is designed for evaluating the SRv6 End.DX4 behavior used with netfilter (rpfilter), in this example for implementing IPv4 L3 VPN use cases. Signed-off-by: Jianguo Wu <wujianguo@chinatelecom.cn> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-06-19  netfilter: move the sysctl nf_hooks_lwtunnel into the netfilter core  (Jianguo Wu)
Currently, the sysctl net.netfilter.nf_hooks_lwtunnel depends on the nf_conntrack module, but the nf_conntrack module is not always loaded. Therefore, accessing net.netfilter.nf_hooks_lwtunnel may fail with an error. Move the sysctl nf_hooks_lwtunnel into the netfilter core. Fixes: 7a3f5b0de364 ("netfilter: add netfilter hooks to SRv6 data plane") Suggested-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Jianguo Wu <wujianguo@chinatelecom.cn> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-06-19  drm/fbdev-dma: Only set smem_start if enabled per module option  (Thomas Zimmermann)
Only export struct fb_info.fix.smem_start if that is required by the user and the memory does not come from vmalloc(). Setting struct fb_info.fix.smem_start breaks systems where DMA memory is backed by vmalloc address space. An example error is shown below.

[ 3.536043] ------------[ cut here ]------------
[ 3.540716] virt_to_phys used for non-linear address: 000000007fc4f540 (0xffff800086001000)
[ 3.552628] WARNING: CPU: 4 PID: 61 at arch/arm64/mm/physaddr.c:12 __virt_to_phys+0x68/0x98
[ 3.565455] Modules linked in:
[ 3.568525] CPU: 4 PID: 61 Comm: kworker/u12:5 Not tainted 6.6.23-06226-g4986cc3e1b75-dirty #250
[ 3.577310] Hardware name: NXP i.MX95 19X19 board (DT)
[ 3.582452] Workqueue: events_unbound deferred_probe_work_func
[ 3.588291] pstate: 60400009 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[ 3.595233] pc : __virt_to_phys+0x68/0x98
[ 3.599246] lr : __virt_to_phys+0x68/0x98
[ 3.603276] sp : ffff800083603990
[ 3.677939] Call trace:
[ 3.680393] __virt_to_phys+0x68/0x98
[ 3.684067] drm_fbdev_dma_helper_fb_probe+0x138/0x238
[ 3.689214] __drm_fb_helper_initial_config_and_unlock+0x2b0/0x4c0
[ 3.695385] drm_fb_helper_initial_config+0x4c/0x68
[ 3.700264] drm_fbdev_dma_client_hotplug+0x8c/0xe0
[ 3.705161] drm_client_register+0x60/0xb0
[ 3.709269] drm_fbdev_dma_setup+0x94/0x148

Additionally, DMA memory is assumed to be contiguous in physical address space, which is not guaranteed by vmalloc(). Resolve this by checking the module flag drm_leak_fbdev_smem when DRM allocated the instance of struct fb_info. Fbdev-dma then sets smem_start only if required (via FBINFO_HIDE_SMEM_START). Also guarantee that the framebuffer is not located in vmalloc address space. Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de> Reported-by: Peng Fan (OSS) <peng.fan@oss.nxp.com> Closes: https://lore.kernel.org/dri-devel/20240604080328.4024838-1-peng.fan@oss.nxp.com/ Reported-by: Geert Uytterhoeven <geert+renesas@glider.be> Closes: https://lore.kernel.org/dri-devel/CAMuHMdX3N0szUvt1VTbroa2zrT1Nye_VzPb5qqCZ7z5gSm7HGw@mail.gmail.com/ Fixes: a51c7663f144 ("drm/fb-helper: Consolidate CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM") Tested-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: <stable@vger.kernel.org> # v6.4+ Link: https://patchwork.freedesktop.org/patch/msgid/20240617152843.11886-1-tzimmermann@suse.de
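A hedged sketch of the described logic (flag and field names follow the text above; treat the control flow as illustrative): smem_start is exported only when the leak option is set and the buffer is not vmalloc-backed.

	/* Illustrative sketch only. */
	if (drm_leak_fbdev_smem && !is_vmalloc_addr(screen_buffer)) {
		/* Physically contiguous DMA memory: safe to expose to userspace. */
		info->fix.smem_start = page_to_phys(virt_to_page(screen_buffer));
	} else {
		/* Keep smem_start hidden from userspace. */
		info->flags |= FBINFO_HIDE_SMEM_START;
	}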
2024-06-19  io_uring: Allocate only necessary memory in io_probe  (Gabriel Krisman Bertazi)
We write at most IORING_OP_LAST entries in the probe buffer, so we don't need to allocate temporary space for more than that. As a side effect, we no longer can overflow "size". Signed-off-by: Gabriel Krisman Bertazi <krisman@suse.de> Link: https://lore.kernel.org/r/20240619020620.5301-3-krisman@suse.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
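A hedged sketch of the allocation change described (abridged from io_uring's probe path): clamp the entry count to IORING_OP_LAST before sizing the temporary buffer, which also removes the possibility of a size overflow.

	/* Illustrative sketch only. */
	struct io_uring_probe *p;
	size_t len;

	/* We never report more than IORING_OP_LAST ops, so never allocate more. */
	nr_args = min_t(unsigned int, nr_args, IORING_OP_LAST);
	len = struct_size(p, ops, nr_args);

	p = kzalloc(len, GFP_KERNEL);
	if (!p)
		return -ENOMEM;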
2024-06-19  io_uring: Fix probe of disabled operations  (Gabriel Krisman Bertazi)
io_probe checks io_issue_def->not_supported, but we never really set that field, as we mark non-supported functions through a specific ->prep handler. This means we end up returning IO_URING_OP_SUPPORTED, even for disabled operations. Fix it by just checking the prep handler itself. Fixes: 66f4af93da57 ("io_uring: add support for probing opcodes") Signed-off-by: Gabriel Krisman Bertazi <krisman@suse.de> Link: https://lore.kernel.org/r/20240619020620.5301-2-krisman@suse.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
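A hedged sketch of the check described (identifier names are assumptions based on io_uring internals, not verified against the patch): an opcode counts as supported only if its prep handler is not the "not supported" stub.

	/* Illustrative sketch only. */
	for (i = 0; i < nr_args; i++) {
		p->ops[i].op = i;
		/* A disabled op keeps the EOPNOTSUPP prep stub as its handler;
		 * everything else is genuinely supported. */
		if (io_issue_defs[i].prep != io_eopnotsupp_prep)
			p->ops[i].flags = IO_URING_OP_SUPPORTED;
	}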
2024-06-19  thermal: int340x: processor_thermal: Support shared interrupts  (Srinivas Pandruvada)
On some systems the processor thermal device interrupt is shared with other PCI devices. In this case return IRQ_NONE from the interrupt handler when the interrupt is not for the processor thermal device. Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com> Fixes: f0658708e863 ("thermal: int340x: processor_thermal: Use non MSI interrupts by default") Cc: 6.7+ <stable@vger.kernel.org> # 6.7+ Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
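A hedged sketch of the shared-interrupt pattern described (the status check and helpers are hypothetical): when the line fires for another device, report IRQ_NONE so the kernel can run the remaining handlers on that line.

	/* Illustrative sketch only -- not the processor_thermal driver code. */
	static irqreturn_t proc_thermal_irq_handler(int irq, void *devid)
	{
		struct proc_thermal_device *proc_dev = devid;

		/* Shared line: check whether our device actually raised it. */
		if (!proc_thermal_check_irq_status(proc_dev))	/* hypothetical helper */
			return IRQ_NONE;

		proc_thermal_handle_event(proc_dev);		/* hypothetical helper */
		return IRQ_HANDLED;
	}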
2024-06-19  seg6: fix parameter passing when calling NF_HOOK() in End.DX4 and End.DX6 behaviors  (Jianguo Wu)
input_action_end_dx4() and input_action_end_dx6() call NF_HOOK() for the PREROUTING hook. In the PREROUTING hook, we should pass a valid indev and a NULL outdev to NF_HOOK(); otherwise it may trigger a NULL pointer dereference, as below:

[74830.647293] BUG: kernel NULL pointer dereference, address: 0000000000000090
[74830.655633] #PF: supervisor read access in kernel mode
[74830.657888] #PF: error_code(0x0000) - not-present page
[74830.659500] PGD 0 P4D 0
[74830.660450] Oops: 0000 [#1] PREEMPT SMP PTI
...
[74830.664953] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
[74830.666569] RIP: 0010:rpfilter_mt+0x44/0x15e [ipt_rpfilter]
...
[74830.689725] Call Trace:
[74830.690402] <IRQ>
[74830.690953] ? show_trace_log_lvl+0x1c4/0x2df
[74830.692020] ? show_trace_log_lvl+0x1c4/0x2df
[74830.693095] ? ipt_do_table+0x286/0x710 [ip_tables]
[74830.694275] ? __die_body.cold+0x8/0xd
[74830.695205] ? page_fault_oops+0xac/0x140
[74830.696244] ? exc_page_fault+0x62/0x150
[74830.697225] ? asm_exc_page_fault+0x22/0x30
[74830.698344] ? rpfilter_mt+0x44/0x15e [ipt_rpfilter]
[74830.699540] ipt_do_table+0x286/0x710 [ip_tables]
[74830.700758] ? ip6_route_input+0x19d/0x240
[74830.701752] nf_hook_slow+0x3f/0xb0
[74830.702678] input_action_end_dx4+0x19b/0x1e0
[74830.703735] ? input_action_end_t+0xe0/0xe0
[74830.704734] seg6_local_input_core+0x2d/0x60
[74830.705782] lwtunnel_input+0x5b/0xb0
[74830.706690] __netif_receive_skb_one_core+0x63/0xa0
[74830.707825] process_backlog+0x99/0x140
[74830.709538] __napi_poll+0x2c/0x160
[74830.710673] net_rx_action+0x296/0x350
[74830.711860] __do_softirq+0xcb/0x2ac
[74830.713049] do_softirq+0x63/0x90

input_action_end_dx4() passes a NULL indev to NF_HOOK(), which finally triggers a NULL dereference in rpfilter_mt()->rpfilter_is_loopback():

static bool
rpfilter_is_loopback(const struct sk_buff *skb, const struct net_device *in)
{
        // in is NULL
        return skb->pkt_type == PACKET_LOOPBACK || in->flags & IFF_LOOPBACK;
}

Fixes: 7a3f5b0de364 ("netfilter: add netfilter hooks to SRv6 data plane") Signed-off-by: Jianguo Wu <wujianguo@chinatelecom.cn> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-06-19  Merge branch 'for-6.11/block-limits' into for-6.11/block  (Jens Axboe)
Merge in last round of queue limits changes from Christoph.

* for-6.11/block-limits: (26 commits)
  block: move the bounce flag into the features field
  block: move the skip_tagset_quiesce flag to queue_limits
  block: move the pci_p2pdma flag to queue_limits
  block: move the zone_resetall flag to queue_limits
  block: move the zoned flag into the features field
  block: move the poll flag to queue_limits
  block: move the dax flag to queue_limits
  block: move the nowait flag to queue_limits
  block: move the synchronous flag to queue_limits
  block: move the stable_writes flag to queue_limits
  block: move the io_stat flag setting to queue_limits
  block: move the add_random flag to queue_limits
  block: move the nonrot flag to queue_limits
  block: move cache control settings out of queue->flags
  block: remove blk_flush_policy
  block: freeze the queue in queue_attr_store
  nbd: move setting the cache control flags to __nbd_set_size
  virtio_blk: remove virtblk_update_cache_mode
  loop: fold loop_update_rotational into loop_reconfigure_limits
  loop: also use the default block size from an underlying block device
  ...

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-06-19  iomap: don't increase i_size in iomap_write_end()  (Zhang Yi)
This reverts commit 0841ea4a3b41 ("iomap: keep on increasing i_size in iomap_write_end()"). Now that xfs zeroes out the tail blocks aligned to the allocation unit size and converts the tail blocks to unwritten for realtime inodes on truncate down, an unaligned truncate down of a realtime inode can no longer expose stale data, so we no longer need to increase i_size for IOMAP_UNSHARE and IOMAP_ZERO in iomap_write_end(). Signed-off-by: Zhang Yi <yi.zhang@huawei.com> Link: https://lore.kernel.org/r/20240618142112.1315279-3-yi.zhang@huaweicloud.com Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <djwong@kernel.org> Signed-off-by: Christian Brauner <brauner@kernel.org>
2024-06-19  block: move the bounce flag into the features field  (Christoph Hellwig)
Move the bounce flag into the features field to reclaim a little bit of space. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Hannes Reinecke <hare@suse.de> Link: https://lore.kernel.org/r/20240617060532.127975-27-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>