2021-10-31  perf script: Fix PERF_SAMPLE_WEIGHT_STRUCT support  (Kan Liang)
-F weight in perf script is broken:

    # ./perf mem record
    # ./perf script -F weight
    Samples for 'dummy:HG' event do not have WEIGHT attribute set. Cannot print 'weight' field.

The sample type PERF_SAMPLE_WEIGHT_STRUCT is an alternative to the PERF_SAMPLE_WEIGHT sample type. They share the same space, weight. The lower 32 bits are exactly the same for both sample types. The higher 32 bits may be different for different architectures. On a new kernel on x86, PERF_SAMPLE_WEIGHT_STRUCT is used. On an old kernel or other architectures, PERF_SAMPLE_WEIGHT is used. With -F weight, perf script currently only checks the input string "weight" against the PERF_SAMPLE_WEIGHT sample type, because commit ea8d0ed6eae3 ("perf tools: Support PERF_SAMPLE_WEIGHT_STRUCT") didn't update the PERF_SAMPLE_WEIGHT_STRUCT sample type for perf script. On a new kernel on x86, the check therefore fails. Use PERF_SAMPLE_WEIGHT_TYPE, which covers both sample types, to replace PERF_SAMPLE_WEIGHT. Fixes: ea8d0ed6eae37b01 ("perf tools: Support PERF_SAMPLE_WEIGHT_STRUCT") Reported-by: Joe Mario <jmario@redhat.com> Reviewed-by: Kajol Jain <kjain@linux.ibm.com> Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Tested-by: Jiri Olsa <jolsa@redhat.com> Tested-by: Joe Mario <jmario@redhat.com> Acked-by: Jiri Olsa <jolsa@redhat.com> Acked-by: Joe Mario <jmario@redhat.com> Cc: Andi Kleen <ak@linux.intel.com> Link: https://lore.kernel.org/r/1632929894-102778-1-git-send-email-kan.liang@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
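A minimal sketch of the resulting check, for illustration (the helper name is hypothetical; PERF_SAMPLE_WEIGHT_TYPE is simply the union of the two sample-type bits described above):

    #include <linux/perf_event.h>

    /* Covers both the old weight layout and the new struct layout. */
    #define PERF_SAMPLE_WEIGHT_TYPE  (PERF_SAMPLE_WEIGHT | PERF_SAMPLE_WEIGHT_STRUCT)

    /* Hypothetical helper: accept the "weight" field for either sample type,
     * instead of testing PERF_SAMPLE_WEIGHT alone. */
    static int sample_has_weight(__u64 sample_type)
    {
            return (sample_type & PERF_SAMPLE_WEIGHT_TYPE) != 0;
    }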
2021-10-31  perf callchain: Fix compilation on powerpc with gcc11+  (Jiri Olsa)
Got the following build failure on powerpc:

    CC      arch/powerpc/util/skip-callchain-idx.o
    In function ‘check_return_reg’,
        inlined from ‘check_return_addr’ at arch/powerpc/util/skip-callchain-idx.c:213:7,
        inlined from ‘arch_skip_callchain_idx’ at arch/powerpc/util/skip-callchain-idx.c:265:7:
    arch/powerpc/util/skip-callchain-idx.c:54:18: error: ‘dwarf_frame_register’ accessing 96 bytes \
      in a region of size 64 [-Werror=stringop-overflow=]
       54 |  result = dwarf_frame_register(frame, ra_regno, ops_mem, &ops, &nops);
          |           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    arch/powerpc/util/skip-callchain-idx.c: In function ‘arch_skip_callchain_idx’:
    arch/powerpc/util/skip-callchain-idx.c:54:18: note: referencing argument 3 of type ‘Dwarf_Op *’
    In file included from /usr/include/elfutils/libdwfl.h:32,
                     from arch/powerpc/util/skip-callchain-idx.c:10:
    /usr/include/elfutils/libdw.h:1069:12: note: in a call to function ‘dwarf_frame_register’
     1069 | extern int dwarf_frame_register (Dwarf_Frame *frame, int regno,
          |            ^~~~~~~~~~~~~~~~~~~~
    cc1: all warnings being treated as errors

The dwarf_frame_register() arguments changed with [1]; update ops_mem accordingly.

[1] https://sourceware.org/git/?p=elfutils.git;a=commit;h=5621fe5443da23112170235dd5cac161e5c75e65

Reviewed-by: Kajol Jain <kjain@linux.ibm.com> Signed-off-by: Jiri Olsa <jolsa@redhat.com> Acked-by: Mark Wieelard <mjw@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Michael Petlan <mpetlan@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> Link: https://lore.kernel.org/r/20210928195253.1267023-1-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
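A minimal sketch of the adjustment described, not the exact perf code (the wrapper below is illustrative): newer libdw declares dwarf_frame_register() with a three-element ops_mem scratch array, so the caller has to reserve three Dwarf_Op slots instead of two.

    #include <elfutils/libdw.h>

    static int query_return_reg(Dwarf_Frame *frame, int ra_regno)
    {
            Dwarf_Op ops_mem[3];    /* was [2]; libdw may now write 3 entries (96 bytes) here */
            Dwarf_Op *ops = NULL;
            size_t nops;

            return dwarf_frame_register(frame, ra_regno, ops_mem, &ops, &nops);
    }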
2021-10-31  perf script: Check session->header.env.arch before using it  (Song Liu)
When perf.data is not written cleanly, we would like to process the existing data as much as possible (please see the f_header.data.size == 0 condition in perf_session__read_header). However, perf.data with partial data may crash perf. Specifically, we see a crash in 'perf script' for a NULL session->header.env.arch. Fix this by checking session->header.env.arch before using it to determine native_arch. Also split the if condition so it is easier to read. Committer notes: If it is a pipe, we already assume it is a native arch, so there is no need to check session->header.env.arch. Signed-off-by: Song Liu <songliubraving@fb.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: kernel-team@fb.com Cc: stable@vger.kernel.org Link: http://lore.kernel.org/lkml/20211004053238.514936-1-songliubraving@fb.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
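A sketch of the reworked check, assuming a uname()-based comparison as perf script does (fragment; uts, session and native_arch come from the surrounding code):

    if (perf_data__is_pipe(session->data)) {
            /* Committer note above: a pipe is assumed to be a native-arch recording. */
            native_arch = true;
    } else if (session->header.env.arch &&
               !strcmp(uts.machine, session->header.env.arch)) {
            /* A partially written perf.data may leave env.arch NULL -- check it first. */
            native_arch = true;
    }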
2021-10-31  perf build: Suppress 'rm dlfilter' build message  (Adrian Hunter)
The following build message: rm dlfilters/dlfilter-test-api-v0.o is unwanted. The object file is being treated as an intermediate file and being automatically removed. Mark the object file as .SECONDARY to prevent removal and hence the message. Requested-by: Arnaldo Carvalho de Melo <acme@kernel.org> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20210930062849.110416-1-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2021-10-31  erofs: don't trigger WARN() when decompression fails  (Gao Xiang)
syzbot reported a WARNING [1] due to corrupted compressed data. As Dmitry said, "If this is not a kernel bug, then the code should not use WARN. WARN is for kernel bugs and is recognized as such by all testing systems and humans." [1] https://lore.kernel.org/r/000000000000b3586105cf0ff45e@google.com Link: https://lore.kernel.org/r/20211025074311.130395-1-hsiangkao@linux.alibaba.com Cc: Dmitry Vyukov <dvyukov@google.com> Reviewed-by: Chao Yu <chao@kernel.org> Reported-by: syzbot+d8aaffc3719597e8cfb4@syzkaller.appspotmail.com Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
2021-10-31  sched/fair: Cleanup newidle_balance  (Vincent Guittot)
update_next_balance() uses sd->last_balance which is not modified by load_balance() so we can merge the 2 calls in one place. No functional change Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Acked-by: Mel Gorman <mgorman@suse.de> Link: https://lore.kernel.org/r/20211019123537.17146-6-vincent.guittot@linaro.org
2021-10-31  sched/fair: Remove sysctl_sched_migration_cost condition  (Vincent Guittot)
With a default value of 500us, sysctl_sched_migration_cost is significantly higher than the cost of load_balance. Remove the condition and rely on sd->max_newidle_lb_cost to abort newidle_balance. Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Acked-by: Mel Gorman <mgorman@suse.de> Link: https://lore.kernel.org/r/20211019123537.17146-5-vincent.guittot@linaro.org
2021-10-31  sched/fair: Wait before decaying max_newidle_lb_cost  (Vincent Guittot)
Decay max_newidle_lb_cost only when it has not been updated for a while and ensure to not decay a recently changed value. Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Acked-by: Mel Gorman <mgorman@suse.de> Link: https://lore.kernel.org/r/20211019123537.17146-4-vincent.guittot@linaro.org
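A rough sketch of the idea, with the ~1% decay factor taken from the existing code and the timestamp field name (last_decay_max_lb_cost) assumed:

    /* Record when the cost was last raised and only decay after ~1s of quiet. */
    if (cost > sd->max_newidle_lb_cost) {
            sd->max_newidle_lb_cost = cost;
            sd->last_decay_max_lb_cost = jiffies;
    } else if (time_after(jiffies, sd->last_decay_max_lb_cost + HZ)) {
            sd->max_newidle_lb_cost = (sd->max_newidle_lb_cost * 253) / 256;
            sd->last_decay_max_lb_cost = jiffies;
    }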
2021-10-31  sched/fair: Skip update_blocked_averages if we are defering load balance  (Vincent Guittot)
In newidle_balance(), the scheduler skips load balancing to the newly idle cpu when the 1st sd of this_rq satisfies:

    this_rq->avg_idle < sd->max_newidle_lb_cost

Doing a costly call to update_blocked_averages() will not be useful and simply adds overhead when this condition is true. Check the condition early in newidle_balance() to skip update_blocked_averages() when possible. Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Acked-by: Mel Gorman <mgorman@suse.de> Link: https://lore.kernel.org/r/20211019123537.17146-3-vincent.guittot@linaro.org
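A rough sketch of the early bail-out in newidle_balance(), simplified and not the literal diff:

    rcu_read_lock();
    sd = rcu_dereference_check_sched_domain(this_rq->sd);
    if (!READ_ONCE(this_rq->rd->overload) ||
        (sd && this_rq->avg_idle < sd->max_newidle_lb_cost)) {
            /* Balancing would be aborted anyway, so do not pay for the
             * costly update_blocked_averages() call either. */
            rcu_read_unlock();
            goto out;
    }
    rcu_read_unlock();

    update_blocked_averages(this_cpu);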
2021-10-31  sched/fair: Account update_blocked_averages in newidle_balance cost  (Vincent Guittot)
The time spent updating the blocked load can be significant depending on the complexity of the cgroup hierarchy. Take this time into account in the cost of the 1st load balance of a newly idle cpu. Also reduce the number of calls to sched_clock_cpu() and track more actual work. Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Acked-by: Mel Gorman <mgorman@suse.de> Link: https://lore.kernel.org/r/20211019123537.17146-2-vincent.guittot@linaro.org
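A rough sketch of charging the blocked-load update to the newidle cost (variable names illustrative):

    u64 t0, t1;

    t0 = sched_clock_cpu(this_cpu);
    update_blocked_averages(this_cpu);
    t1 = sched_clock_cpu(this_cpu);
    curr_cost += t1 - t0;       /* the update is now part of the newidle-balance cost */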
2021-10-30  Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi  (Linus Torvalds)
Pull SCSI fixes from James Bottomley: "Three small fixes, all in drivers, and one sizeable update to the UFS driver to remove the HPB 2.0 feature that has been objected to by Jens and Christoph. Although the UFS patch is large and last minute, it's essentially the least intrusive way of resolving the objections in time for the 5.15 release"

* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
  scsi: ufs: ufshpb: Remove HPB2.0 flows
  scsi: mpt3sas: Fix reference tag handling for WRITE_INSERT
  scsi: ufs: ufs-exynos: Correct timeout value setting registers
  scsi: ibmvfc: Fix up duplicate response detection
2021-10-30  selftests/x86/iopl: Adjust to the faked iopl CLI/STI usage  (Borislav Petkov)
The commit in Fixes changed the iopl emulation to not #GP on CLI and STI because it would break some insane luserspace tools which would toggle interrupts. The corresponding selftest relied on the fact that executing CLI/STI would trigger a #GP and detected it that way, but since that #GP is not happening anymore, the detection is now wrong too. Extend the test to actually look at the IF flag and whether executing those insns had any effect on it. The STI detection needs to know that interrupts were previously disabled, so pass that in from the previous CLI test; i.e., the STI test needs to follow a previous CLI one for it to make sense. Fixes: b968e84b509d ("x86/iopl: Fake iopl(3) CLI/STI usage") Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20211030083939.13073-1-bp@alien8.de
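A self-contained sketch of how the IF flag can be sampled from user space, in the spirit of the updated test (not the literal selftest code):

    #include <stdbool.h>

    #define X86_EFLAGS_IF (1UL << 9)

    static bool interrupts_enabled(void)
    {
            unsigned long flags;

            /* Read EFLAGS/RFLAGS; the interrupt flag is bit 9. */
            asm volatile ("pushf\n\tpop %0" : "=rm" (flags));
            return flags & X86_EFLAGS_IF;
    }

    /* Sample before and after executing CLI/STI under iopl(3): with the
     * faked emulation, IF must be unchanged by those instructions. */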
2021-10-30  task_stack: Fix end_of_stack() for architectures with upwards-growing stack  (Helge Deller)
The function end_of_stack() returns a pointer to the last entry of a stack. For architectures like parisc where the stack grows upwards return the pointer to the highest address in the stack. Without this change I faced a crash on parisc, because the stackleak functionality wrote STACKLEAK_POISON to the lowest address and thus overwrote the first 4 bytes of the task_struct which included the TIF_FLAGS. Signed-off-by: Helge Deller <deller@gmx.de>
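A sketch of the shape of such a fix (kernel-internal; the exact guards in the real header may differ):

    static inline unsigned long *end_of_stack(const struct task_struct *task)
    {
    #ifdef CONFIG_STACK_GROWSUP
            /* Upwards-growing stack (e.g. parisc): the last entry is just
             * below the top of the stack area, not right after task_struct. */
            return (unsigned long *)((unsigned long)task + THREAD_SIZE) - 1;
    #else
            return (unsigned long *)(task + 1);
    #endif
    }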
2021-10-30  parisc: Use PRIV_USER instead of 3 in entry.S  (Helge Deller)
Signed-off-by: Helge Deller <deller@gmx.de>
2021-10-30  parisc: Use FRAME_SIZE and FRAME_ALIGN from assembly.h  (Helge Deller)
Signed-off-by: Helge Deller <deller@gmx.de>
2021-10-30  parisc: Allocate task struct with stack frame alignment  (Helge Deller)
We will put the stack directly behind the task struct, so make sure that we allocate it with an alignment of 64 bytes. Signed-off-by: Helge Deller <deller@gmx.de>
2021-10-30  parisc: Define FRAME_ALIGN and PRIV_USER/PRIV_KERNEL in assembly.h  (Helge Deller)
Signed-off-by: Helge Deller <deller@gmx.de>
2021-10-30  parisc: fix warning in flush_tlb_all  (Sven Schnelle)
I've got the following splat after enabling preemption:

    [ 3.724721] BUG: using __this_cpu_add() in preemptible [00000000] code: swapper/0/1
    [ 3.734630] caller is __this_cpu_preempt_check+0x38/0x50
    [ 3.740635] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 5.15.0-rc4-64bit+ #324
    [ 3.744605] Hardware name: 9000/785/C8000
    [ 3.744605] Backtrace:
    [ 3.744605]  [<00000000401d9d58>] show_stack+0x74/0xb0
    [ 3.744605]  [<0000000040c27bd4>] dump_stack_lvl+0x10c/0x188
    [ 3.744605]  [<0000000040c27c84>] dump_stack+0x34/0x48
    [ 3.744605]  [<0000000040c33438>] check_preemption_disabled+0x178/0x1b0
    [ 3.744605]  [<0000000040c334f8>] __this_cpu_preempt_check+0x38/0x50
    [ 3.744605]  [<00000000401d632c>] flush_tlb_all+0x58/0x2e0
    [ 3.744605]  [<00000000401075c0>] 0x401075c0
    [ 3.744605]  [<000000004010b8fc>] 0x4010b8fc
    [ 3.744605]  [<00000000401080fc>] 0x401080fc
    [ 3.744605]  [<00000000401d5224>] do_one_initcall+0x128/0x378
    [ 3.744605]  [<0000000040102de8>] 0x40102de8
    [ 3.744605]  [<0000000040c33864>] kernel_init+0x60/0x3a8
    [ 3.744605]  [<00000000401d1020>] ret_from_kernel_thread+0x20/0x28

Fix this by moving the __inc_irq_stat() into the locked section. Signed-off-by: Sven Schnelle <svens@stackframe.org> Signed-off-by: Helge Deller <deller@gmx.de>
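A sketch of the move described; the lock and counter names are illustrative, since the message only mentions __inc_irq_stat():

    /* before: the per-cpu counter was bumped while still preemptible */
    spin_lock(&tlb_lock);           /* spin_lock() disables preemption */
    __inc_irq_stat(irq_tlb_count);  /* after: safely inside the locked section */
    /* ... perform the actual TLB flush ... */
    spin_unlock(&tlb_lock);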
2021-10-30  parisc: disable preemption in send_IPI_allbutself()  (Sven Schnelle)
Otherwise we might not stop all other CPUs. Signed-off-by: Sven Schnelle <svens@stackframe.org> Signed-off-by: Helge Deller <deller@gmx.de>
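A sketch of the intent (the send_IPI_single() helper and the op type are assumptions based on the usual parisc SMP code layout): without preemption disabled, the task could migrate mid-loop and the CPU returned by smp_processor_id() would change, so one CPU might be skipped or IPI'd by mistake.

    static void send_IPI_allbutself(enum ipi_message_type op)
    {
            int cpu;

            preempt_disable();      /* keep smp_processor_id() stable for the walk */
            for_each_online_cpu(cpu) {
                    if (cpu != smp_processor_id())
                            send_IPI_single(cpu, op);
            }
            preempt_enable();
    }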
2021-10-30  parisc: fix preempt_count() check in entry.S  (Sven Schnelle)
preempt_count in struct thread_info is an unsigned int, but the entry.S code used LDREG, which generates a 64 bit load when compiled for 64 bit. Fix this to use an ldw, and also change the condition in the compare one line below to only compare 32 bits, although ldw zero extends and that should also work with a 64 bit compare. Signed-off-by: Sven Schnelle <svens@stackframe.org> Signed-off-by: Helge Deller <deller@gmx.de>
2021-10-30  parisc: deduplicate code in flush_cache_mm() and flush_cache_range()  (Sven Schnelle)
Parts of both functions are the same, so deduplicate them. No functional change. Signed-off-by: Sven Schnelle <svens@stackframe.org> Signed-off-by: Helge Deller <deller@gmx.de>
2021-10-30  parisc: disable preemption during local tlb flush  (Sven Schnelle)
flush_cache_mm() and flush_cache_range() fetch %sr3 via mfsp(). If it matches mm->context, they flush caches and the TLB. However, the TLB is cpu-local, so if the code gets preempted shortly after the mfsp(), and later resumed on another CPU, the wrong TLB is flushed. Signed-off-by: Sven Schnelle <svens@stackframe.org> Signed-off-by: Helge Deller <deller@gmx.de>
2021-10-30  parisc: Add KFENCE support  (Helge Deller)
Signed-off-by: Helge Deller <deller@gmx.de>
2021-10-30  parisc: Switch to ARCH_STACKWALK implementation  (Helge Deller)
It's shorter and kfence currently depends on this stack unwinding implementation. Signed-off-by: Helge Deller <deller@gmx.de>
2021-10-30  parisc: make parisc_acctyp() available outside of faults.c  (Helge Deller)
When adding kfence support, we need to tell kfence_handle_page_fault() if the interrupted assembler statement is a read or write operation. Signed-off-by: Helge Deller <deller@gmx.de>
2021-10-30  parisc/unwind: use copy_from_kernel_nofault()  (Sven Schnelle)
I have no idea why get_user() is used there, but we're unwinding the kernel stack, so we should use copy_from_kernel_nofault(). Signed-off-by: Sven Schnelle <svens@stackframe.org> Signed-off-by: Helge Deller <deller@gmx.de>
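A sketch of the substitution (the stack slot address is illustrative):

    unsigned long val;

    /* get_user() is meant for user addresses; for a kernel stack word use: */
    if (copy_from_kernel_nofault(&val, (void *)stack_addr, sizeof(val)))
            return;         /* unreadable: stop unwinding instead of faulting */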
2021-10-30  Merge tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux  (Linus Torvalds)
Pull clk fix from Stephen Boyd: "One fix for the composite clk that broke when we changed this clk type to use the determine_rate instead of round_rate clk op by default. This caused lots of problems on Rockchip SoCs because they heavily use the composite clk code to model the clk tree"

* tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux:
  clk: composite: Also consider .determine_rate for rate + mux composites
2021-10-30  Merge tag 'riscv-for-linus-5.15-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux  (Linus Torvalds)
Pull RISC-V fixes from Palmer Dabbelt: "These are pretty late, but they do fix concrete issues.

 - ensure the trap vector's address is aligned.

 - avoid re-populating the KASAN shadow memory.

 - allow kasan to build without warnings, which have recently become errors"

* tag 'riscv-for-linus-5.15-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
  riscv: Fix asan-stack clang build
  riscv: Do not re-populate shadow memory with kasan_populate_early_shadow
  riscv: fix misalgned trap vector base address
2021-10-30  locking: Remove spin_lock_flags() etc  (Arnd Bergmann)
parisc, ia64 and powerpc32 are the only remaining architectures that provide custom arch_{spin,read,write}_lock_flags() functions, which are meant to re-enable interrupts while waiting for a spinlock. However, none of these can actually run into this codepath, because it is only called on architectures without CONFIG_GENERIC_LOCKBREAK, or when CONFIG_DEBUG_LOCK_ALLOC is set without CONFIG_LOCKDEP, and none of those combinations are possible on the three architectures. Going back in the git history, it appears that arch/mn10300 may have been able to run into this code path, but there is a good chance that it never worked. On the architectures that still exist, it was already impossible to hit back in 2008 after the introduction of CONFIG_GENERIC_LOCKBREAK, and possibly earlier. As this is all dead code, just remove it and the helper functions built around it. For arch/ia64, the inline asm could be cleaned up, but it seems safer to leave it untouched. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Helge Deller <deller@gmx.de> # parisc Link: https://lore.kernel.org/r/20211022120058.1031690-1-arnd@kernel.org
2021-10-30  perf/x86/intel: Fix ICL/SPR INST_RETIRED.PREC_DIST encodings  (Stephane Eranian)
This patch fixes the encoding for INST_RETIRED.PREC_DIST as published by Intel (download.01.org/perfmon/) for Icelake. The official encoding is event code 0x00 umask 0x1, a change from Skylake where it was code 0xc0 umask 0x1. With this patch applied it is possible to run:

    $ perf record -a -e cpu/event=0x00,umask=0x1/pp .....

Whereas before this would fail. To avoid problems with tools which may use the old code, we maintain the old encoding for Icelake. Signed-off-by: Stephane Eranian <eranian@google.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20211014001214.2680534-1-eranian@google.com
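For reference, event=0x00/umask=0x1 packs into a raw config value of 0x0100 (the umask occupies bits 8-15); a hypothetical way to request it directly, not part of the patch:

    #include <linux/perf_event.h>

    struct perf_event_attr attr = {
            .type       = PERF_TYPE_RAW,
            .size       = sizeof(struct perf_event_attr),
            .config     = 0x0100,   /* event 0x00, umask 0x1: INST_RETIRED.PREC_DIST on ICL */
            .precise_ip = 2,        /* the /pp modifier from the perf command above */
    };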
2021-10-30  scsi: ufs: ufshpb: Remove HPB2.0 flows  (Avri Altman)
The Host Performance Buffer feature allows UFS read commands to carry the physical media addresses along with the LBAs, thus allowing less internal L2P-table switches in the device. HPB1.0 allowed a single LBA, while HPB2.0 increases this capacity up to 255 blocks. Carrying more than a single record, the read operation is no longer purely of type "read" but a "hybrid" command: Writing the physical address to the device in one operation and reading back the required payload in another. The JEDEC HPB spec defines two commands for this operation: HPB-WRITE-BUFFER (0x2) to write the physical addresses to device, and HPB-READ to read the payload. With the current HPB design the UFS driver has no alternative but to divide the READ request into 2 separate commands: HPB-WRITE-BUFFER and HPB-READ. This causes a great deal of aggravation to the block layer guys who demanded that we completely revert the entire HPB driver regardless of the huge amount of corporate effort already invested in it. As a compromise, remove only the pieces that implement the 2.0 specification. This is done as a matter of urgency for the final 5.15 release. Link: https://lore.kernel.org/r/20211030062301.248-1-avri.altman@wdc.com Tested-by: Avri Altman <avri.altman@wdc.com> Tested-by: Bean Huo <beanhuo@micron.com> Reviewed-by: Bart Van Assche <bvanassche@acm.org> Reviewed-by: Bean Huo <beanhuo@micron.com> Co-developed-by: James Bottomley <James.Bottomley@HansenPartnership.com> Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com> Signed-off-by: Avri Altman <avri.altman@wdc.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2021-10-29  net: bridge: switchdev: fix shim definition for br_switchdev_mdb_notify  (Vladimir Oltean)
br_switchdev_mdb_notify() is conditionally compiled only when CONFIG_NET_SWITCHDEV=y and CONFIG_BRIDGE_IGMP_SNOOPING=y. It is called from br_mdb.c, which is conditionally compiled only when CONFIG_BRIDGE_IGMP_SNOOPING=y. The shim definition of br_switchdev_mdb_notify() is therefore needed for the case where CONFIG_NET_SWITCHDEV=n, however we mistakenly put it there for the case where CONFIG_BRIDGE_IGMP_SNOOPING=n. This results in build failures when CONFIG_BRIDGE_IGMP_SNOOPING=y and CONFIG_NET_SWITCHDEV=n. To fix this, put the shim definition right next to br_switchdev_fdb_notify(), which is properly guarded by NET_SWITCHDEV=n. Since this is called only from br_mdb.c, we need not take any extra safety precautions, when NET_SWITCHDEV=n and BRIDGE_IGMP_SNOOPING=n, this shim definition will be absent but nobody will be needing it. Fixes: 9776457c784f ("net: bridge: mdb: move all switchdev logic to br_switchdev.c") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://lore.kernel.org/r/20211029223606.3450523-1-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-10-29  Merge branch '1GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue  (Jakub Kicinski)
Tony Nguyen says:

====================
1GbE Intel Wired LAN Driver Updates 2021-10-29

This series contains updates to igc driver only.

Sasha removes an unnecessary media type check, adds a new device ID, and changes a device reset to a port reset command.
====================

Link: https://lore.kernel.org/r/20211029174101.2970935-1-anthony.l.nguyen@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-10-29  bnxt_en: Remove not used other ULP define  (Leon Romanovsky)
There is only one bnxt ULP in the upstream kernel, so the definition for the other ULP can be safely removed. Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Reviewed-by: Michael Chan <michael.chan@broadcom.com> Link: https://lore.kernel.org/r/3a8ea720b28ec4574648012d2a00208f1144eff5.1635527693.git.leonro@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-10-29  Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue  (Jakub Kicinski)
Tony Nguyen says:

====================
40GbE Intel Wired LAN Driver Updates 2021-10-29

This series contains updates to i40e, ice, igb, and ixgbevf drivers.

Yang Li simplifies return statements of bool values for i40e and ice.

Jan Kundrát corrects problems with I2C bit-banging for igb.

Colin Ian King removes unneeded variable initialization for ixgbevf.
====================

Link: https://lore.kernel.org/r/20211029164641.2714265-1-anthony.l.nguyen@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-10-29  netdevsim: remove max_vfs dentry  (Jakub Kicinski)
Commit d395381909a3 ("netdevsim: Add max_vfs to bus_dev") added this file and saved the dentry for no apparent reason. Link: https://lore.kernel.org/r/20211028211753.22612-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-10-29  mailbox: imx: support i.MX8ULP S4 MU  (Peng Fan)
Like the i.MX8 SCU, the i.MX8ULP S4 also has a vendor-specific protocol.

 - bind the SCU/S4 MU parts to share one tx/rx/init API to keep the code simple.
 - the S4 msg max size is very large, so allocate the space at driver probe time rather than using a local on-stack variable.
 - the S4 MU has 8 TR and 4 RR, which is different from the i.MX8 MU, so adapt the code to reflect this.

Tested on i.MX8MP, i.MX8ULP Signed-off-by: Peng Fan <peng.fan@nxp.com> Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
2021-10-29  dt-bindings: mailbox: imx-mu: add i.MX8ULP S400 MU support  (Peng Fan)
Similar to the i.MX8QM/QXP SCU, the i.MX8ULP SCU MU is dedicated by hardware design for communication between the S400 and the Cortex-A cores, and it cannot be reused for other purposes. To use the S400 MU more effectively, add a "fsl,imx8ulp-mu-s4" compatible to support fast IPC. Signed-off-by: Peng Fan <peng.fan@nxp.com> Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
2021-10-29  ACPI/PCC: Add maintainer for PCC mailbox driver  (Sudeep Holla)
Not much functionality has been added since the PCC driver was introduced 5 years ago. There is a need to restructure the driver while adding support for PCC Extended subspaces (types 3 & 4), and more rework is needed as more users adopt PCC on arm64 platforms. In order to ease that, I would like to take responsibility for maintaining this driver. Acked-by: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
2021-10-29  mailbox: pcc: Move bulk of PCCT parsing into pcc_mbox_probe  (Sudeep Holla)
Move the PCCT subspace parsing and allocation into pcc_mbox_probe so that we can get rid of global PCC channel and mailbox controller data. It also helps to make use of devm_* APIs for all the allocations. Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
2021-10-29  mailbox: pcc: Add support for PCCT extended PCC subspaces(type 3/4)  (Sudeep Holla)
With all the plumbing in place to avoid accessing PCCT type and other fields directly from the PCCT table all the time, let us now add the support for extended PCC subspaces(type 3 and 4). Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
2021-10-29  mailbox: pcc: Drop handling invalid bit-width in {read,write}_register  (Sudeep Holla)
pcc_chan_reg_init now checks that the register bit width is within the list [8, 16, 32, 64] and flags an error if that is not the case. Therefore there is no need to handle an invalid bit-width in both read_register and write_register; we can drop that handling along with the return values for these 2 functions. Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
2021-10-29  mailbox: pcc: Avoid accessing PCCT table in pcc_send_data and pcc_mbox_irq  (Sudeep Holla)
Now that con_priv is available solely for the PCC mailbox controller driver, let us use it to save the channel-specific information so that we can use it whenever required, instead of parsing the PCCT table entries every time in both pcc_send_data and pcc_mbox_irq. We can now use the newly introduced PCC register bundle to simplify both saving the channel-specific information and accessing it without repeated checks for the subspace type. Reviewed-by: Cristian Marussi <cristian.marussi@arm.com> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
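A sketch of the idea; struct pcc_chan_info comes from later in this series, while the 'db' member and the register-bundle helper name are assumptions:

    static int pcc_send_data(struct mbox_chan *chan, void *data)
    {
            struct pcc_chan_info *pchan = chan->con_priv;   /* cached at probe time */

            /* Ring the doorbell through the cached register bundle instead of
             * re-parsing the PCCT subspace entry on every transmission. */
            return pcc_chan_reg_read_modify_write(&pchan->db);
    }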
2021-10-29  mailbox: pcc: Add PCC register bundle and associated accessor functions  (Sudeep Holla)
Extended PCC subspaces introduces more registers into the PCCT. In order to consolidate access to these registers and to keep all the details contained in one place, let us introduce PCC register bundle that holds the ACPI Generic Address Structure as well as the virtual address for the same if it is mapped in the OS. It also contains the various masks used to access the register and the associated read, write and read-modify-write accessors. We can also clean up the initialisations by having a helper function for the same. Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
2021-10-29  mailbox: pcc: Rename doorbell ack to platform interrupt ack register  (Sudeep Holla)
The specification refers to this register and the associated bitmask as the platform interrupt acknowledge register. Let us rename it so that it is easier to map and understand. Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
2021-10-29  mailbox: pcc: Use PCC mailbox channel pointer instead of standard  (Sudeep Holla)
Now that we have all the shared memory region information populated in pcc_mbox_chan, let us propagate a pointer to it as the return value of pcc_mbox_request_channel. This eliminates the need for the individual users of the PCC mailbox to parse the PCCT subspace entries and fetch the shmem information. It also eliminates the need for the PCC mailbox controller to set con_priv to the PCCT subspace entries. This is required as con_priv is private to the controller driver, meant to attach private data associated with the channel, and not meant to be used by the mailbox clients/users. Let us convert all the users of pcc_mbox_{request,free}_channel to use the new interface. Cc: Jean Delvare <jdelvare@suse.com> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Wolfram Sang <wsa@kernel.org> Acked-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
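A sketch of the channel handle a client now receives (field names follow the description above; treat them as illustrative):

    struct pcc_mbox_chan {
            struct mbox_chan *mchan;        /* the underlying mailbox channel */
            u64 shmem_base_addr;            /* shared memory region, parsed once */
            u64 shmem_size;
            u32 latency;
            u32 max_access_rate;
            u16 min_turnaround_time;
    };

    /* Clients call pcc_mbox_request_channel() and use the returned
     * pcc_mbox_chan instead of parsing the PCCT subspace themselves. */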
2021-10-29  mailbox: pcc: Add pcc_mbox_chan structure to hold shared memory region info  (Sudeep Holla)
Currently the PCC mailbox controller sets con_priv in each channel to hold the pointer to the pcct subspace entry it corresponds to. The mailbox user then fetches this pointer from the channel descriptor they get when they request the channel, and using that pointer they parse the pcct entry again to fetch all the information about the shared memory region. In order to stop the individual users of the PCC mailbox from parsing the PCCT subspace entries to fetch the same information, let us consolidate it in the pcc mailbox controller by parsing all the shared memory region information into a structure that can also hold the mbox_chan pointer it represents. This can then be used as the main PCC mailbox channel pointer that we return from pcc_mbox_request_channel instead of the standard mailbox channel pointer. Reviewed-by: Cristian Marussi <cristian.marussi@arm.com> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
2021-10-29  mailbox: pcc: Consolidate subspace doorbell register parsing  (Sudeep Holla)
Extended PCC subspaces (Type 3 and 4) differ from the generic (Type 0) and HW-Reduced Communication (Type 1 and 2) subspace structures. However, some fields share the same offsets, and the same type of structure can be used to extract the fields. In order to simplify that, let us move all the doorbell register parsing into pcc_parse_subspace_db_reg and consolidate it there. It will be easier to extend if required within the same function. Reviewed-by: Cristian Marussi <cristian.marussi@arm.com> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
2021-10-29  mailbox: pcc: Consolidate subspace interrupt information parsing  (Sudeep Holla)
Extended PCC subspaces (Type 3 and 4) differ from the generic (Type 0) and HW-Reduced Communication (Type 1 and 2) subspace structures. However, some fields share the same offsets, and the same type of structure can be used to extract the fields. In order to simplify that, let us move all the IRQ related information parsing into pcc_parse_subspace_irq and consolidate it there. It will be easier to extend if required within the same function. Reviewed-by: Cristian Marussi <cristian.marussi@arm.com> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
2021-10-29  mailbox: pcc: Refactor all PCC channel information into a structure  (Sudeep Holla)
Currently all the PCC channel specific information is stored/maintained in individual global arrays, one per piece of information. That is not scalable and not clean if we have to stash more channel specific information. A couple of reasons to stash more information are to extend the support to Type 3/4 PCCT subspaces and to avoid accessing the PCCT table entries themselves each time we need the information. This patch moves all that PCC channel specific information into a separate structure, pcc_chan_info. Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>