2020-03-22bcache: optimize barrier usage for Rmw atomic bitopsDavidlohr Bueso
We can avoid the unnecessary barrier on non-LL/SC architectures, such as x86. Instead, use smp_mb__after_atomic(). Signed-off-by: Davidlohr Bueso <dbueso@suse.de> Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
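A minimal sketch of the pattern (the flag and variable names are placeholders, not the actual bcache code):

    set_bit(SOME_FLAG, &flags);
    /*
     * A full smp_mb() here would be redundant on x86, where the locked
     * RMW already provides full ordering; smp_mb__after_atomic() is a
     * no-op there and only emits a barrier on architectures that need one.
     */
    smp_mb__after_atomic();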
2020-03-22bcache: Use scnprintf() for avoiding potential buffer overflowTakashi Iwai
Since snprintf() returns the would-be-output size instead of the actual output size, the succeeding calls may go beyond the given buffer limit. Fix it by replacing with scnprintf(). Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
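A hedged illustration of the difference, in a sysfs-show()-like pattern (buffer and strings are placeholders):

    char buf[64];
    ssize_t len = 0;

    /* snprintf() returns the would-be length, so len can grow past the
     * buffer and the next call's "sizeof(buf) - len" underflows
     */
    len += snprintf(buf + len, sizeof(buf) - len, "%s\n", "first");
    len += snprintf(buf + len, sizeof(buf) - len, "%s\n", "second");

    /* scnprintf() returns what was actually written, so len stays bounded */
    len += scnprintf(buf + len, sizeof(buf) - len, "%s\n", "first");
    len += scnprintf(buf + len, sizeof(buf) - len, "%s\n", "second");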
2020-03-22bcache: make bch_sectors_dirty_init() to be multithreadedColy Li
When attaching a cached device (a.k.a. backing device) to a cache device, bch_sectors_dirty_init() is called to count dirty sectors and stripes (see what bcache_dev_sectors_dirty_add() does) on the cache device. The counting is done by the single-threaded recursive function bch_btree_map_keys(), which iterates over all the bcache btree nodes. If the btree has a huge number of nodes, bch_sectors_dirty_init() will take quite a long time. In my testing, if the registering cache set has an existing UUID which matches an already registered cached device, the automatic attachment during registration may take more than 55 minutes. This is too long to wait for bcache to become usable in a real deployment. Fortunately, when bch_sectors_dirty_init() is called, no other thread will access the btree yet, so it is safe to do a read-only, parallelized dirty-sector count with multiple threads. This patch creates multiple threads; each thread fetches a root-node key and counts the dirty sectors in the sub-tree indexed by that key. After the sub-tree is counted, the thread fetches another root-node key, until the fetched key is NULL. The number of parallel threads depends on the number of keys in the btree root node and the number of online CPU cores: it is the smaller of the two, but never more than BCH_DIRTY_INIT_THRD_MAX. If there are only 2 keys in the root node, this patch can only make it 2x faster; but with 10 keys in the root node, it can be up to 10x faster. Signed-off-by: Coly Li <colyli@suse.de> Cc: Christoph Hellwig <hch@infradead.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
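Roughly, the thread-count logic reads like this (a sketch; dirty_init_thread_fn and the helper name are placeholders, not the actual bcache symbols, though BCH_DIRTY_INIT_THRD_MAX is the cap mentioned above):

    /* sketch: one kthread per root-node key, bounded by online CPUs and a hard cap */
    static void dirty_init_start_threads(void *state, int root_keys)
    {
        int n = min_t(int, root_keys, num_online_cpus());
        int i;

        n = min_t(int, n, BCH_DIRTY_INIT_THRD_MAX);
        for (i = 0; i < n; i++)
            kthread_run(dirty_init_thread_fn, state, "bch_dirtcnt[%d]", i);
    }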
2020-03-22bcache: make bch_btree_check() to be multithreadedColy Li
When registering a cache device, bch_btree_check() is called to check all btree nodes, to make sure the btree is consistent and not corrupted. bch_btree_check() is recursively executed in a single thread; when there is a lot of data cached and the btree is huge, it may take a very long time to check all the btree nodes. In my testing, I observed it took around 50 minutes to finish bch_btree_check(). When checking the bcache btree nodes, the cache set is not running yet and the whole tree is in a read-only state, so it is safe to create multiple threads to check the btree in parallel. This patch creates multiple threads; each thread checks, one by one, the sub-trees indexed by keys from the btree root node. The number of parallel threads depends on how many keys are in the btree root node. At most BCH_BTR_CHKTHREAD_MAX (64) threads can be created, but in practice it should be min(cpu-number/2, root-node-keys-number). Signed-off-by: Coly Li <colyli@suse.de> Cc: Christoph Hellwig <hch@infradead.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-03-22bcache: add bcache_ prefix to btree_root() and btree() macrosColy Li
This patch renames the macros btree_root() and btree() to bcache_btree_root() and bcache_btree(), to avoid a potential name clash with generic code in the future. NOTE: for product kernel maintainers, this patch can be skipped if you feel the renaming makes patch backporting inconvenient. Suggested-by: Christoph Hellwig <hch@infradead.org> Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-03-22bcache: move macro btree() and btree_root() into btree.hColy Li
In order to accelerate bcache registration speed, the macros btree() and btree_root() will be referenced outside of btree.c. This patch moves them from btree.c into btree.h, along with the related function declarations, in preparation for the following changes. Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-03-22irqchip/gic-v3: Move irq_domain_update_bus_token to after checking for NULL domainluanshi
irq_domain_update_bus_token should be called after checking for a NULL domain. Signed-off-by: Liguang Zhang <zhangliguang@linux.alibaba.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/1583983255-44115-1-git-send-email-zhangliguang@linux.alibaba.com
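Roughly, the fixed ordering looks like this (a sketch, not the exact GICv3 code):

    domain = irq_domain_create_hierarchy(parent, 0, 0, fwnode, ops, data);
    if (!domain)
        return -ENOMEM;

    /* only update the bus token once we know the domain exists */
    irq_domain_update_bus_token(domain, DOMAIN_BUS_NEXUS);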
2020-03-22irqchip/xilinx: Do not call irq_set_default_host()Mubin Sayyed
Using a default domain on DT based platforms is unnecessary. Signed-off-by: Mubin Sayyed <mubin.usman.sayyed@xilinx.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200317125600.15913-5-mubin.usman.sayyed@xilinx.com
2020-03-22irqchip/xilinx: Enable generic irq multi handlerMichal Simek
Register the default arch handler via the driver instead of pointing directly to the Xilinx intc controller. This makes the architecture code more generic. The driver calls the generic, domain-specific irq handler, which does most of the work itself. Also get rid of the concurrent_irq counter, which hasn't been exported anywhere. Based on this, the loop was also optimized by using a do/while loop instead of a goto loop. Signed-off-by: Michal Simek <michal.simek@xilinx.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Stefan Asserhall <stefan.asserhall@xilinx.com> Link: https://lore.kernel.org/r/20200317125600.15913-4-mubin.usman.sayyed@xilinx.com
2020-03-22irqchip/xilinx: Fill error code when irq domain registration failsMichal Simek
ret is not filled in when irq_domain_add_linear() fails. Signed-off-by: Michal Simek <michal.simek@xilinx.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Stefan Asserhall <stefan.asserhall@xilinx.com> Link: https://lore.kernel.org/r/20200317125600.15913-3-mubin.usman.sayyed@xilinx.com
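The shape of the fix, roughly (ops name and error code are illustrative, not necessarily the driver's exact choice):

    root_domain = irq_domain_add_linear(intc, nr_irq,
                                        &xintc_irq_domain_ops, irqc);
    if (!root_domain) {
        pr_err("irq-xilinx: Unable to create IRQ domain\n");
        ret = -EINVAL;          /* previously left unset on this path */
        goto error;
    }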
2020-03-22irqchip/xilinx: Add support for multiple instancesMubin Sayyed
Add support for cascaded interrupt controllers. The following cascaded configurations have been tested: - peripheral->xilinx-intc->xilinx-intc->gic->Cortex-A53 processor on the zcu102 board - peripheral->xilinx-intc->xilinx-intc->microblaze processor on the kcu105 board Signed-off-by: Mubin Sayyed <mubin.usman.sayyed@xilinx.com> Signed-off-by: Anirudha Sarangi <anirudha.sarangi@xilinx.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200317125600.15913-2-mubin.usman.sayyed@xilinx.com
2020-03-22irqchip/ingenic: Add support for TCU of X1000.周琰杰 (Zhou Yanjie)
Enable TCU support for Ingenic X1000, which can be supported by the existing driver. Signed-off-by: 周琰杰 (Zhou Yanjie) <zhouyanjie@wanyeetech.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/1584456160-40060-3-git-send-email-zhouyanjie@wanyeetech.com
2020-03-22irqchip/qcom-irq-combiner: Replace zero-length array with flexible-array memberGustavo A. R. Silva
The current codebase makes use of the zero-length array language extension to the C90 standard, but the preferred mechanism to declare variable-length types such as these ones is a flexible array member[1][2], introduced in C99: struct foo { int stuff; struct boo array[]; }; By making use of the mechanism above, we will get a compiler warning in case the flexible array does not occur last in the structure, which will help us prevent some kind of undefined behavior bugs from being inadvertently introduced[3] to the codebase from now on. Also, notice that, dynamic memory allocations won't be affected by this change: "Flexible array members have incomplete type, and so the sizeof operator may not be applied. As a quirk of the original implementation of zero-length arrays, sizeof evaluates to zero."[1] This issue was found with the help of Coccinelle. [1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html [2] https://github.com/KSPP/linux/issues/21 [3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour") Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200319214531.GA21326@embeddedor.com
2020-03-22irqchip/irq-bcm7038-l1: Replace zero-length array with flexible-array memberGustavo A. R. Silva
The current codebase makes use of the zero-length array language extension to the C90 standard, but the preferred mechanism to declare variable-length types such as these ones is a flexible array member[1][2], introduced in C99: struct foo { int stuff; struct boo array[]; }; By making use of the mechanism above, we will get a compiler warning in case the flexible array does not occur last in the structure, which will help us prevent some kind of undefined behavior bugs from being inadvertently introduced[3] to the codebase from now on. Also, notice that, dynamic memory allocations won't be affected by this change: "Flexible array members have incomplete type, and so the sizeof operator may not be applied. As a quirk of the original implementation of zero-length arrays, sizeof evaluates to zero."[1] This issue was found with the help of Coccinelle. [1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html [2] https://github.com/KSPP/linux/issues/21 [3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour") Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200319214438.GA21123@embeddedor.com
2020-03-22irqchip/versatile-fpga: Apply clear-mask earlierSungbo Eo
Clear its own IRQs before the parent IRQ get enabled, so that the remaining IRQs do not accidentally interrupt the parent IRQ controller. This patch also fixes a reboot bug on OX820 SoC, where the remaining rps-timer IRQ raises a GIC interrupt that is left pending. After that, the rps-timer IRQ is cleared during driver initialization, and there's no IRQ left in rps-irq when local_irq_enable() is called, which evokes an error message "unexpected IRQ trap". Fixes: bdd272cbb97a ("irqchip: versatile FPGA: support cascaded interrupts from DT") Signed-off-by: Sungbo Eo <mans0n@gorani.run> Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200321133842.2408823-1-mans0n@gorani.run
2020-03-22ath10k: Fill GCMP MIC length for PMFYingying Tang
GCMP MIC length is not filled for GCMP/GCMP-256 cipher suites in PMF enabled case. Due to mismatch in MIC length, deauth/disassoc frames are unencrypted. This patch fills proper MIC length for GCMP/GCMP-256 cipher suites. Tested HW: QCA9984, QCA9888 Tested FW: 10.4-3.6-00104 Signed-off-by: Yingying Tang <yintang@codeaurora.org> Co-developed-by: Sowmiya Sree Elavalagan <ssreeela@codeaurora.org> Signed-off-by: Sowmiya Sree Elavalagan <ssreeela@codeaurora.org> Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
2020-03-22x86/mce/amd: Add PPIN support for AMD MCEWei Huang
Newer AMD CPUs support a feature called protected processor identification number (PPIN). This feature can be detected via CPUID_Fn80000008_EBX[23]. However, CPUID alone is not enough to read the processor identification number - MSR_AMD_PPIN_CTL also needs to be configured properly. If, for any reason, MSR_AMD_PPIN_CTL[PPIN_EN] can not be turned on, such as disabled in BIOS, the CPU capability bit X86_FEATURE_AMD_PPIN needs to be cleared. When the X86_FEATURE_AMD_PPIN capability is available, the identification number is issued together with the MCE error info in order to keep track of the source of MCE errors. [ bp: Massage. ] Co-developed-by: Smita Koralahalli Channabasappa <smita.koralahallichannabasappa@amd.com> Signed-off-by: Smita Koralahalli Channabasappa <smita.koralahallichannabasappa@amd.com> Signed-off-by: Wei Huang <wei.huang2@amd.com> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Tony Luck <tony.luck@intel.com> Link: https://lkml.kernel.org/r/20200321193800.3666964-1-wei.huang2@amd.com
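A hedged sketch of the detection flow described above (simplified; the real code also has to deal with the lock bit and with trying to set PPIN_EN itself):

    u64 ppin_ctl, ppin;

    /* CPUID_Fn80000008_EBX[23] advertises PPIN support */
    if (cpuid_ebx(0x80000008) & BIT(23)) {
        rdmsrl_safe(MSR_AMD_PPIN_CTL, &ppin_ctl);
        if (ppin_ctl & BIT_ULL(1)) {            /* PPIN_EN */
            rdmsrl_safe(MSR_AMD_PPIN, &ppin);
            /* report ppin together with the MCE record */
        } else {
            /* e.g. disabled in BIOS and locked: hide the feature */
            setup_clear_cpu_cap(X86_FEATURE_AMD_PPIN);
        }
    }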
2020-03-22ACPI: PM: s2idle: Fix comment in acpi_s2idle_prepare_late()Rafael J. Wysocki
Fix a comment in acpi_s2idle_prepare_late() that has become outdated after commit f0ac20c3f613 ("ACPI: EC: Fix flushing of pending work"). Fixes: f0ac20c3f613 ("ACPI: EC: Fix flushing of pending work") Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2020-03-21selftests/net: add definition for SOL_DCCP to fix compilation errors for old libcAlan Maguire
Many systems build/test up-to-date kernels with older libcs, and an older glibc (2.17) lacks the definition of SOL_DCCP in /usr/include/bits/socket.h (it was added in the 4.6 timeframe). Adding the definition to the test program avoids a compilation failure that gets in the way of building tools/testing/selftests/net. The test itself will work once the definition is added; it will either skip (if DCCP is not configured in the kernel under test) or pass, so it seems there are no other dependencies on a more up-to-date glibc beyond that missing definition. Fixes: 11fb60d1089f ("selftests: net: reuseport_addr_any: add DCCP") Signed-off-by: Alan Maguire <alan.maguire@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
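The guard is simply a fallback definition (the value matches the kernel's socket headers):

    #ifndef SOL_DCCP
    #define SOL_DCCP 269	/* only used when the libc headers lack it */
    #endif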
2020-03-21Merge branch 'net-hns3-add-three-optimizations-for-mailbox-handling'David S. Miller
Huazhong Tan says: ==================== net: hns3: add three optimizations for mailbox handling This patchset includes three code optimizations for mailbox handling. [patch 1] adds a response code conversion. [patch 2] refactors some structure definitions for the PF and VF mailboxes. [patch 3] refactors the condition for whether the PF responds to the VF's mailbox. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-21net: hns3: refactor mailbox response scheme between PF and VFHuazhong Tan
Currently, the PF decides whether to respond to the VF depending on which mailbox message it is handling, which is a bit inflexible. The correct way is for the PF to check the mbx_need_resp field to decide whether to send a response to the VF. Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-21net: hns3: refactor the mailbox message between PF and VFYufeng Mo
To make the code more readable, this adds several new structures to replace the msg field in the structures hclge_mbx_vf_to_pf_cmd and hclge_mbx_pf_to_vf_cmd. It also uses macros instead of some magic numbers. Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-21net: hns3: add a conversion for mailbox's response codeJian Shen
Currently, when mailbox handling fails, the PF driver just responds 1 to the VF driver. That is not enough for the VF driver to find out why its mailbox failed. So the actual error should be returned to the VF; but since the error is of type int and the response field in struct hclge_mbx_pf_to_vf_cmd is of type u16, a conversion is needed. Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
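A hedged sketch of such a conversion helper (the name and the exact mapping are illustrative, not necessarily the driver's):

    static u16 errno_to_mbx_resp(int err)
    {
        int resp = abs(err);

        /* the VF only sees a u16 code, so fold -errno values into that range */
        return resp < U16_MAX ? (u16)resp : U16_MAX;
    }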
2020-03-21mptcp: Remove set but not used variable 'can_ack'YueHaibing
Fixes gcc '-Wunused-but-set-variable' warning: net/mptcp/options.c: In function 'mptcp_established_options_dss': net/mptcp/options.c:338:7: warning: variable 'can_ack' set but not used [-Wunused-but-set-variable] Commit dc093db5cc05 ("mptcp: drop unneeded checks") left this variable behind unused; remove it. Signed-off-by: YueHaibing <yuehaibing@huawei.com> Acked-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-21net: bcmgenet: always enable status blocksDoug Berger
The hardware offloading of the NETIF_F_HW_CSUM and NETIF_F_RXCSUM features requires the use of Transmit Status Blocks before transmit frame data and Receive Status Blocks before receive frame data to carry the checksum information. Unfortunately, these status blocks are currently only enabled when the NETIF_F_HW_CSUM feature is enabled. As a result NETIF_F_RXCSUM will not actually be offloaded to the hardware unless both it and NETIF_F_HW_CSUM are enabled. Fortunately, that is the default configuration. This commit addresses this issue by always enabling the use of status blocks on both transmit and receive frames. Further, it replaces the use of a dedicated flag within the driver private data structure with direct use of the netdev features flags. Fixes: 810155397890 ("net: bcmgenet: use CHECKSUM_COMPLETE for NETIF_F_RXCSUM") Signed-off-by: Doug Berger <opendmb@gmail.com> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-21Merge branch 'selftests-expand-txtimestamp-with-new-features'David S. Miller
Jian Yang says: ==================== selftests: expand txtimestamp with new features Current txtimestamp selftest issues requests with no delay, or fixed 50 usec delay. Nsec granularity is useful to measure fine-grained latency. A configurable delay is useful to simulate the case with cold cachelines. This patchset adds new flags and features to the txtimestamp selftest, including: - Printing in nsec (-N) - Polling interval (-b, -S) - Using epoll (-E, -e) - Printing statistics - Running individual tests in txtimestamp.sh ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-21selftests: txtimestamp: print statistics for timestamp events.Jian Yang
Statistics on timestamps are useful to quantify average and tail latency. Print timestamp statistics in count/avg/min/max format. Signed-off-by: Jian Yang <jianyang@google.com> Acked-by: Willem de Bruijn <willemb@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-21selftests: txtimestamp: add support for epoll().Jian Yang
Add the following new flags: -e: use level-triggered epoll() instead of poll(). -E: use edge-triggered epoll() instead of poll(). Signed-off-by: Jian Yang <jianyang@google.com> Acked-by: Willem de Bruijn <willemb@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
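In outline, the epoll variants look roughly like this (a sketch; the cfg_* flag variables are illustrative, not the test's actual names):

    int efd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN };

    if (cfg_epoll_edge_triggered)        /* -E */
        ev.events |= EPOLLET;
    epoll_ctl(efd, EPOLL_CTL_ADD, fd, &ev);
    epoll_wait(efd, &ev, 1, timeout_ms); /* instead of poll(&pfd, 1, timeout_ms) */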
2020-03-21selftests: txtimestamp: add new command-line flags.Jian Yang
A longer sleep duration between sendmsg() calls causes more cachelines to be evicted and results in higher latency. Make the duration configurable. Add the following new flags: -S: configurable sleep duration. -b: busy loop instead of poll(). Remove the following flag: -D: no delay between packets; subsumed by -S. Signed-off-by: Jian Yang <jianyang@google.com> Acked-by: Willem de Bruijn <willemb@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-21selftests: txtimestamp: allow printing latencies in nsec.Jian Yang
Txtimestamp reports latencies in usec resolution, while nsec is needed in cases such as measuring latencies on localhost. Add the following new flag: -N: print timestamps and durations in nsec (instead of usec) Signed-off-by: Jian Yang <jianyang@google.com> Acked-by: Willem de Bruijn <willemb@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-21selftests: txtimestamp: allow individual txtimestamp tests.Jian Yang
The wrapper script txtimestamp.sh executes a pre-defined list of testcases sequentially without configuration options available. Add an option (-r/--run) to setup the test namespace and pass remaining arguments to txtimestamp binary. The script still runs all tests when no argument is passed. Signed-off-by: Jian Yang <jianyang@google.com> Acked-by: Willem de Bruijn <willemb@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-21net: phy: dp83867: w/a for fld detect threshold bootstrapping issueGrygorii Strashko
When the DP83867 PHY is strapped to enable the Fast Link Drop (FLD) feature, STRAP_STS2.STRAP_FLD (reg 0x006F bit 10), the Energy Lost Threshold for FLD Energy Lost Mode, FLD_THR_CFG.ENERGY_LOST_FLD_THR (reg 0x002e bits 2:0), will default to 0x2. This may cause the PHY link to be unstable. The new DP83867 DM recommends always restoring ENERGY_LOST_FLD_THR to 0x1. Hence, restore the default value of FLD_THR_CFG.ENERGY_LOST_FLD_THR to 0x1 when FLD is enabled by bootstrapping, as recommended by the DM. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
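A hedged sketch of the workaround (the DP83867_FLD_* and strap-bit macro names are placeholders for the registers and bits quoted above):

    /* if FLD was enabled by the bootstrap pins, restore the energy-lost
     * threshold (reg 0x2e, bits 2:0) to the recommended value of 0x1
     */
    if (phy_read_mmd(phydev, DP83867_DEVADDR, DP83867_STRAP_STS2) &
        DP83867_STRAP_STS2_FLD)
        phy_modify_mmd(phydev, DP83867_DEVADDR, DP83867_FLD_THR_CFG,
                       DP83867_FLD_THR_CFG_ENERGY_LOST_THR_MASK, 0x1);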
2020-03-21Merge branch 'net-tls-Annotate-lockless-access-to-sk_prot'David S. Miller
Jakub Sitnicki says: ==================== net/tls: Annotate lockless access to sk_prot We have recently noticed that there is a case of lockless read/write to sk->sk_prot [0]. sockmap code on psock tear-down writes to sk->sk_prot, while holding sk_callback_lock. Concurrently, tcp can access it. Usually to read out the sk_prot pointer and invoke one of the ops, sk->sk_prot->handler(). The lockless write (lockless in regard to concurrent reads) happens on the following paths: tcp_bpf_{recvmsg|sendmsg} / sock_map_unref sk_psock_put sk_psock_drop sk_psock_restore_proto WRITE_ONCE(sk->sk_prot, proto) To prevent load/store tearing [1], and to make tooling aware of intentional shared access [2], we need to annotate sites that access sk_prot with READ_ONCE/WRITE_ONCE. This series kicks off the effort to do it. Starting with net/tls. [0] https://lore.kernel.org/bpf/a6bf279e-a998-84ab-4371-cd6c1ccbca5d@gmail.com/ [1] https://lwn.net/Articles/793253/ [2] https://github.com/google/ktsan/wiki/READ_ONCE-and-WRITE_ONCE ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
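The annotation pattern, in short (sketch):

    /* writer (sockmap tear-down), done under sk_callback_lock */
    WRITE_ONCE(sk->sk_prot, proto);

    /* reader (tcp/tls paths): load the pointer once, then call through it */
    const struct proto *prot = READ_ONCE(sk->sk_prot);

    prot->close(sk, timeout);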
2020-03-21net/tls: Annotate access to sk_prot with READ_ONCE/WRITE_ONCEJakub Sitnicki
sockmap performs lockless writes to sk->sk_prot on the following paths: tcp_bpf_{recvmsg|sendmsg} / sock_map_unref sk_psock_put sk_psock_drop sk_psock_restore_proto WRITE_ONCE(sk->sk_prot, proto) To prevent load/store tearing [1], and to make tooling aware of intentional shared access [2], we need to annotate other sites that access sk_prot with READ_ONCE/WRITE_ONCE macros. Change done with Coccinelle with following semantic patch: @@ expression E; identifier I; struct sock *sk; identifier sk_prot =~ "^sk_prot$"; @@ ( E = -sk->sk_prot +READ_ONCE(sk->sk_prot) | -sk->sk_prot = E +WRITE_ONCE(sk->sk_prot, E) | -sk->sk_prot +READ_ONCE(sk->sk_prot) ->I ) Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-21net/tls: Read sk_prot once when building tls proto opsJakub Sitnicki
Apart from being a "tremendous" win when it comes to generated machine code (see bloat-o-meter output for x86-64 below) this mainly prepares ground for annotating access to sk_prot with READ_ONCE, so that we don't pepper the code with access annotations and needlessly repeat loads. add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-46 (-46) Function old new delta tls_init 851 805 -46 Total: Before=21063, After=21017, chg -0.22% Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-21net/tls: Constify base proto ops used for building tls protoJakub Sitnicki
The helper that builds kTLS proto ops doesn't need to and should not modify the base proto ops. Annotate the parameter as read-only. Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-21net: stmmac: dwmac-rk: fix error path in rk_gmac_probeEmil Renner Berthing
Make sure we clean up devicetree related configuration also when clock init fails. Fixes: fecd4d7eef8b ("net: stmmac: dwmac-rk: Add integrated PHY support") Signed-off-by: Emil Renner Berthing <kernel@esmil.dk> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-21slcan: not call free_netdev before rtnl_unlock in slcan_openOliver Hartkopp
As the comment before netdev_run_todo() describes, we cannot call free_netdev() before rtnl_unlock(); fix it by reordering the code. This patch is a 1:1 copy of upstream slip.c commit f596c87005f7 ("slip: not call free_netdev before rtnl_unlock in slip_open"). Reported-by: yangerkun <yangerkun@huawei.com> Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-21Merge branch 'ionic-error-recovery-fixes'David S. Miller
Shannon Nelson says: ==================== ionic error recovery fixes These are a few little patches to make error recovery a little more safe and successful. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-21ionic: check for NULL structs on teardownShannon Nelson
Make sure the queue structs exist before trying to tear them down to make for safer error recovery. Fixes: 0f3154e6bcb3 ("ionic: Add Tx and Rx handling") Signed-off-by: Shannon Nelson <snelson@pensando.io> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-21ionic: clean irq affinity on queue deinitShannon Nelson
Add a little more cleanup when tearing down the queues. Fixes: 1d062b7b6f64 ("ionic: Add basic adminq support") Signed-off-by: Shannon Nelson <snelson@pensando.io> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-21ionic: ignore eexist on rx filter addShannon Nelson
Don't worry if the rx filter add firmware request fails on EEXIST, at least we know the filter is there. Same for the delete request, at least we know it isn't there. Fixes: 2a654540be10 ("ionic: Add Rx filter and rx_mode ndo support") Signed-off-by: Shannon Nelson <snelson@pensando.io> Signed-off-by: David S. Miller <davem@davemloft.net>
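Roughly (sketch; the function name is illustrative):

    err = ionic_rx_filter_add_cmd(lif, &ctx);
    if (err == -EEXIST)
        err = 0;        /* filter already present in firmware: not an error */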
2020-03-21ionic: only save good lif dentryShannon Nelson
Don't save the lif->dentry until we know we have a good value. Fixes: 1a58e196467f ("ionic: Add basic lif support") Signed-off-by: Shannon Nelson <snelson@pensando.io> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-21ionic: leave dev cmd request contents alone on FW timeoutShannon Nelson
It is possible (but unlikely) that FW was busy and missed a heartbeat check but is still alive and will process the pending request, so don't clean the dev_cmd in this case. This occasionally occurs when working with a card that is supporting many devices and is trying to shut them all down at once, but still wants to see that last LIF disable request. Fixes: 97ca486592c0 ("ionic: add heartbeat check") Signed-off-by: Shannon Nelson <snelson@pensando.io> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-21ionic: add timeout error checking for queue disableShannon Nelson
Short circuit the cleanup if we get a timeout error from ionic_qcq_disable() so as to not have to wait too long on shutdown when we already know the FW is not responding. Fixes: 0f3154e6bcb3 ("ionic: Add Tx and Rx handling") Signed-off-by: Shannon Nelson <snelson@pensando.io> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-21ionic: make spdxcheck.py happyLukas Bulwahn
Headers ionic_if.h and ionic_regs.h are licensed under three alternative licenses and the used SPDX-License-Identifier expression makes ./scripts/spdxcheck.py complain: drivers/net/ethernet/pensando/ionic/ionic_if.h: 1:52 Syntax error: OR drivers/net/ethernet/pensando/ionic/ionic_regs.h: 1:52 Syntax error: OR As OR is associative, it is irrelevant if the parentheses are put around the first or the second OR-expression. Simply add parentheses to make spdxcheck.py happy. Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com> Acked-by: Shannon Nelson <snelson@pensando.io> Signed-off-by: David S. Miller <davem@davemloft.net>
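With hypothetical license identifiers, the change amounts to adding one pair of parentheses:

    // before: spdxcheck.py reports "Syntax error: OR" on the bare chain
    /* SPDX-License-Identifier: LIC-A OR LIC-B OR LIC-C */

    // after: explicit grouping; since OR is associative the meaning is unchanged
    /* SPDX-License-Identifier: (LIC-A OR LIC-B) OR LIC-C */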
2020-03-21hsr: fix general protection fault in hsr_addr_is_self()Taehee Yoo
The port->hsr is used in the hsr_handle_frame(), which is a callback of rx_handler. hsr master and slaves are initialized in hsr_add_port(). This function initializes several pointers, which includes port->hsr after registering rx_handler. So, in the rx_handler routine, un-initialized pointer would be used. In order to fix this, pointers should be initialized before registering rx_handler. Test commands: ip netns del left ip netns del right modprobe -rv veth modprobe -rv hsr killall ping modprobe hsr ip netns add left ip netns add right ip link add veth0 type veth peer name veth1 ip link add veth2 type veth peer name veth3 ip link add veth4 type veth peer name veth5 ip link set veth1 netns left ip link set veth3 netns right ip link set veth4 netns left ip link set veth5 netns right ip link set veth0 up ip link set veth2 up ip link set veth0 address fc:00:00:00:00:01 ip link set veth2 address fc:00:00:00:00:02 ip netns exec left ip link set veth1 up ip netns exec left ip link set veth4 up ip netns exec right ip link set veth3 up ip netns exec right ip link set veth5 up ip link add hsr0 type hsr slave1 veth0 slave2 veth2 ip a a 192.168.100.1/24 dev hsr0 ip link set hsr0 up ip netns exec left ip link add hsr1 type hsr slave1 veth1 slave2 veth4 ip netns exec left ip a a 192.168.100.2/24 dev hsr1 ip netns exec left ip link set hsr1 up ip netns exec left ip n a 192.168.100.1 dev hsr1 lladdr \ fc:00:00:00:00:01 nud permanent ip netns exec left ip n r 192.168.100.1 dev hsr1 lladdr \ fc:00:00:00:00:01 nud permanent for i in {1..100} do ip netns exec left ping 192.168.100.1 & done ip netns exec left hping3 192.168.100.1 -2 --flood & ip netns exec right ip link add hsr2 type hsr slave1 veth3 slave2 veth5 ip netns exec right ip a a 192.168.100.3/24 dev hsr2 ip netns exec right ip link set hsr2 up ip netns exec right ip n a 192.168.100.1 dev hsr2 lladdr \ fc:00:00:00:00:02 nud permanent ip netns exec right ip n r 192.168.100.1 dev hsr2 lladdr \ fc:00:00:00:00:02 nud permanent for i in {1..100} do ip netns exec right ping 192.168.100.1 & done ip netns exec right hping3 192.168.100.1 -2 --flood & while : do ip link add hsr0 type hsr slave1 veth0 slave2 veth2 ip a a 192.168.100.1/24 dev hsr0 ip link set hsr0 up ip link del hsr0 done Splat looks like: [ 120.954938][ C0] general protection fault, probably for non-canonical address 0xdffffc0000000006: 0000 [#1]I [ 120.957761][ C0] KASAN: null-ptr-deref in range [0x0000000000000030-0x0000000000000037] [ 120.959064][ C0] CPU: 0 PID: 1511 Comm: hping3 Not tainted 5.6.0-rc5+ #460 [ 120.960054][ C0] Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006 [ 120.962261][ C0] RIP: 0010:hsr_addr_is_self+0x65/0x2a0 [hsr] [ 120.963149][ C0] Code: 44 24 18 70 73 2f c0 48 c1 eb 03 48 8d 04 13 c7 00 f1 f1 f1 f1 c7 40 04 00 f2 f2 f2 4 [ 120.966277][ C0] RSP: 0018:ffff8880d9c09af0 EFLAGS: 00010206 [ 120.967293][ C0] RAX: 0000000000000006 RBX: 1ffff1101b38135f RCX: 0000000000000000 [ 120.968516][ C0] RDX: dffffc0000000000 RSI: ffff8880d17cb208 RDI: 0000000000000000 [ 120.969718][ C0] RBP: 0000000000000030 R08: ffffed101b3c0e3c R09: 0000000000000001 [ 120.972203][ C0] R10: 0000000000000001 R11: ffffed101b3c0e3b R12: 0000000000000000 [ 120.973379][ C0] R13: ffff8880aaf80100 R14: ffff8880aaf800f2 R15: ffff8880aaf80040 [ 120.974410][ C0] FS: 00007f58e693f740(0000) GS:ffff8880d9c00000(0000) knlGS:0000000000000000 [ 120.979794][ C0] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 120.980773][ C0] CR2: 00007ffcb8b38f29 CR3: 00000000afe8e001 CR4: 
00000000000606f0 [ 120.981945][ C0] Call Trace: [ 120.982411][ C0] <IRQ> [ 120.982848][ C0] ? hsr_add_node+0x8c0/0x8c0 [hsr] [ 120.983522][ C0] ? rcu_read_lock_held+0x90/0xa0 [ 120.984159][ C0] ? rcu_read_lock_sched_held+0xc0/0xc0 [ 120.984944][ C0] hsr_handle_frame+0x1db/0x4e0 [hsr] [ 120.985597][ C0] ? hsr_nl_nodedown+0x2b0/0x2b0 [hsr] [ 120.986289][ C0] __netif_receive_skb_core+0x6bf/0x3170 [ 120.992513][ C0] ? check_chain_key+0x236/0x5d0 [ 120.993223][ C0] ? do_xdp_generic+0x1460/0x1460 [ 120.993875][ C0] ? register_lock_class+0x14d0/0x14d0 [ 120.994609][ C0] ? __netif_receive_skb_one_core+0x8d/0x160 [ 120.995377][ C0] __netif_receive_skb_one_core+0x8d/0x160 [ 120.996204][ C0] ? __netif_receive_skb_core+0x3170/0x3170 [ ... ] Reported-by: syzbot+fcf5dd39282ceb27108d@syzkaller.appspotmail.com Fixes: c5a759117210 ("net/hsr: Use list_head (and rcu) instead of array for slave devices.") Signed-off-by: Taehee Yoo <ap420073@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
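The ordering fix, in outline (a sketch; error handling omitted):

    /* initialize everything hsr_handle_frame() may dereference ... */
    port->hsr  = hsr;
    port->dev  = dev;
    port->type = type;

    /* ... before the rx_handler can ever run */
    res = netdev_rx_handler_register(dev, hsr_handle_frame, port);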
2020-03-21soc: qcom: ipa: kill IPA_RX_BUFFER_ORDERAlex Elder
Don't assume the receive buffer size is a power-of-2 number of pages. Instead, define the receive buffer size independently, and then compute the page order from that size when needed. This fixes a build problem that arises when the ARM64_PAGE_SHIFT config option is set to have a page size greater than 4KB. The problem was identified by Linux Kernel Functional Testing. The IPA code basically assumed the page size to be 4KB. A larger page size caused the receive buffer size to become correspondingly larger (32KB or 128KB for ARM64_16K_PAGES and ARM64_64K_PAGES, respectively). The receive buffer size is used to compute an "aggregation byte limit" value that gets programmed into the hardware, and the large page sizes caused that limit value to be too big to fit in a 5 bit field. This triggered a BUILD_BUG_ON() call in ipa_endpoint_validate_build(). This fix causes a lot of receive buffer memory to be wasted if system is configured for page size greater than 4KB. But such a misguided configuration will now build successfully. Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> Signed-off-by: Alex Elder <elder@linaro.org> Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
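The idea, in outline (the buffer size value and allocation site are illustrative):

    #define IPA_RX_BUFFER_SIZE	8192	/* no longer assumes PAGE_SIZE == 4KB */

    /* derive the page order from the size only where an allocation needs one */
    page = dev_alloc_pages(get_order(IPA_RX_BUFFER_SIZE));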
2020-03-21Merge branch 'hinic-BugFixes'David S. Miller
Luo bin says: ==================== hinic: BugFixes Fix a number of bugs which have been present since the first commit. The bugs fixed by these patches are rarely exposed except under very specific conditions. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-21hinic: fix wrong value of MIN_SKB_LENLuo bin
The minimum skb length that the hardware supports is 32 rather than 17. Signed-off-by: Luo bin <luobin9@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>