2019-04-14  xfs: bump XFS_IOC_FSGEOMETRY to v5 structures  (Dave Chinner)
Unfortunately, the V4 XFS_IOC_FSGEOMETRY structure is out of space, so we can't just add a new field to it. Hence we need to bump the definition to V5 and treat the V4 ioctl and structure similarly to v1 to v3. While doing this, clean up all the definitions associated with the XFS_IOC_FSGEOMETRY ioctl. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> [darrick: forward port to 5.1, expand structure size to 256 bytes] Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Brian Foster <bfoster@redhat.com>
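A short userspace sketch of how a caller might use the bumped ioctl, falling back to the V4 variant on older kernels. The <xfs/xfs.h> header path and the XFS_IOC_FSGEOMETRY_V4 spelling follow the commit description but should be treated as assumptions:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <xfs/xfs.h>    /* struct xfs_fsop_geom, XFS_IOC_FSGEOMETRY* */

    int main(int argc, char **argv)
    {
        struct xfs_fsop_geom geo;
        int fd, ret;

        if (argc != 2)
            return 1;
        fd = open(argv[1], O_RDONLY);
        if (fd < 0)
            return 1;

        memset(&geo, 0, sizeof(geo));
        ret = ioctl(fd, XFS_IOC_FSGEOMETRY, &geo);      /* v5, 256 bytes */
    #ifdef XFS_IOC_FSGEOMETRY_V4
        if (ret < 0 && errno == ENOTTY)                 /* older kernel */
            ret = ioctl(fd, XFS_IOC_FSGEOMETRY_V4, &geo);
    #endif
        if (ret == 0)
            printf("blocksize=%u agcount=%u\n", geo.blocksize, geo.agcount);
        close(fd);
        return ret ? 1 : 0;
    }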
2019-04-14  xfs: clear BAD_SUMMARY if unmounting an unhealthy filesystem  (Darrick J. Wong)
If we know the filesystem metadata isn't healthy during unmount, we want to encourage the administrator to run xfs_repair right away. We can't do this if BAD_SUMMARY will cause an unclean log unmount to force summary recalculation, so turn it off if the fs is bad. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Brian Foster <bfoster@redhat.com>
2019-04-14  xfs: replace the BAD_SUMMARY mount flag with the equivalent health code  (Darrick J. Wong)
Replace the BAD_SUMMARY mount flag with calls to the equivalent health tracking code. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Brian Foster <bfoster@redhat.com>
2019-04-14  xfs: track metadata health status  (Darrick J. Wong)
Add the necessary in-core metadata fields to keep track of which parts of the filesystem have been observed and which parts were observed to be unhealthy, and print a warning at unmount time if we have unfixed problems. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Brian Foster <bfoster@redhat.com>
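A minimal sketch of the bookkeeping idea, assuming illustrative names (the actual XFS fields and flags differ): one mask records what has been examined, another what was observed to be sick:

    #include <stdio.h>

    #define HEALTH_SB   (1u << 0)   /* summary counters */
    #define HEALTH_AGF  (1u << 1)
    #define HEALTH_AGI  (1u << 2)

    struct fs_health {
        unsigned int checked;   /* metadata we have actually examined */
        unsigned int sick;      /* metadata observed to be damaged */
    };

    static void mark_sick(struct fs_health *h, unsigned int mask)
    {
        h->checked |= mask;
        h->sick |= mask;
    }

    static void warn_on_unmount(const struct fs_health *h)
    {
        if (h->sick)
            fprintf(stderr,
                    "unmounting with unfixed problems (0x%x); run xfs_repair\n",
                    h->sick);
    }

    int main(void)
    {
        struct fs_health h = { 0, 0 };

        mark_sick(&h, HEALTH_SB);
        warn_on_unmount(&h);    /* summary counters still sick: warn */
        return 0;
    }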
2019-04-14  xfs,fstrim: fix to return correct minlen  (Wang Shilong)
This patch addresses two problems: 1) return the @minlen we actually used to trim back to user space; 2) return EINVAL if the granularity is larger than the avg size. In most cases the granularity is small (4K), but if a device reports a larger granularity for some reason (testing, bugs, etc.), fstrim should return failure directly. Signed-off-by: Wang Shilong <wshilong@ddn.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
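The shape of such a fix inside an FITRIM-style handler might look like the following hedged fragment (variable names illustrative; copy_to_user() is the standard kernel helper):

    if (granularity > max_trim_len)
        return -EINVAL;                     /* fail fast on bogus granularity */

    minlen = max(range.minlen, granularity);    /* what we can actually trim */
    /* ... walk the free space and issue discards using minlen ... */

    range.minlen = minlen;                  /* report the value really used */
    if (copy_to_user(urange, &range, sizeof(range)))
        return -EFAULT;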
2019-04-14  xfs: don't account extra agfl blocks as available  (Brian Foster)
The block allocation AG selection code has parameters that allow a caller to perform multiple allocations from a single AG and transaction (under certain conditions). The parameters specify the total block allocation count required by the transaction and the AG selection code selects and locks an AG that will be able to satisfy the overall requirement. If the available block accounting calculation turns out to be inaccurate and a subsequent allocation call fails with -ENOSPC, the resulting transaction cancel leads to filesystem shutdown because the transaction is dirty. This exact problem can be reproduced with a highly parallel space consumer and fsstress workload running long enough to a large filesystem against -ENOSPC conditions. A bmbt block allocation request made for inode extent to bmap format conversion after an extent allocation is expected to be satisfied by the same AG and the same transaction as the extent allocation. The bmbt block allocation fails, however, because the block availability of the AG has changed since the AG was selected (outside of the blocks used for the extent itself). The inconsistent block availability calculation is caused by the deferred block freeing behavior of the AGFL. This immediately removes extra blocks from the AGFL to free up AGFL slots, but rather than immediately freeing such blocks as was done in the past, the block free is deferred such that said blocks are not available for allocation until the current transaction commits. The AG selection logic currently considers all AGFL blocks as available and executes shortly before any extra AGFL blocks are freed. This means the block availability of the current AG can change before the first allocation even occurs, but in practice a failure is more likely to manifest via a subsequent allocation because extent allocation usually has a contiguity requirement larger than a single block that can't be satisfied from the AGFL. In general, XFS prefers operational robustness to absolute allocation efficiency. In other words, we prefer to return -ENOSPC slightly earlier at the expense of not being able to allocate every last block in an AG to avoid this kind of problem. As such, update the AG block availability calculation to consider extra AGFL blocks as unavailable since they are immediately removed following the calculation and will not become available until the current transaction commits. Signed-off-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
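A hedged sketch of the accounting change (names approximate, reconstructed from the description): only the AGFL blocks the free list will keep count as available, since the surplus is deferred-freed until commit:

    /* was: agflcount = pag->pagf_flcount;  all AGFL blocks counted */
    agflcount = min(pag->pagf_flcount, min_free);
    available = (int)(pag->pagf_freeblks + agflcount -
                      reservation - min_free - args->minleft);
    if (available < (int)args->total)
        return false;   /* pick another AG rather than risk ENOSPC in a dirty tx */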
2019-04-14  xfs: shutdown after buf release in iflush cluster abort path  (Brian Foster)
If xfs_iflush_cluster() fails due to corruption, the error path issues a shutdown and simulates an I/O completion to release the buffer. This code has a couple small problems. First, the shutdown sequence can issue a synchronous log force, which is unsafe to do with buffer locks held. Second, the simulated I/O completion does not guarantee the buffer is async and thus is unlocked and released. For example, if the last operation on the buffer was a read off disk prior to the corruption event, XBF_ASYNC is not set and the buffer is left locked and held upon return. This results in a memory leak as shown by the following message on module unload: BUG xfs_buf (...): Objects remaining in xfs_buf on __kmem_cache_shutdown() Fix both of these problems by setting XBF_ASYNC on the buffer prior to the simulated I/O error and performing the shutdown immediately after ioend processing when the buffer has been released. Signed-off-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
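Reconstructed from the description (not a verbatim copy of the code), the corrected ordering looks roughly like:

    bp->b_flags |= XBF_ASYNC;   /* guarantee ioend unlocks and releases bp */
    xfs_buf_ioerror(bp, -EIO);  /* simulate the I/O failure */
    xfs_buf_ioend(bp);          /* buffer is released here */
    xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);    /* no buffer locks held now */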
2019-04-14  xfs: wake commit waiters on CIL abort before log item abort  (Brian Foster)
XFS shutdown deadlocks have been reproduced by fstest generic/475. The deadlock signature involves log I/O completion running error handling to abort logged items and waiting for an inode cluster buffer lock in the buffer item unpin handler. The buffer lock is held by xfsaild attempting to flush an inode. The buffer happens to be pinned and so xfs_iflush() triggers an async log force to begin work required to get it unpinned. The log force is blocked waiting on the commit completion, which never occurs and thus leaves the filesystem deadlocked. The root problem is that aborted log I/O completion puts commit completion behind callback completion, which is unexpected for async log forces. Under normal running conditions, an async log force returns to the caller once the CIL ctx has been formatted/submitted and the commit completion event triggered at the tail end of xlog_cil_push(). If the filesystem has shutdown, however, we rely on xlog_cil_committed() to trigger the completion event and it happens to do so after running log item unpin callbacks. This makes it unsafe to invoke an async log force from contexts that hold locks that might also be required in log completion processing. To address this problem, wake commit completion waiters before aborting log items in the log I/O completion handler. This ensures that an async log force will not deadlock on held locks if the filesystem happens to shutdown. Note that it is still unsafe to issue a sync log force while holding such locks because a sync log force explicitly waits on the force completion, which occurs after log I/O completion processing. Signed-off-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2019-04-14  xfs: fix use after free in buf log item unlock assert  (Brian Foster)
The xfs_buf_log_item ->iop_unlock() callback asserts that the buffer is unlocked when either non-stale or aborted. This assert occurs after the bli refcount has been dropped and the log item potentially freed. The aborted check is thus a potential use after free. This problem has been reproduced with KASAN enabled via generic/475. Fix up xfs_buf_item_unlock() to query aborted state before the bli reference is dropped to prevent a potential use after free. Signed-off-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2019-04-14  Linux 5.1-rc5 (tag: v5.1-rc5)  (Linus Torvalds)
2019-04-14  Merge branch 'page-refs' (page ref overflow)  (Linus Torvalds)
Merge page ref overflow branch. Jann Horn reported that he can overflow the page ref count with sufficient memory (and a filesystem that is intentionally extremely slow). Admittedly it's not exactly easy. To have more than four billion references to a page requires a minimum of 32GB of kernel memory just for the pointers to the pages, much less any metadata to keep track of those pointers. Jann needed a total of 140GB of memory and a specially crafted filesystem that leaves all reads pending (in order to not ever free the page references and just keep adding more). Still, we have a fairly straightforward way to limit the two obvious user-controllable sources of page references: direct-IO like page references gotten through get_user_pages(), and the splice pipe page duplication. So let's just do that. * branch page-refs: fs: prevent page refcount overflow in pipe_buf_get mm: prevent get_user_pages() from overflowing page refcount mm: add 'try_get_page()' helper function mm: make page ref count overflow check tighter and more explicit
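The new helper's semantics are roughly the following (close to, but not guaranteed identical with, the merged code):

    static inline __must_check bool try_get_page(struct page *page)
    {
        page = compound_head(page);
        if (WARN_ON_ONCE(page_ref_count(page) <= 0))
            return false;   /* count saturated or page freed: refuse to pin */
        page_ref_inc(page);
        return true;
    }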
2019-04-14  Merge tag 'mlx5-fixes-2019-04-09' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux  (David S. Miller)
Saeed Mahameed says: ==================== Mellanox, mlx5 fixes 2019-04-09 This series provides some fixes to mlx5 driver. I've cc'ed some of the checksum fixes to Eric Dumazet and I would like to get his feedback before you pull. For -stable v4.19 ('net/mlx5: FPGA, tls, idr remove on flow delete') ('net/mlx5: FPGA, tls, hold rcu read lock a bit longer') For -stable v4.20 ('net/mlx5e: Rx, Check ip headers sanity') ('Revert "net/mlx5e: Enable reporting checksum unnecessary also for L3 packets"') ('net/mlx5e: Rx, Fixup skb checksum for packets with tail padding') For -stable v5.0 ('net/mlx5e: Switch to Toeplitz RSS hash by default') ('net/mlx5e: Protect against non-uplink representor for encap') ('net/mlx5e: XDP, Avoid checksum complete when XDP prog is loaded') ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-14  rtnetlink: fix rtnl_valid_stats_req() nlmsg_len check  (Eric Dumazet)
Jakub forgot to either use nlmsg_len() or nlmsg_msg_size(), allowing KMSAN to detect a possible uninit-value in rtnl_stats_get BUG: KMSAN: uninit-value in rtnl_stats_get+0x6d9/0x11d0 net/core/rtnetlink.c:4997 CPU: 0 PID: 10428 Comm: syz-executor034 Not tainted 5.1.0-rc2+ #24 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011 Call Trace: __dump_stack lib/dump_stack.c:77 [inline] dump_stack+0x173/0x1d0 lib/dump_stack.c:113 kmsan_report+0x131/0x2a0 mm/kmsan/kmsan.c:619 __msan_warning+0x7a/0xf0 mm/kmsan/kmsan_instr.c:310 rtnl_stats_get+0x6d9/0x11d0 net/core/rtnetlink.c:4997 rtnetlink_rcv_msg+0x115b/0x1550 net/core/rtnetlink.c:5192 netlink_rcv_skb+0x431/0x620 net/netlink/af_netlink.c:2485 rtnetlink_rcv+0x50/0x60 net/core/rtnetlink.c:5210 netlink_unicast_kernel net/netlink/af_netlink.c:1310 [inline] netlink_unicast+0xf3e/0x1020 net/netlink/af_netlink.c:1336 netlink_sendmsg+0x127f/0x1300 net/netlink/af_netlink.c:1925 sock_sendmsg_nosec net/socket.c:622 [inline] sock_sendmsg net/socket.c:632 [inline] ___sys_sendmsg+0xdb3/0x1220 net/socket.c:2137 __sys_sendmsg net/socket.c:2175 [inline] __do_sys_sendmsg net/socket.c:2184 [inline] __se_sys_sendmsg+0x305/0x460 net/socket.c:2182 __x64_sys_sendmsg+0x4a/0x70 net/socket.c:2182 do_syscall_64+0xbc/0xf0 arch/x86/entry/common.c:291 entry_SYSCALL_64_after_hwframe+0x63/0xe7 Fixes: 51bc860d4a99 ("rtnetlink: stats: validate attributes in get as well as dumps") Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: syzbot <syzkaller@googlegroups.com> Cc: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
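The distinction matters because nlh->nlmsg_len includes the netlink header, while nlmsg_len(nlh) returns only the payload length; a hedged reconstruction of the intended check:

    struct if_stats_msg *ifsm;

    if (nlmsg_len(nlh) < sizeof(*ifsm))     /* payload only, header excluded */
        return -EINVAL;
    ifsm = nlmsg_data(nlh);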
2019-04-14  x86/speculation: Prevent deadlock on ssb_state::lock  (Thomas Gleixner)
Mikhail reported a lockdep splat related to the AMD specific ssb_state lock:

    CPU0                                CPU1
    lock(&st->lock);
                                        local_irq_disable();
                                        lock(&(&sighand->siglock)->rlock);
                                        lock(&st->lock);
    <Interrupt>
    lock(&(&sighand->siglock)->rlock);

    *** DEADLOCK ***

The connection between sighand->siglock and st->lock comes through seccomp, which takes st->lock while holding sighand->siglock. Make sure interrupts are disabled when __speculation_ctrl_update() is invoked via prctl() -> speculation_ctrl_update(). Add a lockdep assert to catch future offenders. Fixes: 1f50ddb4f418 ("x86/speculation: Handle HT correctly on AMD") Reported-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com> Cc: Thomas Lendacky <thomas.lendacky@amd.com> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1904141948200.4917@nanos.tec.linutronix.de
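The shape of the fix, reconstructed from the description: update the speculation state with interrupts disabled on the prctl() path, mirroring the context-switch path:

    static void speculation_ctrl_update(unsigned long tif)
    {
        unsigned long flags;

        /* Forced update: close the irq/siglock inversion lockdep reported */
        local_irq_save(flags);
        __speculation_ctrl_update(~tif, tif);
        local_irq_restore(flags);
    }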
2019-04-14  Merge branch 'qed-doorbell-overflow-recovery'  (David S. Miller)
Denis Bolotin says: ==================== qed: Fix the Doorbell Overflow Recovery mechanism This patch series fixes and improves the doorbell recovery mechanism. The main goals of this series are to fix attentions from the doorbell block (DORQ) that are missed or not handled properly, and to execute the recovery from the periodic handler instead of the attention handler. Please consider applying the series to net. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-14  qed: Fix the DORQ's attentions handling  (Denis Bolotin)
Separate the overflow handling from the hardware interrupt status analysis. The interrupt status is a single register and is common for all PFs. The first PF reading the register is not necessarily the one that overflowed. All PFs must check their overflow status on every attention. In this change we clear the sticky indication in the attention handler to allow doorbells to be processed again as soon as possible, but running the doorbell recovery is scheduled for the periodic handler to reduce the time spent in the attention handler. Checking the need for a DORQ flush was changed to "db_bar_no_edpm" because qed_edpm_enabled()'s result can change dynamically and might have prevented a needed flush. Signed-off-by: Denis Bolotin <dbolotin@marvell.com> Signed-off-by: Michal Kalderon <mkalderon@marvell.com> Signed-off-by: Ariel Elior <aelior@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-14  qed: Fix missing DORQ attentions  (Denis Bolotin)
When the DORQ (doorbell block) is overflowed, all PFs get attentions at the same time. If one PF finished handling the attention before another PF even started, the second PF might miss the DORQ's attention bit and not handle the attention at all. If the DORQ attention is missed and the issue is not resolved, another attention will not be sent; therefore each attention is treated as a potential DORQ attention. As a result, the attention callback is called more frequently, so the debug print was moved to reduce the amount of logging. The number of periodic doorbell recovery handler schedules was reduced because it was the previous way of mitigating the missed attention issue. Signed-off-by: Denis Bolotin <dbolotin@marvell.com> Signed-off-by: Michal Kalderon <mkalderon@marvell.com> Signed-off-by: Ariel Elior <aelior@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-14  qed: Fix the doorbell address sanity check  (Denis Bolotin)
Fix the condition which verifies that doorbell address is inside the doorbell bar by checking that the end of the address is within range as well. Signed-off-by: Denis Bolotin <dbolotin@marvell.com> Signed-off-by: Michal Kalderon <mkalderon@marvell.com> Signed-off-by: Ariel Elior <aelior@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
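A self-contained sketch of the strengthened check (names illustrative): the doorbell must both start and end inside the bar:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    static bool db_addr_valid(uintptr_t db_addr, size_t db_size,
                              uintptr_t bar_start, size_t bar_len)
    {
        return db_addr >= bar_start &&                      /* start in range */
               db_addr + db_size <= bar_start + bar_len;    /* end in range too */
    }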
2019-04-14  qed: Delete redundant doorbell recovery types  (Denis Bolotin)
DB_REC_DRY_RUN (running doorbell recovery without sending doorbells) is never used. DB_REC_ONCE (send a single doorbell from the doorbell recovery) is not needed anymore because by running the periodic handler we make sure we check the overflow status later instead. This patch is needed because in the next patches, the only doorbell recovery type being used is DB_REC_REAL_DEAL, and the fixes are much cleaner without this enum. Signed-off-by: Denis Bolotin <dbolotin@marvell.com> Signed-off-by: Michal Kalderon <mkalderon@marvell.com> Signed-off-by: Ariel Elior <aelior@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-14  r8169: change irq handler to always trigger NAPI polling  (Heiner Kallweit)
This check isn't really needed and we can simplify the code and save some CPU cycles by removing it. Only in the error case are none of these bits set, and calling the NAPI callback doesn't hurt in that case. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-14  Merge branch 'r8169-phy-func-ptr-arrays'  (David S. Miller)
Heiner Kallweit says: ==================== r8169: create function pointer arrays for PHY and chip hw init functions Using function pointer arrays makes the code easier to read and more maintainable. AFAIK function pointer arrays cause some performance drawback due to Spectre mitigation, but we're not in a hot path. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
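A minimal, self-contained illustration of the pattern (names invented for the example): a designated-initializer array indexed by chip version replaces a long if/else or switch chain:

    #include <stdio.h>

    enum mac_version { MAC_VER_25, MAC_VER_26, MAC_VER_LAST };

    static void hw_init_ver25(void) { puts("init ver25"); }
    static void hw_init_ver26(void) { puts("init ver26"); }

    static void (*const hw_init[])(void) = {
        [MAC_VER_25] = hw_init_ver25,
        [MAC_VER_26] = hw_init_ver26,
    };

    int main(void)
    {
        enum mac_version v = MAC_VER_26;

        if (v < MAC_VER_LAST && hw_init[v])
            hw_init[v]();   /* one indexed call instead of a switch */
        return 0;
    }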
2019-04-14  r8169: create function pointer array for chip hw init functions  (Heiner Kallweit)
Using a function pointer array makes this easier to read and more maintainable. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-14  r8169: create function pointer array for PHY init functions  (Heiner Kallweit)
Using a function pointer array makes this easier to read and more maintainable. AFAIK function pointer arrays cause some performance drawback due to Spectre mitigation, but we're not in a hot path here. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-14  Merge branch 'hns3-next'  (David S. Miller)
Huazhong Tan says: ==================== code optimizations & bugfixes for HNS3 driver This patch-set includes code optimizations and bugfixes for the HNS3 ethernet controller driver. [patch 1/12 - 4/12] optimizes the VLAN feature, adds support for port based VLAN, and fixes some related bugs in the current implementation. [patch 5/12 - 12/12] includes some other code optimizations for the HNS3 ethernet controller driver. Change log: V1->V2: modifies some patches' commit log and code. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-14  net: hns3: code optimization for command queue's spin lock  (Peng Li)
This patch removes some redundant BH disable when initializing and uninitializing command queue. Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-14  net: hns3: free the pending skb when cleaning RX ring  (Peng Li)
If there is a pending skb in the RX flow when the port is closed, and the pending buffer is not cleaned, the new packet will be appended to the pending skb when the port opens again, and the first new packet will carry erroneous data. This patch cleans the pending skb when cleaning the RX ring. Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-14  net: hns3: do not initialize MDIO bus when PHY is inexistent  (Jian Shen)
In some cases the PHY may not be connected to the MDIO bus; the driver then fails to initialize because the MDIO bus initialization fails. This patch fixes it by skipping the MDIO bus initialization when no PHY is present. Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-14  net: hns3: set individual reset level for all RAS and MSI-X errors  (Weihang Li)
According to the hardware description, the reset levels that should be triggered are not consistent within a module. For example, among the SSU common errors, the first two bits do not require a reset, but the other bits need a global reset. This patch sets a separate reset level for all RAS and MSI-X interrupts by adding a reset_level field in struct hclge_hw_error, and fixes some incorrect reset levels. Signed-off-by: Weihang Li <liweihang@hisilicon.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-14  net: hns3: divide shared buffer between TCs  (Yunsheng Lin)
Currently the hardware may not have enough buffer to receive packets when it has used more than two MPS (maximum packet size) of buffer, while there is still a lot of shared buffer left unused when the TC num is small. This patch divides the shared buffer between TCs when the port supports DCB, and adjusts the waterline and threshold according to the user manual for ports that do not support DCB. This patch also changes hclge_get_tc_num's return type to u32 to avoid a signed-unsigned mix in division. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-14  net: hns3: always assume no drop TC for performance reasons  (Yunsheng Lin)
Currently the RX shared buffer's threshold size for a specific TC is set to a smaller value when the TC's PFC is not enabled, which may cause a performance problem because the hardware may not have enough buffer when PFC is not enabled. This patch sets the same threshold size for all TCs regardless of whether the specific TC's PFC is enabled. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-14  net: hns3: add hns3_gro_complete for HW GRO process  (Yunsheng Lin)
When a GRO packet is received by the driver, the cwr field in struct tcphdr needs to be checked to decide whether to set SKB_GSO_TCP_ECN in skb_shinfo(skb)->gso_type. So this patch adds hns3_gro_complete to do that, and adds hns3_handle_bdinfo to call hns3_gro_complete and hns3_rx_checksum. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
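The core of the gro_complete check described above is small; a hedged fragment using standard kernel helpers:

    struct tcphdr *th = tcp_hdr(skb);

    if (th->cwr)    /* ECN congestion-window-reduced flag present */
        skb_shinfo(skb)->gso_type |= SKB_GSO_TCP_ECN;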
2019-04-14  net: hns3: minor refactor for hns3_rx_checksum  (Yunsheng Lin)
Change the parameters of hns3_rx_checksum to be more specific to what is used internally, rather than passing in a pointer to the whole hns3_desc. This reduces duplicate code and brings this function in line with the approach used in hns3_set_gro_param. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-14  net: hns3: fix set port based VLAN issue for VF  (Jian Shen)
In the original code, ndo_set_vf_vlan() in the hns3 driver was implemented incorrectly: it adds or removes a VLAN in the VLAN filter for a VF, but the VF is unaware of it. This patch fixes it. When the VF loads up, it first queries the port based VLAN state from the PF. When the user changes the port based VLAN state from the PF, the PF first checks whether the VF is alive. If the VF is alive, the PF notifies the VF of the modification; otherwise the PF configures the port based VLAN state directly. Fixes: 46a3df9f9718 ("net: hns3: Add HNS3 Acceleration Engine & Compatibility Layer Support") Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-14  net: hns3: fix set port based VLAN for PF  (Jian Shen)
In the original code, ndo_set_vf_vlan() in the hns3 driver was implemented incorrectly: it adds or removes a VLAN in the VLAN filter for a VF, but the VF is unaware of it. Indeed, ndo_set_vf_vlan() is expected to enable or disable port based VLAN (hardware inserts a specified VLAN tag into all TX packets for a specified VF). When enabling port based VLAN, we use the port based VLAN id as the VLAN filter entry. When disabling port based VLAN, we use the VLAN id of the VLAN device. This patch fixes it for the PF, enabling/disabling port based VLAN when ndo_set_vf_vlan() is called. Fixes: 46a3df9f9718 ("net: hns3: Add HNS3 Acceleration Engine & Compatibility Layer Support") Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-14  net: hns3: fix VLAN offload handling for VLAN inserted by port  (Jian Shen)
Currently, in the TX direction, the driver implements TX VLAN offload by checking the VLAN header in the skb and filling it into the TX descriptor. Usually this works well, but if inserting a VLAN header based on port is enabled, it may conflict when the out_tag field of the TX descriptor is already used, and cause a RAS error. In the RX direction, the hardware supports stripping at most two VLAN headers. Since vlan_tci in the skb can only store one VLAN tag, when RX VLAN offload is enabled the driver tells the hardware to strip one VLAN header from the RX packet; when RX VLAN offload is disabled the driver tells the hardware not to strip the VLAN header from the RX packet. Now if port based insert VLAN is enabled, all RX packets will have the port based VLAN header. This header is useless for the stack, so the driver needs to ask the hardware to strip it. Unfortunately, the hardware can't drop this VLAN header and always fills it into the RX descriptor, so the driver has to identify and drop it. Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-14  net: hns3: modify VLAN initialization to be compatible with port based VLAN  (Jian Shen)
Our hardware supports inserting a specified VLAN header for each function when sending packets. The user can enable it with the command "ip link set <devname> vf <vfid> vlan <vlan id>". Since this VLAN header is inserted by hardware, not by the stack, the hardware also needs to strip it from received packets before sending them to the stack. In this case, the driver needs to tell the hardware which VLAN to insert or strip. The current VLAN initialization doesn't allow inserting a VLAN header by hardware; this patch modifies it in order to be compatible with VLAN insertion based on port. Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-14  ipv4: ensure rcu_read_lock() in ipv4_link_failure()  (Eric Dumazet)
fib_compute_spec_dst() needs to be called under rcu protection. syzbot reported : WARNING: suspicious RCU usage 5.1.0-rc4+ #165 Not tainted include/linux/inetdevice.h:220 suspicious rcu_dereference_check() usage! other info that might help us debug this: rcu_scheduler_active = 2, debug_locks = 1 1 lock held by swapper/0/0: #0: 0000000051b67925 ((&n->timer)){+.-.}, at: lockdep_copy_map include/linux/lockdep.h:170 [inline] #0: 0000000051b67925 ((&n->timer)){+.-.}, at: call_timer_fn+0xda/0x720 kernel/time/timer.c:1315 stack backtrace: CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.1.0-rc4+ #165 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011 Call Trace: <IRQ> __dump_stack lib/dump_stack.c:77 [inline] dump_stack+0x172/0x1f0 lib/dump_stack.c:113 lockdep_rcu_suspicious+0x153/0x15d kernel/locking/lockdep.c:5162 __in_dev_get_rcu include/linux/inetdevice.h:220 [inline] fib_compute_spec_dst+0xbbd/0x1030 net/ipv4/fib_frontend.c:294 spec_dst_fill net/ipv4/ip_options.c:245 [inline] __ip_options_compile+0x15a7/0x1a10 net/ipv4/ip_options.c:343 ipv4_link_failure+0x172/0x400 net/ipv4/route.c:1195 dst_link_failure include/net/dst.h:427 [inline] arp_error_report+0xd1/0x1c0 net/ipv4/arp.c:297 neigh_invalidate+0x24b/0x570 net/core/neighbour.c:995 neigh_timer_handler+0xc35/0xf30 net/core/neighbour.c:1081 call_timer_fn+0x190/0x720 kernel/time/timer.c:1325 expire_timers kernel/time/timer.c:1362 [inline] __run_timers kernel/time/timer.c:1681 [inline] __run_timers kernel/time/timer.c:1649 [inline] run_timer_softirq+0x652/0x1700 kernel/time/timer.c:1694 __do_softirq+0x266/0x95a kernel/softirq.c:293 invoke_softirq kernel/softirq.c:374 [inline] irq_exit+0x180/0x1d0 kernel/softirq.c:414 exiting_irq arch/x86/include/asm/apic.h:536 [inline] smp_apic_timer_interrupt+0x14a/0x570 arch/x86/kernel/apic/apic.c:1062 apic_timer_interrupt+0xf/0x20 arch/x86/entry/entry_64.S:807 Fixes: ed0de45a1008 ("ipv4: recompile ip options in ipv4_link_failure") Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: syzbot <syzkaller@googlegroups.com> Cc: Stephen Suryaputra <ssuryaextr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
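Reconstructed from the description (helper name illustrative, not verbatim): the options recompile now runs under the RCU read lock:

    static void ipv4_send_dest_unreach(struct sk_buff *skb)
    {
        struct ip_options opt;
        int res;

        /* Recompile ip options since IPCB may not be valid anymore */
        memset(&opt, 0, sizeof(opt));
        opt.optlen = ip_hdr(skb)->ihl * 4 - sizeof(struct iphdr);

        rcu_read_lock();    /* fib_compute_spec_dst() needs rcu protection */
        res = __ip_options_compile(dev_net(skb->dev), &opt, skb, NULL);
        rcu_read_unlock();

        if (!res)
            __icmp_send(skb, ICMP_DEST_UNREACH, ICMP_HOST_UNREACH, 0, &opt);
    }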
2019-04-14  Merge branch 'net-phy-shrink-PHY-settings-array-and-add-200Gbps-support'  (David S. Miller)
Heiner Kallweit says: ==================== net: phy: shrink PHY settings array and add 200Gbps support The definition of array settings[] has become quite lengthy. Add a macro to shrink the definition. When doing this I saw that the new 200Gbps modes and a few 100Gbps/50Gbps modes aren't supported in phylib yet, so add this. To avoid ethtool and phylib mode definitions getting out of sync, add a build bug to check for this. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-14  phy: warn if phylib and ethtool PHY mode definitions are out of sync  (Heiner Kallweit)
If new PHY modes are added, people may forget to update all relevant places in the kernel. Therefore add a build bug check for new modes in enum ethtool_link_mode_bit_indices that haven't been added to phylib yet. Suggested-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
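The idea, shown as a self-contained C11 illustration (the kernel uses BUILD_BUG_ON; the enums here are stand-ins): the build fails if one enum grows without the other:

    #include <assert.h>

    enum ethtool_mode { ETH_10baseT_Full, ETH_100baseT_Full, __ETH_MODE_NBITS };
    enum phylib_mode  { PHY_10baseT_Full, PHY_100baseT_Full, __PHY_MODE_NBITS };

    /* Fails to compile when ethtool gains a mode phylib doesn't know about */
    static_assert(__ETH_MODE_NBITS == __PHY_MODE_NBITS,
                  "a new ethtool link mode was not added to phylib");

    int main(void) { return 0; }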
2019-04-14  net: phy: add support for new modes in phylib  (Heiner Kallweit)
Recently new modes have been added to ethtool.h, but the related extension to phylib hasn't been done yet. So add support for these modes. v2: - add missing 100Gbps and 50Gbps modes Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-14  net: phy: shrink PHY settings array  (Heiner Kallweit)
The definition of array settings[] has become quite lengthy. Add a macro to shrink the definition. v2: - Fix an indentation issue Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
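A self-contained sketch of the shrinking trick, with simplified stand-ins for the kernel's SPEED_*, DUPLEX_* and link-mode enums: token pasting turns each table row into one line:

    #include <stdio.h>

    enum { SPEED_10 = 10, SPEED_100 = 100, SPEED_1000 = 1000 };
    enum { DUPLEX_HALF, DUPLEX_FULL };
    enum { LINK_MODE_10baseT_Half_BIT, LINK_MODE_10baseT_Full_BIT,
           LINK_MODE_100baseT_Full_BIT, LINK_MODE_1000baseT_Full_BIT };

    struct phy_setting { unsigned int speed, duplex, bit; };

    #define PHY_SETTING(s, d, b) \
        { .speed = SPEED_ ## s, .duplex = DUPLEX_ ## d, \
          .bit = LINK_MODE_ ## b ## _BIT }

    static const struct phy_setting settings[] = {
        PHY_SETTING(1000, FULL, 1000baseT_Full),
        PHY_SETTING(100,  FULL, 100baseT_Full),
        PHY_SETTING(10,   FULL, 10baseT_Full),
        PHY_SETTING(10,   HALF, 10baseT_Half),
    };

    int main(void)
    {
        for (unsigned int i = 0; i < sizeof(settings) / sizeof(settings[0]); i++)
            printf("speed=%u duplex=%u bit=%u\n",
                   settings[i].speed, settings[i].duplex, settings[i].bit);
        return 0;
    }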
2019-04-14  tracing: Remove the ULONG_MAX stack trace hackery  (Thomas Gleixner)
No architecture terminates the stack trace with ULONG_MAX anymore. As the code checks the number of entries stored anyway there is no point in keeping all that ULONG_MAX magic around. The histogram code zeroes the storage before saving the stack, so if the trace is shorter than the maximum number of entries it can terminate the print loop if a zero entry is detected. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Alexander Potapenko <glider@google.com> Link: https://lkml.kernel.org/r/20190410103645.048761764@linutronix.de
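A self-contained sketch of the termination rule described above: zeroed storage marks the end of a short trace, so no sentinel is needed:

    #include <stdio.h>

    static void print_stack(const unsigned long *entries, unsigned int max)
    {
        for (unsigned int i = 0; i < max; i++) {
            if (!entries[i])
                break;  /* zeroed tail ends a short trace */
            printf("  [<%08lx>]\n", entries[i]);
        }
    }

    int main(void)
    {
        unsigned long trace[8] = { 0x81000010UL, 0x81000020UL };

        print_stack(trace, 8);  /* prints two frames, stops at the zero */
        return 0;
    }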
2019-04-14  drm: Remove the ULONG_MAX stack trace hackery  (Thomas Gleixner)
No architecture terminates the stack trace with ULONG_MAX anymore. Remove the cruft. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Alexander Potapenko <glider@google.com> Cc: intel-gfx@lists.freedesktop.org Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Cc: dri-devel@lists.freedesktop.org Cc: David Airlie <airlied@linux.ie> Cc: Jani Nikula <jani.nikula@linux.intel.com> Cc: Daniel Vetter <daniel@ffwll.ch> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://lkml.kernel.org/r/20190410103644.945059666@linutronix.de
2019-04-14  latency_top: Remove the ULONG_MAX stack trace hackery  (Thomas Gleixner)
No architecture terminates the stack trace with ULONG_MAX anymore. The consumer terminates on the first zero entry or at the number of entries, so no functional change. Remove the cruft. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Alexander Potapenko <glider@google.com> Link: https://lkml.kernel.org/r/20190410103644.853527514@linutronix.de
2019-04-14  mm/kasan: Remove the ULONG_MAX stack trace hackery  (Thomas Gleixner)
No architecture terminates the stack trace with ULONG_MAX anymore. Remove the cruft. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Dmitry Vyukov <dvyukov@google.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: kasan-dev@googlegroups.com Cc: linux-mm@kvack.org Link: https://lkml.kernel.org/r/20190410103644.750219625@linutronix.de
2019-04-14  mm/page_owner: Remove the ULONG_MAX stack trace hackery  (Thomas Gleixner)
No architecture terminates the stack trace with ULONG_MAX anymore. Remove the cruft. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Alexander Potapenko <glider@google.com> Cc: Michal Hocko <mhocko@suse.com> Cc: linux-mm@kvack.org Cc: Mike Rapoport <rppt@linux.vnet.ibm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Link: https://lkml.kernel.org/r/20190410103644.661974663@linutronix.de
2019-04-14  mm/slub: Remove the ULONG_MAX stack trace hackery  (Thomas Gleixner)
No architecture terminates the stack trace with ULONG_MAX anymore. Remove the cruft. While at it remove the pointless loop of clearing the stack array completely. It's sufficient to clear the last entry as the consumers break out on the first zeroed entry anyway. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Alexander Potapenko <glider@google.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Pekka Enberg <penberg@kernel.org> Cc: linux-mm@kvack.org Cc: David Rientjes <rientjes@google.com> Cc: Christoph Lameter <cl@linux.com> Link: https://lkml.kernel.org/r/20190410103644.574058244@linutronix.de
2019-04-14  lockdep: Remove the ULONG_MAX stack trace hackery  (Thomas Gleixner)
No architecture terminates the stack trace with ULONG_MAX anymore. Remove the cruft. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Alexander Potapenko <glider@google.com> Cc: Will Deacon <will.deacon@arm.com> Link: https://lkml.kernel.org/r/20190410103644.485737321@linutronix.de
2019-04-14  s390/stacktrace: Remove the pointless ULONG_MAX marker  (Thomas Gleixner)
Terminating the last trace entry with ULONG_MAX is a completely pointless exercise and none of the consumers can rely on it because it's inconsistently implemented across architectures. In fact quite some of the callers remove the entry and adjust stack_trace.nr_entries afterwards. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Alexander Potapenko <glider@google.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: linux-s390@vger.kernel.org Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Link: https://lkml.kernel.org/r/20190410103644.396788431@linutronix.de
2019-04-14  parisc/stacktrace: Remove the pointless ULONG_MAX marker  (Thomas Gleixner)
Terminating the last trace entry with ULONG_MAX is a completely pointless exercise and none of the consumers can rely on it because it's inconsistently implemented across architectures. In fact quite some of the callers remove the entry and adjust stack_trace.nr_entries afterwards. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Alexander Potapenko <glider@google.com> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Helge Deller <deller@gmx.de> Cc: linux-parisc@vger.kernel.org Link: https://lkml.kernel.org/r/20190410103644.308534788@linutronix.de