2019-07-09  net: netsec: remove superfluous if statement  (Ilias Apalodimas)
While freeing tx buffers, the memory has to be unmapped with the same arguments whether the packet was an skb or was used for .ndo_xdp_xmit. Get rid of the unneeded extra 'else if' statement.

Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
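A minimal sketch of the unified free path this implies, assuming netsec.c's descriptor fields (dma_addr, len, buf_type, skb, xdpf) keep these names; the exact control flow is an assumption:

	/* One dma_unmap_single() covers both cases, since skb and
	 * .ndo_xdp_xmit buffers were mapped with the same arguments. */
	dma_unmap_single(priv->dev, desc->dma_addr, desc->len,
			 DMA_TO_DEVICE);
	if (desc->buf_type == TYPE_NETSEC_SKB)
		dev_kfree_skb(desc->skb);
	else
		xdp_return_frame(desc->xdpf);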
2019-07-09  Merge branch 'nf-hw-offload'  (David S. Miller)
Pablo Neira Ayuso says:

====================
netfilter: add hardware offload infrastructure

This patchset adds support for Netfilter hardware offloads. It reuses the existing block infrastructure, the netdev_ops->ndo_setup_tc() interface, the TC_SETUP_CLSFLOWER classifier and the flow rule API.

Patch #1 adds flow_block_cb_setup_simple(): most drivers do the same thing to set up flow blocks, so consolidate the codebase to reduce the number of changes. The _simple() postfix was requested by Jakub Kicinski. This new function resides in net/core/flow_offload.c.

Patch #2 renames TC_BLOCK_{UN}BIND to FLOW_BLOCK_{UN}BIND.

Patch #3 renames TCF_BLOCK_BINDER_TYPE_* to FLOW_BLOCK_BINDER_TYPE_*.

Patch #4 adds the flow_block_cb_alloc() and flow_block_cb_free() helper functions, the first patch of the flow block API.

Patch #5 adds the helpers that deal with list operations in the flow block API: flow_block_cb_lookup(), flow_block_cb_add() and flow_block_cb_remove().

Patch #6 adds flow_block_cb_priv(), flow_block_cb_incref() and flow_block_cb_decref(), which complete the flow block API.

Patch #7 updates the cls_api to use the flow block API from the new tcf_block_setup(). This infrastructure transports these objects via a list (through the tc_block_offload object) back to the core for registration:

         CLS_API                          DRIVER
    TC_SETUP_BLOCK ----------> setup flow_block_cb object & it
                               adds object to
                               flow_block_offload->cb_list
                                            |
    CLS_API <-------------------------------'
    registers                  list with flow blocks
    flow_block_cb &            travels back to
    calls ->reoffload          the core for registration

Drivers allocate and set up (configure) the blocks; registration then happens from the core (cls_api and netfilter).

Patch #8 updates drivers to use the flow block API.

Patch #9 removes the tcf block callback API, which is replaced by the flow block API.

Patch #10 adds the flow_block_cb_is_busy() helper to check if the block is already used by a subsystem. This helper is invoked from drivers. Once drivers are updated to support multiple subsystems, they can remove this check.

Patch #11 renames the tc structure and definitions for the block bind/unbind path.

Patch #12 introduces basic netfilter hardware offload infrastructure for the ingress chain. This includes 5-tuple exact matching and accept/drop rule actions. Only basechains are supported at this stage, and no .reoffload callback is implemented either. Only the default policy "accept" is supported for now.

 table netdev filter {
     chain ingress {
         type filter hook ingress device eth0 priority 0; flags offload;
         ip daddr 192.168.0.10 tcp dport 22 drop
     }
 }

This patchset takes the existing tcf block callback API and places it, as the flow block callback API, in net/core/flow_offload.c. This series aims to address Jakub and Jiri's feedback; please see the individual patches in this batch for the v4 changelog.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
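A hedged sketch of the driver-side bind/unbind pattern this series converges on; signatures are as merged here (later kernels changed them, e.g. dropping the net argument), and the drv_* names are hypothetical:

	static LIST_HEAD(drv_block_cb_list);

	static int drv_setup_block(struct net_device *dev,
				   struct flow_block_offload *f)
	{
		struct drv_priv *priv = netdev_priv(dev);
		struct flow_block_cb *block_cb;

		switch (f->command) {
		case FLOW_BLOCK_BIND:
			block_cb = flow_block_cb_alloc(f->net, drv_setup_cb,
						       priv, priv, NULL);
			if (IS_ERR(block_cb))
				return PTR_ERR(block_cb);
			/* queue for registration by the core ... */
			flow_block_cb_add(block_cb, f);
			/* ... and track it in the per-driver list */
			list_add_tail(&block_cb->driver_list,
				      &drv_block_cb_list);
			return 0;
		case FLOW_BLOCK_UNBIND:
			block_cb = flow_block_cb_lookup(f, drv_setup_cb, priv);
			if (!block_cb)
				return -ENOENT;
			flow_block_cb_remove(block_cb, f);
			list_del(&block_cb->driver_list);
			return 0;
		default:
			return -EOPNOTSUPP;
		}
	}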
2019-07-09  netfilter: nf_tables: add hardware offload support  (Pablo Neira Ayuso)
This patch adds hardware offload support for nftables through the existing netdev_ops->ndo_setup_tc() interface, the TC_SETUP_CLSFLOWER classifier and the flow rule API. This hardware offload support is available for the NFPROTO_NETDEV family and the ingress hook.

Each nftables expression has a new ->offload interface, which is used to populate the flow rule object that is attached to the transaction object.

There is a new per-table NFT_TABLE_F_HW flag, which is set to offload an entire table, including all of its chains.

This patch adds support for basic metadata (layer 3 and 4 protocol numbers), 5-tuple payload matching and the accept/drop actions; only basechain hardware offload is included.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
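A hedged sketch of the per-expression ->offload hook, modeled on the verdict-handling case in this patch; nft_verdict_code() is a hypothetical accessor:

	static int nft_verdict_offload(struct nft_offload_ctx *ctx,
				       struct nft_flow_rule *flow,
				       const struct nft_expr *expr)
	{
		struct flow_action_entry *entry;

		/* claim the next slot in the flow rule's action array */
		entry = &flow->rule->action.entries[ctx->num_actions++];

		switch (nft_verdict_code(expr)) {
		case NF_ACCEPT:
			entry->id = FLOW_ACTION_ACCEPT;
			break;
		case NF_DROP:
			entry->id = FLOW_ACTION_DROP;
			break;
		default:
			return -EOPNOTSUPP;	/* not offloadable */
		}
		return 0;
	}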
2019-07-09  net: flow_offload: rename tc_cls_flower_offload to flow_cls_offload  (Pablo Neira Ayuso)
And any other existing fields in this structure that refer to tc. Specifically:

* tc_cls_flower_offload_flow_rule() to flow_cls_offload_flow_rule().
* TC_CLSFLOWER_* to FLOW_CLS_*.
* tc_cls_common_offload to flow_cls_common_offload.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09  net: flow_offload: add flow_block_cb_is_busy() and use it  (Pablo Neira Ayuso)
This patch adds a function to check if a flow block callback is already in use. Call this new function from flow_block_cb_setup_simple() and from drivers.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
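In driver terms the check slots in just before allocation; a minimal sketch reusing the hypothetical drv_* names from the sketch above:

	/* refuse to bind if another subsystem (e.g. tc vs. netfilter)
	 * already bound this callback/identity pair */
	if (flow_block_cb_is_busy(drv_setup_cb, priv, &drv_block_cb_list))
		return -EBUSY;
	block_cb = flow_block_cb_alloc(f->net, drv_setup_cb, priv, priv, NULL);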
2019-07-09  net: sched: remove tcf block API  (Pablo Neira Ayuso)
Unused, now replaced by the flow block API.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09  drivers: net: use flow block API  (Pablo Neira Ayuso)
This patch updates flow_block_cb_setup_simple() to use the flow block API. Several drivers are also adjusted to use it.

This patch introduces the per-driver list of flow blocks to account for blocks that are already in use, and removes the tc_block_offload alias.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09  net: sched: use flow block API  (Pablo Neira Ayuso)
This patch adds tcf_block_setup(), which uses the flow block API. This infrastructure takes the flow block callbacks coming from the driver and registers/unregisters them with the cls_api core.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09  net: flow_offload: add flow_block_cb_{priv, incref, decref}()  (Pablo Neira Ayuso)
This patch completes the flow block API by introducing:

* flow_block_cb_priv() to access callback private data.
* flow_block_cb_incref() to bump the reference counter on this flow block.
* flow_block_cb_decref() to decrement the reference counter.

These functions are taken from the existing tcf_block_cb_priv(), tcf_block_cb_incref() and tcf_block_cb_decref().

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
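A hedged sketch of the refcounting pattern these helpers enable for shared blocks, reusing the hypothetical names from the bind/unbind sketch above; the details are an assumption:

	/* bind: reuse an existing flow_block_cb if one is already there */
	block_cb = flow_block_cb_lookup(f, drv_setup_cb, priv);
	if (block_cb) {
		flow_block_cb_incref(block_cb);
		return 0;
	}

	/* unbind: only tear the block down on the last reference */
	block_cb = flow_block_cb_lookup(f, drv_setup_cb, priv);
	if (block_cb && !flow_block_cb_decref(block_cb))
		flow_block_cb_remove(block_cb, f);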
2019-07-09  net: flow_offload: add list handling functions  (Pablo Neira Ayuso)
This patch adds the list handling functions for the flow block API:

* flow_block_cb_lookup() allows drivers to look up existing flow blocks.
* flow_block_cb_add() adds a flow block to the per-driver list, to be registered by the core.
* flow_block_cb_remove() removes a flow block from the per-driver list of existing flow blocks and requests the core to unregister it.

The flow block API also annotates the netns this flow block belongs to.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09  net: flow_offload: add flow_block_cb_alloc() and flow_block_cb_free()  (Pablo Neira Ayuso)
Add a new helper function to allocate flow_block_cb objects.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09  net: flow_offload: rename TCF_BLOCK_BINDER_TYPE_* to FLOW_BLOCK_BINDER_TYPE_*  (Pablo Neira Ayuso)
Rename from TCF_BLOCK_BINDER_TYPE_* to FLOW_BLOCK_BINDER_TYPE_* and remove the temporary tcf_block_binder_type alias.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09  net: flow_offload: rename TC_BLOCK_{UN}BIND to FLOW_BLOCK_{UN}BIND  (Pablo Neira Ayuso)
Rename from TC_BLOCK_{UN}BIND to FLOW_BLOCK_{UN}BIND and remove the temporary tc_block_command alias.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09  net: flow_offload: add flow_block_cb_setup_simple()  (Pablo Neira Ayuso)
Most drivers do the same thing to set up the flow block callbacks; this patch adds a helper function to do it. This preparation patch reduces the number of changes needed to adapt the existing drivers to the flow block callback API.

This new helper function takes a per-driver flow block list, which is set to NULL until this driver list is actually used.

This patch also introduces the flow_block_command and flow_block_binder_type enumerations, which are renamed to use FLOW_BLOCK_* in follow-up patches. There are three definitions (aliases) in order to reduce the number of updates in this patch; they go away once drivers are fully adapted to this flow block API.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
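With the helper, most drivers' TC_SETUP_BLOCK handling collapses to a one-liner; a hedged sketch with hypothetical drv_* names:

	static int drv_ndo_setup_tc(struct net_device *dev,
				    enum tc_setup_type type, void *type_data)
	{
		struct drv_priv *priv = netdev_priv(dev);

		switch (type) {
		case TC_SETUP_BLOCK:
			/* NULL driver list until the driver tracks blocks;
			 * 'true' restricts binding to ingress blocks */
			return flow_block_cb_setup_simple(type_data, NULL,
							  drv_setup_cb,
							  priv, priv, true);
		default:
			return -EOPNOTSUPP;
		}
	}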
2019-07-09  nvme-fc: fix module unloads while lports still pending  (James Smart)
Current code allows the module to be unloaded even if there are pending data structures, such as localports and controllers on the localports, that have yet to hit their reference counting to remove them. Fix this by having the exit entry point explicitly delete every controller, which in turn will drop references on the remoteports and localports, causing them to be deleted as well. After initiating the deletes, the exit entry point will wait for the last localport to be deleted before continuing.

Signed-off-by: James Smart <jsmart2021@gmail.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2019-07-09  Merge branch 'net-hisilicon-Add-support-for-HI13X1-to-hip04_eth'  (David S. Miller)
Jiangfeng Xiao says:

====================
net: hisilicon: Add support for HI13X1 to hip04_eth

The main purpose of this patch series is to extend the hip04_eth driver to support HI13X1_GMAC. The offset and bitmap of some HI13X1_GMAC registers differ from the common hip04_eth SoCs. In addition, the send descriptor and parsing descriptor definitions differ from the common hip04_eth SoCs, so the register offset macros are redefined to adapt to HI13X1_GMAC. Sparse warnings are cleaned up along the way.

Changes since v1:
* Add a cover letter.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09  net: hisilicon: Add a tx_desc to adapt HI13X1_GMAC  (Jiangfeng Xiao)
HI13X1 changed the offsets and bitmaps of the tx_desc registers in the same peripheral device relative to other hip04_eth models.

Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09  net: hisilicon: Add an rx_desc to adapt HI13X1_GMAC  (Jiangfeng Xiao)
HI13X1 changed the offsets and bitmaps of the rx_desc registers in the same peripheral device relative to other hip04_eth models.

Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09  net: hisilicon: Offset buf address to adapt HI13X1_GMAC  (Jiangfeng Xiao)
The buf unit size of HI13X1_GMAC is the cache line size (64 bytes), so the address written to the buf register needs to be shifted right by 6 bits. Bit 31 of the HI13X1_GMAC PPE_CFG_CPU_ADD_ADDR register indicates whether to release the buffer of the message; a low value indicates that the buffer is valid.

Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
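A minimal sketch of the register programming this describes, assuming a writel()-based register write as elsewhere in hip04_eth; the exact field layout is an assumption:

	/* buf units are 64-byte cache lines, hence the shift by 6;
	 * bit 31 low marks the buffer as valid (not released) */
	val = (u32)(buf_phys >> 6);
	val &= ~BIT(31);
	writel(val, priv->base + PPE_CFG_CPU_ADD_ADDR);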
2019-07-09  net: hisilicon: Add group field to adapt HI13X1_GMAC  (Jiangfeng Xiao)
In general, the group is the same as the port, but some boards specify a dedicated group for better load balancing across processing units.

Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09  net: hisilicon: dt-bindings: Add a port-handle field  (Jiangfeng Xiao)
In general, the group is the same as the port, but some boards specify a dedicated group for better load balancing across processing units.

Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09  net: hisilicon: HI13X1_GMAC needs dreq reset first  (Jiangfeng Xiao)
HI13X1_GMAC needs to de-assert the soft reset request (dreq) first; otherwise, the subsequent initialization will not take effect.

Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09  net: hisilicon: HI13X1_GMAC skips writing LOCAL_PAGE_REG  (Jiangfeng Xiao)
HI13X1_GMAC changed the offsets and bitmaps of the GE_TX_LOCAL_PAGE_REG register in the same peripheral device relative to other hip04_eth models. With the default configuration, HI13X1_GMAC also works without any writes to the GE_TX_LOCAL_PAGE_REG register.

Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09  net: hisilicon: Cleanup for cast to restricted __be32  (Jiangfeng Xiao)
This patch fixes the following warnings from sparse:

 hip04_eth.c:533:23: warning: cast to restricted __be16
 hip04_eth.c:533:23: warning: cast to restricted __be16
 hip04_eth.c:533:23: warning: cast to restricted __be16
 hip04_eth.c:533:23: warning: cast to restricted __be16
 hip04_eth.c:534:23: warning: cast to restricted __be32
 hip04_eth.c:534:23: warning: cast to restricted __be32
 hip04_eth.c:534:23: warning: cast to restricted __be32
 hip04_eth.c:534:23: warning: cast to restricted __be32
 hip04_eth.c:534:23: warning: cast to restricted __be32
 hip04_eth.c:534:23: warning: cast to restricted __be32

Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09  net: hisilicon: Cleanup for got restricted __be32  (Jiangfeng Xiao)
This patch fixes the following warnings from sparse:

 hip04_eth.c:468:25: warning: incorrect type in assignment
 hip04_eth.c:468:25:    expected unsigned int [usertype] send_addr
 hip04_eth.c:468:25:    got restricted __be32 [usertype]
 hip04_eth.c:469:25: warning: incorrect type in assignment
 hip04_eth.c:469:25:    expected unsigned int [usertype] send_size
 hip04_eth.c:469:25:    got restricted __be32 [usertype]
 hip04_eth.c:470:19: warning: incorrect type in assignment
 hip04_eth.c:470:19:    expected unsigned int [usertype] cfg
 hip04_eth.c:470:19:    got restricted __be32 [usertype]
 hip04_eth.c:472:23: warning: incorrect type in assignment
 hip04_eth.c:472:23:    expected unsigned int [usertype] wb_addr
 hip04_eth.c:472:23:    got restricted __be32 [usertype]

Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09  net: hisilicon: Add support for HI13X1 to hip04_eth  (Jiangfeng Xiao)
Extend the hip04_eth driver to support HI13X1_GMAC. Enable it with the CONFIG_HI13X1_GMAC option.

Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09  nvme-tcp: don't use sendpage for SLAB pages  (Mikhail Skorzhinskii)
According to commit a10674bf2406 ("tcp: detecting the misuse of .sendpage for Slab objects") and previous discussion, tcp_sendpage should not be used for pages that are managed by SLAB, as SLAB does not take page reference counters into consideration.

Signed-off-by: Mikhail Skorzhinskii <mskorzhinskiy@solarflare.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
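The resulting send-path guard, as a hedged sketch of the nvme-tcp data path:

	/* SLAB pages must not take sendpage's zero-copy path; fall back
	 * to the copying sock_no_sendpage() for them */
	if (unlikely(PageSlab(page)))
		ret = sock_no_sendpage(queue->sock, page, offset, len, flags);
	else
		ret = kernel_sendpage(queue->sock, page, offset, len, flags);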
2019-07-09  nvme-tcp: set the STABLE_WRITES flag when data digests are enabled  (Mikhail Skorzhinskii)
There were a few false alarms sighted on the target side about wrong data digests while performing a high-throughput load to an XFS filesystem shared through NVMe-oF TCP.

This flag tells the rest of the kernel to ensure that the data buffer does not change while the write is in flight. It incurs a performance penalty, so only enable it when it is actually needed, i.e. when we are calculating data digests.

Although even with this change in place, ext2 users can still experience false positives, as ext2 does not respect this flag. This may apply to vfat as well.

Signed-off-by: Mikhail Skorzhinskii <mskorzhinskiy@solarflare.com>
Signed-off-by: Mike Playle <mplayle@solarflare.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
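A hedged sketch of the gating described above; BDI_CAP_STABLE_WRITES was the mechanism at the time (later kernels use QUEUE_FLAG_STABLE_WRITES instead):

	/* only pay the stable-pages penalty when data digests mean the
	 * transport actually checksums the payload in flight */
	if (ns->ctrl->opts && ns->ctrl->opts->data_digest)
		ns->queue->backing_dev_info->capabilities |=
			BDI_CAP_STABLE_WRITES;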
2019-07-09  nvmet: print a hint while rejecting NSID 0 or 0xffffffff  (Mikhail Skorzhinskii)
Add this hint for the sake of convenience. It was spotted a few times that people spent some time before understanding what exactly is wrong in the configuration process. This should save some time in such situations, especially for people who are not very familiar with NVMe requirements.

Signed-off-by: Mikhail Skorzhinskii <mskorzhinskiy@solarflare.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
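A minimal sketch of such a check-plus-hint, assuming the usual nvmet logging style; the exact message is an assumption:

	/* NSID 0 and NVME_NSID_ALL (0xffffffff) are reserved by the spec */
	if (nsid == 0 || nsid == NVME_NSID_ALL) {
		pr_err("nsid %#x is reserved, valid nsids are 1 to 0xfffffffe\n",
		       nsid);
		return -EINVAL;
	}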
2019-07-09  nvme-multipath: do not select namespaces which are about to be removed  (Hannes Reinecke)
nvme_ns_remove() first sets the NVME_NS_REMOVING flag, and only removes the namespace from the list at the very last step. So, to avoid selecting a namespace in nvme_find_path() which is about to be removed, check the NVME_NS_REMOVING flag, too, when selecting a new path.

Signed-off-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2019-07-09  Merge branch 'stmmac-hash-table'  (David S. Miller)
Biao Huang says:

====================
stmmac: fix out-of-boundary issue and add taller hash table support

Fix a mac address out-of-boundary issue in the net-next tree, and resend the patch which was discussed in https://lore.kernel.org/patchwork/patch/1082117 but saw no further progress.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09  net: stmmac: add support for hash table size 128/256 in dwmac4  (Biao Huang)
1. Get the hash table size from the hw feature register, and add support for taller hash tables (128/256) in dwmac4.
2. Only clear the GMAC_PACKET_FILTER bits used in this function, to avoid side effects on the functions of other bits.

stmmac selftests output log with flow control on:

 ethtool -t eth0
 The test result is PASS
 The test extra info:
  1. MAC Loopback          0
  2. PHY Loopback        -95
  3. MMC Counters          0
  4. EEE                 -95
  5. Hash Filter MC        0
  6. Perfect Filter UC     0
  7. MC Filter             0
  8. UC Filter             0
  9. Flow Control          0

Signed-off-by: Biao Huang <biao.huang@mediatek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
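A hedged sketch of the usual stmmac hash indexing, extended to the taller tables: taking log2(bins) top bits of the bit-reversed CRC32 of the MAC address selects a bit in the 64/128/256-bin table (the exact dwmac4 code is an assumption):

	int mcbitslog2 = ilog2(mcast_bins);	/* 6, 7 or 8 */
	u32 nr = bitrev32(~crc32_le(~0, ha->addr, ETH_ALEN)) >>
		 (32 - mcbitslog2);

	mc_filter[nr >> 5] |= BIT(nr & 0x1f);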
2019-07-09  net: stmmac: dwmac4: mac address array boundary violation issue  (Biao Huang)
The mac address array size is GMAC_MAX_PERFECT_ADDRESSES, so 'reg' should be less than it; otherwise other registers will be affected.

Signed-off-by: Biao Huang <biao.huang@mediatek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09  nvme-multipath: also check for a disabled path if there is a single sibling  (Hannes Reinecke)
When we have a single-entry list in nvme_round_robin_path(), we still need to check its validity.

Signed-off-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2019-07-09  nvme-multipath: factor out an nvme_path_is_disabled helper  (Hannes Reinecke)
Factor out a common helper to check if a path has been disabled by something other than the per-namespace ANA state.

Signed-off-by: Hannes Reinecke <hare@suse.com>
[hch: split from a bigger patch]
Signed-off-by: Christoph Hellwig <hch@lst.de>
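A hedged sketch of the helper; combined with the NVME_NS_REMOVING patch above, it ends up checking roughly:

	static bool nvme_path_is_disabled(struct nvme_ns *ns)
	{
		return ns->ctrl->state != NVME_CTRL_LIVE ||
		       test_bit(NVME_NS_ANA_PENDING, &ns->flags) ||
		       test_bit(NVME_NS_REMOVING, &ns->flags);
	}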
2019-07-09  nvme: set physical block size and optimal I/O size  (Bart Van Assche)
From the NVMe 1.4 spec:

 NSFEAT bit 4 if set to 1: indicates that the fields NPWG, NPWA, NPDG, NPDA, and NOWS are defined for this namespace and should be used by the host for I/O optimization;

 [ ... ]

 Namespace Preferred Write Granularity (NPWG): This field indicates the smallest recommended write granularity in logical blocks for this namespace. This is a 0's based value. The size indicated should be less than or equal to Maximum Data Transfer Size (MDTS) that is specified in units of minimum memory page size. The value of this field may change if the namespace is reformatted. The size should be a multiple of Namespace Preferred Write Alignment (NPWA). Refer to section 8.25 for how this field is utilized to improve performance and endurance.

 [ ... ]

 Each Write, Write Uncorrectable, or Write Zeroes command should address a multiple of Namespace Preferred Write Granularity (NPWG) (refer to Figure 245) and Stream Write Size (SWS) (refer to Figure 515) logical blocks (as expressed in the NLB field), and the SLBA field of the command should be aligned to Namespace Preferred Write Alignment (NPWA) (refer to Figure 245) for best performance.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
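A hedged sketch of how these 0's based, logical-block-sized fields translate into queue limits (the exact field handling in nvme_update_disk_info() is an assumption):

	u32 bs = 1 << ns->lba_shift;
	u32 phys_bs = bs, io_opt = bs;

	if (id->nsfeat & (1 << 4)) {	/* NPWG/NPWA/NPDG/NPDA/NOWS valid */
		phys_bs = bs * (1 + le16_to_cpu(id->npwg));
		io_opt  = bs * (1 + le16_to_cpu(id->nows));
	}
	blk_queue_physical_block_size(ns->queue, phys_bs);
	blk_queue_io_opt(ns->queue, io_opt);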
2019-07-09  nvme: add I/O characteristics fields  (Bart Van Assche)
Several new fields have been introduced in version 1.4 of the NVMe spec at offsets that were defined as reserved in version 1.3d of the NVMe spec. Update the definition of the nvme_id_ns data structure such that it is in sync with version 1.4 of the NVMe spec. This change preserves backwards compatibility.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
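A hedged excerpt of the affected region of struct nvme_id_ns; surrounding fields are elided:

	struct nvme_id_ns {
		/* ... */
		__le16	npwg;	/* Namespace Preferred Write Granularity */
		__le16	npwa;	/* Namespace Preferred Write Alignment */
		__le16	npdg;	/* Namespace Preferred Deallocate Granularity */
		__le16	npda;	/* Namespace Preferred Deallocate Alignment */
		__le16	nows;	/* Namespace Optimal Write Size */
		/* ... */
	};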
2019-07-09  nvmet: export I/O characteristics attributes in Identify  (Bart Van Assche)
Make the NVMe NAWUN, NAWUPF, NACWU, NPWG, NPWA, NPDG and NOWS attributes available to initiator systems for the block backend.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2019-07-09  nvme-trace: add delete completion and submission queue to admin cmds tracer  (Tom Wu)
The trace log for the 'delete I/O submission queue' and 'delete I/O completion queue' commands will look like the following:

 kworker/u49:1-3438 [003] .... 6693.070865: nvme_setup_cmd: nvme0: qid=0, cmdid=11, nsid=0, flags=0x0, meta=0x0, cmd=(nvme_admin_delete_sq sqid=1)
 kworker/u49:1-3438 [003] .... 6693.071171: nvme_setup_cmd: nvme0: qid=0, cmdid=8, nsid=0, flags=0x0, meta=0x0, cmd=(nvme_admin_delete_cq cqid=24)

Signed-off-by: Tom Wu <tomwu@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Minwoo Im <minwoo.im.dev@gmail.com>
Reviewed-by: Israel Rukshin <israelr@mellanox.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2019-07-09  Merge branch 'tc-testing-Add-plugin-for-simple-traffic-generation'  (David S. Miller)
Lucas Bates says:

====================
tc-testing: Add plugin for simple traffic generation

This series supersedes the previous submission that included a patch for test case verification using JSON output. It adds a new tdc plugin, scapyPlugin, as a way to send traffic to test tc filters and actions.

The first patch makes a change to the TdcPlugin module that will allow tdc plugins to examine the test case currently being executed, so plugins can play a more active role in testing by accepting information or commands from the test case. This is required for scapyPlugin to work.

The second patch adds scapyPlugin itself, and an example test case file to demonstrate how the scapy block works in the test cases.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09  tc-testing: introduce scapyPlugin for basic traffic  (Lucas Bates)
The scapyPlugin allows for simple traffic generation in tdc to test various tc features. It was tested with scapy v2.4.2, but should work with any later version.

In order to use the plugin's functionality, scapy must be installed. This can be done with:

   pip3 install scapy

or, to install 2.4.2:

   pip3 install scapy==2.4.2

If the plugin is unable to import the scapy module, it will terminate the tdc run.

The plugin makes use of a new key in the test case data, 'scapy'. This block contains three other elements: 'iface', 'count', and 'packet':

        "scapy": {
            "iface": "$DEV0",
            "count": 1,
            "packet": "Ether(type=0x800)/IP(src='16.61.16.61')/ICMP()"
        },

* iface is the name of the device on the host machine from which the packet(s) will be sent. Values contained within tdc_config.py's NAMES dict can be used here - this is useful if paired with nsPlugin.
* count is the number of copies of this packet to be sent.
* packet is a string detailing the different layers of the packet to be sent. If a property isn't explicitly set, scapy will set default values for you. Layers in the packet info are separated by slashes.

For info about common TCP and IP properties, see:
https://blogs.sans.org/pen-testing/files/2016/04/ScapyCheatSheet_v0.2.pdf

Caution is advised when running tests using the scapy functionality, since the plugin blindly sends the packet as defined in the test case data.

See creating-testcases/scapy-example.json for sample test cases; the first test is intended to pass, while the second is intended to fail.

Signed-off-by: Lucas Bates <lucasb@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09  tc-testing: Allow tdc plugins to see test case data  (Lucas Bates)
Instead of only passing the test case name and ID, pass the entire current test case down to the plugins. This change allows plugins to start accepting commands and directives from the test cases themselves, for greater flexibility in testing.

Signed-off-by: Lucas Bates <lucasb@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09  nvme-trace: fix spelling mistake "spcecific" -> "specific"  (Colin Ian King)
There are two spelling mistakes in trace_seq_printf messages; fix them.

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Minwoo Im <minwoo.im.dev@gmail.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2019-07-09  nvme-pci: limit max_hw_sectors based on the DMA max mapping size  (Christoph Hellwig)
When running an NVMe device that is attached to an addressing-challenged PCIe root port that requires bounce buffering, our request sizes can easily overflow the swiotlb bounce buffer size. Limit the maximum I/O size to the limit exposed by the DMA mapping subsystem.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reported-by: Atish Patra <Atish.Patra@wdc.com>
Tested-by: Atish Patra <Atish.Patra@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
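The clamp itself is small; a hedged sketch (NVME_MAX_KB_SZ is the driver's existing cap in KiB, so << 1 converts it to 512-byte sectors, as does >> 9 for the DMA limit):

	dev->ctrl.max_hw_sectors = min_t(u32, NVME_MAX_KB_SZ << 1,
					 dma_max_mapping_size(dev->dev) >> 9);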
2019-07-09  nvme-pci: check for NULL return from pci_alloc_p2pmem()  (Alan Mikhak)
Modify nvme_alloc_sq_cmds() to call pci_free_p2pmem() to free the memory it allocated using pci_alloc_p2pmem() in case pci_p2pmem_virt_to_bus() returns NULL. Also make sure not to call pci_free_p2pmem() if pci_alloc_p2pmem() returned NULL, which can happen if CONFIG_PCI_P2PDMA is not configured.

The current implementation is not expected to leak, since pci_p2pmem_virt_to_bus() is expected to fail only if pci_alloc_p2pmem() returns NULL. However, checking the return value of pci_alloc_p2pmem() is more explicit.

Signed-off-by: Alan Mikhak <alan.mikhak@sifive.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
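A hedged sketch of the corrected allocation path in nvme_alloc_sq_cmds():

	nvmeq->sq_cmds = pci_alloc_p2pmem(pdev, SQ_SIZE(depth));
	if (nvmeq->sq_cmds) {
		nvmeq->sq_dma_addr = pci_p2pmem_virt_to_bus(pdev,
							    nvmeq->sq_cmds);
		if (nvmeq->sq_dma_addr) {
			set_bit(NVMEQ_SQ_CMB, &nvmeq->flags);
			return 0;
		}
		/* bus address lookup failed: free what we allocated ... */
		pci_free_p2pmem(pdev, nvmeq->sq_cmds, SQ_SIZE(depth));
		/* ... and fall through to the regular coherent allocation */
	}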
2019-07-09  nvme-pci: don't create a read hctx mapping without read queues  (Alan Mikhak)
Only request an IRQ mapping for read queues if at least one read queue is being allocated, as nvme_pci_map_queues() will later ignore the unnecessary mapping request should nvme_dev_add() request such an IRQ mapping even though no read queues are being allocated. However, nvme_dev_add() can avoid making the request in the first place by checking the number of read queues instead of assuming. This brings it more in line with nvme_setup_irqs() and nvme_calc_irq_sets().

Signed-off-by: Alan Mikhak <alan.mikhak@sifive.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2019-07-09  nvme-pci: don't fall back to a 32-bit DMA mask  (Christoph Hellwig)
Since Linux 5.0 drivers can safely set the largest DMA mask supported by the device, and don't need fallbacks to work around the dma mapping implementations.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
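The probe path then needs only a single attempt; a hedged sketch:

	/* one 64-bit mask; the DMA layer handles smaller hardware limits */
	if (dma_set_mask_and_coherent(dev->dev, DMA_BIT_MASK(64)))
		goto disable;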
2019-07-09  x86/alternatives: Fix int3_emulate_call() selftest stack corruption  (Peter Zijlstra)
KASAN shows the following splat during boot:

  BUG: KASAN: unknown-crash in unwind_next_frame+0x3f6/0x490
  Read of size 8 at addr ffffffff84007db0 by task swapper/0

  CPU: 0 PID: 0 Comm: swapper Tainted: G                T 5.2.0-rc6-00013-g7457c0d #1
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
  Call Trace:
   dump_stack+0x19/0x1b
   print_address_description+0x1b0/0x2b2
   __kasan_report+0x10f/0x171
   kasan_report+0x12/0x1c
   __asan_load8+0x54/0x81
   unwind_next_frame+0x3f6/0x490
   unwind_next_frame+0x1b/0x23
   arch_stack_walk+0x68/0xa5
   stack_trace_save+0x7b/0xa0
   save_trace+0x3c/0x93
   mark_lock+0x1ef/0x9b1
   lock_acquire+0x122/0x221
   __mutex_lock+0xb6/0x731
   mutex_lock_nested+0x16/0x18
   _vm_unmap_aliases+0x141/0x183
   vm_unmap_aliases+0x14/0x16
   change_page_attr_set_clr+0x15e/0x2f2
   set_memory_4k+0x2a/0x2c
   check_bugs+0x11fd/0x1298
   start_kernel+0x793/0x7eb
   x86_64_start_reservations+0x55/0x76
   x86_64_start_kernel+0x87/0xaa
   secondary_startup_64+0xa4/0xb0

  Memory state around the buggy address:
   ffffffff84007c80: 00 00 00 00 00 00 00 00 00 00 00 00 00 f1 f1 f1
   ffffffff84007d00: f1 00 00 00 00 00 00 00 00 00 f2 f2 f2 f3 f3 f3
  >ffffffff84007d80: f3 79 be 52 49 79 be 00 00 00 00 00 00 00 00 f1

It turns out that int3_selftest() is corrupting the stack. The problem is that the KASAN-ified version of int3_magic() is much less trivial than the C code appears. It clobbers several unexpected registers. So when the selftest's INT3 is converted to an emulated call to int3_magic(), the registers are clobbered and Bad Things happen when the function returns.

Fix this by converting int3_magic() to the trivial ASM function it should be, avoiding all calling convention issues. Also add ASM_CALL_CONSTRAINT to the INT3 ASM, since it contains a 'CALL'.

[peterz: cribbed changelog from josh]

Fixes: 7457c0da024b ("x86/alternatives: Add int3_emulate_call() selftest")
Reported-by: kernel test robot <rong.a.chen@intel.com>
Debugged-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Link: https://lkml.kernel.org/r/20190709125744.GB3402@hirez.programming.kicks-ass.net
2019-07-09  io_uring: fix io_sq_thread_stop running in front of io_sq_thread  (Jackie Liu)
INFO: task syz-executor.5:8634 blocked for more than 143 seconds.
   Not tainted 5.2.0-rc5+ #3
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor.5  D25632  8634   8224 0x00004004
Call Trace:
 context_switch kernel/sched/core.c:2818 [inline]
 __schedule+0x658/0x9e0 kernel/sched/core.c:3445
 schedule+0x131/0x1d0 kernel/sched/core.c:3509
 schedule_timeout+0x9a/0x2b0 kernel/time/timer.c:1783
 do_wait_for_common+0x35e/0x5a0 kernel/sched/completion.c:83
 __wait_for_common kernel/sched/completion.c:104 [inline]
 wait_for_common kernel/sched/completion.c:115 [inline]
 wait_for_completion+0x47/0x60 kernel/sched/completion.c:136
 kthread_stop+0xb4/0x150 kernel/kthread.c:559
 io_sq_thread_stop fs/io_uring.c:2252 [inline]
 io_finish_async fs/io_uring.c:2259 [inline]
 io_ring_ctx_free fs/io_uring.c:2770 [inline]
 io_ring_ctx_wait_and_kill+0x268/0x880 fs/io_uring.c:2834
 io_uring_release+0x5d/0x70 fs/io_uring.c:2842
 __fput+0x2e4/0x740 fs/file_table.c:280
 ____fput+0x15/0x20 fs/file_table.c:313
 task_work_run+0x17e/0x1b0 kernel/task_work.c:113
 tracehook_notify_resume include/linux/tracehook.h:185 [inline]
 exit_to_usermode_loop arch/x86/entry/common.c:168 [inline]
 prepare_exit_to_usermode+0x402/0x4f0 arch/x86/entry/common.c:199
 syscall_return_slowpath+0x110/0x440 arch/x86/entry/common.c:279
 do_syscall_64+0x126/0x140 arch/x86/entry/common.c:304
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x412fb1
Code: 80 3b 7c 0f 84 c7 02 00 00 c7 85 d0 00 00 00 00 00 00 00 48 8b 05 cf a6 24 00 49 8b 14 24 41 b9 cb 2a 44 00 48 89 ee 48 89 df <48> 85 c0 4c 0f 45 c8 45 31 c0 31 c9 e8 0e 5b 00 00 85 c0 41 89 c7
RSP: 002b:00007ffe7ee6a180 EFLAGS: 00000293 ORIG_RAX: 0000000000000003
RAX: 0000000000000000 RBX: 0000000000000004 RCX: 0000000000412fb1
RDX: 0000001b2d920000 RSI: 0000000000000000 RDI: 0000000000000003
RBP: 0000000000000001 R08: 00000000f3a3e1f8 R09: 00000000f3a3e1fc
R10: 00007ffe7ee6a260 R11: 0000000000000293 R12: 000000000075c9a0
R13: 000000000075c9a0 R14: 0000000000024c00 R15: 000000000075bf2c
=============================================

The logic is wrong when kthread_park() runs before io_sq_thread():

CPU#0                                    CPU#1

io_sq_thread_stop:                       int kthread(void *_create):

kthread_park()
                                         __kthread_parkme(self);  <<< Wrong
kthread_stop()
   << wait for self->exited
   << clear_bit KTHREAD_SHOULD_PARK

                                         ret = threadfn(data);
                                          |
                                          |- io_sq_thread
                                            |- kthread_should_park() << false
                                            |- schedule() <<< nobody wakes it up

stuck CPU#0                              stuck CPU#1

So, use a new variable, sqo_thread_started, to ensure that io_sq_thread runs first, and only then io_sq_thread_stop.

Reported-by: syzbot+94324416c485d422fe15@syzkaller.appspotmail.com
Suggested-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
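A hedged sketch of the fix as described: a completion signalled by the thread itself before anyone is allowed to park/stop it:

	/* in struct io_ring_ctx */
	struct completion sqo_thread_started;

	/* io_sq_thread(), before anything else */
	complete(&ctx->sqo_thread_started);

	/* io_sq_thread_stop() */
	wait_for_completion(&ctx->sqo_thread_started);
	kthread_park(ctx->sqo_thread);
	kthread_stop(ctx->sqo_thread);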
2019-07-09  io_uring: add support for recvmsg()  (Jens Axboe)
This is done through IORING_OP_RECVMSG. This opcode uses the same sqe->msg_flags that IORING_OP_SENDMSG added, and we pass in the msghdr struct in the sqe->addr field as well. We use MSG_DONTWAIT to force an inline fast path if recvmsg() doesn't block, and punt to async execution if it would have.

Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
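From userspace this is typically consumed via liburing (not part of this patch); a hedged sketch:

	#include <liburing.h>
	#include <sys/socket.h>

	/* queue an IORING_OP_RECVMSG on a connected socket */
	int queue_recvmsg(struct io_uring *ring, int sockfd, struct msghdr *msg)
	{
		struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

		if (!sqe)
			return -EBUSY;	/* submission queue full */
		io_uring_prep_recvmsg(sqe, sockfd, msg, 0);
		return io_uring_submit(ring);
	}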