path: root/drivers
2020-10-02  net: mscc: ocelot: relax ocelot_exclusive_mac_etype_filter_rules()  (Vladimir Oltean)
The issue which led to the introduction of this check was that MAC_ETYPE rules, such as filters on dst_mac and src_mac, would only match non-IP frames. There is a knob in VCAP_S2_CFG which forces all IP frames to be treated as non-IP, which is what we're currently doing if the user requested a dst_mac filter, in order to maintain sanity. But that knob is actually per IS2 lookup. And the good thing with exposing the lookups to the user via tc chains is that we're now able to offload MAC_ETYPE keys to one lookup, and IP keys to the other lookup. So let's do that. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-10-02  net: mscc: ocelot: only install TCAM entries into a specific lookup and PAG  (Vladimir Oltean)
We were installing TCAM rules with the LOOKUP field as unmasked, meaning that all entries were matching on all lookups. Now that lookups are exposed as individual chains, let's make the LOOKUP explicit when offloading TCAM entries. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-10-02  net: mscc: ocelot: offload egress VLAN rewriting to VCAP ES0  (Xiaoliang Yang)
VCAP ES0 is an egress VCAP operating on all outgoing frames. This patch adds an ES0 driver to support the vlan push action of tc filters. Usage:
    tc filter add dev swp1 egress protocol 802.1Q flower indev swp0 skip_sw \
        vlan_id 1 vlan_prio 1 action vlan push id 2 priority 2
Signed-off-by: Xiaoliang Yang <xiaoliang.yang_1@nxp.com> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-10-02  net: mscc: ocelot: offload ingress skbedit and vlan actions to VCAP IS1  (Xiaoliang Yang)
VCAP IS1 is a VCAP module which can filter on the most common L2/L3/L4 Ethernet keys, and modify the results of the basic QoS classification and VLAN classification based on those flow keys. There are 3 VCAP IS1 lookups, mapped over chains 10000, 11000 and 12000. Currently the driver is hardcoded to use IS1_ACTION_TYPE_NORMAL half keys. Note that the VLAN_MANGLE has been omitted for now. In hardware, the VCAP_IS1_ACT_VID_REPLACE_ENA field replaces the classified VLAN (metadata associated with the frame) and not the VLAN from the header itself. There are currently some issues which need to be addressed when operating in standalone, or in bridge with vlan_filtering=0 modes, because in those cases the switch ports have VLAN awareness disabled, and changing the classified VLAN to anything other than the pvid causes the packets to be dropped. Another issue is that on egress, we expect port tagging to push the classified VLAN, but port tagging is disabled in the modes mentioned above, so although the classified VLAN is replaced, it is not visible in the packet transmitted by the switch. Signed-off-by: Xiaoliang Yang <xiaoliang.yang_1@nxp.com> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
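As a quick illustration of the chain numbering above, a hypothetical helper could look like this (only the 10000/11000/12000 mapping comes from this entry; the function name and shape are assumed):

    /* Map a VCAP IS1 lookup index (0..2) to its tc chain number. */
    static int example_is1_lookup_to_chain(int lookup)
    {
            return 10000 + lookup * 1000;   /* 10000, 11000, 12000 */
    }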
2020-10-02  net: mscc: ocelot: create TCAM skeleton from tc filter chains  (Vladimir Oltean)
For Ocelot switches, there are 2 ingress pipelines for flow offload rules: VCAP IS1 (Ingress Classification) and IS2 (Security Enforcement). IS1 and IS2 support different sets of actions. The pipeline order for a packet on ingress is: Basic classification -> VCAP IS1 -> VCAP IS2 Furthermore, IS1 is looked up 3 times, and IS2 is looked up twice (each TCAM entry can be configured to match only on the first lookup, or only on the second, or on both etc). Because the TCAMs are completely independent in hardware, and because of the fixed pipeline, we actually have very limited options when it comes to offloading complex rules to them while still maintaining the same semantics with the software data path. This patch maps flow offload rules to ingress TCAMs according to a predefined chain index number. There is going to be a script in selftests that clarifies the usage model. There is also an egress TCAM (VCAP ES0, the Egress Rewriter), which is modeled on top of the default chain 0 of the egress qdisc, because it doesn't have multiple lookups. Suggested-by: Allan W. Nielsen <allan.nielsen@microchip.com> Co-developed-by: Xiaoliang Yang <xiaoliang.yang_1@nxp.com> Signed-off-by: Xiaoliang Yang <xiaoliang.yang_1@nxp.com> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-10-02  net: mscc: ocelot: introduce conversion helpers between port and netdev  (Vladimir Oltean)
Since the mscc_ocelot_switch_lib is common between a pure switchdev and a DSA driver, the procedure of retrieving a net_device for a certain port index differs, as those are registered by their individual front-ends. Up to now that has been dealt with by always passing the port index to the switch library, but now, we're going to need to work with net_device pointers from the tc-flower offload, for things like indev, or mirred. It is not desirable to refactor that, so let's make sure that the flower offload core has the ability to translate between a net_device and a port index properly. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Acked-by: Alexandre Belloni <alexandre.belloni@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
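The helpers are presumably of roughly this shape (prototypes assumed from the commit subject, not quoted from the driver):

    /* Assumed prototypes; the switchdev and DSA front-ends each provide
     * the port <-> net_device mapping behind them.
     */
    struct net_device *ocelot_port_to_netdev(struct ocelot *ocelot, int port);
    int ocelot_netdev_to_port(struct net_device *dev);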
2020-10-02  net: mscc: ocelot: offload multiple tc-flower actions in same rule  (Vladimir Oltean)
At this stage, the tc-flower offload of mscc_ocelot can only delegate rules to the VCAP IS2 security enforcement block. These rules have, in hardware, separate bits for policing and for overriding the destination port mask and/or copying to the CPU. So it makes sense that we attempt to expose some more of that low-level complexity instead of simply choosing between a single type of action. Something similar happens with the VCAP IS1 block, where the same action can contain enable bits for VLAN classification and for QoS classification at the same time. So model the action structure after the hardware description, and let the high-level ocelot_flower.c construct an action vector from multiple tc actions. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-10-02  Merge tag 'mac80211-next-for-net-next-2020-10-02' of git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next  (David S. Miller)
Johannes Berg says:
====================
Another set of changes, this time with:
 * lots more S1G band support
 * 6 GHz scanning, finally
 * kernel-doc fixes
 * non-split wiphy dump fixes in nl80211
 * various other small cleanups/features
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-10-02  scsi: libiscsi: use sendpage_ok() in iscsi_tcp_segment_map()  (Coly Li)
In the iscsi driver, iscsi_tcp_segment_map() uses the following code to check whether or not the page should be handled by sendpage: if (!recv && page_count(sg_page(sg)) >= 1 && !PageSlab(sg_page(sg))) The "page_count(sg_page(sg)) >= 1 && !PageSlab(sg_page(sg))" part is there to make sure the page can be sent to the network layer's zero copy path. This is exactly what sendpage_ok() does. This patch uses sendpage_ok() in iscsi_tcp_segment_map() to replace the original open coded checks. Signed-off-by: Coly Li <colyli@suse.de> Reviewed-by: Lee Duncan <lduncan@suse.com> Acked-by: Martin K. Petersen <martin.petersen@oracle.com> Cc: Vasily Averin <vvs@virtuozzo.com> Cc: Cong Wang <amwang@redhat.com> Cc: Mike Christie <michaelc@cs.wisc.edu> Cc: Chris Leech <cleech@redhat.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Signed-off-by: David S. Miller <davem@davemloft.net>
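A minimal sketch of the substitution (hypothetical wrapper; in the driver the check is applied inline in iscsi_tcp_segment_map()):

    #include <linux/net.h>
    #include <linux/scatterlist.h>

    /* Previously open coded as:
     *   !recv && page_count(sg_page(sg)) >= 1 && !PageSlab(sg_page(sg))
     */
    static bool segment_can_use_sendpage(struct scatterlist *sg, bool recv)
    {
            return !recv && sendpage_ok(sg_page(sg));
    }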
2020-10-02  drbd: code cleanup by using sendpage_ok() to check page for kernel_sendpage()  (Coly Li)
In _drbd_send_page() a page is checked by the following code before sending it with kernel_sendpage(): (page_count(page) < 1) || PageSlab(page) If the check is true, this page won't be sent by kernel_sendpage() and will be handled by sock_no_sendpage() instead. This kind of check is exactly what the helper sendpage_ok() does, which was introduced into include/linux/net.h to solve a similar send page issue in the nvme-tcp code. This patch uses sendpage_ok() to replace the open coded checks of page type and refcount in _drbd_send_page(), as a code cleanup. Signed-off-by: Coly Li <colyli@suse.de> Cc: Philipp Reisner <philipp.reisner@linbit.com> Cc: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-10-02  nvme-tcp: check page by sendpage_ok() before calling kernel_sendpage()  (Coly Li)
Currently nvme_tcp_try_send_data() doesn't use kernel_sendpage() to send slab pages. But pages allocated by __get_free_pages() without __GFP_COMP, which also have a refcount of 0, are still sent by kernel_sendpage() to the remote end, which is problematic. The newly introduced helper sendpage_ok() checks both the PageSlab tag and the page_count counter, and returns true if the page is OK to be sent by kernel_sendpage(). This patch fixes the page checking issue in nvme_tcp_try_send_data() with sendpage_ok(): if sendpage_ok() returns true, send the page by kernel_sendpage(), otherwise use sock_no_sendpage() to handle it. Signed-off-by: Coly Li <colyli@suse.de> Cc: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Cc: Jan Kara <jack@suse.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Mikhail Skorzhinskii <mskorzhinskiy@solarflare.com> Cc: Philipp Reisner <philipp.reisner@linbit.com> Cc: Sagi Grimberg <sagi@grimberg.me> Cc: Vlastimil Babka <vbabka@suse.com> Cc: stable@vger.kernel.org Signed-off-by: David S. Miller <davem@davemloft.net>
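A sketch of the resulting dispatch (hypothetical helper; in the driver the branch sits inside nvme_tcp_try_send_data()):

    #include <linux/net.h>

    static ssize_t send_one_page(struct socket *sock, struct page *page,
                                 int offset, size_t len, int flags)
    {
            /* slab pages and 0-refcount pages must not take the
             * zero-copy kernel_sendpage() path
             */
            if (sendpage_ok(page))
                    return kernel_sendpage(sock, page, offset, len, flags);
            return sock_no_sendpage(sock, page, offset, len, flags);
    }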
2020-10-02  net/smscx5xx: change from of_get_mac_address() to eth_platform_get_mac_address()  (Łukasz Stelmach)
Use the more generic eth_platform_get_mac_address(), which can also get a MAC address from platform-specific sources other than DT. Check if the obtained address is valid. Signed-off-by: Łukasz Stelmach <l.stelmach@samsung.com> Signed-off-by: David S. Miller <davem@davemloft.net>
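A sketch of the lookup plus validity check (hypothetical helper; the real change is in the smsc75xx/smsc95xx init paths):

    #include <linux/etherdevice.h>
    #include <linux/netdevice.h>

    static void example_init_mac(struct device *dev, struct net_device *net)
    {
            u8 addr[ETH_ALEN];

            /* returns 0 when an address is found (DT or another platform source) */
            if (!eth_platform_get_mac_address(dev, addr) &&
                is_valid_ether_addr(addr)) {
                    ether_addr_copy(net->dev_addr, addr);
                    return;
            }
            /* otherwise fall back to the driver's existing EEPROM path */
    }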
2020-10-02  net: usb: pegasus: Proper error handling when setting pegasus' MAC address  (Petko Manolov)
v2: If reading the MAC address from EEPROM fails, don't throw an error; use a randomly generated MAC instead. Either way the adapter will soldier on and the return type of set_ethernet_addr() can be reverted to void. v1: Fix a bug in set_ethernet_addr() which does not take into account possible errors (or partial reads) returned by its helpers. This can potentially lead to writing random data into the device's MAC address registers. Signed-off-by: Petko Manolov <petko.manolov@konsulko.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-10-02  Merge tag 'pinctrl-v5.9-2' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl  (Linus Torvalds)
Pull pin control fixes from Linus Walleij:
 "Some pin control fixes here. All of them are driver fixes, the Intel Cherryview being the most interesting one.
  - Fix a mux problem for I2C in the MVEBU driver.
  - Fix a really hairy inversion problem in the Intel Cherryview driver.
  - Fix the register for the sdc2_clk in the Qualcomm SM8250 driver.
  - Check the virtual GPIO boot failure in the Mediatek driver"
* tag 'pinctrl-v5.9-2' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl:
  pinctrl: mediatek: check mtk_is_virt_gpio input parameter
  pinctrl: qcom: sm8250: correct sdc2_clk
  pinctrl: cherryview: Preserve CHV_PADCTRL1_INVRXTX_TXDATA flag on GPIOs
  pinctrl: mvebu: Fix i2c sda definition for 98DX3236
2020-10-02  Merge tag 'pci-v5.9-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci  (Linus Torvalds)
Pull PCI fixes from Bjorn Helgaas:
 - Fix rockchip regression in rockchip_pcie_valid_device() (Lorenzo Pieralisi)
 - Add Pali Rohár as aardvark PCI maintainer (Pali Rohár)
* tag 'pci-v5.9-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci:
  MAINTAINERS: Add Pali Rohár as aardvark PCI maintainer
  PCI: rockchip: Fix bus checks in rockchip_pcie_valid_device()
2020-10-02  Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi  (Linus Torvalds)
Pull SCSI fixes from James Bottomley:
 "Two patches in driver frameworks. The iscsi one corrects a bug induced by a BPF change to network locking and the other is a regression we introduced"
* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
  scsi: iscsi: iscsi_tcp: Avoid holding spinlock while calling getpeername()
  scsi: target: Fix lun lookup for TARGET_SCF_LOOKUP_LUN_FROM_TAG case
2020-10-02  RISC-V: Add EFI runtime services  (Atish Patra)
This patch adds EFI runtime service support for RISC-V. Signed-off-by: Atish Patra <atish.patra@wdc.com> [ardb: - Remove the page check] Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-10-02  RISC-V: Add EFI stub support.  (Atish Patra)
Add RISC-V architecture specific stub code that copies the actual kernel image to a valid address and jumps to it after boot services are terminated. Enable UEFI related kernel configs for RISC-V as well. Signed-off-by: Atish Patra <atish.patra@wdc.com> Link: https://lore.kernel.org/r/20200421033336.9663-4-atish.patra@wdc.com [ardb: - move hartid fetch into check_platform_features() - use image_size not reserve_size - select ISA_C - do not use dram_base] Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-10-02  Merge tag 'efi-riscv-shared-for-v5.10' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/efi/efi into for-next  (Palmer Dabbelt)
Stable branch for v5.10 shared between the EFI and RISC-V trees. The RISC-V EFI boot and runtime support will be merged for v5.10 via the RISC-V tree. However, it incorporates some changes that conflict with other EFI changes that are in flight, so this tag serves as a shared base that allows those conflicts to be resolved beforehand.
* tag 'efi-riscv-shared-for-v5.10' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/efi/efi:
  efi/libstub: arm32: Use low allocation for the uncompressed kernel
  efi/libstub: Export efi_low_alloc_above() to other units
  efi/libstub: arm32: Base FDT and initrd placement on image address
  efi: Rename arm-init to efi-init common for all arch
  include: pe.h: Add RISC-V related PE definition
2020-10-02  spi: spi-s3c64xx: Turn on interrupts upon resume  (Łukasz Stelmach)
s3c64xx_spi_hwinit() disables interrupts. In s3c64xx_spi_probe(), after calling s3c64xx_spi_hwinit(), they are enabled again right afterwards. Re-enable them in the same way upon resume, so that interrupts work after the device is resumed. Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Łukasz Stelmach <l.stelmach@samsung.com> Link: https://lore.kernel.org/r/20201002122243.26849-10-l.stelmach@samsung.com Signed-off-by: Mark Brown <broonie@kernel.org>
2020-10-02  spi: spi-s3c64xx: Increase transfer timeout  (Łukasz Stelmach)
Increase the timeout by 30 ms for some wiggle room and set the minimum value to 100 ms. This ensures a non-zero value for short transfers, which may take less than 1 ms. The timeout value does not affect performance because it is used with a completion. A similar formula is used in other drivers, e.g. sun4i and sun6i. Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Łukasz Stelmach <l.stelmach@samsung.com> Link: https://lore.kernel.org/r/20201002122243.26849-9-l.stelmach@samsung.com Signed-off-by: Mark Brown <broonie@kernel.org>
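A sketch of the timeout calculation described above (assumed variable names; the real code lives in the driver's wait helpers):

    #include <linux/kernel.h>

    static unsigned long xfer_timeout_ms(unsigned int len_bytes, u32 speed_hz)
    {
            unsigned long ms;

            /* time needed to shift len_bytes out at speed_hz, in ms */
            ms = (unsigned long)len_bytes * 8 * 1000 / speed_hz;
            ms += 30;               /* some wiggle room */
            return max(ms, 100UL);  /* at least 100 ms for short transfers */
    }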
2020-10-02  spi: spi-s3c64xx: Ensure cur_speed holds actual clock value  (Łukasz Stelmach)
Make sure the cur_speed value used in s3c64xx_enable_datapath() to configure the DMA channel and in s3c64xx_wait_for_*() to calculate the transfer timeout is set to the actual value of (half) the clock speed. The non-CMU case is left unchanged, because no frequency calculation errors have been reported there. Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org> Suggested-by: Tomasz Figa <tomasz.figa@gmail.com> Signed-off-by: Łukasz Stelmach <l.stelmach@samsung.com> Link: https://lore.kernel.org/r/20201002122243.26849-8-l.stelmach@samsung.com Signed-off-by: Mark Brown <broonie@kernel.org>
2020-10-02  spi: spi-s3c64xx: Fix doc comment for struct s3c64xx_spi_driver_data  (Łukasz Stelmach)
Remove descriptions for non-existent fields and fix indentation. Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Łukasz Stelmach <l.stelmach@samsung.com> Link: https://lore.kernel.org/r/20201002122243.26849-7-l.stelmach@samsung.com Signed-off-by: Mark Brown <broonie@kernel.org>
2020-10-02  spi: spi-s3c64xx: Rename S3C64XX_SPI_SLAVE_* to S3C64XX_SPI_CS_*  (Łukasz Stelmach)
Rename S3C64XX_SPI_SLAVE_* to S3C64XX_SPI_CS_* to match documentation. Signed-off-by: Łukasz Stelmach <l.stelmach@samsung.com> Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org> Link: https://lore.kernel.org/r/20201002122243.26849-6-l.stelmach@samsung.com Signed-off-by: Mark Brown <broonie@kernel.org>
2020-10-02  spi: spi-s3c64xx: Report more information when errors occur  (Łukasz Stelmach)
Report amount of pending data when a transfer stops due to errors. Report if DMA was used to transfer data and print the status code. Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Łukasz Stelmach <l.stelmach@samsung.com> Link: https://lore.kernel.org/r/20201002122243.26849-5-l.stelmach@samsung.com Signed-off-by: Mark Brown <broonie@kernel.org>
2020-10-02  spi: spi-s3c64xx: Check return values  (Łukasz Stelmach)
Check return values in prepare_dma() and s3c64xx_spi_config() and propagate errors upwards. Fixes: 788437273fa8 ("spi: s3c64xx: move to generic dmaengine API") Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Łukasz Stelmach <l.stelmach@samsung.com> Link: https://lore.kernel.org/r/20201002122243.26849-4-l.stelmach@samsung.com Signed-off-by: Mark Brown <broonie@kernel.org>
2020-10-02  spi: spi-s3c64xx: Add S3C64XX_SPI_QUIRK_CS_AUTO for Exynos3250  (Łukasz Stelmach)
Fix issues with DMA transfers bigger than 512 bytes on Exynos3250. Without the patches such transfers fail. The vendor kernel for ARTIK5 handles CS in a similar way. Signed-off-by: Łukasz Stelmach <l.stelmach@samsung.com> Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org> Link: https://lore.kernel.org/r/20201002122243.26849-3-l.stelmach@samsung.com Signed-off-by: Mark Brown <broonie@kernel.org>
2020-10-02  spi: spi-s3c64xx: swap s3c64xx_spi_set_cs() and s3c64xx_enable_datapath()  (Łukasz Stelmach)
Fix issues with DMA transfers bigger than 512 bytes on Exynos3250. Without the patches such transfers fail to complete. This solution to the problem is found in the vendor kernel for ARTIK5 boards based on Exynos3250. Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Łukasz Stelmach <l.stelmach@samsung.com> Link: https://lore.kernel.org/r/20201002122243.26849-2-l.stelmach@samsung.com Signed-off-by: Mark Brown <broonie@kernel.org>
2020-10-02  ahci: qoriq: enable acpi support in qoriq ahci driver  (Yuantian Tang)
This patch enables ACPI support in the qoriq ahci driver. Signed-off-by: Udit Kumar <udit.kumar@nxp.com> Signed-off-by: Yuantian Tang <andy.tang@nxp.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-10-02  sata, highbank: simplify the return expression of ahci_highbank_suspend  (Liu Shixin)
Simplify the return expression. Signed-off-by: Liu Shixin <liushixin2@huawei.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-10-02  ahci: Add Intel Rocket Lake PCH-H RAID PCI IDs  (Mika Westerberg)
Add Intel Rocket Lake PCH-H RAID PCI IDs to the list of supported controllers. Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-10-02  net: dsa: b53: Set untag_bridge_pvid  (Florian Fainelli)
Indicate to the DSA receive path that we need to untag the bridge PVID; this allows us to remove the dsa_untag_bridge_pvid() calls from net/dsa/tag_brcm.c. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-10-02  bcache: remove embedded struct cache_sb from struct cache_set  (Coly Li)
Since the bcache code was merged into the mainline kernel, each cache set has had only one single cache in it. The multiple caches framework is there but the code is far from completed. Considering that multiple copies of cached data can also be stored on e.g. md raid1 devices, it is indeed unnecessary to support multiple caches in one cache set. The previous preparation patches fix the dependencies of explicitly making a cache set only have a single cache. Now we don't have to maintain an embedded partial super block in struct cache_set; the in-memory super block can be directly referenced from struct cache. This patch removes the embedded struct cache_sb from struct cache_set, and fixes all locations where the super block was referenced from this removed super block by referencing the in-memory super block of struct cache. Signed-off-by: Coly Li <colyli@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-10-02  bcache: check and set sync status on cache's in-memory super block  (Coly Li)
Currently the cache's sync status is checked and set on the cache set's in-memory partial super block. After removing the embedded struct cache_sb from struct cache_set and referencing the cache's in-memory super block from struct cache_set, the sync status can be set and checked directly on the cache's super block. This patch checks and sets the cache sync status directly on the cache's in-memory super block. This is a preparation for later removing the embedded struct cache_sb from struct cache_set. Signed-off-by: Coly Li <colyli@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-10-02  bcache: remove can_attach_cache()  (Coly Li)
After removing the embedded struct cache_sb from struct cache_set, the cache set will directly reference the in-memory super block of struct cache. It is unnecessary to compare block_size, bucket_size and nr_in_set from the identical in-memory super block in can_attach_cache(). This is a preparation patch for later removing cache_set->sb from struct cache_set. Signed-off-by: Coly Li <colyli@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-10-02  bcache: don't check seq numbers in register_cache_set()  (Coly Li)
In order to update the partial super block of the cache set, the seq numbers of the cache and the cache set are checked in register_cache_set(). If the cache's seq number is larger than the cache set's seq number, the cache set must update its partial super block from the cache's super block. This becomes unnecessary once the embedded struct cache_sb is removed from struct cache_set. This patch removes the seq number checking from register_cache_set(), because later there will be no such partial super block in struct cache_set; the cache set will directly reference the in-memory super block from struct cache. This is a preparation patch for removing the embedded struct cache_sb from struct cache_set. Signed-off-by: Coly Li <colyli@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-10-02  bcache: only use bucket_bytes() on struct cache  (Coly Li)
Because struct cache_set and struct cache both have a struct cache_sb, the macro bucket_bytes() is currently used on both of them. When removing the embedded struct cache_sb from struct cache_set, this macro won't be used on struct cache_set anymore. This patch unifies all bucket_bytes() usage on struct cache only; this is one of the preparations to remove the embedded struct cache_sb from struct cache_set. Signed-off-by: Coly Li <colyli@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-10-02  bcache: remove useless bucket_pages()  (Coly Li)
It seems alloc_bucket_pages() is the only user of bucket_pages(). Considering alloc_bucket_pages() is removed from bcache code, it is safe to remove the useless macro bucket_pages() now. Signed-off-by: Coly Li <colyli@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-10-02  bcache: remove useless alloc_bucket_pages()  (Coly Li)
Now no one uses alloc_bucket_pages() anymore, remove it from bcache.h. Signed-off-by: Coly Li <colyli@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-10-02  bcache: only use block_bytes() on struct cache  (Coly Li)
Because struct cache_set and struct cache both have a struct cache_sb, the macro block_bytes() can be used on both of them. When removing the embedded struct cache_sb from struct cache_set, this macro won't be used on struct cache_set anymore. This patch unifies all block_bytes() usage on struct cache only; this is one of the preparations to remove the embedded struct cache_sb from struct cache_set. Signed-off-by: Coly Li <colyli@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-10-02  bcache: add set_uuid in struct cache_set  (Coly Li)
This patch adds a separate set_uuid[16] in struct cache_set, to store the uuid of the cache set. This is a preparation to remove the embedded struct cache_sb from struct cache_set. Signed-off-by: Coly Li <colyli@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-10-02  bcache: remove for_each_cache()  (Coly Li)
Since each cache_set now explicitly has a single cache, for_each_cache() is unnecessary. This patch removes this macro, updates all locations where it is used, and makes sure all code logic is still consistent. Signed-off-by: Coly Li <colyli@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-10-02  bcache: explicitly make cache_set only have single cache  (Coly Li)
Although the bcache code has a framework for multiple caches in a cache set, the multiple-cache support was never completed and users use md raid1 for multiple copies of the cached data. This patch makes the following changes in struct cache_set, to explicitly make a cache_set only have a single cache:
 - Change the pointer array "*cache[MAX_CACHES_PER_SET]" to a single pointer "*cache".
 - Remove the pointer array "*cache_by_alloc[MAX_CACHES_PER_SET]".
 - Remove "caches_loaded".
Now the code looks exactly like what it does in practice: only one cache is used in the cache set. Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
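In outline, the change to struct cache_set looks like this (simplified sketch based on the description above; all other members omitted):

    struct cache;                           /* defined elsewhere in bcache */

    struct cache_set {
            /* before (never-completed multi-cache framework):
             *      struct cache    *cache[MAX_CACHES_PER_SET];
             *      struct cache    *cache_by_alloc[MAX_CACHES_PER_SET];
             *      unsigned int    caches_loaded;
             */
            struct cache            *cache; /* after: exactly one cache */
    };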
2020-10-02  bcache: remove 'int n' from parameter list of bch_bucket_alloc_set()  (Coly Li)
The parameter 'int n' of bch_bucket_alloc_set() is not clearly defined. From the code comments, n is the number of buckets to alloc, but from the code itself 'n' is the maximum number of caches to iterate over. Indeed, in all the locations where bch_bucket_alloc_set() is called, 'n' is always 1. This patch removes the confusing and unnecessary 'int n' from the parameter list of bch_bucket_alloc_set(), and explicitly allocates only 1 bucket for its callers. Signed-off-by: Coly Li <colyli@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-10-02  bcache: Convert to DEFINE_SHOW_ATTRIBUTE  (Qinglang Miao)
Use the DEFINE_SHOW_ATTRIBUTE macro to simplify the code. As inode->i_private equals the data parameter passed to debugfs_create_file(), which is NULL here, this is equivalent to the original code logic. Signed-off-by: Qinglang Miao <miaoqinglang@huawei.com> Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
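For reference, the DEFINE_SHOW_ATTRIBUTE() pattern looks roughly like this (hypothetical 'example' attribute, not the actual bcache debugfs file):

    #include <linux/debugfs.h>
    #include <linux/seq_file.h>

    static int example_show(struct seq_file *s, void *unused)
    {
            seq_puts(s, "hello\n");
            return 0;
    }
    DEFINE_SHOW_ATTRIBUTE(example); /* generates example_open() and example_fops */

    /* debugfs_create_file("example", 0444, parent, NULL, &example_fops);
     * the NULL data pointer ends up in inode->i_private.
     */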
2020-10-02  bcache: check c->root with IS_ERR_OR_NULL() in mca_reserve()  (Dongsheng Yang)
In the mca_reserve(c) macro we check whether root is NULL or not. But that's not enough: when we read the root node in run_cache_set(), if we get an error in bch_btree_node_read_done(), we assign ERR_PTR(-EIO) to c->root. We then continue on to unregister, but before calling unregister_shrinker(&c->shrink) there is a possibility that bch_mca_count() is called, and we get a crash with a call trace like this:
[ 2149.876008] Unable to handle kernel NULL pointer dereference at virtual address 00000000000000b5
... ...
[ 2150.598931] Call trace:
[ 2150.606439]  bch_mca_count+0x58/0x98 [escache]
[ 2150.615866]  do_shrink_slab+0x54/0x310
[ 2150.624429]  shrink_slab+0x248/0x2d0
[ 2150.632633]  drop_slab_node+0x54/0x88
[ 2150.640746]  drop_slab+0x50/0x88
[ 2150.648228]  drop_caches_sysctl_handler+0xf0/0x118
[ 2150.657219]  proc_sys_call_handler.isra.18+0xb8/0x110
[ 2150.666342]  proc_sys_write+0x40/0x50
[ 2150.673889]  __vfs_write+0x48/0x90
[ 2150.681095]  vfs_write+0xac/0x1b8
[ 2150.688145]  ksys_write+0x6c/0xd0
[ 2150.695127]  __arm64_sys_write+0x24/0x30
[ 2150.702749]  el0_svc_handler+0xa0/0x128
[ 2150.710296]  el0_svc+0x8/0xc
Signed-off-by: Dongsheng Yang <dongsheng.yang@easystack.cn> Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
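The stricter check is presumably along these lines (hypothetical helper, assumed from the description rather than quoted from the patch):

    #include <linux/err.h>

    struct btree;   /* bcache's btree node type, opaque here */

    static bool root_is_usable(struct btree *root)
    {
            /* root may hold ERR_PTR(-EIO) after a failed
             * bch_btree_node_read_done(), so a plain NULL check
             * is not enough
             */
            return !IS_ERR_OR_NULL(root);
    }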
2020-10-02  bcache: share register sysfs with async register  (Coly Li)
Previously the experimental async registration used a separate sysfs file, register_async. Now that the async registration code has been working well for a while, we can do further testing with it. This patch changes the async bcache registration to share the same sysfs file /sys/fs/bcache/register (and register_quiet). Async registration will be the default behavior if BCACHE_ASYNC_REGISTRATION is set in the kernel config. By default, BCACHE_ASYNC_REGISTRATION is not configured yet. Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-10-02  net/mlx5e: Fix race condition on nhe->n pointer in neigh update  (Vlad Buslov)
The current neigh update event handler implementation takes a reference to the neighbour structure, assigns it to nhe->n, tries to schedule a workqueue task and releases the reference if the task was already enqueued. This can result in overwriting an existing nhe->n pointer with another neighbour instance, which causes a double release of the instance (once in the neigh update handler that failed to enqueue to the workqueue and another one in the neigh update workqueue task that processes the updated nhe->n pointer instead of the original one):
[ 3376.512806] ------------[ cut here ]------------
[ 3376.513534] refcount_t: underflow; use-after-free.
[ 3376.521213] Modules linked in: act_skbedit act_mirred act_tunnel_key vxlan ip6_udp_tunnel udp_tunnel nfnetlink act_gact cls_flower sch_ingress openvswitch nsh nf_conncount nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 mlx5_ib mlx5_core mlxfw pci_hyperv_intf ptp pps_core nfsv3 nfs_acl rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache ib_isert iscsi_target_mod ib_srpt target_core_mod ib_srp rpcrdma rdma_ucm ib_umad ib_ipoib ib_iser rdma_cm ib_cm iw_cm rfkill ib_uverbs ib_core sunrpc kvm_intel kvm iTCO_wdt iTCO_vendor_support virtio_net irqbypass net_failover crc32_pclmul lpc_ich i2c_i801 failover pcspkr i2c_smbus mfd_core ghash_clmulni_intel sch_fq_codel drm i2c_core ip_tables crc32c_intel serio_raw [last unloaded: mlxfw]
[ 3376.529468] CPU: 8 PID: 22756 Comm: kworker/u20:5 Not tainted 5.9.0-rc5+ #6
[ 3376.530399] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.org 04/01/2014
[ 3376.531975] Workqueue: mlx5e mlx5e_rep_neigh_update [mlx5_core]
[ 3376.532820] RIP: 0010:refcount_warn_saturate+0xd8/0xe0
[ 3376.533589] Code: ff 48 c7 c7 e0 b8 27 82 c6 05 0b b6 09 01 01 e8 94 93 c1 ff 0f 0b c3 48 c7 c7 88 b8 27 82 c6 05 f7 b5 09 01 01 e8 7e 93 c1 ff <0f> 0b c3 0f 1f 44 00 00 8b 07 3d 00 00 00 c0 74 12 83 f8 01 74 13
[ 3376.536017] RSP: 0018:ffffc90002a97e30 EFLAGS: 00010286
[ 3376.536793] RAX: 0000000000000000 RBX: ffff8882de30d648 RCX: 0000000000000000
[ 3376.537718] RDX: ffff8882f5c28f20 RSI: ffff8882f5c18e40 RDI: ffff8882f5c18e40
[ 3376.538654] RBP: ffff8882cdf56c00 R08: 000000000000c580 R09: 0000000000001a4d
[ 3376.539582] R10: 0000000000000731 R11: ffffc90002a97ccd R12: 0000000000000000
[ 3376.540519] R13: ffff8882de30d600 R14: ffff8882de30d640 R15: ffff88821e000900
[ 3376.541444] FS: 0000000000000000(0000) GS:ffff8882f5c00000(0000) knlGS:0000000000000000
[ 3376.542732] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 3376.543545] CR2: 0000556e5504b248 CR3: 00000002c6f10005 CR4: 0000000000770ee0
[ 3376.544483] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 3376.545419] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 3376.546344] PKRU: 55555554
[ 3376.546911] Call Trace:
[ 3376.547479]  mlx5e_rep_neigh_update.cold+0x33/0xe2 [mlx5_core]
[ 3376.548299]  process_one_work+0x1d8/0x390
[ 3376.548977]  worker_thread+0x4d/0x3e0
[ 3376.549631]  ? rescuer_thread+0x3e0/0x3e0
[ 3376.550295]  kthread+0x118/0x130
[ 3376.550914]  ? kthread_create_worker_on_cpu+0x70/0x70
[ 3376.551675]  ret_from_fork+0x1f/0x30
[ 3376.552312] ---[ end trace d84e8f46d2a77eec ]---
Fix the bug by moving the work_struct to a dedicated dynamically-allocated structure. This enables every event handler to work on its own private neighbour pointer and removes the need to handle the case where a task is already enqueued.
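A sketch of the per-event work item described above (hypothetical names, not the actual mlx5e structures):

    #include <linux/slab.h>
    #include <linux/workqueue.h>
    #include <net/neighbour.h>

    /* Each neigh update event gets its own dynamically allocated work
     * item holding a private neighbour reference, so concurrent events
     * can no longer overwrite a shared nhe->n pointer.
     */
    struct neigh_update_work {
            struct work_struct      work;
            struct neighbour        *n;     /* put when the work has run */
    };

    static struct neigh_update_work *neigh_update_work_alloc(struct neighbour *n)
    {
            struct neigh_update_work *w = kzalloc(sizeof(*w), GFP_ATOMIC);

            if (!w)
                    return NULL;
            neigh_hold(n);  /* private reference for the work item */
            w->n = n;
            return w;
    }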
Fixes: 232c001398ae ("net/mlx5e: Add support to neighbour update flow") Signed-off-by: Vlad Buslov <vladbu@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2020-10-02  net/mlx5e: Fix VLAN create flow  (Aya Levin)
When an interface is attached while in promiscuous mode and with VLAN filtering turned off, both configurations are not respected and VLAN filtering is performed. There are 2 flows which add the any-vid rules during interface attach: VLAN creation table and set rx mode. Each is relying on the other to add the any-vid rules, so eventually neither of them does. Fix this by adding the any-vid rules on VLAN creation regardless of promiscuous mode. Fixes: 9df30601c843 ("net/mlx5e: Restore vlan filter after seamless reset") Signed-off-by: Aya Levin <ayal@nvidia.com> Reviewed-by: Moshe Shemesh <moshe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2020-10-02  net/mlx5e: Fix VLAN cleanup flow  (Aya Levin)
Prior to this patch, unloading an interface in promiscuous mode with the RX VLAN filtering feature turned off resulted in a warning. This is due to a wrong condition in the VLAN rules cleanup flow, which left the any-vid rules in the VLAN steering table. These rules prevented destroying the flow group and the flow table. The any-vid rules are removed in 2 flows, but neither of them removes them when both promiscuous mode is set and VLAN filtering is off. Fix the issue by changing the condition of the VLAN table cleanup flow to clean up also in case of promiscuous mode.
mlx5_core 0000:00:08.0: mlx5_destroy_flow_group:2123:(pid 28729): Flow group 20 wasn't destroyed, refcount > 1
mlx5_core 0000:00:08.0: mlx5_destroy_flow_group:2123:(pid 28729): Flow group 19 wasn't destroyed, refcount > 1
mlx5_core 0000:00:08.0: mlx5_destroy_flow_table:2112:(pid 28729): Flow table 262149 wasn't destroyed, refcount > 1
... ...
------------[ cut here ]------------
FW pages counter is 11560 after reclaiming all pages
WARNING: CPU: 1 PID: 28729 at drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c:660 mlx5_reclaim_startup_pages+0x178/0x230 [mlx5_core]
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.org 04/01/2014
Call Trace:
 mlx5_function_teardown+0x2f/0x90 [mlx5_core]
 mlx5_unload_one+0x71/0x110 [mlx5_core]
 remove_one+0x44/0x80 [mlx5_core]
 pci_device_remove+0x3e/0xc0
 device_release_driver_internal+0xfb/0x1c0
 device_release_driver+0x12/0x20
 pci_stop_bus_device+0x68/0x90
 pci_stop_and_remove_bus_device+0x12/0x20
 hv_eject_device_work+0x6f/0x170 [pci_hyperv]
 ? __schedule+0x349/0x790
 process_one_work+0x206/0x400
 worker_thread+0x34/0x3f0
 ? process_one_work+0x400/0x400
 kthread+0x126/0x140
 ? kthread_park+0x90/0x90
 ret_from_fork+0x22/0x30
---[ end trace 6283bde8d26170dc ]---
Fixes: 9df30601c843 ("net/mlx5e: Restore vlan filter after seamless reset") Signed-off-by: Aya Levin <ayal@nvidia.com> Reviewed-by: Moshe Shemesh <moshe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>