author     Linus Torvalds <torvalds@linux-foundation.org>  2020-10-15 18:42:13 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2020-10-15 18:42:13 -0700
commit     9ff9b0d392ea08090cd1780fb196f36dbb586529 (patch)
tree       276a3a5c4525b84dee64eda30b423fc31bf94850 /drivers/net/wireless/marvell/mwifiex/pcie.c
parent     840e5bb326bbcb16ce82dd2416d2769de4839aea (diff)
parent     105faa8742437c28815b2a3eb8314ebc5fd9288c (diff)
Merge tag 'net-next-5.10' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
- Add the redirect_neigh() BPF packet redirect helper, which limits
  stack traversal in common container configs and improves TCP
  back-pressure.
  Daniel reports a ~10Gbps => ~15Gbps single-stream TCP performance gain.
- Expand netlink policy support and improve policy export to user
space. (Ge)netlink core performs request validation according to
declared policies. Expand the expressiveness of those policies
(min/max length and bitmasks). Allow dumping policies for particular
commands. This is used for feature discovery by user space (instead
of kernel version parsing or trial and error).
- Support IGMPv3/MLDv2 multicast listener discovery protocols in
bridge.
- Allow more than 255 IPv4 multicast interfaces.
- Add support for Type of Service (ToS) reflection in SYN/SYN-ACK
packets of TCPv6.
- In Multipath TCP (MPTCP) support concurrent transmission of data on
  multiple subflows in a load balancing scenario. Enhance advertising
  addresses via the RM_ADDR/ADD_ADDR options.
- Support SMC-Dv2 version of SMC, which enables multi-subnet
deployments.
- Allow more calls to same peer in RxRPC.
- Support two new Controller Area Network (CAN) protocols - CAN-FD and
ISO 15765-2:2016.
- Add xfrm/IPsec compat layer, solving the 32bit user space on 64bit
kernel problem.
- Add TC actions for implementing MPLS L2 VPNs.
- Improve nexthop code - e.g. handle various corner cases when nexthop
objects are removed from groups better, skip unnecessary
notifications and make it easier to offload nexthops into HW by
converting to a blocking notifier.
- Support adding and consuming TCP header options by BPF programs,
  opening the door for easy experimental and deployment-specific TCP
  option use (a hedged sockops sketch follows this list).
- Reorganize TCP congestion control (CC) initialization to simplify
life of TCP CC implemented in BPF.
- Add support for shipping BPF programs with the kernel and loading
them early on boot via the User Mode Driver mechanism, hence reusing
all the user space infra we have.
- Support sleepable BPF programs, initially targeting LSM and tracing.
- Add the bpf_d_path() helper for returning the full path for a given
  'struct path' (a hedged tracing sketch follows this list).
- Make bpf_tail_call compatible with bpf-to-bpf calls.
- Allow BPF programs to call map_update_elem on sockmaps.
- Add BPF Type Format (BTF) support for type and enum discovery, as
well as support for using BTF within the kernel itself (current use
is for pretty printing structures).
- Support listing and getting information about bpf_links via the bpf
syscall.
- Enhance kernel interfaces around NIC firmware update. Allow
specifying overwrite mask to control if settings etc. are reset
during update; report expected max time operation may take to users;
support firmware activation without machine reboot incl. limits of
how much impact reset may have (e.g. dropping link or not).
- Extend ethtool configuration interface to report IEEE-standard
counters, to limit the need for per-vendor logic in user space.
- Adopt or extend devlink use for debug, monitoring, fw update in many
drivers (dsa loop, ice, ionic, sja1105, qed, mlxsw, mv88e6xxx,
dpaa2-eth).
- In mlxsw expose critical and emergency SFP module temperature alarms.
Refactor port buffer handling to make the defaults more suitable and
support setting these values explicitly via the DCBNL interface.
- Add XDP support for Intel's igb driver.
- Support offloading TC flower classification and filtering rules to
mscc_ocelot switches.
- Add PTP support for Marvell Octeontx2 and PP2.2 hardware, as well as
fixed interval period pulse generator and one-step timestamping in
dpaa-eth.
- Add support for various auth offloads in WiFi APs, e.g. SAE (WPA3)
offload.
- Add Lynx PHY/PCS MDIO module, and convert various drivers which have
this HW to use it. Convert mvpp2 to split PCS.
- Support Marvell Prestera 98DX3255 24-port switch ASICs, as well as
7-port Mediatek MT7531 IP.
- Add initial support for QCA6390 and IPQ6018 in ath11k WiFi driver,
and wcn3680 support in wcn36xx.
- Improve performance on recent Mellanox NICs by 20% for packets which
  don't require much offload processing, by making multiple packets
  share a descriptor entry.
- Move chelsio inline crypto drivers (for TLS and IPsec) from the
crypto subtree to drivers/net. Move MDIO drivers out of the phy
directory.
- Clean up a lot of W=1 warnings; reportedly the actively developed
  subsections of networking drivers should now build W=1 warning-free.
- Make sure drivers don't use in_interrupt() to dynamically adapt their
  code. Convert tasklets to use the new tasklet_setup() API (sadly this
  conversion is not yet complete).
* tag 'net-next-5.10' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (2583 commits)
Revert "bpfilter: Fix build error with CONFIG_BPFILTER_UMH"
net, sockmap: Don't call bpf_prog_put() on NULL pointer
bpf, selftest: Fix flaky tcp_hdr_options test when adding addr to lo
bpf, sockmap: Add locking annotations to iterator
netfilter: nftables: allow re-computing sctp CRC-32C in 'payload' statements
net: fix pos incrementment in ipv6_route_seq_next
net/smc: fix invalid return code in smcd_new_buf_create()
net/smc: fix valid DMBE buffer sizes
net/smc: fix use-after-free of delayed events
bpfilter: Fix build error with CONFIG_BPFILTER_UMH
cxgb4/ch_ipsec: Replace the module name to ch_ipsec from chcr
net: sched: Fix suspicious RCU usage while accessing tcf_tunnel_info
bpf: Fix register equivalence tracking.
rxrpc: Fix loss of final ack on shutdown
rxrpc: Fix bundle counting for exclusive connections
netfilter: restore NF_INET_NUMHOOKS
ibmveth: Identify ingress large send packets.
ibmveth: Switch order of ibmveth_helper calls.
cxgb4: handle 4-tuple PEDIT to NAT mode translation
selftests: Add VRF route leaking tests
...
Diffstat (limited to 'drivers/net/wireless/marvell/mwifiex/pcie.c')
-rw-r--r--  drivers/net/wireless/marvell/mwifiex/pcie.c  323
1 file changed, 235 insertions(+), 88 deletions(-)
diff --git a/drivers/net/wireless/marvell/mwifiex/pcie.c b/drivers/net/wireless/marvell/mwifiex/pcie.c
index 87b4ccca4b9a..6a10ff0377a2 100644
--- a/drivers/net/wireless/marvell/mwifiex/pcie.c
+++ b/drivers/net/wireless/marvell/mwifiex/pcie.c
@@ -33,6 +33,155 @@ static struct mwifiex_if_ops pcie_ops;
 
+static const struct mwifiex_pcie_card_reg mwifiex_reg_8766 = {
+	.cmd_addr_lo = PCIE_SCRATCH_0_REG,
+	.cmd_addr_hi = PCIE_SCRATCH_1_REG,
+	.cmd_size = PCIE_SCRATCH_2_REG,
+	.fw_status = PCIE_SCRATCH_3_REG,
+	.cmdrsp_addr_lo = PCIE_SCRATCH_4_REG,
+	.cmdrsp_addr_hi = PCIE_SCRATCH_5_REG,
+	.tx_rdptr = PCIE_SCRATCH_6_REG,
+	.tx_wrptr = PCIE_SCRATCH_7_REG,
+	.rx_rdptr = PCIE_SCRATCH_8_REG,
+	.rx_wrptr = PCIE_SCRATCH_9_REG,
+	.evt_rdptr = PCIE_SCRATCH_10_REG,
+	.evt_wrptr = PCIE_SCRATCH_11_REG,
+	.drv_rdy = PCIE_SCRATCH_12_REG,
+	.tx_start_ptr = 0,
+	.tx_mask = MWIFIEX_TXBD_MASK,
+	.tx_wrap_mask = 0,
+	.rx_mask = MWIFIEX_RXBD_MASK,
+	.rx_wrap_mask = 0,
+	.tx_rollover_ind = MWIFIEX_BD_FLAG_ROLLOVER_IND,
+	.rx_rollover_ind = MWIFIEX_BD_FLAG_ROLLOVER_IND,
+	.evt_rollover_ind = MWIFIEX_BD_FLAG_ROLLOVER_IND,
+	.ring_flag_sop = 0,
+	.ring_flag_eop = 0,
+	.ring_flag_xs_sop = 0,
+	.ring_flag_xs_eop = 0,
+	.ring_tx_start_ptr = 0,
+	.pfu_enabled = 0,
+	.sleep_cookie = 1,
+	.msix_support = 0,
+};
+
+static const struct mwifiex_pcie_card_reg mwifiex_reg_8897 = {
+	.cmd_addr_lo = PCIE_SCRATCH_0_REG,
+	.cmd_addr_hi = PCIE_SCRATCH_1_REG,
+	.cmd_size = PCIE_SCRATCH_2_REG,
+	.fw_status = PCIE_SCRATCH_3_REG,
+	.cmdrsp_addr_lo = PCIE_SCRATCH_4_REG,
+	.cmdrsp_addr_hi = PCIE_SCRATCH_5_REG,
+	.tx_rdptr = PCIE_RD_DATA_PTR_Q0_Q1,
+	.tx_wrptr = PCIE_WR_DATA_PTR_Q0_Q1,
+	.rx_rdptr = PCIE_WR_DATA_PTR_Q0_Q1,
+	.rx_wrptr = PCIE_RD_DATA_PTR_Q0_Q1,
+	.evt_rdptr = PCIE_SCRATCH_10_REG,
+	.evt_wrptr = PCIE_SCRATCH_11_REG,
+	.drv_rdy = PCIE_SCRATCH_12_REG,
+	.tx_start_ptr = 16,
+	.tx_mask = 0x03FF0000,
+	.tx_wrap_mask = 0x07FF0000,
+	.rx_mask = 0x000003FF,
+	.rx_wrap_mask = 0x000007FF,
+	.tx_rollover_ind = MWIFIEX_BD_FLAG_TX_ROLLOVER_IND,
+	.rx_rollover_ind = MWIFIEX_BD_FLAG_RX_ROLLOVER_IND,
+	.evt_rollover_ind = MWIFIEX_BD_FLAG_EVT_ROLLOVER_IND,
+	.ring_flag_sop = MWIFIEX_BD_FLAG_SOP,
+	.ring_flag_eop = MWIFIEX_BD_FLAG_EOP,
+	.ring_flag_xs_sop = MWIFIEX_BD_FLAG_XS_SOP,
+	.ring_flag_xs_eop = MWIFIEX_BD_FLAG_XS_EOP,
+	.ring_tx_start_ptr = MWIFIEX_BD_FLAG_TX_START_PTR,
+	.pfu_enabled = 1,
+	.sleep_cookie = 0,
+	.fw_dump_ctrl = PCIE_SCRATCH_13_REG,
+	.fw_dump_start = PCIE_SCRATCH_14_REG,
+	.fw_dump_end = 0xcff,
+	.fw_dump_host_ready = 0xee,
+	.fw_dump_read_done = 0xfe,
+	.msix_support = 0,
+};
+
+static const struct mwifiex_pcie_card_reg mwifiex_reg_8997 = {
+	.cmd_addr_lo = PCIE_SCRATCH_0_REG,
+	.cmd_addr_hi = PCIE_SCRATCH_1_REG,
+	.cmd_size = PCIE_SCRATCH_2_REG,
+	.fw_status = PCIE_SCRATCH_3_REG,
+	.cmdrsp_addr_lo = PCIE_SCRATCH_4_REG,
+	.cmdrsp_addr_hi = PCIE_SCRATCH_5_REG,
+	.tx_rdptr = 0xC1A4,
+	.tx_wrptr = 0xC174,
+	.rx_rdptr = 0xC174,
+	.rx_wrptr = 0xC1A4,
+	.evt_rdptr = PCIE_SCRATCH_10_REG,
+	.evt_wrptr = PCIE_SCRATCH_11_REG,
+	.drv_rdy = PCIE_SCRATCH_12_REG,
+	.tx_start_ptr = 16,
+	.tx_mask = 0x0FFF0000,
+	.tx_wrap_mask = 0x1FFF0000,
+	.rx_mask = 0x00000FFF,
+	.rx_wrap_mask = 0x00001FFF,
+	.tx_rollover_ind = BIT(28),
+	.rx_rollover_ind = BIT(12),
+	.evt_rollover_ind = MWIFIEX_BD_FLAG_EVT_ROLLOVER_IND,
+	.ring_flag_sop = MWIFIEX_BD_FLAG_SOP,
+	.ring_flag_eop = MWIFIEX_BD_FLAG_EOP,
+	.ring_flag_xs_sop = MWIFIEX_BD_FLAG_XS_SOP,
+	.ring_flag_xs_eop = MWIFIEX_BD_FLAG_XS_EOP,
+	.ring_tx_start_ptr = MWIFIEX_BD_FLAG_TX_START_PTR,
+	.pfu_enabled = 1,
+	.sleep_cookie = 0,
+	.fw_dump_ctrl = PCIE_SCRATCH_13_REG,
+	.fw_dump_start = PCIE_SCRATCH_14_REG,
+	.fw_dump_end = 0xcff,
+	.fw_dump_host_ready = 0xcc,
+	.fw_dump_read_done = 0xdd,
+	.msix_support = 0,
+};
+
+static struct memory_type_mapping mem_type_mapping_tbl_w8897[] = {
+	{"ITCM", NULL, 0, 0xF0},
+	{"DTCM", NULL, 0, 0xF1},
+	{"SQRAM", NULL, 0, 0xF2},
+	{"IRAM", NULL, 0, 0xF3},
+	{"APU", NULL, 0, 0xF4},
+	{"CIU", NULL, 0, 0xF5},
+	{"ICU", NULL, 0, 0xF6},
+	{"MAC", NULL, 0, 0xF7},
+};
+
+static struct memory_type_mapping mem_type_mapping_tbl_w8997[] = {
+	{"DUMP", NULL, 0, 0xDD},
+};
+
+static const struct mwifiex_pcie_device mwifiex_pcie8766 = {
+	.reg = &mwifiex_reg_8766,
+	.blksz_fw_dl = MWIFIEX_PCIE_BLOCK_SIZE_FW_DNLD,
+	.tx_buf_size = MWIFIEX_TX_DATA_BUF_SIZE_2K,
+	.can_dump_fw = false,
+	.can_ext_scan = true,
+};
+
+static const struct mwifiex_pcie_device mwifiex_pcie8897 = {
+	.reg = &mwifiex_reg_8897,
+	.blksz_fw_dl = MWIFIEX_PCIE_BLOCK_SIZE_FW_DNLD,
+	.tx_buf_size = MWIFIEX_TX_DATA_BUF_SIZE_4K,
+	.can_dump_fw = true,
+	.mem_type_mapping_tbl = mem_type_mapping_tbl_w8897,
+	.num_mem_types = ARRAY_SIZE(mem_type_mapping_tbl_w8897),
+	.can_ext_scan = true,
+};
+
+static const struct mwifiex_pcie_device mwifiex_pcie8997 = {
+	.reg = &mwifiex_reg_8997,
+	.blksz_fw_dl = MWIFIEX_PCIE_BLOCK_SIZE_FW_DNLD,
+	.tx_buf_size = MWIFIEX_TX_DATA_BUF_SIZE_4K,
+	.can_dump_fw = true,
+	.mem_type_mapping_tbl = mem_type_mapping_tbl_w8997,
+	.num_mem_types = ARRAY_SIZE(mem_type_mapping_tbl_w8997),
+	.can_ext_scan = true,
+};
+
 static const struct of_device_id mwifiex_pcie_of_match_table[] = {
 	{ .compatible = "pci11ab,2b42" },
 	{ .compatible = "pci1b4b,2b42" },
@@ -58,8 +207,8 @@ mwifiex_map_pci_memory(struct mwifiex_adapter *adapter, struct sk_buff *skb,
 	struct pcie_service_card *card = adapter->card;
 	struct mwifiex_dma_mapping mapping;
 
-	mapping.addr = pci_map_single(card->dev, skb->data, size, flags);
-	if (pci_dma_mapping_error(card->dev, mapping.addr)) {
+	mapping.addr = dma_map_single(&card->dev->dev, skb->data, size, flags);
+	if (dma_mapping_error(&card->dev->dev, mapping.addr)) {
 		mwifiex_dbg(adapter, ERROR, "failed to map pci memory!\n");
 		return -1;
 	}
@@ -75,7 +224,7 @@ static void mwifiex_unmap_pci_memory(struct mwifiex_adapter *adapter,
 	struct mwifiex_dma_mapping mapping;
 
 	mwifiex_get_mapping(skb, &mapping);
-	pci_unmap_single(card->dev, mapping.addr, mapping.len, flags);
+	dma_unmap_single(&card->dev->dev, mapping.addr, mapping.len, flags);
 }
 
 /*
@@ -461,10 +610,9 @@ static void mwifiex_delay_for_sleep_cookie(struct mwifiex_adapter *adapter,
 	struct sk_buff *cmdrsp = card->cmdrsp_buf;
 
 	for (count = 0; count < max_delay_loop_cnt; count++) {
-		pci_dma_sync_single_for_cpu(card->dev,
-					    MWIFIEX_SKB_DMA_ADDR(cmdrsp),
-					    sizeof(sleep_cookie),
-					    PCI_DMA_FROMDEVICE);
+		dma_sync_single_for_cpu(&card->dev->dev,
+					MWIFIEX_SKB_DMA_ADDR(cmdrsp),
+					sizeof(sleep_cookie), DMA_FROM_DEVICE);
 		buffer = cmdrsp->data;
 		sleep_cookie = get_unaligned_le32(buffer);
 
@@ -473,10 +621,10 @@ static void mwifiex_delay_for_sleep_cookie(struct mwifiex_adapter *adapter,
 				    "sleep cookie found at count %d\n", count);
 			break;
 		}
-		pci_dma_sync_single_for_device(card->dev,
-					       MWIFIEX_SKB_DMA_ADDR(cmdrsp),
-					       sizeof(sleep_cookie),
-					       PCI_DMA_FROMDEVICE);
+		dma_sync_single_for_device(&card->dev->dev,
+					   MWIFIEX_SKB_DMA_ADDR(cmdrsp),
+					   sizeof(sleep_cookie),
+					   DMA_FROM_DEVICE);
 		usleep_range(20, 30);
 	}
 
@@ -630,7 +778,7 @@ static int mwifiex_init_rxq_ring(struct mwifiex_adapter *adapter)
 
 		if (mwifiex_map_pci_memory(adapter, skb,
 					   MWIFIEX_RX_DATA_BUF_SIZE,
-					   PCI_DMA_FROMDEVICE))
+					   DMA_FROM_DEVICE))
 			return -1;
 
 		buf_pa = MWIFIEX_SKB_DMA_ADDR(skb);
@@ -687,7 +835,7 @@ static int mwifiex_pcie_init_evt_ring(struct mwifiex_adapter *adapter)
 		skb_put(skb, MAX_EVENT_SIZE);
 
 		if (mwifiex_map_pci_memory(adapter, skb, MAX_EVENT_SIZE,
-					   PCI_DMA_FROMDEVICE)) {
+					   DMA_FROM_DEVICE)) {
 			kfree_skb(skb);
 			kfree(card->evtbd_ring_vbase);
 			return -1;
@@ -730,7 +878,7 @@ static void mwifiex_cleanup_txq_ring(struct mwifiex_adapter *adapter)
 			if (card->tx_buf_list[i]) {
 				skb = card->tx_buf_list[i];
 				mwifiex_unmap_pci_memory(adapter, skb,
-							 PCI_DMA_TODEVICE);
+							 DMA_TO_DEVICE);
 				dev_kfree_skb_any(skb);
 			}
 			memset(desc2, 0, sizeof(*desc2));
@@ -739,7 +887,7 @@ static void mwifiex_cleanup_txq_ring(struct mwifiex_adapter *adapter)
 			if (card->tx_buf_list[i]) {
 				skb = card->tx_buf_list[i];
 				mwifiex_unmap_pci_memory(adapter, skb,
-							 PCI_DMA_TODEVICE);
+							 DMA_TO_DEVICE);
 				dev_kfree_skb_any(skb);
 			}
 			memset(desc, 0, sizeof(*desc));
@@ -769,7 +917,7 @@ static void mwifiex_cleanup_rxq_ring(struct mwifiex_adapter *adapter)
 			if (card->rx_buf_list[i]) {
 				skb = card->rx_buf_list[i];
 				mwifiex_unmap_pci_memory(adapter, skb,
-							 PCI_DMA_FROMDEVICE);
+							 DMA_FROM_DEVICE);
 				dev_kfree_skb_any(skb);
 			}
 			memset(desc2, 0, sizeof(*desc2));
@@ -778,7 +926,7 @@ static void mwifiex_cleanup_rxq_ring(struct mwifiex_adapter *adapter)
 			if (card->rx_buf_list[i]) {
 				skb = card->rx_buf_list[i];
 				mwifiex_unmap_pci_memory(adapter, skb,
-							 PCI_DMA_FROMDEVICE);
+							 DMA_FROM_DEVICE);
 				dev_kfree_skb_any(skb);
 			}
 			memset(desc, 0, sizeof(*desc));
@@ -804,7 +952,7 @@ static void mwifiex_cleanup_evt_ring(struct mwifiex_adapter *adapter)
 		if (card->evt_buf_list[i]) {
 			skb = card->evt_buf_list[i];
 			mwifiex_unmap_pci_memory(adapter, skb,
-						 PCI_DMA_FROMDEVICE);
+						 DMA_FROM_DEVICE);
 			dev_kfree_skb_any(skb);
 		}
 		card->evt_buf_list[i] = NULL;
@@ -845,18 +993,20 @@ static int mwifiex_pcie_create_txbd_ring(struct mwifiex_adapter *adapter)
 	mwifiex_dbg(adapter, INFO,
 		    "info: txbd_ring: Allocating %d bytes\n",
 		    card->txbd_ring_size);
-	card->txbd_ring_vbase = pci_alloc_consistent(card->dev,
-						     card->txbd_ring_size,
-						     &card->txbd_ring_pbase);
+	card->txbd_ring_vbase = dma_alloc_coherent(&card->dev->dev,
+						   card->txbd_ring_size,
+						   &card->txbd_ring_pbase,
+						   GFP_KERNEL);
 	if (!card->txbd_ring_vbase) {
 		mwifiex_dbg(adapter, ERROR,
-			    "allocate consistent memory (%d bytes) failed!\n",
+			    "allocate coherent memory (%d bytes) failed!\n",
 			    card->txbd_ring_size);
 		return -ENOMEM;
 	}
+
 	mwifiex_dbg(adapter, DATA,
-		    "info: txbd_ring - base: %p, pbase: %#x:%x, len: %x\n",
-		    card->txbd_ring_vbase, (unsigned int)card->txbd_ring_pbase,
+		    "info: txbd_ring - base: %p, pbase: %#x:%x, len: %#x\n",
+		    card->txbd_ring_vbase, (u32)card->txbd_ring_pbase,
 		    (u32)((u64)card->txbd_ring_pbase >> 32),
 		    card->txbd_ring_size);
 
@@ -871,9 +1021,9 @@ static int mwifiex_pcie_delete_txbd_ring(struct mwifiex_adapter *adapter)
 	mwifiex_cleanup_txq_ring(adapter);
 
 	if (card->txbd_ring_vbase)
-		pci_free_consistent(card->dev, card->txbd_ring_size,
-				    card->txbd_ring_vbase,
-				    card->txbd_ring_pbase);
+		dma_free_coherent(&card->dev->dev, card->txbd_ring_size,
+				  card->txbd_ring_vbase,
+				  card->txbd_ring_pbase);
 	card->txbd_ring_size = 0;
 	card->txbd_wrptr = 0;
 	card->txbd_rdptr = 0 | reg->tx_rollover_ind;
@@ -909,12 +1059,13 @@ static int mwifiex_pcie_create_rxbd_ring(struct mwifiex_adapter *adapter)
 	mwifiex_dbg(adapter, INFO,
 		    "info: rxbd_ring: Allocating %d bytes\n",
 		    card->rxbd_ring_size);
-	card->rxbd_ring_vbase = pci_alloc_consistent(card->dev,
-						     card->rxbd_ring_size,
-						     &card->rxbd_ring_pbase);
+	card->rxbd_ring_vbase = dma_alloc_coherent(&card->dev->dev,
+						   card->rxbd_ring_size,
+						   &card->rxbd_ring_pbase,
+						   GFP_KERNEL);
 	if (!card->rxbd_ring_vbase) {
 		mwifiex_dbg(adapter, ERROR,
-			    "allocate consistent memory (%d bytes) failed!\n",
+			    "allocate coherent memory (%d bytes) failed!\n",
 			    card->rxbd_ring_size);
 		return -ENOMEM;
 	}
@@ -939,9 +1090,9 @@ static int mwifiex_pcie_delete_rxbd_ring(struct mwifiex_adapter *adapter)
 	mwifiex_cleanup_rxq_ring(adapter);
 
 	if (card->rxbd_ring_vbase)
-		pci_free_consistent(card->dev, card->rxbd_ring_size,
-				    card->rxbd_ring_vbase,
-				    card->rxbd_ring_pbase);
+		dma_free_coherent(&card->dev->dev, card->rxbd_ring_size,
+				  card->rxbd_ring_vbase,
+				  card->rxbd_ring_pbase);
 	card->rxbd_ring_size = 0;
 	card->rxbd_wrptr = 0;
 	card->rxbd_rdptr = 0 | reg->rx_rollover_ind;
@@ -972,13 +1123,14 @@ static int mwifiex_pcie_create_evtbd_ring(struct mwifiex_adapter *adapter)
 
 	mwifiex_dbg(adapter, INFO,
 		    "info: evtbd_ring: Allocating %d bytes\n",
-		    card->evtbd_ring_size);
-	card->evtbd_ring_vbase = pci_alloc_consistent(card->dev,
-						      card->evtbd_ring_size,
-						      &card->evtbd_ring_pbase);
+		    card->evtbd_ring_size);
+	card->evtbd_ring_vbase = dma_alloc_coherent(&card->dev->dev,
+						    card->evtbd_ring_size,
+						    &card->evtbd_ring_pbase,
+						    GFP_KERNEL);
 	if (!card->evtbd_ring_vbase) {
 		mwifiex_dbg(adapter, ERROR,
-			    "allocate consistent memory (%d bytes) failed!\n",
+			    "allocate coherent memory (%d bytes) failed!\n",
 			    card->evtbd_ring_size);
 		return -ENOMEM;
 	}
@@ -1003,9 +1155,9 @@ static int mwifiex_pcie_delete_evtbd_ring(struct mwifiex_adapter *adapter)
 	mwifiex_cleanup_evt_ring(adapter);
 
 	if (card->evtbd_ring_vbase)
-		pci_free_consistent(card->dev, card->evtbd_ring_size,
-				    card->evtbd_ring_vbase,
-				    card->evtbd_ring_pbase);
+		dma_free_coherent(&card->dev->dev, card->evtbd_ring_size,
+				  card->evtbd_ring_vbase,
+				  card->evtbd_ring_pbase);
 	card->evtbd_wrptr = 0;
 	card->evtbd_rdptr = 0 | reg->evt_rollover_ind;
 	card->evtbd_ring_size = 0;
@@ -1032,7 +1184,7 @@ static int mwifiex_pcie_alloc_cmdrsp_buf(struct mwifiex_adapter *adapter)
 	}
 	skb_put(skb, MWIFIEX_UPLD_SIZE);
 	if (mwifiex_map_pci_memory(adapter, skb, MWIFIEX_UPLD_SIZE,
-				   PCI_DMA_FROMDEVICE)) {
+				   DMA_FROM_DEVICE)) {
 		kfree_skb(skb);
 		return -1;
 	}
@@ -1056,14 +1208,14 @@ static int mwifiex_pcie_delete_cmdrsp_buf(struct mwifiex_adapter *adapter)
 	if (card && card->cmdrsp_buf) {
 		mwifiex_unmap_pci_memory(adapter, card->cmdrsp_buf,
-					 PCI_DMA_FROMDEVICE);
+					 DMA_FROM_DEVICE);
 		dev_kfree_skb_any(card->cmdrsp_buf);
 		card->cmdrsp_buf = NULL;
 	}
 
 	if (card && card->cmd_buf) {
 		mwifiex_unmap_pci_memory(adapter, card->cmd_buf,
-					 PCI_DMA_TODEVICE);
+					 DMA_TO_DEVICE);
 		dev_kfree_skb_any(card->cmd_buf);
 		card->cmd_buf = NULL;
 	}
@@ -1078,11 +1230,13 @@ static int mwifiex_pcie_alloc_sleep_cookie_buf(struct mwifiex_adapter *adapter)
 	struct pcie_service_card *card = adapter->card;
 	u32 tmp;
 
-	card->sleep_cookie_vbase = pci_alloc_consistent(card->dev, sizeof(u32),
-							&card->sleep_cookie_pbase);
+	card->sleep_cookie_vbase = dma_alloc_coherent(&card->dev->dev,
+						      sizeof(u32),
+						      &card->sleep_cookie_pbase,
+						      GFP_KERNEL);
 	if (!card->sleep_cookie_vbase) {
 		mwifiex_dbg(adapter, ERROR,
-			    "pci_alloc_consistent failed!\n");
+			    "dma_alloc_coherent failed!\n");
 		return -ENOMEM;
 	}
 	/* Init val of Sleep Cookie */
@@ -1109,9 +1263,9 @@ static int mwifiex_pcie_delete_sleep_cookie_buf(struct mwifiex_adapter *adapter)
 	card = adapter->card;
 
 	if (card && card->sleep_cookie_vbase) {
-		pci_free_consistent(card->dev, sizeof(u32),
-				    card->sleep_cookie_vbase,
-				    card->sleep_cookie_pbase);
+		dma_free_coherent(&card->dev->dev, sizeof(u32),
+				  card->sleep_cookie_vbase,
+				  card->sleep_cookie_pbase);
 		card->sleep_cookie_vbase = NULL;
 	}
 
@@ -1183,7 +1337,7 @@ static int mwifiex_pcie_send_data_complete(struct mwifiex_adapter *adapter)
 			    "SEND COMP: Detach skb %p at txbd_rdidx=%d\n",
 			    skb, wrdoneidx);
 		mwifiex_unmap_pci_memory(adapter, skb,
-					 PCI_DMA_TODEVICE);
+					 DMA_TO_DEVICE);
 
 		unmap_count++;
 
@@ -1276,7 +1430,7 @@ mwifiex_pcie_send_data(struct mwifiex_adapter *adapter, struct sk_buff *skb,
 	put_unaligned_le16(MWIFIEX_TYPE_DATA, payload + 2);
 
 	if (mwifiex_map_pci_memory(adapter, skb, skb->len,
-				   PCI_DMA_TODEVICE))
+				   DMA_TO_DEVICE))
 		return -1;
 
 	wrindx = (card->txbd_wrptr & reg->tx_mask) >> reg->tx_start_ptr;
@@ -1358,7 +1512,7 @@ mwifiex_pcie_send_data(struct mwifiex_adapter *adapter, struct sk_buff *skb,
 
 	return -EINPROGRESS;
 done_unmap:
-	mwifiex_unmap_pci_memory(adapter, skb, PCI_DMA_TODEVICE);
+	mwifiex_unmap_pci_memory(adapter, skb, DMA_TO_DEVICE);
 	card->tx_buf_list[wrindx] = NULL;
 	atomic_dec(&adapter->tx_hw_pending);
 	if (reg->pfu_enabled)
@@ -1412,7 +1566,7 @@ static int mwifiex_pcie_process_recv_data(struct mwifiex_adapter *adapter)
 		if (!skb_data)
 			return -ENOMEM;
 
-		mwifiex_unmap_pci_memory(adapter, skb_data, PCI_DMA_FROMDEVICE);
+		mwifiex_unmap_pci_memory(adapter, skb_data, DMA_FROM_DEVICE);
 		card->rx_buf_list[rd_index] = NULL;
 
 		/* Get data length from interface header -
@@ -1450,7 +1604,7 @@ static int mwifiex_pcie_process_recv_data(struct mwifiex_adapter *adapter)
 
 		if (mwifiex_map_pci_memory(adapter, skb_tmp,
 					   MWIFIEX_RX_DATA_BUF_SIZE,
-					   PCI_DMA_FROMDEVICE))
+					   DMA_FROM_DEVICE))
 			return -1;
 
 		buf_pa = MWIFIEX_SKB_DMA_ADDR(skb_tmp);
@@ -1527,7 +1681,7 @@ mwifiex_pcie_send_boot_cmd(struct mwifiex_adapter *adapter, struct sk_buff *skb)
 		return -1;
 	}
 
-	if (mwifiex_map_pci_memory(adapter, skb, skb->len, PCI_DMA_TODEVICE))
+	if (mwifiex_map_pci_memory(adapter, skb, skb->len, DMA_TO_DEVICE))
 		return -1;
 
 	buf_pa = MWIFIEX_SKB_DMA_ADDR(skb);
@@ -1539,7 +1693,7 @@ mwifiex_pcie_send_boot_cmd(struct mwifiex_adapter *adapter, struct sk_buff *skb)
 		mwifiex_dbg(adapter, ERROR,
 			    "%s: failed to write download command to boot code.\n",
 			    __func__);
-		mwifiex_unmap_pci_memory(adapter, skb, PCI_DMA_TODEVICE);
+		mwifiex_unmap_pci_memory(adapter, skb, DMA_TO_DEVICE);
 		return -1;
 	}
 
@@ -1551,7 +1705,7 @@ mwifiex_pcie_send_boot_cmd(struct mwifiex_adapter *adapter, struct sk_buff *skb)
 		mwifiex_dbg(adapter, ERROR,
 			    "%s: failed to write download command to boot code.\n",
 			    __func__);
-		mwifiex_unmap_pci_memory(adapter, skb, PCI_DMA_TODEVICE);
+		mwifiex_unmap_pci_memory(adapter, skb, DMA_TO_DEVICE);
 		return -1;
 	}
 
@@ -1560,7 +1714,7 @@ mwifiex_pcie_send_boot_cmd(struct mwifiex_adapter *adapter, struct sk_buff *skb)
 		mwifiex_dbg(adapter, ERROR,
 			    "%s: failed to write command len to cmd_size scratch reg\n",
 			    __func__);
-		mwifiex_unmap_pci_memory(adapter, skb, PCI_DMA_TODEVICE);
+		mwifiex_unmap_pci_memory(adapter, skb, DMA_TO_DEVICE);
 		return -1;
 	}
 
@@ -1569,7 +1723,7 @@ mwifiex_pcie_send_boot_cmd(struct mwifiex_adapter *adapter, struct sk_buff *skb)
 			      CPU_INTR_DOOR_BELL)) {
 		mwifiex_dbg(adapter, ERROR,
 			    "%s: failed to assert door-bell intr\n", __func__);
-		mwifiex_unmap_pci_memory(adapter, skb, PCI_DMA_TODEVICE);
+		mwifiex_unmap_pci_memory(adapter, skb, DMA_TO_DEVICE);
 		return -1;
 	}
 
@@ -1628,7 +1782,7 @@ mwifiex_pcie_send_cmd(struct mwifiex_adapter *adapter, struct sk_buff *skb)
 	put_unaligned_le16((u16)skb->len, &payload[0]);
 	put_unaligned_le16(MWIFIEX_TYPE_CMD, &payload[2]);
 
-	if (mwifiex_map_pci_memory(adapter, skb, skb->len, PCI_DMA_TODEVICE))
+	if (mwifiex_map_pci_memory(adapter, skb, skb->len, DMA_TO_DEVICE))
 		return -1;
 
 	card->cmd_buf = skb;
@@ -1728,17 +1882,16 @@ static int mwifiex_pcie_process_cmd_complete(struct mwifiex_adapter *adapter)
 		    "info: Rx CMD Response\n");
 
 	if (adapter->curr_cmd)
-		mwifiex_unmap_pci_memory(adapter, skb, PCI_DMA_FROMDEVICE);
+		mwifiex_unmap_pci_memory(adapter, skb, DMA_FROM_DEVICE);
 	else
-		pci_dma_sync_single_for_cpu(card->dev,
-					    MWIFIEX_SKB_DMA_ADDR(skb),
-					    MWIFIEX_UPLD_SIZE,
-					    PCI_DMA_FROMDEVICE);
+		dma_sync_single_for_cpu(&card->dev->dev,
+					MWIFIEX_SKB_DMA_ADDR(skb),
+					MWIFIEX_UPLD_SIZE, DMA_FROM_DEVICE);
 
 	/* Unmap the command as a response has been received. */
 	if (card->cmd_buf) {
 		mwifiex_unmap_pci_memory(adapter, card->cmd_buf,
-					 PCI_DMA_TODEVICE);
+					 DMA_TO_DEVICE);
 		dev_kfree_skb_any(card->cmd_buf);
 		card->cmd_buf = NULL;
 	}
@@ -1749,10 +1902,10 @@ static int mwifiex_pcie_process_cmd_complete(struct mwifiex_adapter *adapter)
 
 	if (!adapter->curr_cmd) {
 		if (adapter->ps_state == PS_STATE_SLEEP_CFM) {
-			pci_dma_sync_single_for_device(card->dev,
-						       MWIFIEX_SKB_DMA_ADDR(skb),
-						       MWIFIEX_SLEEP_COOKIE_SIZE,
-						       PCI_DMA_FROMDEVICE);
+			dma_sync_single_for_device(&card->dev->dev,
+						   MWIFIEX_SKB_DMA_ADDR(skb),
+						   MWIFIEX_SLEEP_COOKIE_SIZE,
+						   DMA_FROM_DEVICE);
 			if (mwifiex_write_reg(adapter,
 					      PCIE_CPU_INT_EVENT,
 					      CPU_INTR_SLEEP_CFM_DONE)) {
@@ -1763,7 +1916,7 @@ static int mwifiex_pcie_process_cmd_complete(struct mwifiex_adapter *adapter)
 			mwifiex_delay_for_sleep_cookie(adapter,
 						       MWIFIEX_MAX_DELAY_COUNT);
 			mwifiex_unmap_pci_memory(adapter, skb,
-						 PCI_DMA_FROMDEVICE);
+						 DMA_FROM_DEVICE);
 			skb_pull(skb, adapter->intf_hdr_len);
 			while (reg->sleep_cookie && (count++ < 10) &&
 			       mwifiex_pcie_ok_to_access_hw(adapter))
@@ -1779,7 +1932,7 @@ static int mwifiex_pcie_process_cmd_complete(struct mwifiex_adapter *adapter)
 		       min_t(u32, MWIFIEX_SIZE_OF_CMD_BUFFER, skb->len));
 		skb_push(skb, adapter->intf_hdr_len);
 		if (mwifiex_map_pci_memory(adapter, skb, MWIFIEX_UPLD_SIZE,
-					   PCI_DMA_FROMDEVICE))
+					   DMA_FROM_DEVICE))
 			return -1;
 	} else if (mwifiex_pcie_ok_to_access_hw(adapter)) {
 		skb_pull(skb, adapter->intf_hdr_len);
@@ -1821,7 +1974,7 @@ static int mwifiex_pcie_cmdrsp_complete(struct mwifiex_adapter *adapter,
 		card->cmdrsp_buf = skb;
 		skb_push(card->cmdrsp_buf, adapter->intf_hdr_len);
 		if (mwifiex_map_pci_memory(adapter, skb, MWIFIEX_UPLD_SIZE,
-					   PCI_DMA_FROMDEVICE))
+					   DMA_FROM_DEVICE))
 			return -1;
 	}
 
@@ -1876,7 +2029,7 @@ static int mwifiex_pcie_process_event_ready(struct mwifiex_adapter *adapter)
 		mwifiex_dbg(adapter, INFO,
 			    "info: Read Index: %d\n", rdptr);
 		skb_cmd = card->evt_buf_list[rdptr];
-		mwifiex_unmap_pci_memory(adapter, skb_cmd, PCI_DMA_FROMDEVICE);
+		mwifiex_unmap_pci_memory(adapter, skb_cmd, DMA_FROM_DEVICE);
 
 		/* Take the pointer and set it to event pointer in adapter
 		   and will return back after event handling callback */
@@ -1956,7 +2109,7 @@ static int mwifiex_pcie_event_complete(struct mwifiex_adapter *adapter,
 		skb_put(skb, MAX_EVENT_SIZE - skb->len);
 		if (mwifiex_map_pci_memory(adapter, skb, MAX_EVENT_SIZE,
-					   PCI_DMA_FROMDEVICE))
+					   DMA_FROM_DEVICE))
 			return -1;
 		card->evt_buf_list[rdptr] = skb;
 		desc = card->evtbd_ring[rdptr];
@@ -2238,7 +2391,7 @@ static int mwifiex_prog_fw_w_helper(struct mwifiex_adapter *adapter,
 				    "interrupt status during fw dnld.\n",
 				    __func__);
 			mwifiex_unmap_pci_memory(adapter, skb,
-						 PCI_DMA_TODEVICE);
+						 DMA_TO_DEVICE);
 			ret = -1;
 			goto done;
 		}
@@ -2250,12 +2403,12 @@ static int mwifiex_prog_fw_w_helper(struct mwifiex_adapter *adapter,
 			mwifiex_dbg(adapter, ERROR,
 				    "%s: Card failed to ACK download\n",
 				    __func__);
 			mwifiex_unmap_pci_memory(adapter, skb,
-						 PCI_DMA_TODEVICE);
+						 DMA_TO_DEVICE);
 			ret = -1;
 			goto done;
 		}
 
-		mwifiex_unmap_pci_memory(adapter, skb, PCI_DMA_TODEVICE);
+		mwifiex_unmap_pci_memory(adapter, skb, DMA_TO_DEVICE);
 
 		offset += txlen;
 	} while (true);
@@ -2925,15 +3078,9 @@ static int mwifiex_init_pcie(struct mwifiex_adapter *adapter)
 
 	pci_set_master(pdev);
 
-	ret = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
-	if (ret) {
-		pr_err("set_dma_mask(32) failed: %d\n", ret);
-		goto err_set_dma_mask;
-	}
-
-	ret = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
+	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
 	if (ret) {
-		pr_err("set_consistent_dma_mask(64) failed\n");
+		pr_err("dma_set_mask(32) failed: %d\n", ret);
 		goto err_set_dma_mask;
 	}
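
The pcie.c changes above are a mechanical conversion from the legacy pci_* DMA
wrappers to the generic DMA API (dma_map_single()/dma_unmap_single(),
dma_alloc_coherent()/dma_free_coherent(), dma_set_mask_and_coherent()). A
minimal sketch of the target pattern follows; the example_* helpers are
hypothetical and not part of the driver.

  #include <linux/dma-mapping.h>
  #include <linux/errno.h>
  #include <linux/pci.h>
  #include <linux/skbuff.h>

  /* Map an skb for device-to-CPU DMA the way the converted driver now does
   * it: dma_map_single() on &pdev->dev rather than pci_map_single() on the
   * pci_dev, DMA_FROM_DEVICE rather than PCI_DMA_FROMDEVICE, and
   * dma_mapping_error() to validate the returned handle.
   */
  static int example_map_rx(struct pci_dev *pdev, struct sk_buff *skb,
                            size_t size, dma_addr_t *handle)
  {
          *handle = dma_map_single(&pdev->dev, skb->data, size,
                                   DMA_FROM_DEVICE);
          if (dma_mapping_error(&pdev->dev, *handle))
                  return -ENOMEM;
          return 0;
  }

  /* Undo the mapping with the matching direction constant. */
  static void example_unmap_rx(struct pci_dev *pdev, dma_addr_t handle,
                               size_t size)
  {
          dma_unmap_single(&pdev->dev, handle, size, DMA_FROM_DEVICE);
  }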